Hacker News

429

TurboQuant: Redefining AI efficiency with extreme compression

by ray__ | 120 comments
This is a great development for KV cache compression. I did notice a missing citation in the related works regarding the core mathematical mechanism, though. The foundational technique of applying a geometric rotation prior to extreme quantization, specifically for managing the high-dimensional geometry and enabling proper bias correction, was introduced in our NeurIPS 2021 paper, "DRIVE" (https://proceedings.neurips.cc/paper/2021/hash/0397758f8990c...). We used this exact rotational approach and a similar bias correction mechanism to achieve optimal distributed mean estimation. I also presented this work and subsequent papers in a private invited talk at Google shortly after publication. Given the strong theoretical overlap with the mechanisms in TurboQuant and PolarQuant, I hope to see this prior art acknowledged in the upcoming camera-ready versions.
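
For anyone who wants to see the mechanism concretely, here's a minimal numpy sketch of the rotate-then-sign-quantize idea (illustrative only: the exact scaling and bias-correction terms in DRIVE and TurboQuant differ, and real implementations use a fast randomized Hadamard transform rather than a dense rotation matrix):

    import numpy as np

    def random_rotation(d, seed=0):
        # Dense random orthogonal matrix via QR; just for illustration
        # (fast Hadamard-based rotations are what make this practical).
        rng = np.random.default_rng(seed)
        q, _ = np.linalg.qr(rng.standard_normal((d, d)))
        return q

    def quantize_1bit(x, R):
        # Rotate, then keep one sign bit per coordinate plus a single scale.
        y = R @ x
        scale = np.abs(y).mean()  # simple choice: makes scale*sign(y) the best L2 fit to y
        return np.sign(y), scale

    def dequantize(signs, scale, R):
        # Rescale the sign vector and undo the rotation.
        return R.T @ (scale * signs)

    d = 1024
    x = np.random.default_rng(1).standard_normal(d)
    R = random_rotation(d)
    signs, scale = quantize_1bit(x, R)
    x_hat = dequantize(signs, scale, R)
    print("relative L2 error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))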
by amitport
Can someone ELI5 these two concepts please, which make no sense to me:

  > "TurboQuant starts by randomly rotating the data vectors. This clever step simplifies the data's geometry"
I don't understand how taking a series of data vectors and applying a random rotation could mathematically lead every time to "simpler" geometry.

If I throw a bunch of shapes on the ground, tightly packed and touching each other, then rotate all of them, you can't guarantee that the new conglomerate shape is any more/less "simple" than before, right?

  > "Johnson-Lindenstrauss Transform to shrink complex, high-dimensional data while preserving the essential distances and relationships between data points. It reduces each resulting vector number to a single sign bit (+1 or -1)."
How can a boolean value preserve all of the relational and positional information between data points?
by gavinray
Someone is already implementing it in llama.cpp: https://github.com/mudler/llama.cpp/commit/dee102db1bfd723c9...
by akhenakh
And a group has published an independent working implementation today, nice to see:

https://github.com/tonbistudio/turboquant-pytorch

by pstoll
This is the worst lay-person explanation of an AI component I have seen in a long time. It doesn't even seem AI-generated.
by benob
Compression research keeps producing surprisingly practical results. There's an interesting parallel in image formats: AVIF and JPEG XL both came out of codec research (AV1 and the JPEG committee, respectively), and the compression gains translated almost directly. Makes me wonder how much of the current AI quantization work will eventually land in production inference the same way.
by Serhii-Set
It seems like most breakthroughs I see are for efficiency? What are the most important breakthroughs from the past two or three years for intelligence?
by bilsbie
I did not understand what PolarQuant is.

Is it something like pattern-based compression, where the algorithm finds repeating patterns and creates an index of those common symbols or numbers?

by bluequbit
Is this a tradeoff between GPU computation expense vs accuracy? I.e., you could quantize into segments or grids on the unit circle/sphere/etc., but that's too expensive, so it's better to just quantize to a Cartesian grid because the GPU can decompress it more cheaply?
by mmastrac
I am guessing that since Google is vertically integrated and "actually pays" for AI infra (compared to OpenAI & Anthropic, which receive hardware through partnerships), they have a more urgent incentive to reduce model sizes. Also, Google and Apple will be the first to gain from running models on-device.
by iddan
The gap between how this is described in the paper vs the blog post is pretty wide. It would be nice to see more accessible writing from research teams; not everyone reading is an ML engineer.
by zeeshana07x
For my grug brain can somebody translate this to ELIgrug terms?

Does this mean I would be able to run a 500B model on my 48GB MacBook without losing quality?

by ssijak
"TurboQuant proved it can quantize the key-value cache to just 3 bits without requiring training or fine-tuning and causing any compromise in model accuracy" -- what does each set of 3 bits correspond to? Hardly individual keys or values, since that would limit each of them to 8 different vectors.
by macleginn
I'm somewhat at a loss here beyond understanding the fundamentals. Can someone tell me how the compression impacts performance?
by maurelius2
Aren't polar coordinates still n-1 angles plus 1 for the radius for an n-dim vector? If so, I understand that the angles can be quantized better, but when the radius r is big, the error is large for heavily quantized angles, right? What am I missing?
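
A tiny 2-D sketch of the point, with made-up numbers (nothing to do with how PolarQuant actually allocates its bits, just the geometry): for a fixed angular grid, the positional error from rounding the angle grows linearly with r.

    import numpy as np

    bins = 256                        # hypothetical angular resolution
    delta = 2 * np.pi / bins          # angular step
    theta = 0.3                       # arbitrary true angle
    theta_q = np.round(theta / delta) * delta   # snap to the grid

    for r in (1.0, 10.0, 100.0):
        p  = r * np.array([np.cos(theta),   np.sin(theta)])
        pq = r * np.array([np.cos(theta_q), np.sin(theta_q)])
        # error is roughly r * |theta - theta_q|, i.e. it scales with the radius
        print(f"r={r:6.1f}  positional error = {np.linalg.norm(p - pq):.4f}")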
by moktonar
Will this help us run models locally?
by lwhi
Sounds like Multi-Head Latent Attention (MLA) from DeepSeek
by lucrbvi
has the word "advanced", gotta be good
by _s_a_m_
This sounds great! TurboQuant does KV cache compression using quantization via rotations, and ParoQuant [1] does weight compression using quantization via rotations! So we can get 4-bit weights that match bf16 precision, and the KV cache goes down to 3 bits per key. This brings larger models and long contexts into the range of "possibly runnable" on beefy consumer hardware.

[1] https://github.com/z-lab/paroquant
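
To put rough numbers on "possibly runnable", a back-of-envelope calculation with hypothetical shapes (a 70B model and a 128k context; none of these figures come from the papers):

    # All shapes below are guesses for illustration, not numbers from the papers.
    def gib(total_bits):
        return total_bits / 8 / 2**30

    params = 70e9  # hypothetical 70B-parameter model
    print(f"weights  bf16: {gib(params * 16):6.1f} GiB   4-bit: {gib(params * 4):6.1f} GiB")

    # KV cache elements = 2 (keys and values) * layers * kv_heads * head_dim * context
    layers, kv_heads, head_dim, context = 80, 8, 128, 128_000
    kv_elems = 2 * layers * kv_heads * head_dim * context
    print(f"KV cache bf16: {gib(kv_elems * 16):6.1f} GiB   3-bit: {gib(kv_elems * 3):6.1f} GiB")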

by naasking
Pied Piper vibes. As far as I can tell, this algorithm is hardly compatible with modern GPU architectures. My guess is that's why the paper reports accuracy-vs-space but conveniently avoids reporting inference wall-clock time. The baseline numbers also look seriously underreported. "Several orders of magnitude" speedups for vector search? Really? Has anyone actually reproduced these results?
by mskkm