Typical entropy of bfloat16 values seen in weights (and activations) is about 10-12 bits (only 65-75% or so of the value range is used in practice). Sign and mantissa bits tend to be incompressible noise.
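A quick way to see this for yourself (a minimal NumPy sketch; the Gaussian tensor below is just a stand-in for real trained weights, and `field_entropy` is my own helper name, not from any library):

```python
import numpy as np

def field_entropy(codes: np.ndarray) -> float:
    """Shannon entropy (in bits) of the empirical distribution of integer codes."""
    counts = np.bincount(codes)
    p = counts[counts > 0] / codes.size
    return float(-(p * np.log2(p)).sum())

# Stand-in for real model weights: roughly normal-distributed, like many trained layers.
w = np.random.randn(1 << 20).astype(np.float32)

# bfloat16 is just the top 16 bits of float32: 1 sign bit, 8 exponent bits, 7 mantissa bits.
bf16 = (w.view(np.uint32) >> 16).astype(np.uint16)
sign     = bf16 >> 15
exponent = (bf16 >> 7) & 0xFF
mantissa = bf16 & 0x7F

print("sign     entropy:", field_entropy(sign),     "of 1 bit")
print("exponent entropy:", field_entropy(exponent), "of 8 bits")
print("mantissa entropy:", field_entropy(mantissa), "of 7 bits")
```

The sign and mantissa fields come out near their maximum entropy (incompressible), while the exponent field is far below 8 bits, which is where the compression lives.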
This has been exploited several times before in both classical HPC and AI contexts: lossless compression work from Martin Burtscher's lab (https://userweb.cs.txstate.edu/~burtscher/), fpzip from LLNL (https://computing.llnl.gov/projects/fpzip), and my library dietgpu from 2021 (https://github.com/facebookresearch/dietgpu). We used dietgpu to speed up training on a large GPU cluster by about 10% wall-clock time overall by losslessly compressing all data prior to send and decompressing upon receive (e.g., gradients, weights from backup, etc). Since it's lossless, it still computes exactly the same thing as it did before.
Also, rANS is more efficient and easier to implement in SIMD-like instruction sets than Huffman coding. It would also reduce the latency/throughput penalties DFloat11 pays (since we have to decompress before we do the arithmetic).
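For the curious, the core of rANS really is tiny. Here is a minimal, non-streaming sketch (the state is a Python big integer, so there is no renormalization; a production coder keeps the state in a machine word and interleaves many streams, which is what makes it SIMD-friendly):

```python
# Minimal static rANS over a byte alphabet. Illustration only, not a production coder.
from collections import Counter

PROB_BITS = 12
M = 1 << PROB_BITS

def build_tables(data: bytes):
    counts = Counter(data)
    syms = sorted(counts)
    # Scale counts to sum to exactly M, keeping every used symbol's frequency >= 1.
    freqs = {s: max(1, (counts[s] * M) // len(data)) for s in syms}
    while sum(freqs.values()) > M:
        freqs[max(freqs, key=freqs.get)] -= 1
    while sum(freqs.values()) < M:
        freqs[max(freqs, key=freqs.get)] += 1
    cdf, acc = {}, 0
    for s in syms:
        cdf[s] = acc
        acc += freqs[s]
    return freqs, cdf

def encode(data: bytes, freqs, cdf) -> int:
    x = 1
    for s in reversed(data):                      # rANS encodes in reverse order
        x = (x // freqs[s]) * M + cdf[s] + (x % freqs[s])
    return x

def decode(x: int, n: int, freqs, cdf) -> bytes:
    slot_to_sym = {}
    for s, base in cdf.items():
        for i in range(freqs[s]):
            slot_to_sym[base + i] = s
    out = bytearray()
    for _ in range(n):                            # decode pops symbols in forward order
        slot = x % M
        s = slot_to_sym[slot]
        x = freqs[s] * (x // M) + slot - cdf[s]
        out.append(s)
    return bytes(out)

msg = b"exponent bytes of bf16 weights compress well"
freqs, cdf = build_tables(msg)
x = encode(msg, freqs, cdf)
assert decode(x, len(msg), freqs, cdf) == msg
# Size comparison ignores the frequency table, which you'd also have to store.
print("encoded size:", x.bit_length(), "bits vs", 8 * len(msg), "bits raw")
```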
As we know, quantization is a critical tool for local LLM runners; RAM is typically the gating factor. Are you aware of any better lossless compression schemes for BF16 weights out there?
The reason I ask is that DFloat11 seems relatively easy to plug into existing quantization workflows, but you seem dismissive of the paper -- I presume it's my gap in understanding, and I'd like to understand.
I doubt that very much. The thing is that inputs are multiplied by weights and added together in a neural network layer, and then the output becomes the input of the next layer, in a cycle that can repeat a hundred times or more. By the time you get to the final output layer, that 10^6 factor has been applied so many times that it has snowballed into a 10^600 factor.
Using DFloat11, tokens/sec was higher only relative to running inference with some layers offloaded to the CPU.
Classic comp sci tradeoff between space and speed, no free lunch, etc.
Would it be more efficient to calculate some kind of per-model or per-layer mean, and then only store each weight's deviation from it, maybe in fp8 or smaller?
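Something like this is what I have in mind (a rough NumPy sketch of the idea only; note this would be lossy, unlike DFloat11, and I'm using int8 codes as a stand-in for fp8):

```python
import numpy as np

def encode_layer(w: np.ndarray):
    """Store a per-layer mean/std plus 8-bit integer codes for each weight's deviation."""
    mean, std = float(w.mean()), float(w.std())
    z = (w - mean) / std                        # deviation in units of the layer std
    scale = 127.0 / np.abs(z).max()             # map the observed range onto int8 codes
    codes = np.round(z * scale).astype(np.int8)
    return mean, std, scale, codes

def decode_layer(mean, std, scale, codes):
    return (codes.astype(np.float32) / scale) * std + mean

w = (np.random.randn(1024, 1024) * 0.02).astype(np.float32)
mean, std, scale, codes = encode_layer(w)
w_hat = decode_layer(mean, std, scale, codes)
print("max abs error:", float(np.abs(w - w_hat).max()))   # nonzero: this is lossy
```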
Or let one of the neoclouds take care of the infrastructure costs and rent it out from them. Disclosure: I run one of them.
Some unsolicited feedback: I would suggest reworking your landing page so that the language is always from your customers' perspective. Your customers want to solve a real internal problem that they have. Talking about how great your company is will always have less impact than talking about how you know what that problem is and how you intend to solve it.
Your mission is relevant to you and your investors, not to your customers. They care about themselves.
Your "quick start" should be an interactive form. I shouldn't have to remember what to put in an email to reach out to you. Make it easy for me. Also move that to the front page, provide a few "standard" packages and a custom one. Reduce the friction to clicking the CTA.
Since your pricing is transparent, you should be able to tell me what that price will be before I even submit a request. I assume you're cheaper than the competition (otherwise why would I not go with them?) so make that obvious. Check out Backblaze's website for an example page: https://www.backblaze.com/cloud-storage/pricing
Shell out a few grand and hire a designer to make your page look more professional. Something like https://oxide.computer/ but with the points above, as they also make the same mistake of making their home page read like a pitch deck.
The website is intended to read more like documentation than a pitch deck or a useless splash page with a contact-us form. I dislike sites like Oxide; I scroll past and don't read or ingest any of the fancy parts. Of course, you're right, this probably needs to be less about me. =)
Friction definitely needs to be improved. That part is being worked on right now. Our intention is to be fully self-service, so that you don't have to talk to us at all, unless you want to. Credit card and go.
We recently lowered our prices to be competitive with the rest of the market vs. focusing on people who care more about what we offer. We weren't trying to be cheaper than everyone else; we were trying to offer a better service. Lesson learned and pricing adjusted. Streisand effect: I don't like to mention the other players much.
Again, thanks!
For anyone else who hadn't heard of this term:
> Neoclouds are startups specializing in AI-specific cloud computing. Unlike their larger competitors, they don’t develop proprietary chips. Instead, they rely heavily on Nvidia’s cutting-edge GPUs to power their operations. By focusing solely on AI workloads, these companies offer specialized solutions tailored to AI developers’ needs.
from https://www.tlciscreative.com/the-rise-of-neoclouds-shaping-...
https://semianalysis.com/2024/10/03/ai-neocloud-playbook-and...
It is novel equipment that few have ever used before outside of a relatively small HPC community. It regularly breaks and has issues (bugs) that need industry relationships to manage properly. We've had one server down for over a month now cause SMCI can't get their sh/t together to fix it. That's a $250k+ 350lbs paperweight. Good luck to any other small company that wants to negotiate that relationship.
We are offering a very valuable service by enabling easy access to some of the most powerful compute available today. How many people do you think have a good grasp of what it takes to configure RoCEv2 & 8x400G across a cluster of servers? Good luck trying to hire talent that can set that up; they already have jobs.
The capex / opex / complexity involved with deploying this level of gear is huge and only getting larger as the industry shifts to bigger/better/faster (i.e., air cooling is dead). Things are moving so quickly that equipment you purchased a year ago is already out of date (H100 -> H200 is a great example). You're going to have to have a pretty impressive depreciation model to deploy this yourself.
I wouldn't just dismiss this as moving costs around.
If you live in a glass house, you won't throw stones. No one in the LLM space wants to be litigious.
It's an open secret that DeepSeek used a ton of OpenAI continuations, both in pre-training and in distillation. That totally violates OpenAI's TOS. No one cares.
Except for OpenAI.
Floating point is just an inefficient use of bits (due to excessive dynamic range), especially during training, so lossless compression will always be welcome there. Extreme quantization techniques (some of the <= 4-bit methods, say) also tend to increase entropy in the weights, limiting the applicability of lossless compression, so lossless and lossy compression (e.g., quantization) sometimes work against each other.
If you have billions of dollars in inference devices, even reducing the number of devices you need for a given workload by 5% is very useful.
MI300X is 192GB HBM3, MI325X is 256GB HBM3e, and MI355X should be 288GB HBM3e (and support FP4/FP6).
Nvidia is about to release Blackwell Ultra with 288GB. Go back to maybe 2018 and the max was 16GB, if memory serves.
DeepSeek recently released a 670GB model. A couple of years ago, Falcon's 180GB seemed huge.
We've been stuck with the same general caps on standard GPU memory since then, though. Perhaps that's limited in part because the generational upgrades have gone into memory bandwidth rather than capacity.
A one time effective 30% reduction in model size simply isn't going to be some massive unlocker, in theory or in practice.
Two weeks? Two months? Two days? Two minutes?
All of the above are true sometimes! Exciting times indeed.
* I work with xmad.ai
The context length alone probably makes it worthwhile even if your models fit in memory, but I'm curious whether it improves tokens/sec even when running entirely on the GPU, since in my very amateur understanding LLMs tend to be constrained by memory bandwidth?
Other lossless codecs can hit 600 GB/s on the same hardware, so there should be some room for improvement. But the A100's raw memory bandwidth is 1.6 TB/s.
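A rough back-of-envelope for the bandwidth-bound decode case (a sketch with illustrative numbers: a hypothetical 13B bf16 model, the ~30% lossless ratio discussed here, and the 600 GB/s figure above; real kernels overlap these steps to varying degrees):

```python
# Single-stream decode is bandwidth-bound: every generated token streams the full
# weight set, so tokens/s is bounded by how fast the weights can arrive.
model_bytes = 13e9 * 2      # assumed: 13B parameters in bf16
ratio       = 0.7           # ~30% lossless compression
mem_bw      = 1.6e12        # A100 HBM bandwidth, bytes/s
decode_rate = 600e9         # decompressor throughput from this thread (assumed output bytes/s)

baseline   = mem_bw / model_bytes                        # uncompressed upper bound
overlapped = 1 / max(model_bytes * ratio / mem_bw,       # reading compressed bytes
                     model_bytes / decode_rate)          # decompressing to full bytes
sequential = 1 / (model_bytes * ratio / mem_bw + model_bytes / decode_rate)

print(f"uncompressed : {baseline:5.1f} tok/s")
print(f"overlapped   : {overlapped:5.1f} tok/s")
print(f"sequential   : {sequential:5.1f} tok/s")
```

Even at 600 GB/s, decompression, not HBM bandwidth, is the bottleneck in this toy model, which is why the win is really about fitting the model in memory rather than raw speed.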
I see it mentioned but can’t understand if it’s based on it or different/better…
If you add an LZ-type compressor and have this be in the critical path for inference, then decompression will be a lot slower. It would be best to fuse decompression with the compute kernels (e.g., a GEMM that performs decompression on each tile before the arithmetic), and the simpler the decompression routine, the easier this will be.
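Conceptually, the fusion looks something like this (a NumPy sketch of the structure only; `decompress_tile` is a placeholder codec, not DFloat11's, and a real implementation would do the per-tile decode inside a CUDA GEMM's shared-memory pipeline):

```python
import numpy as np

TILE = 128

def decompress_tile(packed: np.ndarray, scale: float) -> np.ndarray:
    # Placeholder codec: int8 codes + a scale. The point is only *where* the decode
    # happens: per tile, immediately before that tile's arithmetic.
    return packed.astype(np.float32) * scale

def fused_matmul(packed_w: np.ndarray, scale: float, x: np.ndarray) -> np.ndarray:
    """y = W @ x, decoding W one (TILE x TILE) tile at a time instead of up front."""
    n, k = packed_w.shape
    y = np.zeros((n, x.shape[1]), dtype=np.float32)
    for i in range(0, n, TILE):
        for j in range(0, k, TILE):
            w_tile = decompress_tile(packed_w[i:i+TILE, j:j+TILE], scale)
            y[i:i+TILE] += w_tile @ x[j:j+TILE]
    return y

# Quick check against decompress-everything-then-multiply.
rng = np.random.default_rng(0)
packed = rng.integers(-127, 128, size=(512, 512), dtype=np.int8)
x = rng.standard_normal((512, 4)).astype(np.float32)
scale = 0.01
ref = decompress_tile(packed, scale) @ x
np.testing.assert_allclose(fused_matmul(packed, scale, x), ref, rtol=1e-4, atol=1e-3)
```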
/s I'll show myself out
In fact, there's this funny occurrence where Q4 models on occasion perform better than their fp16 counterparts on benchmarks run with top_k=1, since the outputs are slightly more random and they can less deterministically blunder past the local maximum into a more correct solution.
The effects of model quantization are usually qualified in terms of performance on benchmaxxed tasks with strong logit probabilities, temp 0, and a "right" answer the model has to pick. Or, even worse, they'll be measured on metrics that don't map to anything except themselves, like perplexity (https://arxiv.org/pdf/2407.09141).
I agree Q8 is strong but I also think the effects of quantization are constantly being underappreciated. People are often talking about how these models perform while fundamentally using 10+ variants of a single model with distinct performance profiles.
Even knowing the bits per weight used isn't enough to know how exactly a given quant method is affecting the model: https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-ggufs
The problem is not that people are making quants "haphazardly", it's that people keep parroting that various quants are "practically lossless" when they actually have absolutely no clue how lossy they are, given how application-specific the concept is for something as multidimensional as an LLM.
The moment anyone tries a little harder to quantify how lossy they are, we repeatedly find that the answer is "not lossless by any reasonable definition". Even their example where Q4 is <1% away on MMLU 5-shot is probably massively helped by a calibration dataset that maps really well to MMLU-style tasks, just like constantly using WikiText massively helps models that were trained on... tons of text from Wikipedia.
So unless you're doing your own calibrated quantization with your own dataset (which is not impossible, but also far from common), even their "non-haphazard" method could have a noticeable impact on performance.
You are saying that people are using quantized models haphazardly and talking about them haphazardly. I'll grant it's not the exact same thing as making them haphazardly, but I think you took the point.
The terms shouldn't be used here. They aren't helpful. You are either getting good results or you are not. It shouldn't be treated differently from further training on dataset d. The weights changed - how much better or worse at task Y did it just get?
- quality for your specific application
- time to first token
- inter-token latency
- memory usage (varies even for a given bits per weight)
- generation of hardware required to run
Of those the hardest to measure is consistently "quality for your specific application".
It's so hard to measure robustly that many will take significantly worse performance on the other fronts just to not have to try to measure it... which is how you end up with full precision deployments of a 405b parameter model: https://openrouter.ai/meta-llama/llama-3.1-405b-instruct/pro...
When people are paying multiples more for compute to side-step a problem, language and technology that allows you to erase it from the equation is valid.
Some have the capability to figure it out and can do it for both full-precision and quantized models. Most don't and cannot.
The big takeaway, in my opinion, is that their technique for LUTs etc. could also be applied to lossy quants. Say maybe you get 5-bit accuracy in the size of 4-bit?
I don't know, but maybe? Also, their two-stage design might make current quantization kernel designs better.
And, maybe the methods stack for those willing to trade both costs for the smallest representation.
Somewhere between 8 and 192 cores I'm sure there's enough AVX512 to get the job done. And we've managed to reinvent Intel's Larrabee / Knights concept.
Sadly, the highly optimized AVX512 kernels of llamafile don't support these exotic floats yet as far as I know.
Yes, energy efficiency per query will be terrible compared to a hyperscaler. However privacy will be perfect. Flexibility will be higher than other options - as running on the CPU is almost always possible. Even with new algorithms and experimental models.
Also, when we ran experiments for streaming with the current kernels, we were a median of ~1.3x slower at inference.
> achieving near information-optimal compression without any loss of precision
So perhaps it's more "lossless" as in it didn't lose perplexity/benchmark performance?
In my mind, lossless means precisely zero bits lost along the way.
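i.e., the test is bit-for-bit round-trip equality, not benchmark parity. A minimal sketch of that check (zlib here is just a stand-in for whatever lossless codec is actually used):

```python
import numpy as np
import zlib  # stand-in codec; DFloat11/dietgpu would do this on the GPU

# bf16 bit patterns, obtained by taking the top 16 bits of float32 values.
w = (np.random.randn(1 << 16).astype(np.float32).view(np.uint32) >> 16).astype(np.uint16)

blob = zlib.compress(w.tobytes())
w_roundtrip = np.frombuffer(zlib.decompress(blob), dtype=np.uint16)

# "Lossless" means this holds for every weight, bit for bit, not that perplexity
# or benchmark scores happen to come out unchanged.
assert np.array_equal(w, w_roundtrip)
```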