1x MI300x has 192GB HBM3.
1x MI325x has 256GB HBM3e.
They cost less, you can fit more into a rack, and you can buy/deploy at least the 300s today and the 325s early next year. AMD and library software performance for AI is improving daily [0]. I'm still trying to wrap my head around how these companies think they are going to do well in this market without more memory.
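For scale, a rough back-of-the-envelope (my own assumptions, not from the comment above: FP16 weights at 2 bytes/parameter, KV cache and activations ignored):

```python
# Back-of-the-envelope: does a 70B-parameter model fit on one accelerator?
# Assumptions (mine): FP16 weights, 2 bytes per parameter; KV cache and
# activation memory ignored.
params = 70e9
weights_gb = params * 2 / 1e9  # ~140 GB

for name, hbm_gb in [("MI300x", 192), ("MI325x", 256)]:
    print(f"{name}: {hbm_gb} GB HBM, ~{weights_gb:.0f} GB of FP16 weights, "
          f"~{hbm_gb - weights_gb:.0f} GB left for KV cache and activations")
```

So a whole 70B model plus working memory fits on a single card, which is the point about memory mattering.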
Cerebras and Groq provide the fastest (by an order of magnitude) inference. This is very useful for certain workflows, which require low-latency feedback: audio chat with LLM, robotics, etc.
Outside that narrow niche, AMD stuff seems to be the only contender to NVIDIA, at the moment.
Only on smaller models; their numbers in the article are all for 70B.
Those numbers also need to be adjusted for the comparable amounts of capex+opex costs. If the costs are so high that they have to subsidize the usage/results, then they are just going to run out of money, fast.
No, they are 5x-10x faster for all the model sizes (because it's all just running from SRAM and they have more of it than NVIDIA/AMD), even though they benchmarked just up to 70B.
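A crude, bandwidth-bound sketch of why weights-in-SRAM changes single-stream decode speed. The numbers are my assumptions, not from this thread: 8-bit weights, every weight byte read once per generated token, ~3.3 TB/s for an HBM3-class GPU, and the ~21 PB/s aggregate SRAM bandwidth Cerebras advertises for the WSE-3. Treat both outputs as loose upper bounds; the ratio is the point.

```python
# Crude roofline: single-stream decode is roughly memory-bandwidth-bound, so
# tokens/s <= bandwidth / bytes read per token (~= weight bytes).
# Bandwidths are assumptions/vendor figures, not measurements.
weight_bytes = 70e9  # 70B params at 8-bit

for name, bw_bytes_per_s in [
    ("HBM3-class GPU (~3.3 TB/s)", 3.3e12),
    ("wafer-scale SRAM (~21 PB/s, vendor figure)", 21e15),
]:
    print(f"{name}: upper bound ~{bw_bytes_per_s / weight_bytes:,.0f} tokens/s per stream")
```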
> Those numbers also need to be adjusted for the comparable amounts of capex+opex costs. If the costs are so high that they have to subsidize the usage/results, then they are just going to run out of money, fast.
True. Although, for some workloads, fast enough inference is a strict prerequisite and GPUs just don't cut it.
They'll probably still have to do advanced packaging, putting HBM on top, to save things.
They could maybe enable some cool real-time inference applications, like a VR Sora, but that doesn't seem like much of a product market for the cost yet.
Maybe something heavy on inference iteration, like an o1-style model that trades training time for more inference, used to process earnings reports fastest, or some zero-sum latency war like that, will turn out to be a viable market. A real-time use case that might be viable with Cerebras first is flexible robotics in ad hoc, latency-sensitive environments, maybe warfare.
If models keep lasting ~year timescales, could we ever see people going with ROM chips for the weights instead of memory? Have density and speed kept up there? Lots of stuff uses identical elements to help make the masks more cheaply, so I don't think you could use something like EUV for a ROM where every few um^2 of die is distinct.
This is where the interesting wafer-scale packaging TSMC does for the Dojo D1 supercomputer comes in. Cerebras has demonstrated what can be a superior process for inter-element bandwidth, because connections can be denser than they are with an interposer, but the ability to combine elements from different processes is also important, and that is used on the D1 slab. Stacking HBM modules on top of a Cerebras wafer might help with that. I'm sure the smart people there are not sleeping on these ideas.
For ultra-low-latency uses such as robotics or military applications, I believe a more integrated approach similar to IBM's Telum processors is better: putting the inference accelerator on the same die as the CPUs gives them that, and it is also much smaller than a Cerebras wafer (and its cooling).
Gene Amdahl would have loved to see them.
Before ROM, there's a step where HBM for weights is replaced with Flash or Optane (but still high bandwidth, on top of the chip) and KV cache lives in SRAM - for small batch sizes, that would actually be decently cheap. In this case, even if weights change weekly, it's not a big deal at all.
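A rough sizing sketch for that split. The model shape is my assumption (a Llama-3-70B-like config: 80 layers, 8 KV heads via GQA, head dim 128) with FP16 KV entries and an 8k context:

```python
# KV-cache sizing for "weights in Flash/Optane, KV cache in SRAM".
# Model shape is an assumption (Llama-3-70B-like): 80 layers, 8 KV heads, head dim 128.
layers, kv_heads, head_dim, bytes_per_val = 80, 8, 128, 2  # FP16 K/V

kv_per_token = 2 * layers * kv_heads * head_dim * bytes_per_val  # K and V
context = 8192
per_seq_gb = kv_per_token * context / 1e9

sram_gb = 44  # CS-3 SRAM figure mentioned later in the thread
print(f"~{kv_per_token / 1e6:.2f} MB of KV cache per token, "
      f"~{per_seq_gb:.1f} GB per 8k-token sequence, "
      f"~{int(sram_gb * 1e9 // (kv_per_token * context))} such sequences in {sram_gb} GB of SRAM")
```

That works out to roughly 2.7 GB of KV cache per 8k-token sequence, so small batches really do fit in on-wafer SRAM.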
It takes 4 nodes, or $6.24M, to serve one 70B model. It is unclear how many requests they can serve concurrently at that price; they only report token throughput.
A cluster of 128 MI300x, 4 racks total, has a combined 24,576GB and can serve a whole ton of models and users; it is in the ~$5M range if you don't go big on networking/disk/RAM (which you don't need for inference anyway).
While speed might be an issue here, I don't think people are going to be able to justify the price for the speed (always the tradeoff) unless they can get their costs down significantly.
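Putting the two comments' numbers side by side (dollar figures from this thread, treated as rough list-price-style estimates; the ~140 GB per 70B FP16 copy is my own assumption):

```python
# Comparing the two deployments sketched above; dollar figures from the thread.
cerebras_cost = 6.24e6                   # 4 CS-3 nodes serving one 70B model
mi300x_count, mi300x_cost = 128, 5.0e6   # ~4 racks, light on networking/disk/RAM

total_hbm_gb = mi300x_count * 192        # 24,576 GB of HBM
print(f"Cerebras: ${cerebras_cost/1e6:.2f}M for one 70B model (concurrency unreported)")
print(f"MI300x:   ${mi300x_cost/1e6:.1f}M for {total_hbm_gb:,} GB of HBM; a ~140 GB "
      f"70B FP16 copy fits on each of the {mi300x_count} GPUs with ~52 GB spare for KV cache")
```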
https://cloud.google.com/blog/products/compute/updates-to-ai...
From https://cloud.google.com/tpu/pricing and https://cloud.google.com/vertex-ai/pricing#prediction-prices (search for ct5lp-hightpu-8t on the page) the cost for that appears to be $11.04/hr which is just under $100k for a year. Or half that on a 3-year commit.
That seems like a better deal than millions for a few CS-3 nodes.
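Sanity-checking that rate (simple arithmetic on the $11.04/hr figure quoted above):

```python
# On-demand ct5lp-hightpu-8t rate quoted above, annualized.
hourly = 11.04
yearly = hourly * 24 * 365
print(f"${hourly}/hr -> ~${yearly:,.0f}/yr on demand, ~${yearly/2:,.0f}/yr at the 3-year-commit rate")
```

That comes to roughly $96.7k/yr on demand, i.e. "just under $100k".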
And they've just announced the v6 TPU:
Compared to TPU v5e, Trillium delivers:
Over 4x improvement in training performance
Up to 3x increase in inference throughput
A 67% increase in energy efficiency
An impressive 4.7x increase in peak compute performance per chip
Double the High Bandwidth Memory (HBM) capacity
Double the Interchip Interconnect (ICI) bandwidth
https://cloud.google.com/blog/products/compute/trillium-sixt...

CS-1 had 18GB of SRAM, CS-2 extended it to 40GB, and CS-3 has 44GB of SRAM. None of these is sufficient to run inference for Llama 70B, much less for even larger models.
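The shortfall in numbers (assuming FP16 weights at 2 bytes/parameter, which is my assumption, and ignoring KV cache):

```python
# Why the on-wafer SRAM alone can't hold a 70B model's weights.
import math

weights_gb = 70e9 * 2 / 1e9  # ~140 GB of FP16 weights (assumption: 2 bytes/param)
for system, sram_gb in [("CS-1", 18), ("CS-2", 40), ("CS-3", 44)]:
    wafers = math.ceil(weights_gb / sram_gb)
    print(f"{system}: {sram_gb} GB SRAM -> needs ~{wafers} wafers for the weights alone")
```

Four CS-3 wafers for a 70B model lines up with the 4-node figure mentioned upthread.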
Chain-of-thought type operations are in this "niche".
Also anything where the value is in the follow-up chat, not the one-shot response.
They also have a very limited use case... if things ever shift away from LLMs and into another form of engineering that their hardware does not support, what are they going to do? Just keep deploying hardware?
Slippery slope.
As always, it is about TCO, not who can make the biggest monster chip.
"Trains" has no other sensible interpretation in the context of LLMs. My impression was that they trained models that were better than the ones trained on GPUs, presumably because they trained faster and managed to train for longer than Meta, but that interpretation was far from what the content supported.
Also interesting to see the omission of deepinfra from the price table, presumably because it would be cheaper than Cerebras, though I didn't even bother to check at that point because I hate these cheap clickbaity pieces that attempt to enrich some player at the cost of everyone's time or money.
Good luck with their IPO. We need competition, but we don't need confusion.
^ the entirety of it