Groq press release: https://groq.com/newsroom/groq-and-nvidia-enter-non-exclusiv...

> Today, Groq announced that it has entered into a non-exclusive licensing agreement with Nvidia for Groq’s inference technology. The agreement reflects a shared focus on expanding access to high-performance, low cost inference.

> As part of this agreement, Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of the Groq team will join Nvidia to help advance and scale the licensed technology.

> Groq will continue to operate as an independent company with Simon Edwards stepping into the role of Chief Executive Officer.

> GroqCloud will continue to operate without interruption.

Another example of the growing trend of buying out key parts of a company to avoid any actual acquisition?

I wonder if equity-holding employees get anything from the deal, or indeed whether all the investors will see a return from this?

  • jbkkd
  • ·
  • 1 day ago
  • ·
  • [ - ]
I have a friend who worked in a company that got "not acquired" in a similar deal.

She didn't see a dime out of it, and was let go (together with a big chunk of people) within 6 months.

As this gets more common, I think it will eventually lead to startups having a hard time attracting talent with lucrative equity compensation. It will be interesting to see how long it takes for this to catch on among employees, but I already wouldn't take a position at a startup where a significant part of the compensation is equity. The chances that it pays out were slim anyway, but now, even when you are successful, no one will stop some megacorp from just buying the product and the key employees and leaving everyone else with their stake in the dust.
At my last job search I didn’t consider any equity-based startups seriously because of this trend. It was already such a tenuous path as it stood, but now that the norm is established it seems like it’s become impossible for a rank-and-file employee to get paid out.

I’m more curious how angel investors are being treated in these exits. If _they_ dry up the whole pipeline goes away

Investors with enough into the deal to fight it in court get enough to not fight it. Key employees needed by the 'not acquirer' get compensation sufficient to retain them, although increasingly much of this is under a deferred vesting arrangement to ensure they stay with the 'not acquirer'.

Non-essential employees and small investors without the incentive or pockets to fund a legal fight get offered as little as possible. This structure also provides lots of flexibility to the 'not acquirer' when it comes to paying off existing debts, leases, contracts, etc.

Basically, this is the end of being an early employee or small angel investor potentially resulting in a lucrative payoff. You have to remain central and 'key' all the way through the 'not acquisition'. I expect smaller early-stage investors will start demanding special terms to guarantee a certain level of payout in a 'not acquisition'.

I also expect this to create some very unfortunate situations, because an asset sale (as they used to be done) could be a useful and appropriate mechanism to preserve the products and some jobs of a failing (but not yet fully failed) company - which was better for customers and some employees than a complete smoking crater.

that is a great point. it’s one thing to occasionally rugpull employees, who are still at least paid for their services and robbed only of their EV on their options (i say “only”, though i find this increasingly common practice to be absolutely deplorable, to be clear). but how could investors possibly be happy with this becoming the new normal? will it get to the point where these sorts of faux acquisitions also involve paying out investors and only shafting employees? at that point you are only really even getting like a 20% discount over acquiring the company outright, which hardly seems worth it. which is to say that your point is very astute: the investors are definitely the linchpin here.
The company now has $20B of cash(?) on its books; it can pay dividends to its shareholders (investors) and they get their payout. The company can go down the drain afterwards. If it can still make money with its remaining assets, that's just a nice small bonus.

So the only ones getting shafted are the employees.

I suppose the firm could simply roll the 20 billion into a long term asset. It’s not a big deal to anyone except employees if the asset never pays out. Departed employees would not be privy to how the money is eventually exited from the now shell company 20 years hence.
20% of 20b isn’t exactly loose change, even for a megacorp.
> will it get to the point where these sorts of faux acquisitions also involve paying out investors and only shafting employees?

Yes, correct

It's already happening. You need a good lawyer to read equity terms to make sure you aren't going to get rug pulled by a founder later on. Even so I still consider equity to generally be worth zero unless the founder is someone I trust fully, since there are so many ways for them to legally not give you anything.
>As this gets more common

Boy, it would be so nice if a major correction were to drain these massive companies' warchests so that it doesn't become more common.

The equity in almost all startups has already been a bait and switch for more than a decade. Most will refuse to tell you what % of the equity your shares represent anyway, and when they do, it's tiny amounts, and in the end half the time it's up to the acquiring entity just how seriously they end up taking it. If you landed at an entity like Google (as I did from the place I was working 15+ years ago) you could be treated well. Elsewhere, not so great.

During boom times it made more financial sense to go straight to a FAANG if you could.

If you ask me there has been a major shift into trying to make "startups" into just another form of corporation. It started years ago when I started seeing things like "Founder Engineer - 0.5% equity" in jobs here.
  • oblio
  • ·
  • 23 hours ago
  • ·
  • [ - ]
With the job market being in the state it is in, there will always be people wanting to take their chances.

Let's face it and accept that the golden days of people working at tech startups (and soon at large companies) are over.

RIP 1980 - 2023.

  • oblio
  • ·
  • 16 hours ago
  • ·
  • [ - ]
LOL@Americans waking up.

I guess you'll have to face the music at some point.

Looking at GDP, the golden age is still right here.

https://data.worldbank.org/indicator/NY.GDP.MKTP.CD?location...

/s

Different kind of gold raining down on us now though
You should have just bought gold
Naa, just wait, it's dribbling down.
it always dribbles down after it dribbles up and then it dribbles down again…
Though (very) unevenly distributed
Need this plotted against cumulative American debt lol
Can you say more about why mechanically she didn't get anything?

If you exercise your options you have real stock in the company, so I don't see how you can get shafted here.

Did investors do some sort of dividend cash out before employees were able to exercise their options? (Obviously shady, but more about investors/leadership being unethical than the deal structure).

Would love to know more about how this played out.

Multiple share classes are the norm even before the new acquisition types we see here. It’s extremely common in an acquisition for employee shares to be worth nothing while investor and founder shares are paid out.

But these new “acquisitions” aren’t even that. They are not acquisitions at all. They just hire the talent directly with perhaps an ip rights agreement thrown in as a fig leaf.

I'm well aware of dual class shares, but preferences are typically 1x, and none of the deals were for less than the amount raised, so they're not relevant here.

The fact that these are not really acquisitions doesn't change the fact that Groq the entity now has $20b.

  • depr
  • ·
  • 14 hours ago
  • ·
  • [ - ]
Groq doesn't keep that money; it goes to the VCs. They claim the company is "pivoting", not "selling", and avoid the payout trigger.
Money can't just "go" somewhere, it needs a reason first, at least for book-keeping. I mean, the VCs can get their invested capital back, but on top of that, how would that money be transferred? $20B is a lot, and the VCs are surely not going to just write an invoice of $18B for consulting services.
There have been at least a half dozen of these deals in the past 1-2 years including Google “licensing” CharacterAI to pull their founders back into Google as valued employees.

In the deal mentioned above: my guess is that preferred-class shareholders and common shareholders got paid out, but the common shareholders had such a low payout that it rounded down to zero for most employees.

This can happen even in a regular acquisition because of the equity capital stack of who gets paid first. Investors typically require a 1x liquidation preference (they get their investment back first no matter what).

Liquidation preferences are typically 1x these days, so they only matter when companies are sold at fire sale prices where basically nobody is making any money.

The deals are all weird so it's hard to really know what's happening, but if Groq gets $20b, I don't see how common stockholders don't get paid.
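
To make the waterfall concrete, here's a toy 1x non-participating preference in Python. All the numbers are made up for illustration (real cap tables have multiple series, participation terms, option pools, and so on); the point is that a 1x preference only bites when the exit is near or below the total raised:

    # Toy waterfall for a 1x non-participating liquidation preference.
    # All figures below are hypothetical, purely for illustration.
    raised = 3.0e9           # total invested capital carrying a 1x preference
    exit_value = 20.0e9      # hypothetical deal proceeds
    pref_ownership = 0.60    # preferred holders' ownership if they convert to common

    # Preferred holders take the better of (a) their money back or (b) converting.
    as_preference = raised
    as_converted = pref_ownership * exit_value
    preferred_payout = max(as_preference, as_converted)
    common_payout = exit_value - preferred_payout

    print(f"preferred: ${preferred_payout / 1e9:.1f}B, common: ${common_payout / 1e9:.1f}B")
    # -> preferred: $12.0B, common: $8.0B; only if exit_value were near or below
    #    `raised` would the 1x preference leave common with nothing.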

Special dividend to priority class and retain the rest to grow the remaining sham company?

I've seen some discussion that paying out normal employees might make it look more like an acquisition on paper, which they may want to avoid for FTC reasons. I've also seen some discussion that this is a quid pro quo to the Trump family to get Nvidia back into China (Jr. bought in at the September financing round..).

Lots of speculation in general, including why nvda chose to spend 20bil on this.

Do you actually know this is what happened?

Dividends to only one class seems crazy. I would be kind of shocked if that was legal.

No, I have no visibility. I'm saying speculation is rampant is all.
I wonder if such deals will create employee lawsuits. I'd certainly be looking at legal options if I was one of the founding employees.
It should. Look at what happened at Windsurf when Google did something like this

https://news.ycombinator.com/item?id=44673296

>> one of the founding employees

If you were an employee, you were not a founder. A founding employee would be someone who explicitly "invested" time/money into a company without compensation. If you are also an employee earning a wage, you'd better have a written agreement stating what amount was "investment" and what amount was compensated wage.

Startups typically offer employees, particularly early employees, substantial equity compensation. If the employer is offering this compensation in bad faith, or otherwise preferring one equity holder over another without an explicit contract - then they are at the very least a crappy business partner. A founding engineer with a 2% stake could be missing out on 5-10 million of this transaction.

As an aside, most founders are paid during the entire project. It’s not hard to raise a preseed round to get yourself paid for 6-24 months to work on an idea. If a founder chose to bootstrap - that’s all fine, but let’s not pretend that the employees aren’t taking massive career risks vs “standard” employers.

> If the employer is offering this compensation in bad faith, or otherwise preferring one equity holder over another without an explicit contract - then they are at the very least a crappy business partner.

I don’t know about you, but every company I’ve ever worked at is a shitty business partner if that’s the metric. The standard has always been I get what we agreed to if I was lucky, and otherwise I got full “I’ve altered the deal, pray I don’t alter further” and dared you to defend your rights.

I actually have called their bluff a few times and gotten some money out of it, but it was always a year long event or more to resolution and involved risking even more money on lawyers.

  • crote
  • ·
  • 12 hours ago
  • ·
  • [ - ]
Just one slight problem: people need to eat, and food costs money.

Your startup won't succeed when its founder starves to death. That's why the founder will usually get a bunch of cash during investment rounds [0]: they can't focus on the company if they are constantly worried about cash in their personal life. Unless the founder is already independently wealthy, it is a guarantee that they'll be employed by the company and paid a living wage. Heck, in some countries this is even legally required!

According to your logic, no successful startup will ever have a founder, as any form of pay instantly degrades them to regular employee, and any kind of risk taken and below-market salary is completely irrelevant. Never mind the fact that they are taking home a minimum-wage salary while working 100 hours a week: they are earning a wage, so they can't possibly be a founder.

So if this logic already breaks down for the founder, why couldn't it also break down for early employees whose compensation is mostly in stock options? How is their situation any different from the founder's?

[0]: https://www.stefantheard.com/silicon-valleys-best-kept-secre...

>> one of the founding employees

If you were an employee, you were not a founder. A founding employee would be someone who explicitly "invested" uncompensated time/money into a company while also working as an employee. If you are also an employee earning a wage, you'd better have a written agreement stating what amount was "investment" and what amount was compensated wage.

  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
The employees are getting paid twice.
The employees are getting paid zero times.
Do they make a salary?
  • crote
  • ·
  • 1 day ago
  • ·
  • [ - ]
Startups often pay a shitty salary in exchange for a decent chunk of stock options, with the implicit promise that you'll make bank if you work hard and make the company successful.

Getting screwed out of your payout by such a totally-not-an-acquisition is wage theft. It's like promising a sales-related bonus at the beginning of the year, and then in December changing the metric to "AI-related sales to the CEO's golf buddies".

Startup options are worthless. The only value most people will ever extract from a startup is the experience they had working there, and the salary that was put in their bank account.

I understand that a lot of inexperienced people (like in this thread) think they're going to get rich though.

No, it is not "wage theft" to not get rich when the company exits (by whatever means).

  • crote
  • ·
  • 13 hours ago
  • ·
  • [ - ]
Nobody expects to get rich off working for a startup. The risks are massive, and very few exit with billion-dollar deals. This is taken into account by the people who work there and accept those stock options: 99.xx% chance of being worth essentially zero, but a tiny yet nonzero chance of being able to retire early when it does a billion-dollar exit. It's a lottery ticket, not a promise - every startup employee understands that.

Groq is now changing the deal after the fact by making those stock options worthless 100% of the time. It's like you participate in a lottery, and then the organizer decides to just not do a draw and keep all the proceeds for themselves. Sorry, but that's theft.

Don't intend to pay out in the unlikely event that you hit it big? Then don't offer stock options to your employees and pay market-rate salaries - plus of course a decent premium for the fact that (unlike an established company) your startup can go bust at any time and doesn't offer stable employment. You can't have it both ways.

Startup options are usually worthless, yes, because very few startups end up getting to a position where the options are worth something.

> No, it is not "wage theft" to not get rich when the company exits

I don't think anyone in this thread thinks they're gonna get rich by working for a startup. There's a hope that they will, that's why they are working, but there's no expectation. Maybe there's an expectation of getting a nice tidy sum after an exit (in the 5 or 6 figures) but not in the 7 or 8 figures, at least not if they're just employees and not founders.

What's being discussed is a startup exiting for billions of dollars and the employees with equity seeing zero of it.

Working for a startup usually means lower wages and longer hours, for the chance at striking it rich if the company succeeds. If employees don't see anything when the company succeeds, there's literally no upside to working for a startup.

I recall having to sit through many trainings on how to value employee equity. My experience is that most startup employers try to BS what it means to convince people to value their equity at a significantly higher price than they otherwise should.

If the employer is explicitly making the employee options worthless, then they should be obligated to disclose this. Otherwise it’s trivial to engineer a corporate entity which pays the employees while “licensing” the technology from an IP holding firm. Later they can simply sell the IP holding firm without owing employees a dollar.

It is absolutely wage theft. Equity is part of the deal. Abusing some legal loophole to deprive employees of ownership and liquidity is not okay.
  • tgma
  • ·
  • 22 hours ago
  • ·
  • [ - ]
The implicit promise is only partially true. Very rarely can you find a proven talent who will actually forgo significant salary. Often when that happens the person is close to the founders, will have a significant role in shaping the startup, and will get quasi-acquired too.

This promise may have been more true before the 2010s, when public companies were not paying as much in liquid cash and private companies were not valued so aggressively. The fact is most employees take the startup offer because they don't actually have a liquid offer that's super competitive at that moment, or they are just kind of bored and taking a break from a corporate job that does not give them too many responsibilities, i.e. they are compensated via the title, not just the promise of making bank.

That just means you’re pulling from the lower end of the talent pool. There is nothing wrong with this, but usually talent is correlated with outcome. Most hot startups which are going places are near impossible to get into even for folks with good offers.
If part of their remuneration is in shares, they have a legitimate interest in the value of those shares.
wdym?
  • wmf
  • ·
  • 18 hours ago
  • ·
  • [ - ]
They get a share of the $20B plus now they get to work for Nvidia.
  • exac
  • ·
  • 6 hours ago
  • ·
  • [ - ]
In my career I've seen startups "shut down" and lay off the NA team.

I've seen venture capital acquire startups for essentially nothing laying off the entire product team aside from one DevOps engineer to keep everything running. I've seen startups go public and have their shares plummet to zero before the rank-and-file employees could sell any shares (but of course the executives were able to cash out immediately). I've seen startups acquired for essentially nothing from the lead investor.

In none of these scenarios did any of the Engineers receive anything for their shares.

Yet every day people negotiate comp packages in which shares are treated as anything more than funny money.

If I had to guess I'd say investors get their returns but non exec employees mostly get screwed.
I was involved in an (obviously smaller) situation with an acquisition that went to a top consumer CPU maker (you can guess). The investors got nothing, as the buyout money was used to fund new pivots in the existing company. So no options or shares were monetized, and investors maintained their existing stake, which technically had the same value, just with most of that value temporarily all in cash. The only people to make out were the ones who went with the asset sale (retention bonus stuff) and the leadership that stayed (raises, etc.)
Is it related to the FTC’s “anti-monopoly” stance under Khan? It has continued under the Trump admin, since her successor supposedly approved of her work.
It’s yet another way for investors to screw early employees whose face doesn’t fit.
So it is not structured as an acquisition, to avoid antitrust scrutiny, but effectively it probably is one.
Yes I'm sure that "non exclusive" partnership is exactly that, wink wink!
Indeed, as justincormack comments: ”It is not structured as an outright acquisition to avoid US Gov't anti trust scrutiny, but effectively it probably is”. “Non-exclusive” ? Ummmm, yeah, right, sure. You can probably bet there is an private understanding that Groq will no longer offer it's “top of the line” best technology to competitors of Nvidia. Some may see this as a clever, “slight of the hand” attempt for Nvidia to maintain it's perceived dominance & lead in GPU-TPU development. “Non-exclusive” does not in any form or fashion spell out that all Nvidia's competitors can and will obtain the very top, cutting edge Groq technology as Nvidia will obtain . . .
  • biggc
  • ·
  • 1 day ago
  • ·
  • [ - ]
What generated this comment?
Probably a good old fashioned Mk 1 Human Brain given the use of "slight of hand"
Grok... with a k
GPT1 Nano
  • deaux
  • ·
  • 1 day ago
  • ·
  • [ - ]
Looks like a normal comment to me, what makes you think otherwise? It has pretty much no hallmarks of being generated, and plenty that point towards the opposite.
Quoting the user it’s replying to in third person, and then hallucinating words inside the quote.

When I have asked LLMs to read/dictate a linked text, the output is usually not a clean read but something reinterpreted with its own style.

Starting the comment pointing out the name of the user you're replying to, and quoting the exact comment you're replying to, does sound really strange.
I think it's intended as a response to a sibling comment.
> As part of this agreement, Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of the Groq team will join Nvidia to help advance and scale the licensed technology.

A really strange agreement where top executives of a company "join" another company for the benefit of the other company.

If it quacks like a duck...

This is exactly what Google did with Windsurf and similar to what Meta did with Scale AI. Seems like a rising trend.
Remember ex3dfx.com, set up by former employees?

This is exactly what nvidia tried to do with 3dfx 25 years ago. They have experience in screwing people over!

In the current political climate you only need to slightly pretend to care about the appearance of being a monopoly, just enough to give plausible deniability in a sound bite. Anything else isn't worth it.
This seems a lot like something where the acquirer avoids paying for equity. With key leaders gone what do employees of Groq get? Their company isn’t being acquired really so they just stay illiquid?
  • ·
  • 1 day ago
  • ·
  • [ - ]
Are they buying them to try and slow down open source models and protect the massive amounts of money they make from OpenAI, Anthropic, Meta etc.?

It's quite obvious that open source models are catching up to closed source models very fast; they're about 3-4 months behind right now. And yeah, they are trained on Nvidia chips, but as the open source models become more usable and closer to closed source models, they will eat into Nvidia's profit, as these companies aren't spending tens of billions of dollars on chips to train and run inference. These are smaller models trained on fewer GPUs, and they are performing as well as the previous OpenAI and Anthropic models.

So obviously open source models are a direct threat to Nvidia, and the only thing open source models struggle with is scaling inference, and this is where Groq and Cerebras come into the picture, as they provide the fastest inference for open source models, which makes them even more usable than SOTA models.

Maybe I'm way off on this.

Shy of an algo breakthrough, open source isn't going to catch up with SOTA; their main trick for model improvement is distilling the SOTA models. That's why they have perpetually been "right behind".
They don't need to catch up. They just need to be good enough and fast as fuck. The vast majority of useful LLM tasks have nothing to do with how smart the models are.

GPT-5 models have been the most useless models out of any model released this year despite being SOTA, and it's because they're slow as fuck.

For coding I don’t use any of the previous gen models anymore.

Ideally I would have both fast and SOTA; if I would have to pick one I’d go with SOTA.

There's a report by OpenRouter on what folks tend to pay for; it generally is SOTA in the coding domain. Folks are still paying a premium for them today.

There is a question of whether there is a bar where coding models are “good enough”; for myself, I always want smarter / SOTA.

  • wyre
  • ·
  • 1 day ago
  • ·
  • [ - ]
FWIW coding is one of the largest usages for LLMs where SOTA quality matters.

I think the bar for when coding models are "good enough" will be a tradeoff between performance and price. I could be using Cerebras Code and saving $50 a month, but Opus 4.5 is fast enough, and I value the peace of mind of knowing its quality is higher than Cerebras' open source models enough to spend the extra money. It might take a while for this gap to close, and what is considered "good enough" will be different for every developer, but certainly this gap cannot exist forever.

> just need to be good enough and fast as fuck

Hard disagree. There are very few scenarios where I'd pick speed (quantity) over intelligence (quality) for anything remotely to do with building systems.

If you thought a human working on something will benefit from being "agile" (building fast, shipping quickly, iterating, getting feedback, improving), why should it be any different from AI models?

Implicit in your claim are specific assumptions about how expensive/untenable it is to build systemic guardrails and human feedback, and a specific cost/benefit ratio for approximate goal attainment instead of perfect goal attainment. Rest assured that there is a whole portfolio of situations where different design points make the most sense.

  • nkmnz
  • ·
  • 1 day ago
  • ·
  • [ - ]
> why should it be any different from AI models?

1. Law of diminishing returns: AI is already much, much faster at many tasks than humans, especially at spitting out text, so becoming even faster doesn’t always make that much of a difference.

2. Theory of constraints: the throughput of a system is mostly limited by the „weakest link“ or slowest part, which might not be the LLM but some human-in-the-loop, and that bottleneck might be reduced only by smarter AI, not by faster AI.

3. Intelligence is an emergent property of a system, not a property of its parts. In other words: intelligent behaviour is created through interactions. More powerful LLMs enable new levels of interaction that are just not available with less capable models. You don’t want to bring a knife, not even the quickest one in town, to a massive war of nukes.

I agree with you for many use cases, but for the use case I'm focused on (Voice AI) speed is absolutely everything. Every millisecond counts for voice, and most voice use cases don't require anything close to "deep thinking". E.g., for inbound customer support use cases, we really just want the voice agent to be fast and follow the SOP.
  • nkmnz
  • ·
  • 1 day ago
  • ·
  • [ - ]
If you have an SOP, most of the decision logic can be encoded and strictly enforced. There is zero intelligence involved in this process; it’s just if/else. The key part is understanding the customer request and mapping it to the cases encoded in the SOP, and for that part, intelligence is absolutely required, or your customers will not feel „supported“ at all and would be better off with a simple form.
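
For what it's worth, a minimal sketch of that split, with made-up intent labels and SOP actions; the single model call is only asked to map free text onto one of the SOP's cases, and the SOP itself stays as plain, auditable if/else:

    # Sketch: the LLM only classifies the request; the SOP logic stays deterministic.
    # Intent labels and SOP actions below are hypothetical.
    from enum import Enum

    class Intent(Enum):
        REFUND = "refund"
        CANCEL = "cancel"
        UNKNOWN = "unknown"

    def classify_intent(utterance: str) -> Intent:
        # Placeholder for the single model call; in practice you'd ask the LLM to
        # pick exactly one label from the enum (structured output / function calling).
        text = utterance.lower()
        if "refund" in text:
            return Intent.REFUND
        if "cancel" in text:
            return Intent.CANCEL
        return Intent.UNKNOWN

    def handle(utterance: str) -> str:
        intent = classify_intent(utterance)
        if intent is Intent.REFUND:
            return "Starting the refund workflow (SOP case 1)."
        if intent is Intent.CANCEL:
            return "Confirming the cancellation (SOP case 2)."
        # Anything the SOP doesn't cover goes to a human, not a guess.
        return "Transferring you to an agent."

    print(handle("I want a refund for last month"))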
As a customer when confronted with such a system I hang up and never ever do business with that company again. Regardless of polish, they're useless.
  • nkmnz
  • ·
  • 18 hours ago
  • ·
  • [ - ]
What do you mean by "such a system"? One that uses AI to funnel your natural language request into their system of SOPs? Or one that uses SOPs to handle cases in general? SOPs are great; they drastically reduce errors, since the total error is the square root of the sum of squares of random error and bias. While bias still occurs, the random error can and should be reduced by SOPs whenever possible. The problem is that SOPs can be really bad: "Wait, I will speak to my manager" -> probably a bad SOP. "Wait, I will get my manager so that you can speak to them" -> might be a better SOP, depending on the circumstances.
It never works. You always just get the digital equivalent of a runaround, and there simply isn't a human in the loop to take over when the AI botches it (again). So I gave up trying; this crap should not be deployed unless it works at least as well as a person. You can't force people to put up with junk implementations of otherwise good ideas in the hope that one day you'll get it right. Customer service should be a service, because on the other end of the line is someone with a very high probability of being already dissatisfied with your company and/or your product. For me this is not negotiable: if my time is less valuable to you, the company, than it is to actually put someone on to help, then my money will go somewhere else.
  • nkmnz
  • ·
  • 17 hours ago
  • ·
  • [ - ]
I'm still not sure if you're speaking of SOP in general or AI-interfaces to them. Why don't you answer that simple question before ranting on?
Speed is great for UI iteration or any case where a human must be in the loop.
As long as the faster tech is reliable and I understand its quirks, I can work with it.
> They don't need to catch up. They just need to be good enough

The current SOTA models are impressive but still far from what I’d consider good enough to not be a constant exercise in frustration. When the SOTA models still have a long way to go, the open weights models have an even bigger gap to close.

I'd prefer a 30 minute response from GPT-5 over a 10 minute Response from {Claude/Google} <whatever their SOTA model is> (yes, even gemini 3)

Reason is: while these models look promising in benchmarks and seem very capable at an affordable price, I *strongly* felt that OpenAI models perform better most of the time. I had to clean up a Gemini mess or a Claude mess after vibe coding too often. OpenAI models are just much more reliable with large-scale tasks, organizing, chomping through tasks one by one, etc. That takes time, but the results are 100% worth it.

  • nl
  • ·
  • 1 day ago
  • ·
  • [ - ]
GPT 5 Codex is great - the best coding model around except maybe for Opus.

I'd like more speed but prefer more quality than more speed.

I get GPT 5.2 responses on copilot faster than for any other model, almost instantly. Are you sure they’re slow as fuck?
Confused. Is ‘fuck’ fast or slow? Or both at the same time? Is there a sort of quantum superposition of fuck?
It's an intensifier
Wasn't that supposed to be 'ass'
Then what would a double intensifier look like?
  • ·
  • 1 day ago
  • ·
  • [ - ]
well, it's not slow as fuck! it's quick as lightning and speedy as hell
This. You can distill a foundation model into open source. The Chinese will be doing this for us for a long time.

We should be glad that the foundation model companies are stuck running on treadmills. Runaway success would be bad for everyone else in the market.

Let them sweat.

Bullseye.
We trust in our lord and savior China and Zuck to keep the peasants fed.
> their main trick for model improvement is distilling the SOTA models

Could you elaborate? How is this done and what does this mean?

I am by no means an expert, but I think it is a process that allows training LLMs from other LLMs without needing as much compute or nearly as much data as training from scratch. I think this was the thing deepseek pioneered. Don’t quote me on any of that though.
No, distillation is far older than deepseek. Deepseek was impressive because of algorithmic improvements that allowed them to train a model of that size with vastly less compute than anyone expected, even using distillation.

I also haven’t seen any hard data on how much they use distillation-like techniques. They for sure used a bunch of synthetically generated data to get better at reasoning, something that is now commonplace.
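
For the person asking above how this is done: in the classic (Hinton-style) form, a smaller student model is trained to match a larger teacher's output distribution; when the teacher is only reachable through an API, "distillation" in practice usually just means fine-tuning on teacher-generated text. A minimal sketch of the classic logit version, assuming a PyTorch setup where the student and teacher share a vocabulary:

    # Minimal sketch of logit-level knowledge distillation (assumes PyTorch).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Soften both distributions with temperature T and match them via KL divergence.
        soft_teacher = F.softmax(teacher_logits / T, dim=-1)
        log_soft_student = F.log_softmax(student_logits / T, dim=-1)
        kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
        # Mix in ordinary cross-entropy against the hard labels.
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1 - alpha) * ce

    # Tiny smoke test with random tensors standing in for real model outputs.
    student = torch.randn(8, 32000)   # (batch, vocab)
    teacher = torch.randn(8, 32000)
    labels = torch.randint(0, 32000, (8,))
    print(distillation_loss(student, teacher, labels))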

Thanks it seems I conflated.
Yes. They bounced millions of queries off of ChatGPT to teach/form/train their DeepSeek model. This bot-like querying was the "distillation."
They definitely didn't. They demonstrated their stuff long before OAI and the models were nothing like each other.
Why would OpenAI allow someone to do that?
They didn't, but how do you stop it? Presuming the scale that OpenAI is running at?
Too bad, so sad for the Mister Krabs secret recipe-pilled labs. Shy of something fundamental changing, it will always be possible to make a distillation that is 98% as good as a frontier model for ~1% of the cost of training the SOTA model. Some technology just wants to be free :)
>Are they buying them to try and slow down open source models

The opposite, I think.

Why do you think that local models are a direct threat to Nvidia?

Why would Nvidia let a few of their large customers have more leverage by not diversifying to consumers? Openai decided to eat into Nvidia's manufacturing supply by buying DRAM; that's concretely threatening behavior from one of Nvidia's larger customers.

If Groq sells technology that allows for local models to be used better, why would that /not/ be a profit source for Nvidia to incorporate? Nvidia owes a lot of its success to the consumer market. This is a pattern in the history of computer tech development. Intel forgot this. AMD knows this. See where everyone is now.

Besides, there are going to be more Groqs in the future. Is it worth spending ~20B for each of them to continue to choke-hold the consumer market? Nvidia can afford to look further.

It'd be a lot harder to assume good faith if Openai ended up buying Groq. Maybe Nvidia knows this.

  • deaux
  • ·
  • 1 day ago
  • ·
  • [ - ]
> Besides, there are going to be more Groqs in the future.

And likely some of them are going to be in countries that won't let them sell out to Nvidia.

  • nl
  • ·
  • 1 day ago
  • ·
  • [ - ]
NVIDIA release some of the best open source models around.

Almost all open source models are trained and mostly run on NVIDIA hardware.

Open source is great for NVIDIA. They want more open source, not less.

Commoditize your complement is business 101.

Then why are they spending $20 billion to handicap an inference company that's giving open source models a major advantage over closed source models?
Realistically, Groq is a great solution but has near-impossible requirements for deployment. Just look at how many adapters you need to meet the memory requirements of a small LLM. SRAM is fast but small.

I would guess their interconnect technology is what NVIDIA wants. You need something like 75 adapters for an 8B-parameter model; they had some really interesting tech to make the accelerator-to-accelerator communication work and scale. They were able to do that well before NVL72, and they scale to hundreds of adapters, since large models require more adapters still.

We will know in a few months.
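
Rough back-of-the-envelope on that adapter count, assuming FP16 weights and the publicly cited ~230 MB of on-chip SRAM per Groq LPU (both are assumptions; the KV cache and activations push the real number higher):

    # Sanity check of the "~75 adapters for an 8B model" figure (all approximate).
    params = 8e9
    bytes_per_param = 2        # FP16
    sram_per_chip = 230e6      # ~230 MB of SRAM per chip (publicly cited, approximate)

    weight_bytes = params * bytes_per_param
    chips_for_weights = weight_bytes / sram_per_chip
    print(f"{weight_bytes / 1e9:.0f} GB of weights -> ~{chips_for_weights:.0f} chips just for the weights")
    # -> 16 GB of weights -> ~70 chips, before any KV cache or activations.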

> to handicap an inference company

That's a non-charitable interpretation of what happened. They are not "spending $20 billion to handicap Groq". They are handing Groq $20 billion to do whatever they want with it. Groq can take this money and build more chips, do more R&D, hire more people. $20 billion is truly a lot of money. It's quite hard to "handicap" someone by giving them $20 billion.

  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
Groq doesn't have any employees. They can't do R&D because there's no one to do it. The $20B goes to Groq's investors.
From the article:

  > Groq added that it will continue as an “independent company,” led by finance chief Simon Edwards as CEO. 
The $20B does not go to Groq's investors. It goes to Groq. You can say that Groq is owned by its investors, and this is the same thing, but it's not. In order for the money to go to the investors, Groq needs to disburse a dividend, or to buy back shares. There is no indication that this will happen. And what's more, the investors don't even need this to happen. I'm sure any investor that wants to sell their shares in Groq will now find plenty of buyers at a very advantageous price.
  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
Let's bet on this shit. Where's the Polymarket.
  • p1esk
  • ·
  • 1 day ago
  • ·
  • [ - ]
> spending $20 billion to handicap an inference company

Inference hardware company

  • nl
  • ·
  • 1 day ago
  • ·
  • [ - ]
> handicap

Your words.

Because it's very good tech for inference?

It doesn't even do training.

And most inference providers for open source models use NVIDIA, e.g. Fireworks, Baseten, TogetherAI etc.

Most NVIDIA sales go to training clusters. That is changing but it'd be an interesting strategy to differentiate the training and inference lines.

You still need hardware to run open source models. It might eat into OpenAI's profit, but I doubt it will eat into NVIDIA's.

If anything, the more companies are in the model-making business, the higher NVIDIA chip demand will be, at least until we get some proper competition. We badly need an open CUDA equivalent so that moving to the competition isn't a problem.

Nvidia's dream would be for everyone to buy a personal DGX H100 for private local inference. That's where open source could lead. Datacenters are much more efficient in their use of chips.
  • xnx
  • ·
  • 21 hours ago
  • ·
  • [ - ]
Exactly. Efficient use of their chips is the enemy of Nvidia.
  • dTal
  • ·
  • 19 hours ago
  • ·
  • [ - ]
Yes, you are way off, because Groq doesn't make open source models. Groq makes innovative AI accelerator chips that are significantly faster than Nvidia's.
> Groq makes innovative AI accelerator chips that are significantly faster than Nvidia's.

Yeah, I'm disappointed by this; it's clearly to move them out of the market. Still, that leaves a vacuum for someone else to fill. I was extremely impressed by Groq last I messed about with it; the inference speed was bonkers.

  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
If it's that good Nvidia can just keep selling it.
more like now Nvidia wants to release their own ASIC to combat google
Umm... no one tell them, okay?
For inference, but yes. Many hundreds of tokens per second of output is the norm, in my experience. I don't recall the prompt processing figures but I think it was somewhere in the low hundreds of tokens per second (so slightly slower than inference).
Nvidia just released their Nemotron models, and in my testing, they are the best performing models on low-end consumer hardware in both terms of speed and accuracy.
  • ymck
  • ·
  • 1 day ago
  • ·
  • [ - ]
I'd say that it's probably not a play against open source, but more trying to remove/change the bottlenecks in the current chip production cycle. Nvidia likely doesn't care who wins, they just want to sell their chips. They literally can't make enough to meet current demand. If they split off the inference business (and now own one of the only purchasable alternatives) they can spin up more production.

That said, it's completely anti-competitive. Nvidia could design an inference chip themselves, but instead they are locking down one of the only real independents. But... nobody was saying Groq was making any real money. This might just be a rescue mission.

With RAM/memory prices this high, open source is not going to catch up with closed source.

The open source economy relies on the wisdom of crowds. But that implies equal access to experimentation platforms. The democratization of the PC and consumer hardware brought the previous open source era that we all love. I am afraid the tech moguls have identified the chokepoints of the LLM ecosystem and found ways to successfully monopolize them.

They need to vertically integrate the entire stack or they die. All of the big players are already making plans for their own chips/hardware. They see everyone else competing for the exact same vendor’s chips and need to diversify.
I don't see where the benefit is for Nvidia in limiting open source models.

The more competition, the more shovels they sell.

It's like saying that Intel would've benefited if only Dell and a few others sold servers, because they brought in multiple billions per year.

  • ramoz
  • ·
  • 1 day ago
  • ·
  • [ - ]
They acquired it in order to have an ASIC competitor to Google's TPU.
More like they’re trying to snuff out potential competitors. Why work as hard to push your own products if NVIDIA gave you money to retire for the rest of your life?
Show me an affordable open source coding model that's close to GPT-5.2-codex capabilities. Note: I do not have tons of HBM lying around.
The constant threat of open source (and other competitors) is what keeps the big fish from getting complacent. It’s why they’re spending trillions on new data centers, and that benefits Nvidia. When there’s an arms-race on it’s good to be an arms dealer.
> It quite obvious that open source models are catching up to closed source models very fast they about 3-4 months behind

> Maybe I'm way off on this.

If by open source you mean downloadable from huggingface, and by SOTA you mean Opus 4.5, yes, you are way off.

Idk - cheaper inference seems to be a huge industry secret, and providing the best inference tech that only works with Nvidia seems like a good plan. Making Nvidia the absolute king of compute against AWS/AMD/Intel seems like a no-brainer.
China may take over the open source part. That is the only country with exposure to hardware, software and political might.
How does this work considering the Nemotron models?
You're way off; this reads more like anti-capitalist political rhetoric than real reasoning.

Look at Nvidia's Nemotron series. They have become a leading open source training lab themselves, and they're releasing the best training data, training tooling, and models at this point.

I don’t see how this isn’t antitrust, but knowing this political climate, this deal will go through.
Good of them to make a list themselves, isn't it? It'll be useful in the future.
As useful as it was before this administration, when big tech was sucking up to whoever was running the country (e.g. “macho man” Zuck was getting ready to tattoo DEI on his forehead a couple of years ago)? Or will it just now magically become useful?
You miss my point. This is a list of people engaging in something flat-out corrupt. The ballroom is an inherently corrupt project.

It will prove to be simple corruption.

Why is that relevant if there is no one willing to prosecute and convict?
  • wyre
  • ·
  • 1 day ago
  • ·
  • [ - ]
A forest can still exist despite people choosing to not see or look at it.
so corruption exists, that’s the pitch? learned something new today…
it is completely irrelevant but people still waste internet bandwidth with nonsense :)
what's the punishment for corruption (especially when you have hundreds of billions of dollars), I wonder…
If justice is served it'll be knocked down by the next admin, if it is ever built.
  • deaux
  • ·
  • 1 day ago
  • ·
  • [ - ]
Why would it have to take 4 years? It sure hasn't taken the current admin 4 years to disappear people into Salvadorian torture prisons.
Destroying things and outsourcing to already-built prisons is easy. Building things is not.

All they have is a demolition site. There's no final design. Trump keeps changing his vision of his mausoleum. They don't have an architect since the previous one quit.

They have less than a week to submit construction plans[1], and they're clearly missing that deadline. It is of course not the end, but it's a sign of things to come, about half a year in.

Trump is personally running the project instead of delegating it and as we all know he's ruled by whims and disorganized plus rapidly mentally deteriorating at 79 years of age. He's talking about getting into heaven and desperately slapping his name on random physical things because he's obsessed with leaving a grandiose "legacy", any kind of mark on history. He will, but it'll rather be as a seditionist and corrupt ravager of civil institutions and the rule of law -- a pitiful despoiler.

There's no section about the ball room in Project 2025, and no one else but Trump cares about this pet project.

[1]: https://www.cbsnews.com/news/judge-denies-request-to-tempora...

Are you sure this is how he'll be remembered? Half the US thought him preferable to AOC and Hillary Clinton. It's hard to conclude in any other way than that the perception of his legacy will be equally divided.
  • crote
  • ·
  • 23 hours ago
  • ·
  • [ - ]
Most of his actions are, to the majority of the population, merely transient actions. A few letters on an arts center are trivial to remove, a cancelled wind turbine farm easy to forget. The CECOT stuff deeply impacts only a small part of the population, so it'll at most be a few lines in a history book.

But demolishing a third of the White House? That'll be clearly visible in every single aerial shot of the building during every single political event for years. It is, quite literally, a scar on the political face of the country.

It's like turning the Pentagon into a Square, or blowing Washington's face off Mount Rushmore, or selling Alaska back to Russia: you're not going to forget when you are constantly being reminded of it.

actions might be transient but, like it or not (I certainly do not), he will be the President that is remembered and talked about more than just about all of the previous ones combined
> Half the US thought him preferable to AOC and Hillary Clinton

What do those people have to do with anything besides being popular right-wing targets?

His approval rating is currently around 42%.

what were his predecessors' approval ratings? 42-45% is as good as you're ever going to get in America outside of extreme situations like post-9/11.
  • zeryx
  • ·
  • 1 day ago
  • ·
  • [ - ]
The problem is, if everyone knows it's going to curry favour and you're the odd man out - are you in violation of your fiduciary duty to your shareholders?
An American Oligarchy.
> this deal will go through

It should be noted that Don Jr. is one of the investors who will benefit greatly if/when this goes through.

  • dagmx
  • ·
  • 1 day ago
  • ·
  • [ - ]
It’ll go through. It’s not an acquisition, it’s an exclusive licensing deal. Same end result, but it lets them runaround the usual regulatory approvals for acquisitions.
  • ·
  • 1 day ago
  • ·
  • [ - ]
The price is 40x their target revenue. That's twice the price-to-revenue multiple applied to Anthropic in their most recent funding round, and really, really hard to portray as a good deal.

I don't think it really helps Nvidia's competitive position. The serious competition to Nvidia is coming from Google's TPU, Amazon's Trainium, AMD's Instinct, and to a much lesser extent Intel's ARC.

Groq's recent investors got back a 3x multiple and may now invest in one of Nvidia's other competitors instead.

  • bri3d
  • ·
  • 1 day ago
  • ·
  • [ - ]
The only thing I can think of here is that OpenAI’s DRAM land grab is going to stack on a non-NV target and NV needs to hedge with an SRAM design that’s on the market NOW. Otherwise, I can’t see how NV couldn’t eat Groq’s lunch in one development cycle - it’s not like NV can’t attach a TPU to some SRAM and an interconnect. Either that or Groq closed a deep enough book to scare them, but 40x is a lot of scared.
That's an interesting take; it's plausible Nvidia wants to have an SRAM-based product, but I am struggling to see why they would pay $20bn to have one /right now/. Even if DRAM prices make Groq's approach more economical, Nvidia can develop a competitive product before Groq could take any real market share.
  • bri3d
  • ·
  • 1 day ago
  • ·
  • [ - ]
Exactly. The only way this makes sense to me is if the board needed this product in <1 cycle. Which makes no sense for a market player like NV who already have the PDK, volume, and literally everything else in the universe. But here it is, so there is clearly a factor I have not considered :)
  • ece
  • ·
  • 1 day ago
  • ·
  • [ - ]
Would a Groq chiplet be worth $20B? If you bundle it while no one else can, maybe..
  • xgbi
  • ·
  • 1 day ago
  • ·
  • [ - ]
I want to subscribe to your AI wars news please!

Joke aside, the strategic choices here and there hint at the bloodlust of all the other actors to dethrone Nvidia; it’s fascinating.

But all of these large companies are in a bind: they realize that if they start undercutting each other or innovating too aggressively, and Nvidia's stock decreases in value by a huge margin over the long term, the whole market itself can come down.

As an example: if Google perfected(?) its TPU to the point that it hurts Nvidia's sales (through mass production, perhaps?), then every company's stock price might decrease (in my opinion, including Google's).

Honestly, I feel like something very spooky is going to happen in the market soon regarding AI.

I feel like we are going to drag this bubble out for a really long time and actually worsen all the pain it is going to cause long term.

And Cerebras
Good deal for Donald Trump Jr.
Also Tsavorite Scalable Intelligence - their architecture seems to cover the broadest use cases and is compatible with CUDA.
is this an ad? do you work there?

I'm following the chip industry on a daily basis and never heard of them...

  • nusl
  • ·
  • 1 day ago
  • ·
  • [ - ]
Legit feels like Nvidia just buying out competition to maintain their position and power in the industry. I sincerely hope they fall flat on their face.
> Legit feels like Nvidia just buying out competition to maintain their position and power

Well, I mean, isn't that exactly what they should be doing? (I'm not talking about whether or not it benefits society; this is more along the lines of how they're incentivized.)

Put yourself in their shoes. If you had all that cash, and you're hearing people talk of an "AI Bubble" on a daily basis, and you want to try and ensure that you ride the wave without ever crashing... the only rational thing to do is use the money to try and cover all your bases. This means buying competitors and it also means diversifying a little bit.

No one is claiming that it's a bad move.

It's just an anti-competitive move that could be very bad for the consumer as it makes the inference market less competitive.

Dunno, I thought AGI would make everything obsolete and was just around the corner? It looks rather like it's dawning on everyone that transformers won't bring salvation. It's a show of weakness.
That's unfortunately what most acquisitions are.
which is exactly what a business should do.

it's not like Nvidia doesn't invest a ton into R&D, but hey, they have the cash, why not use it? like a good business.

  • nusl
  • ·
  • 15 hours ago
  • ·
  • [ - ]
What a business should do, sure. Businesses should - and do - do a lot of really shitty things because it benefits them but harms a lot of other things. I don't feel that it's a good justification to argue this way though.

In this case: removing a competitor, absorbing their IP, and maintaining their ability to dictate the direction of an entire industry. They're hurting the industry itself by removing competition, since competition is good for consumers and also good for progress.

Businesses with a monopoly of some sort often stop innovating in the space and end up slowing the entire thing down. Often, they do their best to block anything and anyone that tries to do better, and effectively keep progress back in doing so, simply to maintain their position.

They're selfish self-preserving entities often driven by the same kinds of people, disregarding the harm they do in the name of profits and shareholder "value". Sure, until someone disrupts that (or they get bought out and dissolved).

In a normal world, this is where Nvidia gets trust busted. But that's long behind us now.
Stuff like tinygrad will change this. Geohot already made nvidia run on macs via thunderbolt.

Also: https://x.com/__tinygrad__/status/1983469817895198783

  • bri3d
  • ·
  • 1 day ago
  • ·
  • [ - ]
The bottleneck in training and inference isn’t matmul, and once a chip isn’t a kindergarten toy you don’t go from FPGA to tape out by clicking a button. For local memory he’s going to have to learn to either stack DRAM (not “3000 lines of verilog” and requires a supply chain which openai just destroyed) or diffuse block RAM / SRAM like Groq which is astronomically expensive bit for bit and torpedoes yields, compounding the issue. Then comes interconnect.
The main point is that it will not be an Nvidia monopoly for too long.
This guy has the greatest dunning-kruger of all time. Lots of smoke and mirrors.
He’s no delusional: https://x.com/__tinygrad__/status/1983476594850283820

However, I would say you are wrong about it being only smoke

Look dude, this guy failed his Twitter internship and is not about to take on Jensen Huang. This isn't some young guy anymore, and this isn't 200x where he is about to have another iPhone / Sony moment.

It is peak delulu.

Edit: His whole blog is 'hot take #n'. Not even anything serious. Basically podcast bro level stuff. https://geohot.github.io/blog/jekyll/update/2025/12/22/the-o...

And where do you think he’s wrong in that post?
There's this curious experience of people bringing up geohot / tinygrad and you can tell they've been sold into a personality cult.

I don't mean that pejoratively, I apologize for the bluntness. It's just I've been dealing with his nonsense since iPhone OS 1.0 x jailbreaking, and I hate seeing people taken advantage of.

(nvidia x macs x thunderbolt has been a thing for years and years and years, well before geohot) (the tweet is a non-sequitur beyond bog-standard geohot tells: an odd obsession with LoC, and we're 2 years away from Changing The Game, just like we were 2 years ago)

Can you show any other thing that runs an nvidia GPU under M-series Macs?
Who cares? Nobody is building large scale inference services with macs.
Because this is exactly the demonstration of abstraction: the same stuff allows direct gpu communication so that even mac nvidia thing is possible.

It is not tied to nvidia as well.

This is the power of tinygrad

My deepest apologies, I can't parse this and I earnestly tried: 5 minutes of my own thinking, then 3 llms, then a 10 minute timer of my own thinking over the whole thing.

My guess is you're trying to communicate "tinygrad doesn't need gpu drivers" which maybe is transmutated into "tinygrad replaces CUDA" and you think "CUDA means other GPUs can't be used for LLMs, thus nvidia has a strangehold"

I know George has pushed this idea for years now, but, you have to look no further than AMD/Google making massive deals to understand how it works on the ground.

I hope he doesn't victimize you further with his rants. It's cruel of him to use people to assuage this own ego and make them look silly in public.

Re: has someone else done this? https://github.com/albertstarfield/apple-slick-rtx (May 2024, 19 months ago. didn't bother looking further than the 4th google result for "apple silicon external gpu")

> Compute Workload Test: This will be add soon

Wonder what happened that it never came.

> Willy's got his i3-12100 Gen RTX3090 hosted on Ubuntu with Juice Server

E-gpu my ass

What a disaster. Any little competition they have, they are going to buy out.

The world needs much stronger anti trust laws.

the world needs ENFORCEMENT of existing ones. Laws don't help if the government is complicit in the grift because AI is the only thing propping up the GDP.
Headline is incorrect.

NVIDIA isn't buying Groq.

It's a non-exclusive deal for inference tech. Or am I reading it incorrectly?

There is also movement over of some key people to nvidia which is pretty significant: "As part of this agreement, Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of the Groq team will join Nvidia to help advance and scale the licensed technology."
Curious how the execs can honor their fiduciary duty to shareholders, assuming only they(?) get the $20 billion and the company is left headless and leaking all of its intellectual property to Nvidia?
  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
Shareholders get the $20B so that's the fiduciary responsibility fully satisfied.
Non-exclusive licensing and hiring the team.

> As part of this agreement, Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of the Groq team will join Nvidia to help advance and scale the licensed technology.

Not good. This shouldn't be allowed. What would be better is if groq and cerebras combined, and maybe other companies invested in them to help them scale. Why would the major cloud providers not lobby against this?

Usually antitrust is for consumers, but here I think companies like Microsoft and AWS would be the biggest beneficiaries of having more AI chip competition.

Groq is absolutely tiny. I don't think antitrust is an issue here.
$20 billion is tiny?
That's the sales price of the company. Their marketshare, I imagine, is absolutely miniscule.
WhatsApp was a tiny team
Is nowadays
Market share wise, Groq is perhaps "tiny"? Nvidia may be paying a premium for Groq [0] since it eliminates competition (at least on the inference side).

[0] valued ~£6.5bn 2mo ago https://www.reuters.com/business/groq-more-than-doubles-valu...

It's a non-exclusive deal.

No reason for antitrust action whatsoever.

That's a loophole. Regulation hasn't caught up to the innovation of the non-exclusive licensing deal. Hopefully we'll get some competence back in government soon-ish and can rectify the mistake.
That's not a loophole. Non-exclusive licensing agreement is the opposite of loophole.
It's a backdoor acquisition by looting the key talent.
It's the opposite of an acquisition.

It's literally:

"I don't want you and your 200 tensorflow/pytorch monkeys. I just want your top scientist and I need a clever way to offer him a nine figure salary. Good of you to grant him so much stock and not options. Now I can just make a transfer to your shareholders, of which he is one! Awesome! Now I don't have to buy your company!"

I'll give you bonus points if you can guess what happens to the worthless options all those TF/PyTorch monkeys are holding?

Guys, seriously, be careful who you go to work for, because chances are, you are not the key scientist.

Non-exclusive deal, but also acquiring a lot of the staff, which seems pretty exclusive in that respect.
Yeah but that's going nowhere in court right?

You can't have the government coming in telling a scientist who he has to work for. People are free to take jobs at whatever company they like.

This is just a clever mechanism of paying that intellectual capital an amount of money so far outside the bounds of a normal salary that it borders on obscenity.

All that said, I don't say anything when Jordan Love or Patrick Mahomes are paid hundreds of millions, so I need to learn to shut my mouth in this case as well. I just think it sucks for the regular employees. I guarantee they will lose their jobs over the next 24 months.

>> if groq and cerebras combined

There isn't much to be shared between the two techs. Groq's hardware is like a railgun that installs all the weights into the optimal location before firing off an inference. Cerebras' computer engineering is more conventional, requiring the same data movement that GPUs struggle to optimize.

Suspect Groq is complementary/superior to Nvidia's GPUs, while it is unclear what Cerebras brings other than maybe some deals with TSMC.

They are both SRAM-based solutions currently, with the same benefits and pitfalls.
I just stopped my Groq API. Sad to see competition being eaten up by shitty Nvidia. I like their products but Jensen is an absolute mfer with deceitful marketing.
  • pyk
  • ·
  • 20 hours ago
  • ·
  • [ - ]
As a relative layman in this hardware inference space, I am curious what exactly Groq was useful for vs. the typical hardware architecture? Or was this a "step in before they become more generally useful" situation for Nvidia/Groq?
Much faster responses. Before this deal I thought it would be Google vs Groq for the superior tech, with Nvidia missing out.
I literally said “oh no” out loud when I read the headline.
was the API good
fast but furiously expensive
Honestly, Cerebras is good. I can recommend it. I talked to their team once on Discord as a literal free user, so that was something really nice personally.

They were also faster than Groq, but they were always a little slower at adding new models; not sure what has changed right now.

Definitely recommend Cerebras though, now that Groq's been eaten up from the inside, basically.

I had the feeling that Cerebras only supports smaller models. Maybe something to do with their hardware arch? I never dove into it. I wanted to use Kimi K2 fast for coding, and Groq was the only fast provider at the time.
Cerebras currently has GLM4.6 on it, and will be getting GLM4.7 soon.
  • maz1b
  • ·
  • 1 day ago
  • ·
  • [ - ]
Damn. Was hoping Groq and Cerebras would give the giants a run for their money.
There are others as well, but Nvidia is aggressive when it comes to punishing companies willing to buy non-Nvidia products. As a result, they prefer to remain under the radar, at least until they have enough market leverage to be more widely known.
  • 7e
  • ·
  • 1 day ago
  • ·
  • [ - ]
Yes, Groq failed. But there will be others.
There is still Modular
And China.
It would be interesting if it turned out that Chinese competition was the only thing that kept this market working!
I imagine two big giants, basically (Nvidia/Google/AMD, influenced by a select few people) vs (Chinese companies with investments from the government).

It's sort of like proxy wars, and this is sort of what's happening on the software side of things with open-source models, but I think the benefit of the proxy wars is going to go to the end consumers.

Although on the other hand, having two large countries compete with each other while buying everything else up feels like it astronomically increases the price for anyone else (any other country, perhaps) who wants to compete with these two giants.

We definitely need a better system where it doesn't feel like we are watching Pac-Man eat everything up, basically.

> Nvidia/google/amd

One of these things is not like the others

  • wyre
  • ·
  • 1 day ago
  • ·
  • [ - ]
Is it not? All this money is going into AI under the fear that China will win the race to AGI. China releases open-source models that keep OpenAI/Anthropic researching and training their models, which in turn creates demand for more Nvidia GPUs.
Which Chinese players are doing inference hardware? That is indeed a good space for them.
Huawei
I guess I have some hope for Tenstorrent
There was an AMA here last year https://news.ycombinator.com/item?id=39429047
Is there less regulatory oversight when purchasing assets instead of the company, or does Nvidia really believe the FTC/DOJ are that blind? (Or doesn't it matter in the current climate?)

The near-exclusive global provider of AI chips taking key employees from, and "licensing" the technology of, the only serious competitor, while quite specifically describing it as "not acquiring Groq as a company", seems quite obviously anti-competitive, and quite clearly an attempt to frame it as not.

America has not had functional antitrust laws for the better part of the last 40 years. The current climate is just a peak.
I do not understand this move by Nvidia. Are they afraid of being outcompeted by this startup in their core competence of building chips for AI? They may be eliminating a competitor for now, but this move will immediately cause many more AI chip startups to get founded.
They're not eliminating a competitor, they're (effectively) acquiring a competitor. Nvidia's GPUs are great for training, and not bad for inference, but the custom chips are better for inference and Nvidia's worried about losing customers. Nvidia will no doubt sell custom Groq-like chips for inference now.
> Nvidia will no doubt sell custom Groq-like chips for inference now.

For $0bn they could have sold an Nvidia-like chip for inference.

Groq is a Series E hardware startup founded in 2016. It took them this long to be a potential threat; I'm not sure they are even an actual threat.

Even if this purchase causes 100 new hardware startups to be funded tomorrow, Nvidia is perfectly fine with that. Let's see how many survive 5 years down the line.

  • baq
  • ·
  • 1 day ago
  • ·
  • [ - ]
the play is that 10x faster inference leads to 100x demand, give or take, which isn't a bad assumption at all if you ask me. The problem is actually fitting a good model onto hardware that fast.
Someone said to me that Nvidia is not buying Groq; the deal is a non-exclusive licensing agreement for its inference technology, with some team members joining Nvidia.

and he gave me this link:

https://groq.com/newsroom/groq-and-nvidia-enter-non-exclusiv...

https://www.cnbc.com/2025/12/24/nvidia-buying-ai-chip-startu...

Indeed, it seems to be an Inflection-AI[1] style acquihire of senior leadership and not an acquisition. MS also entered into a non-exclusive licensing agreement with what was left of Inflection AI after poaching its founders.

[1]: https://en.wikipedia.org/wiki/Inflection_AI

This doesn't make much sense. In September, Groq was valued at $7B. How is it that in 4 months it is being bought for $20B?

Can someone with better understanding dumb this down for me please?

Acquisition premium: https://www.investopedia.com/terms/a/acquisitionpremium.asp

The acquisition price of a company usually comes at a premium to the last valuation. This applies even with publicly traded companies, which is why acquisition announcements cause stock prices to pop up to some number between the last trade price and the acquisition price, proportional to how much the market thinks the acquisition is likely to go through.

The premium can make sense to the acquirer because the acquired company is worth more when combined with all of the assets and power (brand name, distribution, patents, trade secrets) of the acquiring company.

This confuses a lot of people who think the valuation of a company is equivalent to the number that would be paid to acquire it at that instant, but it’s not.

  • ZiiS
  • ·
  • 1 day ago
  • ·
  • [ - ]
It only has to be overvalued by a lower multiple than Nvidia; not undervalued.
  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
Imagine a pharma with a weight loss drug that isn't approved yet; it's either worth $20B (if approved) or zero (if not approved).

Now imagine the LPUv2 ASIC. If it works it's worth $20B and if it doesn't it's zero. If investors think LPUv2 has a 1/3 chance of success they would buy in at $7B. Then the chip boots up and... look at that.
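
To make the arithmetic explicit, here is a minimal sketch of that expected-value argument (illustrative numbers only: a hypothetical $20B-if-it-works / $0-if-it-doesn't outcome with a 1/3 prior, not anything Groq or its investors have disclosed):

    # Toy expected-value model of a binary-outcome startup (illustrative numbers only)
    p_success = 1 / 3              # assumed prior that the chip works
    value_if_works = 20e9          # assumed value if it works
    value_if_fails = 0.0           # assumed value if it doesn't
    ev = p_success * value_if_works + (1 - p_success) * value_if_fails
    print(f"pre-result valuation: ~${ev / 1e9:.1f}B")   # ~$6.7B, roughly the $7B round
    # once the chip "boots up", p_success -> 1 and the same asset reprices toward ~$20B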

Or it's just a massive bubble.

eli5 lpu please
  • ra7
  • ·
  • 1 day ago
  • ·
  • [ - ]
Groq calls their inference chips “Language Processing Unit”: https://groq.com/blog/the-groq-lpu-explained
Groq kept delivering so their valuation has effectively gone up.

A year ago it wasn't clear if they'd stay competitive but it seems they are.

I got recommended this and I will watch it today, thank you. One of the comments points out

"They’ve literally told us that the plan is to get bailed out by the taxpayers"

This reminded me of how I think what's gonna happen / is already happening: they become too big to fail and get bailed out, and the burden/loss falls on taxpayers.

So we are kind of living in a system that is reckless about the finances/stability behind businesses, where all the profits are privatized but all the losses are shared, or even funded, by the average person.

Mix in the corruption in any political party to begin with, and I am wondering why we don't have yet another French Revolution.

[dead]
Trump Jr. entered at $7 billion. In the meantime Nvidia got permission to sell GPUs to China.

All-In pundit Palihapitiya is invested in Groq as well. It is going well for friends of David Sacks.

This is smart as hell. I've long wondered how they'd combat ASICs without diluting their own benefits. This buys them a bit more time to figure out the moats, which is useful because Groq was going places. This juices Groq's distribution, production, and ability to access a wider range of skills where necessary.

I expect China to want to compete with this. Simpler than full-blown Nvidia chips. Cue much cheaper and faster inference for all.

The absolute best case I can make for this:

I think it's pretty obvious at this point that Nvidia's architecture has reached scaling limits - the power demands of their latest chips have Microsoft investing in nuclear fusion. Similar to Intel in both the pre-Core days and their more recent chips, they need an actual new architecture to move forward. As it sits, there's no path to profitability for the buyers of these chips given the cost and capabilities of the current LLM architectures, and this is obvious enough that even Nvidia has to realize it's existential for them.

If Groq’s architecture can actually change the economics of inference and training sufficient to bring the costs in line with the actual, not speculative, benefits of LLMs, this may not be a buy-and-kill for Nvidia but something closer to Apple’s acquisition of P.A. Semi, which made the A- and M- class chips possible.

(Mind you, in Intel’s case they had to have their clocks cleaned by AMD a couple times to get them to see, but I think we’re further past the point of diminishing returns with Nvidia - I think they’re far enough past when the economics turned against them that Reality is their competition now.)

NVIDIA and "no path to profitability" don't belong in the same zip code.
  • jonah
  • ·
  • 1 day ago
  • ·
  • [ - ]
I read it as path to profitability for the AI companies buying Nvidia's chips.
No path to profitability for the people using their products for their putative purpose, which seems like it might affect Nvidia’s bottom line at some point. Clarified.
  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
> there's no path to profitability for the buyers of these chips given the cost and capabilities of the current LLM architectures

Didn't Anthropic say inference is already profitable?

Presuming that’s why they raised 3 times this year.
  • shwaj
  • ·
  • 1 day ago
  • ·
  • [ - ]
Inference being profitable doesn’t mean that they’re selling enough inference to offset their other costs.
Hopefully they plan to invest in the technology and not just eliminate a competitor.
They almost certainly plan to invest in the technology. One of the biggest threats to Nvidia is people developing AI-centric ASICs before they get there. Yes, Google has their TPUs and there are others around, but it's early on.

In some ways, it's not about eliminating a competitor. It's about eliminating all the competitors. Nvidia can use its resources to push AI ASICs farther faster than others, potentially cutting off a whole host of competitors that threaten their business. Nvidia has the hardware and software talent, the money, and the market position to give their AI ASICs an advantage. They know if they don't lean into ASICs that someone else will and their gravy train will end. So they almost certainly won't be abandoning the technology.

But that doesn't mean that it'll be good for us.

  • nl
  • ·
  • 1 day ago
  • ·
  • [ - ]
Given that Groq is basically the TPU spin-out Google should have done years ago, it shows what a valuable asset TPUs are inside Google.

Still, they should spin that out though!

  • shwaj
  • ·
  • 1 day ago
  • ·
  • [ - ]
Groq is very different from TPUs. The difference in memory should be a big clue. (Groq using SRAM built into each compute unit, vs TPUs using HBM more similarly to GPUs)
  • nl
  • ·
  • 1 day ago
  • ·
  • [ - ]
The founder created TPUs at Google.

> Prior to founding Groq, Jonathan began what became Google’s Tensor Processing Unit (TPU) as a 20% project where he designed and implemented the core elements of the first generation TPU chip.

He did not "create" the TPU. He contributed to the creation of the TPU.

There are 66 authors on the original TPU paper: https://dl.acm.org/doi/10.1145/3079856.3080246

  • shwaj
  • ·
  • 19 hours ago
  • ·
  • [ - ]
I know, so what? Very smart people might have more than one idea in their careers. He left to pursue one of those.
Is there less regulatory oversight when purchasing assets instead of the company?

The near-exclusive global provider of AI chips purchasing the only serious competitor's technology, while quite specifically describing it as "not an acquisition", seems a bit…

This is the absolute biggest grift of the century by the Groq team. They never shared actual TCO, and I remember a SemiAnalysis article about the power consumption being actually insane - this makes sense because they scale the number of chips to fit a single model, since they have no DRAM. They have good inference latency, but there was no way the economics were going to work out. Meanwhile Nvidia, with every advantage in the world, decides they're worth $20B? It actually doesn't make sense at all. The only scenarios where the Groq system would be worth it are the exact throughput-optimized scenarios Nvidia already thrives in.
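
For a sense of why the chip count (and with it the power bill) balloons in a DRAM-free design, here is a rough back-of-envelope sketch; the ~230 MB of on-chip SRAM per first-generation LPU is the figure Groq has cited publicly, and the 70B-parameter / 8-bit numbers are just an assumed example:

    # Back-of-envelope: chips needed to hold one model entirely in SRAM (assumptions as noted)
    sram_per_chip = 230e6          # bytes of on-chip SRAM per LPU (publicly cited figure)
    params = 70e9                  # assumed model size: 70B parameters
    bytes_per_param = 1            # assumed 8-bit weights (use 2 for fp16)
    chips = params * bytes_per_param / sram_per_chip
    print(f"~{chips:.0f} chips just to hold the weights")   # roughly 300
    # SRAM buys latency and bandwidth, but the chip count (and power) scales with
    # model size, which is exactly the TCO concern raised above
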
Maybe the EU or individual states suing under their own antitrust laws will stop this - it seems pretty clearly anti-competitive, and probably a prelude to these over-valued companies using their stock to gobble up any possible competitor and consolidate even more.
Biggest surprise for me is that they chose to announce this on Christmas Eve.

I’d love to have been in the room when that was decided. The big, exciting news doesn’t typically get announced during a major holiday week.

They want the Trump Jr. and Chamath connections to be forgotten by the time the media and YouTubers get active again after New Year. It's a standard media tactic.
Honestly, it's basically an accepted fact that the best thing to do is just to never respond to anything that the media/people say. If you respond, the conversation will continue. If you say nothing, they'll move on to the next topic, because everyone needs to keep their audience engaged. With no response, your audience isn't interested either, so you have to move on to something else.
[dead]
What's your angle on the Chamath link? Is Nvidia greasing Sacks' favour through Chamath?
> Groq will continue to operate as an independent company with Simon Edwards stepping into the role of Chief Executive Officer.

Kind of feel bad for Simon Edwards, lol. I wonder what the plan is for the future of Groq.

I would assume this is a very well paid position, and with basically nothing to do.
I would assume he is a fiduciary... I am curious what happens with investors here?
Probably nvidia will keep investing in random rounds groq will continue to raise.
  • htrp
  • ·
  • 1 day ago
  • ·
  • [ - ]
He has to keep the company in business for 2 more years
Are there consequences if he fails to do so?
I believe that this news kind of helps Cerebras, as Groq and Cerebras are the only two companies working extensively in this space.

I feel as if Nvidia is eating up even companies which I thought had genuine potential, or anything related to the AI industry, whether profitable or not.

Nvidia is trying its best to take all the major players and consolidate into one big entity from top to bottom.

The problem with this approach, imo, is that long term, Nvidia's stock is extremely overvalued and it's still a bubble which will burst, and it will take Nvidia down first and foremost.

The issue is that when Nvidia falls, it will take the whole literal industry down from top to bottom, even those companies which I thought could survive the AI bubble bursting. Long term I feel like it will have really bad impacts if Nvidia continues to gobble up every company.

I am pretty sure that Nvidia might be looking at Cerebras too, and that if they offer them a shit ton of money, Cerebras gets bought. I genuinely believe that Nvidia has sort of invested in literally all pockets of any hardware-related investment for AI. And when OpenAI is unable to pay Nvidia, I feel like it can all come crashing down, since this whole cycle is only being made possible by external investor money.

Hopefully prices stay the same; I run my small apps on Groq. I get good-enough summarization and simple agents from gpt120b, which is at 15 cents for a million input tokens.
Can you list any internet facing apps you have that use Groq? Looking into them myself because of the speed
Still in development, but yes, the speed is absolutely nice.
I hope non executives and founders get something for their equity.
In my opinion, Groq's technical decisions are unsound in a normal world. But being HBM-free may have some merit in a world where HBM supply is constrained.
They raised at a ~6B round recently. I could have invested but missed the deadline :(
I could not figure out if „cash“ means literally cash or figuratively cash in the sense of “no trade in shares”.

Will there be a truck full of paper money or not?

Implies they will pay cash value to equity holders as opposed to issuing NVDA shares.

(Electronically)

Is this to somehow screw the employees with RSUs or what?
  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
No, it doesn't really matter if they pay in cash or stock. If you think NVDA has room to run you're welcome to use your buyout money to buy NVDA on the open market.
Well, this isn't framed as a buyout/takeover, so I was curious how existing RSUs would be cashed out?

This deal is framed as an IP transfer and talent transfer without owning the full company. Probably to skirt antitrust, among other things.

I'm not sure in this specific case. They could choose to pay the employees some portion of the funds.

If not, the owners are likely liable to be sued for "selling in effect" without paying equity holders.

Presuming the company becomes a de facto subsidiary of Nvidia (even if not legally so)

My guess, without researching it, is they will compensate existing equity holders to avoid that possibility. I mean the valuation multiple is enormous, it's worth it simply to derisk the legal aspect.

  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
For vested RSUs it's likely that the Groq husk will pay out the $20B as a dividend or buyback or something. I don't know if unvested RSUs are accelerated or just canceled. Of course the employees will receive new RSUs when they join Nvidia.
Unless one is worried about the government, no one ever means paper money.
It's the latter. They'll send a wire.
groq was targeting a part of the stack where cuda was weakest: guaranteed inference time at a lower cost per token at scale. This was in response to more than just goog's tpus, they were also one of the few realistic alternative paths oai had with those wafers.
Congratulations to Chamath
  • 7e
  • ·
  • 1 day ago
  • ·
  • [ - ]
For failing?
The last time I tested, I could swear they were doing some problematic quantization, because I was getting kind of random results with one or two models, which worked perfectly when I switched providers.

It was really disappointing too because Cerebras does not provide any service reliability on their cheap plans. So I came to the conclusion that unless I could convince the client to set up an enterprise contract or something, we could not use either provider for low-latency, which we need for voice calls. I think for organizations that can afford a hefty contract that guarantees service levels, Groq and Cerebras especially are basically cheat codes for meeting latency requirements for voice. But that might not be an option for really small businesses.. although maybe I am just not a good sales person.

I remember when Google acquired YouTube in 2006 for $1.65 billion in stock.

Media said it was crazy back then, well I think this sounds a lot crazier but hindsight is 20/20.

I thought the Skype deal back then was worse: it sucked balls back then already.
And now we’re told that bet was such a sure thing that regulators should have blocked it!
  • fyrn_
  • ·
  • 1 day ago
  • ·
  • [ - ]
Seems anticompetitive..
I do not consider this good news; I had hopes Anthropic or so would buy them, not the main AI hardware people.
"2.7M Developers and Teams"

So, about ~$7,400 each? Seems pricey, even assuming all of them still use it every week/month.

What are the other top AI silicon vendors?

Graphcore

Tenstorrent

SambaNova

Rivos

the talk is that SambaNova-Intel is akin to a fire sale, so who knows what the real story is with any single company. Same with Graphcore.

https://www.datacenterdynamics.com/en/news/sambanova-explori...

From the comment below:

> Groq raised $750 million at a valuation of about $6.9 billion three months ago. Investors in the round included Blackrock and Neuberger Berman, as well as Samsung, Cisco, Altimeter and 1789 Capital, where Donald Trump Jr. is a partner.

Makes it very hard not to think of this as a way to give money to the current administration. I know, this sounds conspiracy-theory grade, but $20B is too much for Groq.

The value of Groq comes from its excellent price-to-performance ratio. Its inferencing speeds are faster than those of H200s, and it has the lowest costs in the industry. When running similar batch jobs across different providers compared to Groq, the processing speed can sometimes be more than 10 times faster. These figures are important for developing practical applications for production use. It's common for me to run workloads in Groq that cost less than $100, while the same workload can approach $1,000 on Bedrock or Gemini. They have tuned a set of OS models that can now deliver a full application. The speeds have allowed me to offload a lot of the functionality from heuristics to straight-up LLMs.
It's not a conspiracy theory. It's basically a bribe to Chamath, Don Jr. and Sacks.
Big win for Chamath!
If we had a functioning government this wouldn’t be legal
  • ·
  • 1 day ago
  • ·
  • [ - ]
is this in response to the threat from Google tpu
Can't wait for the abuse of the word Grok to die (bet none of these tech bros have even read the book). There was even an AI company that made a product called "Sophon". Talk about an overinflated sense of self-worth.

I like the Wright Brothers; they called the first plane "Flyer".

  • ·
  • 1 day ago
  • ·
  • [ - ]
A few thoughts: [disclaimer - I don't know anything official at all just reading the same public info that has been officially shared - this is just a few thoughts from an outsider]

- The deal structure matters, and we don't have enough details yet.
- If the license fee is distributed to shareholders, it goes above the liquidation preference. Anyone with common stock or options can exercise and get paid out.
- The company continuing forward - I see this as great. There are discussions about it becoming just a shell, but I don't think that's the case at all. This looks like an acqui-hire for a few top people and a licensing deal for an alternative hardware approach to inference.

Let's say they keep $3–4B cash in the company. That's plenty to avoid another financing round and keep cranking on growth.

Groq and Cerebras can keep adding speed to open models while giving Nvidia key IP they can integrate into their large data center buildouts.

Also, on deal points I think could be interesting: would you negotiate this as a one-time-payment license deal? I wouldn't. Maybe it's a hurdle, so it might take a while, but let's say Nvidia pushes massive investment to deploy this Groq hardware infrastructure integrated into their full stack… A. This could produce a nice royalty stream for the Groq company that still exists - benefiting all stakeholders. B. Use Nvidia's massive ability to deploy capital, and its hardware pipeline, to add in another unit they can sell to their deep-pocketed customers, and ultimately make fast inference in the wild cheaper and quicker to arrive.

And lastly, if management is smart or clever with the distribution part of this deal, maybe they convert all stock to common and squash the liquidation preference in this move, so they might have a quite compelling cap table post-deal (again, hopefully structured as a distribution vs a buyout); at least the pref is gone.

So employees exercise, get an exit, keep their stock—with investors already happy, liquidation preference gone, and potentially a well-capitalized future royalty stream coming from the largest market cap company in the world and the largest capex pipeline ever seen.

I'd much rather own the actual shares (fewer handcuffs) and have plenty of cash to deploy in a very capex-heavy moment.

It just seems like a win-win-win-win.

Nvidia wins fundamentally different new IP it can sell to its existing customers. Groq employees and stakeholders win. The open models win (big). We as AI consumers win because of cheaper, faster inference.

Contrary to what we all want to believe here, we're not really in a winner-take-all moment. Nvidia is just taking a disproportionate amount because of the lack of real, suitable alternatives… The base will for sure widen and expand from where we are now, but that doesn't mean that Nvidia has to lose, or is going to lose, as part of it.
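
As a concrete illustration of the liquidation-preference point above (toy numbers only, not Groq's actual cap table - assuming a single class of non-participating 1x preferred that put in ~$3B for ~40% of the company):

    # Toy cap-table sketch: distribution vs. non-participating 1x preference (illustrative numbers)
    def payout(distribution, invested, preferred_ownership):
        """Preferred take the larger of their 1x preference or their as-converted pro-rata share."""
        pro_rata = distribution * preferred_ownership
        preferred_take = max(invested, pro_rata)
        return preferred_take, distribution - preferred_take

    pref, common = payout(20e9, 3e9, 0.40)   # assumed: $20B distributed, $3B invested, 40% preferred
    print(f"preferred: ${pref/1e9:.0f}B, common/employees: ${common/1e9:.0f}B")  # $8B vs $12B
    # at this size the preference is irrelevant: preferred take pro rata and common shares the rest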

This is the most blatant buy-the-competition move, if I've ever seen one...
chamath made a killing
Will be interesting technically to see what develops from this. NVLink? Full CUDA feels maybe doubtful, but who knows. Nvidia CUDA Tile feels like more of a maybe - their new, much more explicit way of writing workloads.

This does feel a bit sad for sure, worrying whether this might hold Groq and innovation back. Reciprocally, it's perhaps kind of cool to see Groq get a massive funding boost and help from a very experienced chip-making peer. It feels like a somewhat enviable position, even with the long-term consequences being so hazy. From the outside, yes, it looks like Nvidia solidifying its iron grasp over a market with very limited competitive suppliers, but this could help Groq, and maybe it's not on the terms we think we want right now, but it could be very cool to see.

I really hope some of the rest of the market can see what's happening, broadly, with Nvidia forming partnerships all over the place. NVLink with Intel, NVLink with Amazon's Trainium... there's much more to the ecosystem, but just connecting the chips smartly is a huge task and is core to inter-operation. And for all we've heard of CXL, Ultra Accelerator Link (UALink) and Ultra Ethernet (UET), it feels like very few major players are taking it seriously enough to just integrate these new interconnects and make them awesome. They remain incredibly expensive and not commonly used, lacking broad industry adoption, and reserved for very expensive systems: there's a huge existential risk here that (lack of) interconnect will destroy competitors' ability to get their good chips well integrated and used. The rest of the market needs clearer alarm bells going off, and needs to make sure good interconnect is available on way more chips and gets into everyone's hands ASAP, not just big customers', so that adoption and Linux-nerd-type folks can start building stacks that open up the future. The market risks getting left behind if NVLink is built in everywhere and the various other fabrics never become commonplace.

Is any part of this because Google has the TPU and Groq has the LPU?
  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
There's definitely a narrative that ASICs/TPUs/LPUs are more efficient than GPUs and thus Nvidia "needs" an ASIC. Whether this is true is debated.
Well that sucks.
  • rvz
  • ·
  • 1 day ago
  • ·
  • [ - ]
Great choice and what a great deal.

Quite obvious that Groq would get acquired. [0]

[0] https://news.ycombinator.com/item?id=39438820

Uh oh, not good that a major Nvidia competitor with genuine alternative technology will no longer be competing... Chances this tech gets killed post-acquisition?
  • wmf
  • ·
  • 1 day ago
  • ·
  • [ - ]
It may be more likely that Nvidia sells the LPUv2 at a price that doesn't threaten Rubin.
  • pohl
  • ·
  • 1 day ago
  • ·
  • [ - ]
Zero. Non-zero only if someone says something deemed “woke”.
> Groq is expected to alert its investors about the deal later on Wednesday. While the acquisition includes all of Groq’s assets, its nascent Groq cloud business is not part of the transaction, said Davis.

Wait, what? How is the cloud business supposed to run if Nvidia is acquiring the rights to the hardware?

It isn't, and the other companies that offer cloud AI that Nvidia has invested in can carry on happy they have one less competitor.

This is how business works in the 21st century - once one company has a dominant position and a massive warchest they can just buy any business that has any potential of disrupting their revenue. It's literally the thesis Peter Thiel sets out in Zero To One. It works really well for that one business.

That's the neat trick - it isn't...
Sell the asset and then lease it from the buyer.
That works fine with office buildings and stuff where a company is redistributing its risk profile, but not when the company it’s selling to has every incentive to kill the asset as a competitor.
The business model at Groq basically morphed over time so that the internal cloud was their only client, and all purchases were on some revenue-sharing basis to finance, set up, and operate the cloud business. So this has a bigger impact on those cloud hardware operators, to the extent they were involved in the discussions with Nvidia. Saudi Aramco comes to mind, as an early big-check investor who hosts much of the Groq Cloud today. So now Nvidia is their sole-source supplier, and the whole Tokens-as-a-Service business model they signed up for is re-negotiated?
From the press release, Nvidia now has a non-exclusive license to the hardware.

> Groq will continue to operate as an independent company with Simon Edwards stepping into the role of Chief Executive Officer.

> GroqCloud will continue to operate without interruption.

Following the age-old playbook of monopolies. https://www.arte.tv/en/videos/103517-001-A/capitalism-in-ame... (Use a VPN if outside the EU)

A free market is a regulated market. Otherwise you will end up with monopolies and a dead market.

…except they are a rather small hardware startup, there are a dozen other rather small hardware startups they did not buy, and there will be more such startups funded just on the news that NVIDIA bought one at a big premium.
I understand your line of reasoning, but the net effect is that any competition will be eliminated. Those startups that are not posing a real threat don't count anyways.

Hardware startups are not easy. If the effect is that the most likely way to make money is to be bought by the behemoth, then this distorts the market even more and suffocates innovation.

They should have bought nbis
How can this pass antitrust regulation?
  • rvz
  • ·
  • 1 day ago
  • ·
  • [ - ]
There is no "antitrust regulation" in the US in 2025. (Until 2029)

States are "not allowed" to regulate AI companies.

There also weren't any antitrust regulations before, let's not kid ourselves.
There was an attempt under Lina Khan.
The regulations are still in place. They are not enforced, however.
Care to give more details?
I don't know specifically, but I think they're referring to the current USA administration's posture of approving anything, or pardoning anyone, in exchange for some cryptocurrency or similar big favour.
> Groq raised $750 million at a valuation of about $6.9 billion three months ago. Investors in the round included Blackrock and Neuberger Berman, as well as Samsung, Cisco, Altimeter and 1789 Capital, where Donald Trump Jr. is a partner.
They made Jimmy Carter sell his peanut farm…
That's the thing though -- no one made Jimmy Carter sell his farm[0].

But Jimmy Carter was an honorable human, and, well...there are fewer people fitting that description sitting behind the Resolute desk, today.

[0] He didn't sell it, he put it into a blind trust. He should have sold it. When he left office, the farm was $1MM in debt.

  • ·
  • 1 day ago
  • ·
  • [ - ]
I doubt Nvidia will be regulated in their home jurisdiction. America tends to protect its cash cows, for better or worse.
people on twitter are calling it yet another acqui-hire.
this is genuinely sad. Groq had really fast inference and was a legit alternative architecture to Nvidia's dominance. Feels like we're watching consolidation kill innovation in real time. Really hoping regulators actually look at this one, but not holding my breath.
Nvidia. Please stop. Just stop it already.
Put Groq and Nvidia execs in prison, blatant anti-trust.
I got curious about how many wheelbarrows of cash $20bn actually is.

Two ways to think about it: weight vs volume.

By weight (assuming all $100 bills):

$20,000,000,000 / $100 = 200,000,000 bills

Each bill is roughly 1g, so total mass is ~200,000 kg

A typical builder’s wheelbarrow can take about 100 kg before it becomes unmanageable

200,000 kg total / 100 kg per wheelbarrow ≈ 2,000 wheelbarrows (weight limit)

By volume:

A $100 bill is ~6.14" × 2.61" × 0.11 mm, which comes out to about 102 cm³ per bill

200,000,000 bills × 102 cm³ ≈ 20,400 m³ of cash

A standard wheelbarrow holds around 0.08 m³ (80 litres)

20,400 m³ total / 0.08 m³ per wheelbarrow ≈ 255,000 wheelbarrows (volume limit)

So,

About 2,000 wheelbarrows if you only care about weight

About 255,000 wheelbarrows if you actually have to fit the cash in

So the limiting factor isn’t how heavy the money is; it’s that the physical volume of the cash is absurd. At this scale, $20bn in $100s is effectively a warehouse, not a stack.

  • mkl
  • ·
  • 1 day ago
  • ·
  • [ - ]
I think your volume per bill should be 6.14 * 0.0254 * 2.61 * 0.0254 * 0.00011 ≈ 1.137e-6 m³. That means about 227 m³ total volume, or about 2800 wheelbarrows.
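
A quick sanity check of those corrected figures (same assumptions as the parent comment: $100 bills, 0.11 mm thick, an 80-litre wheelbarrow):

    # Recompute the volume-limited wheelbarrow count with the per-bill volume fixed
    bills = 20_000_000_000 / 100                                  # 200 million $100 bills
    bill_volume_m3 = (6.14 * 0.0254) * (2.61 * 0.0254) * 0.00011  # ~1.14e-6 m^3 per bill
    total_m3 = bills * bill_volume_m3                             # ~227 m^3
    print(f"{total_m3:.0f} m^3, ~{total_m3 / 0.08:.0f} wheelbarrows")
    # so weight (~2,000 barrows) and volume (~2,800 barrows) end up the same order of magnitude
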
Something wrong about representing the weight of US dollars in metric units.
  • zaik
  • ·
  • 1 day ago
  • ·
  • [ - ]
They should have converted to Euros first.
> Something wrong about representing the weight of US dollars in metric units.

The traditional unit of measure of truckloads of money is (drum roll) a dump truck. A large dump truck holds 16-20 cubic yards.

https://www.catdumptruck.com/standard-dump-truck-size-chart/

Then what do you say to 6.14" × 2.61" × 0.11 mm = 102 cm³
How many pounds is it? Who's on first?
The movie Blow had a scene about the logistics dealing with tons of cash. Even before thinking about laundering, it’s a huge PITA.
I think you’re off by about a factor of 100 on the volume of a single bill. So both cases it’s in the ballpark of 2000 wheelbarrows.
A better way to think of it is: if you got a dollar a second, it would still take over 63 years to reach $2B - and over 600 years to reach $20B.
Your volume of a single bill is a bit off.
[dead]
  • asdev
  • ·
  • 1 day ago
  • ·
  • [ - ]
[flagged]
He is good at scamming others
nVidia is being scammed here? Seems unlikely…
For Nvidia, increasing the number of participants in their money-multiplying mutual-investment ring is more important than the value of the deals. It's about involving more capital and people, making their grift too big to fail, and keeping the stock numbers up. Nvidia has the ability to promise large amounts of money like this in announcements, but I haven't read about any of them actually having money or goods exchange hands yet.
Next they would acquire and kill Cerebras. I hate every part of Nvidia
That's great, but LLMs are still not generating revenue.
I pay $20/mo for Gemini, so they're generating at least that much in revenue!
All depends on how much it costs them to service your $20/month sub in OPEX and how much it cost them in capex to buy and maintain that hardware.
  • ·
  • 1 day ago
  • ·
  • [ - ]
They’re generating tons of revenue, just not necessarily profits
They are generating revenue, profit is the dubious thing.
  • NaOH
  • ·
  • 1 day ago
  • ·
  • [ - ]
Related on the business side, and from the last two years:

AI Chip Startup Groq Raises $750M at $6.9B Valuation - https://news.ycombinator.com/item?id=45276985 - Sept 2025 (5 comments)

Groq Raises $640M to Meet Soaring Demand for Fast AI Inference - https://news.ycombinator.com/item?id=41162875 - Aug 2024 (34 comments)

AI chip startup Groq lands $640M to challenge Nvidia - https://news.ycombinator.com/item?id=41162463 - Aug 2024 (12 comments)

Groq CEO: 'We No Longer Sell Hardware' - https://news.ycombinator.com/item?id=39964590 - April 2024 (149 comments)

From $6.9b to 20 in a few months, not bad…
Almost as good as forking VSCode, impressive.
It was impressive to see what you did there, and the harsh reality that it's true hits like a brick.

Don't forget that those forks of VS Code are gonna be bought by Nvidia or ChatGPT (OpenAI, which Nvidia has invested in) and everything else.

It's all one large web connecting every investment and cross-investment and everything. The bubble image that became infamous recently is definitely popping up even more. It's crazy, basically.

Acquisitions generally come in at a significantly higher price.

Even in public markets, acquiring all the shares of a company will require an offer that is a significant step above the current trading price.