Another interesting part:
> Under the terms of the new investment round, OpenAI has two years to transform into a for-profit business or its funding will convert into debt, according to documents reviewed by The Times.
Considering there are already lawsuits ongoing about their non-profit structure, that clause with that timeline seems a bit risky.
This is pretty much obvious just from the valuations.
The wild bull case where they invent a revolutionary superintelligence would clearly value them in the trillions, so the fact that they're presently valued an order of magnitude less implies that it is viewed as an unlikely scenario (and reasonably so, in my opinion).
Were there competitors that did the same thing? AltaVista? Yahoo? Did they undercut on cost? Google was free, I guess. But Google won because it maintained its quality, kept its interface clean and simple, and kept all the eyeballs as a result. Now Google is essentially the entry point to the internet, baked into every major browser except Edge.
Could ChatGPT become the “go-to” first stop on the internet? I think there’s a fair chance. The revenue will find its way to the eyeballs from there.
I already use ChatGPT as my first go-to stop for certain search queries.
I wouldn’t be surprised if OpenAI were still losing money even with the same CPM that Google search has.
Normal people would need to start using a ChatGPT-owned interface for search to make an ad-based business viable, surely? And there's no real sign of that even beginning to happen.
Dissenters should consider that there might be short plays: if what they think is true, they could make some money.
I encourage you to visit https://chatgpt.com in incognito mode.
There are several ways to monetize a product (they already use some of them).
Are they? I would guess that the cost per query for Google, even back then, was insignificant compared to how much OpenAI is spending on GPU compute for every prompt. Are they even breaking even on the $20 subscriptions?
During their growth phase Google could make nothing from most of their users and still have very high gross margins.
OpenAI not only has to attract enough new users but also to ensure that they bring in more revenue than they cost. Which isn’t really a problem Google or FB ever faced.
Of course, more optimized models and faster hardware might solve that long term. However, consumer expectations will likely keep increasing as well, and OpenAI has a bunch of competitors willing to undercut them (e.g. they have to keep continuously spending enough money to stay ahead of “open/free” models, and then there is Google, which would probably prefer to cannibalize its search business itself than let someone else do it).
> in fact as bezos demonstrated with Amazon for many years, profit is an indication you’ve run out of uses for capital to grow.
Was Amazon primarily funding that growth with its own revenue or with cash from external investors? That makes a massive difference and leaves the two cases hardly comparable (Uber might be a better example).
It’s not even clear if OpenAI is breaking even on the $20 subscription just on GPU/compute costs alone (the newer models seem to be a lot faster, so maybe they are). So incrementally growing their revenue might be very painful if they keep making the UX worse with extra ads while simultaneously losing money on every user.
Presumably the idea is that costs will go down as hardware becomes faster and the models themselves more optimized/efficient. But LLMs already seem to be almost a commodity, so it might become tricky for OpenAI to compete with a bunch of random services using open models that offer the same thing (while spending a fraction on R&D).
So it’s a long term bet but the idea that Google would lose to an LLM isn’t far fetched to me.
The models will have diminishing returns and other players seem better suited to providing value added features.
https://finance.yahoo.com/news/uae-backs-sam-altman-idea-095...
>The models will have diminishing returns
Wasn't that the prevailing thinking before ChatGPT? And before AlexNet. Of course, we'll again have some diminishing returns until the next leap.
They are spending a lot on shovels but it’s not clear that there is that much “stuff” (consumer demand) to be shoveled.
VC money can only take you so far, you still need to have an actual way of making money.
LLMs might effectively replace Google but they are already a commodity. It’s really not clear what moat OpenAI can build when there are already a bunch of proprietary/open models that are more or less on the same level.
That basically means they can’t charge much above datacenter cost plus a small premium long term, and won’t be able to achieve margins high enough to justify the current valuation.
The only moat OpenAI has right now is advanced voice mode / the real-time audio API, plus arguably o1 and the new eval tools shown the other day, as those are essentially vertical integrations.
And maybe, like you said, first-mover advantage. But it's not that clear, as even Anthropic got ahead in the race for a while with Claude 3.5 Sonnet.
2 trillion. Approximately 13x OpenAI's current valuation. Google nets almost 100 billion a year. OpenAI grosses 4 billion a year.
Wild numbers.
If we throw out some conservative numbers and assume costs rise only modestly, you have to believe OpenAI's earnings will grow 50-100x for the investment to make sense. They'd have to maintain their current growth rate for 5+ years, but I wouldn't be surprised if their revenue growth is already slowing.
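To put those multiples in perspective, here's a quick sketch of the sustained annual growth they imply (my arithmetic, using the 5-year horizon from the comment above):

```python
# Back-of-envelope: what constant annual growth does a 50-100x
# increase in earnings over 5 years require?

def required_annual_growth(multiple: float, years: int) -> float:
    """Constant annual growth factor needed to reach `multiple` in `years`."""
    return multiple ** (1 / years)

for multiple in (50, 100):
    g = required_annual_growth(multiple, 5)
    print(f"{multiple}x in 5 years -> {g:.2f}x/yr ({g - 1:.0%} annual growth)")

# 50x  in 5 years -> 2.19x/yr (119% annual growth)
# 100x in 5 years -> 2.51x/yr (151% annual growth)
```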
And much as AI hype irritates me, the idea that the most popular LLM platform becomes a ubiquitous consumer technology heavily monetised by context-sensitive ads, or a B2B service integrated into everything, doesn't seem nearly as fanciful as some of the other "next Googles". At the very least they seem to have enough demand for their API and SaaS services to be able to stop losing money as soon as VCs stop queuing up to give them more.
Facebook's IPO financials were among the best IPO financials ever.
OpenAI has negative 130% adjusted operating margins.
Revenue: $3,711M (88% YoY growth)
Net income: $1,000M
Cash: $3,908M
Tell me how those are bad?
It reminds me of the old joke:
Heard about the guy who fell off a skyscraper? On his way down past each floor, he kept saying to reassure himself: So far so good... so far so good... so far so good.
Google's generative AI models are probably used more in a day than everyone else's combined. Google is a highly profitable business that has grown YoY every year of its nearly three-decade history.
In your mind you might think Google is going down, but in reality they have only been going up for nearly three decades now.
only display ads business, which is a fraction of total ads revenue
Not really; ChatGPT may have the brand name, but there are other offerings that are just as good and can be incorporated into existing apps that have a captive userbase (Apple, Google, Meta). Why should I switch to another app when I can do genAI within the apps I'm already using?
For this, in addition to the "google" part, they also need to build the "ads" part for monetization, which is also not a trivial task.
I was prototyping some ideas with ChatGPT that I wanted to integrate into an MVP. It basically involved converting a user's query into a well-categorized JSON object for processing downstream. I took the same prompt verbatim and used it with an Amazon Bedrock-hosted Anthropic model.
It worked just as well. I'm sure there will be plenty of "good enough" models that are just as good as OpenAI's models.
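For what it's worth, the swap really can be that mechanical. A minimal sketch of that kind of test, where the prompt, query, and model IDs are illustrative assumptions rather than the commenter's actual code:

```python
# Send the same prompt verbatim to OpenAI and to an Anthropic model
# hosted on Amazon Bedrock, then compare the outputs side by side.
import boto3
from openai import OpenAI

PROMPT = "Convert this user query into a categorized JSON object: {query}"
query = "find cheap flights to Tokyo in March"  # hypothetical example query

# OpenAI (reads OPENAI_API_KEY from the environment)
openai_client = OpenAI()
openai_resp = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT.format(query=query)}],
)
print(openai_resp.choices[0].message.content)

# Anthropic via the Bedrock Converse API, same prompt verbatim
bedrock = boto3.client("bedrock-runtime")
bedrock_resp = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
    messages=[{"role": "user", "content": [{"text": PROMPT.format(query=query)}]}],
)
print(bedrock_resp["output"]["message"]["content"][0]["text"])
```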
nothing
I personally always use ChatGPT over Google.
Love the use of the personal anecdote to refute my point BTW.
Hell, even open-source models are nowadays better than the best models this billions-burning company had just 6 months ago.
I'm lost as to what the moat is here, because I really don't see it, and I don't believe any of these companies has any advantage at all.
This actually represents only the narrow "aligned" range of AI outcomes, so it makes sense it's a small one.
(Don't want to think about that too much but man just imagine... A superintelligence with Sam Altman mindset)
Research and safety have to take a backseat to cost reduction. I mean, there are many avenues to profitability for them; one I can think of is cutting their costs significantly by creating smaller and smaller models that match or nearly match GPT-4, while paid subscribers wouldn't really be able to tell the difference. No one is really challenging them on their benchmark claims.
I think their main challenge is that 5-10 years from now, if their current definition of AGI is still elusive, models of GPT-4 capabilities or similar (Llama 3 can fool most people, I think) will be running locally and freely on pretty much any OS of choice without a single outside API call. Every app will have access to local inference that costs neither developers nor users anything to use. Especially after the novelty has worn off a bit, it's hard to see consumers or developers paying up for something that's technically better, but not better enough to justify a $20+/mo subscription or per-token cost. Right now, though, local inference has a huge barrier to entry, especially across platforms.
Honestly, I think Google and Apple can afford to spend the cash to develop these models in perpetuity, while OpenAI needs to worry about massive revenue growth for the next few years, and they probably don't have the personnel to grow revenue aggressively either. It's a research lab. The downside of revenue seeking, too, is that sometimes the pursuit kills the product.
> models of GPT-4 capabilities or similar
It took Apple how many years to change their base config memory from 8 GB to 16? Somewhere between 8 and 10...
Regardless, I'm not sure running reasonably advanced models locally will become that common on mainstream devices anytime soon. $20 per month isn't that much compared to the much higher hardware costs; of course, it's not obvious that OpenAI et al. can make any money long term by charging that.
> OpenAI's monthly revenue hit $300 million in August, and the company expects to make $3.7 billion in revenue this year (the company will, as mentioned, lose $5 billion anyway), yet the company says that it expects to make $11.6 billion in 2025 and $100 billion by 2029
> For OpenAI to hit $11.6 billion of revenue by the end of 2025, it will have to more than triple its revenue. At the current cost of revenue, it will cost OpenAI more than $27 billion to hit that revenue target. Even if it somehow halves its costs, OpenAI will still lose $2 billion.
Venture capital loses money to win market share, a tale as old as time.
I don't think Netflix and Uber had even a fraction of the competition that this field will have.
It's easy with hindsight to underestimate the forces of Netflix's competitors, but consider that even today, the revenue of Pay TV worldwide is still bigger than the revenue of streaming platforms. And unlike the streaming platforms, Pay TV makes profits hand over fist and didn't incur crazy debts to gain market share. They may be on the way to extinction, but they'll make tons of profit on their way out.
Netflix discovered that content was the only moat available to them, after which it was effectively inevitable that they become a major production studio. I think Uber is still trying to figure out its moat, but ironically regulation is probably part of it.
OpenAI also created a new product category, but the competition was very quick to move into that category and has very deep pockets. At some point, they clearly felt regulation might be a moat but it’s hard to see them landing that and winning.
Uber had significantly more competition than all of these companies combined, including from China, which they were forced out of.
I'd venture to guess that they will start building their own data centers with their own inference infra to cut the cost by potentially 75% -- i.e., the gross markup of a public cloud service. Given their cost structure, building their own infra seems cheap.
Like, you could discount, say, a $10B investment down to $6B or whatever the actual costs are, but now your share of OpenAI is based on $6B. Seems better to me to give OpenAI a "$10B" investment that internally only costs $6B, so your share of OpenAI is based on $10B.
Plus, OpenAI probably likes the $10B investment more, as it raises their market cap.
Someone has to pay for the hardware and electricity.
I don’t see why that’s so far-fetched?
Inevitably this will result in this one company's eventual decline but it'll push everyone else in the space forward.
Last I saw, the whole thing was convertible debt with a $150bn cap. I’m not sure if they swapped structures or this is some brilliant PR branding the cap as the heading valuation.
I mean, what exactly do you see happening? They have a product people love and practically incalculable upside potential. They may or may not end up the winners, but I see no scenario in which it "doesn't end well". It's already "well", even if the company went defunct tomorrow.
>that clause with that timeline seems a bit risky.
I'm 99% certain that OpenAI drove the terms of this investment round; they weren't out there hat in hand begging. Debt is just another way to finance a company, can't really say it's better or worse.
Will people love ChatGPT et al just as much if OpenAI have to charge what it costs them to buy and run all the GPUs? Maybe, but it's absolutely not certain.
If they "went defunct" tomorrow then the people who just invested US$6bn and lost every penny probably would not agree with your assessment that it "ended well".
Inference is less expensive in gross terms, since it takes less time, but the cost of that time (per second) is the same as for training.
We can do the math. GPT-4o can emit about 70 tokens a second. API pricing is $10/million for output tokens and $2.5/million for input tokens.
Assume a workload where input tokens are 10:1 with output tokens, and that I can generate continuous load (constantly generating tokens). I'll end up paying about $210/day in API fees, or $76,650 a year.
Let's assume the hardware required to service this load is a rack of 8 H100s (probably not accurate, but likely in the ballpark). That costs $240k.
So the hardware would pay for itself in 3 years. It probably has a service life of about double that.
Of course we have to consider energy too. Each H100 draws 700 watts, so our rack draws 5.6 kilowatts, which works out to about 49 megawatt-hours for a year of operation. Assume they pay wholesale electricity prices of $50/MWh (not unreasonable), and you're looking at a ~$2,500 annual energy bill.
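The same back-of-envelope in runnable form; every input here is one of the assumptions stated above, not a measured number:

```python
# Back-of-envelope: API revenue vs. hardware and energy cost for a
# continuously loaded 8x H100 machine (all figures assumed above).
SECONDS_PER_DAY = 86_400
tokens_per_sec = 70               # assumed GPT-4o output speed
out_price = 10 / 1_000_000        # $ per output token
in_price = 2.5 / 1_000_000        # $ per input token
input_ratio = 10                  # input tokens per output token

out_tokens_per_day = tokens_per_sec * SECONDS_PER_DAY  # ~6.05M tokens
revenue_per_day = out_tokens_per_day * (out_price + input_ratio * in_price)
revenue_per_year = revenue_per_day * 365

hardware_cost = 240_000           # assumed price of 8x H100
power_kw = 8 * 0.7                # 8 GPUs at 700 W each
energy_per_year = power_kw * 24 * 365 / 1000 * 50  # MWh/yr at $50/MWh

print(f"API revenue/day:  ${revenue_per_day:,.0f}")                  # ~$212
print(f"API revenue/year: ${revenue_per_year:,.0f}")                 # ~$77k
print(f"Hardware payback: {hardware_cost / revenue_per_year:.1f} y") # ~3.1
print(f"Energy cost/year: ${energy_per_year:,.0f}")                  # ~$2,453
```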
So there's no reason to think that inference alone isn't a profitable business.
It's not unusual for a startup not to be profitable, and they're obviously not, as the company doesn't make a profit. But I'm not sure why isolating one aspect of their business and declaring it profitable would justify the idea that this company is inevitably a good investment "even if the company went defunct tomorrow".
Perhaps you meant "win" in the sense of "being influential" or something, but I'm pretty sure the people who invested billions of dollars use definitions that involve more concrete returns on their investment.
I was responding to someone upthread suggesting that they were running even inference at a loss.
The issue is OpenAI is not just selling inference.
Though I wouldn’t be surprised if there were some hidden costs that are hard for us to account for due to the sheer amount of traffic they must be getting on an hourly basis.
I'm willing to bet that if you swapped out GPT for Claude, Gemini, or Llama under the hood, 95% of their users wouldn't even notice. LLMs are fast becoming a commodity. The differentiating factor is simply how many of the latest NVIDIA GPUs the company owns.
And even otherwise, people loving a product isn't what makes a company successful. People loved WeWork as well. Ultimately what matters is the quarterly financial statement. OpenAI is burning an incredible amount of money on training newer models and serving every query, and that's not changing anytime soon.
You can say exactly the same about Google and Bing (or any other search engines), yet Google search is still dominant. Execution, market perception, brand recognition, momentum are also important factors, not to mention talent and funding.
Not everyone who wants to invest can invest in this round. You may bet the investors are wrong, but they put their money where their mouth is. Microsoft participated, even though they had already invested $13B.
The aspect of corporate usage where OpenAI seems to be ahead is in direct enterprise subscriptions to ChatGPT.
How is A16Z's massive crypto fund doing again? And what about Softbank's other big bets?
They are not the only ones with the product. They don't have a moat. They are marginally better than competing models, at best. There is no moat in LLMs unless you happen to find the secret formula for superintelligence, hope nobody else finds it, and lock all of your R&D staff in the mines of Moria so they don't go off and build it elsewhere.
There is no moat here. I can't believe so many intelligent people, even on HN, cannot grasp it.
Also, for my use case I eventually found it's faster, easier, and less frustrating to write code myself (empowering) than to get sucked into repeatedly reminding the AI of my intent and asking it to fix bugs and divergences (disempowering).
Plus, you can find alternatives now for the random edge cases where you do actually want to chat with an out of date version of the docs, which don’t train on user input.
I recommend we all "hard pass" on OpenAI, Anthropic, Google, basically anyone who prohibits competition while simultaneously training their competing intelligence on your input. Eventually these things are going to wreck our knowledge-work economy, and it seems like a form of economic self-harm to knowingly help outsource our knowledge work to externals…
The git repo and review history for any large project is probably more helpful for training a model than anything, including people using the model to write code.
It sounds like you are happy to not use LLMs. I’m the opposite way; code is a means to an end. If an LLM can help smooth the road to the end result, I’m happy to take the smooth road.
Refusing to learn the new tool won’t keep it from getting made. I really don’t think that code writers are going to influence it that much. The training data is already out there.
It seems to me the most likely outcome is that they have one replaceable product against many, and few options to get a return commensurate with the valuation.
My guess is that investors are making a calculated bet: 90% chance the company becomes irrelevant, 10% chance it has a major breakthrough and somehow throws up a moat to prevent everyone else from doing the same.
That said, I have no clue what confidential information they are showing to investors. For all we know, they are being shown superhuman intelligence behind closed doors.
If that were the case, I wonder why Apple passed on this investment.
This seems to be the "crypto is about to replace fiat for buying day-to-day goods/services" statement of this hype cycle. I've been hearing at least since GPT-2 that the secret next iteration will change everything. That was actually probably most true with 2, given how much of a step-function improvement 3 + ChatGPT were.
...yet they struggle to find productive applications, shamefully hide their training data and can't substantiate their claims of superhuman capability. You could have said the same thing about Bitcoin and been technically correct, but society as a whole moved in a different direction. It's really not that big of a stretch to imagine a world where LLM capability plateaus and OpenAI's value goes down the toilet.
There is simply no evidence for the sort of scaling Sam Altman insists is possible. No preliminary research has confirmed it is around the corner, and in fact tends to suggest the opposite of what OpenAI claims is possible. It's not nuclear fusion or commercial supersonic flight - it's a pipe-dream from start to finish.
Everyone involved is hoping OpenAI will either
a) figure out how to resolve all these issues before the clock runs out, or
b) raise more money before the clock runs out, to buy more time again.
All I can say to the investors, with the best of hopes, is:
Good luck! You'll need it!
https://www.threads.net/@nixcraft/post/C5vj0naNlEq
If they haven't built AGI yet that just means you should give them more billions so they can build the AGI. You wouldn't want your earlier investments to go to waste, right?
I wouldn't even be surprised if they were losing money on paying ChatGPT users on inference compute alone, and that isn't even factoring in the development of new models.
There was an interesting article here (can't find the link unfortunately) that was arguing that model training costs should be accounted for as operating costs, not investments, since last year's model is essentially a total write-off, and to stay competitive, an AI company needs to continue training newer frontier models essentially continuously.
Really their biggest risk is total compute costs falling too quickly or poor management.
I don’t think it’s very likely, but think of it like an auction. In a room with 100 people 99 of them should think the winner overpaid. In general most people should feel a given startup was overvalued and only looking back will some of these deals look like a good investment.
As long as we’re talking independent revenue streams it’s worth counting separately from an investment standpoint.
I'd be surprised if that was the case. How many tokens is the average user going through? I'd be surprised if the average user even hit 1M tokens, much less 20M.
Even for regular old 4o: you're comparing to their API rates here, which may or may not cover their compute cost.
Voice mode is around $0.25 per minute via the API. I don't use it that much, but 3 minutes a day would already exceed the cost of a ChatGPT Plus subscription by quite a bit (3 min × 30 days × $0.25 ≈ $22.50/month).
I’m not sure I understand this, sorry. I see GPT-4o at $3.75 per million input tokens and $10 per million output tokens, on OpenAI’s pricing page.
That's expensive, and I can't see how they can run Copilot on the standard API pricing. But it makes a message (one interaction?) cost less than $4 to me.
How many tokens are in a typical message for you?
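For a rough sense of scale, here's the per-message arithmetic at the rates quoted above; the token counts are illustrative guesses, since that's exactly the open question:

```python
# Per-message API cost for GPT-4o at the rates quoted upthread:
# $3.75 per million input tokens, $10 per million output tokens.
in_price = 3.75 / 1_000_000
out_price = 10 / 1_000_000

# (input tokens, output tokens) per message -- assumed, not measured
scenarios = [(500, 500), (2_000, 1_000), (20_000, 2_000)]
for in_tok, out_tok in scenarios:
    cost = in_tok * in_price + out_tok * out_price
    print(f"{in_tok:>6} in / {out_tok:>5} out -> ${cost:.4f} per message")

#    500 in /   500 out -> $0.0069
#   2000 in /  1000 out -> $0.0175
#  20000 in /  2000 out -> $0.0950
```

Even with a long context, a single interaction lands at cents, not dollars, which supports the point that a message costs far less than $4.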
Now, the question is: would you trust it? As a human, a manager, a president? With the current generation, I treat it as a dumb but quick typist: it can create code much faster than I can, but the responsibility to verify it all is entirely on me. We would need decades of proof that such an AGI is reliable before we actually start trusting it, and even then, I'm not sure how safe it would be.
You have no option but to trust an ASI as it is all-powerful by definition. If you don't trust ASI, your only option is to prevent it from existing to begin with.
Edit: please note that AGI ≠ "human intelligence," just a general intelligence (that may exceed humans in some areas and fall behind in others.)
By this definition a calculator would be an AGI. (Behold -- a man!)
But if I can't convince you, maybe Norvig can: https://www.noemamag.com/artificial-general-intelligence-is-...
Maybe.
> Could it accomplish even the most basic tasks?
Definitely: https://youtu.be/Sq1QZB5baNw
I don't understand this sentence. I don't trust Generative AI because it often spits out false, inaccurate or made up answers, but I don't believe my "only option is to prevent it from existing".
Because if you don't trust it, you're fucked:
- Your attempts to limit its influence on your life will be effective only if the ASI decides to willfully ignore or not notice you.
- Whether it lies or not, there's nothing you can really do. The outcome it wants is practically guaranteed regardless. It will be running circles around your mental capacity like a human drawing circles around an ant on a piece of paper.
So what option do you really have but to trust it? I mean, sure, you can not trust the all-powerful god, but your lack of trust will never really have any effect on the real world, or even your life, as the ASI will always get what it wants.
Really, your only option is to prevent it from happening to begin with.
All that said - I think ASI will be great and people's concerns are overblown.
That's a huge assumption.
AI in SF: all-powerful all-controlling abstract entity.
AI in reality: LLMs on thousands of servers with GPUs, operated and controlled by engineers, gated by a startup masquerading as a non-profit and operating for years at a loss, with a web interface and an API, fragile enough that it becomes unavailable at times.
Making the central planner an AGI would just make it worse, because there's no guarantee that just because it's (super)intelligent it can empathize with its wards and optimize for their needs and desires.
Most humans can not lie all the time. Their true intentions do come out from time to time.
AGI might not have that problem - AGI might hide its true intentions for hundreds of years.
It is an argument about signal bandwidth, compression, and noise.
Assuming money even makes sense in a world with AGI, that is.
That's what the competition with OpenAI looks like to me. There are at least three other American companies with near-peer models plus strong open-weights models coming from multiple countries. No single institution or country is going to end up with a ruling-the-Earth lead in AI.
Fanciful, yes, but that is the AI fantasy.
With AI, I think there are extremely strong power laws that benefit the top-performing models. The best model can attract the most users, which then attracts the most capital, the most data, and the best researchers to make an even better model.
So while there is no hard moat, one only needs to hold the pole position until the competition runs out of money.
Also, even if no single AI company will rule the earth, if AI turns out to be useful, the AI companies might get a chunk of the profits from the additional usefulness. If the usefulness is sufficiently large, the chunk doesn't have to be a large percentage to be large in absolute terms.
As for a post-money world: if AGI can do every economically viable thing better than any human, the rational economic agent will at the very least let all humans go from all jobs.
In times gone by, this would be a public company already. It's just an investment in a company with almost 2,000 employees, revenue, products, a brand, etc. It's not an early-stage VC investment; they aren't looking for a 10x.
The legal and compliance regime + depth of private capital + fear of reported vol in the US has made private investing the new public investing.
If they can IPO, they will easily hit a $1.5T valuation. All Altman would have to do is follow what Elon did with Tesla. Lots of massive promises marinated in trending hype that tickles the hearts of dumb money. No need to deliver, just keep promising. He is already doing it.
And that's going by Nissan's claimed range, not even real world. So that's on a 100% charge, when the car is brand new with no battery degradation, and under the ideal efficiency conditions that you never really get.
Tesla has really dropped off on its 50% CAGR number so now it is worth half that.
As Isaac Newton once said, "I can calculate the motion of heavenly bodies, but not the madness of people."[a]
---
[a] https://www.goodreads.com/quotes/74548-i-can-calculate-the-m...
I thought Satya said Microsoft had access to everything during the Altman debacle.
Your general sense that the later stage higher dollar figure raises look for a lower multiple than the earlier ones is correct, but they’d consider 2x a dud.
If they fall short of AGI there are still many ways a more limited but still quite useful AI might make them worth far more than Meta.
I don’t know how to handicap the odds of them doing either of these at all, but they would seem to have the best chance at it of anyone right now.
A lot of people said Microsoft’s Windows moat in desktop operating systems was gone when you could do most of the things that a program did inside a browser instead, but it’s been decades now and they still have a 70% market share.
If you establish a lead in a product, it’s usually not that hard to find a moat.
Windows' moat is enterprise integration and the sheer amount of software targeting it (despite appearances, the whole world doesn't run on the web), including hardware drivers (which, among other things, make it the gaming platform that it is).
OpenAI could build a moat on integrations, as I mentioned.
OpenAI could build a moat in a lot of different ways including ones that haven’t been thought of yet.
They’ll find several I am sure.
Anthropic because their investment in tools for understanding/debugging AI.
Meta because free/open source.
OpenAI's valuation is reliant, IMO, on 1) AGI being possible through NNs, 2) them developing AGI first, and 3) it being somewhat hard to replicate. Personally I'd probably put 10%, 40%, and 10% on those, but I'm sure others would have very different opinions or even disagree with my whole premise.
Alternatively, what are you imagining this “AGI” you speak of to be?
ChatGPT is not autonomous or capable of doubling global GDP.
The founders of OpenAI were drawn from an intellectual movement that made very specific, falsifiable predictions about the pipeline from AGI (original definition) to superintelligence, predictions which have since been entirely falsified. OpenAI talks about AGI as if it were ASI, because in their minds AGI inevitably leads to ASI in very short order (weeks or months was the standard assumption). That has proven not to be the case.
General: able to solve problem instances drawn from arbitrary domains.
Intelligence: definitions vary, but the application of existing knowledge to the solution of posed problems works here.
Artificial. General. Intelligence. AGI.
In contrast to narrow intelligence, like AlphaGo or Deep Blue or air-traffic-control expert systems, ChatGPT is a general intelligence. It is an AGI.
What you are talking about is, I assume, superintelligence (ASI). Bostrom is careful to distinguish these in his writing. Bostrom, Yudkowsky, et al. make some implicit assumptions that led them to believe any AGI would very quickly lead to ASI. This is why, for example, Yudkowsky had a very public meltdown two years ago, declaring the sky is falling:
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-annou...
(Ignore the date. This was released on April 1st to give plausible deniability. It has since become clear this really represents his view.)
The sky is not falling. ChatGPT is artificial general intelligence, but it is not superintelligence. The theoretical model used by Bostrom et al to model AGI behavior does not match reality.
Your assumptions about AGI and superintelligence are almost certainly downstream from Bostrom and Yudkowsky. The model upon which those predictions were made has been falsified. I would recommend reconsidering your views and adjusting your expectations accordingly.
The issue with the “once OpenAI achieves AGI [sic], everything changes” narrative is that it is based off models with infinite integrals in them. If you assume infinite compute capability, anything becomes easy. In reality as we’ve seen, applying GPT-like intelligence to achieve superhuman capabilities, where it is possible at all, is actually quite difficult, field-specific, and time intensive.
Another issue here is that at this valuation they are now required to become a for-profit company and a direct competitor to their largest partners. It will be interesting to see how the competitive landscape changes.
I'd say that's very high risk.
OpenAI on the other hand has to spend billions to train every new iteration of their model and still loses money on every query you make. They can't scale their way out of the problem – scaling will only make it worse. They are counting on (1) the price of GPUs to come down in the near term or (2) the development of AGI, and neither of these may realistically happen.
Once they were the default option, they merely had to be the "same" as other options. If someone were to make a 1:1 clone of Google with ~5% fewer ads, I do not believe they would get a substantial share of web search traffic (see Bing).
Prior to this, they had about ~$10B (via MS), and they've been operating for about 2 years at this scale. Unless they got this money like a week away from being bankrupt, which I highly doubt.
Note: I'm not arguing they're profitable.
I think the question is: is OpenAI that company, and is market dominance possible given all the other players? I believe some investors are betting that it is OpenAI, while you and others are sceptical.
Personally I agree with you, or rather hope that it is not, primarily as I don’t trust Sam Altman and wouldn’t want him to have that power. But so far things are looking good for OpenAI.
But as far as the technology goes, we're drowning in a flood of good AI models, which are all neck and neck in benchmarks. Claude might be slightly stronger for my use, but only by a hair. Gemini might be slightly behind, but it has a natural mass-market platform with Android.
I don't see how a single player sticks their neck out without being copied within a few months. There is — still — no moat.
Talking to friends who are very successful, knowledgeable AI researchers in industry and academia, their big takeaway from o1 is that the scaling hypothesis appears to be nearing its end, and that this is probably the motivation for trading additional training-time compute for additional inference-time compute.
So where does that leave an investor's calculus? Is there evidence OpenAI can pull another rabbit or two out of its hat and also translate that into a profitable business? Seems like a shaky bet right now.
They do not actually need any further technology development to continue to add profitable products. There are numerous ways they can apply their existing models to offer services with various levels of specificity and target markets. Even better, their API offerings can be leveraged by an effectively infinite variety of other types of business, so they don't even need to do the application development themselves and can still profit from those companies using their API.
But o1 is also incredibly verbose. It'll respond with 1-2 pages of text which often contain redundant data. GPT-4o is better in its descriptions.
Google's money printing is based on people telling Google what they want in the search bar, and Google placing ads for what they want right when they ask for it. Today people type what they want into the ChatGPT search bar.
I can't really do the same for Google.
I haven’t used google in years.
Like, literally, some of their best talent could start their own company tomorrow, and all they'd need is data-center credits to catch up with OpenAI itself.
What is the company's moat, exactly? Being a few months ahead of the curve at best (and only on specific benchmarks)?
It started with "AI progress is fake." "AI will never beat human creativity." "AI can't code anything past 15k tokens." "AI will always be too expensive." "AI will never sound like a human." "AI can't even understand humor."
And now we're at "AI is only like this because they're stealing copyright." "If everyone uses AI, it will cannibalize its own outputs." "Other people can build LLMs too."
But year after year, they solve these problems. They weren't worth a hundredth of this valuation in 2021. At this rate, they'd be worth a tenth of it in 2026, and maybe the whole of it in 2030. And that's what the VCs are banking on. If they're not, well, it converts to debt.
Google (the search engine) hasn't really had a moat either. Plenty of competitors, some better ones, but they're still around. ChatGPT is a brand name.
Hell, even NVIDIA released their own LLM two weeks ago, which is competitive with the best commercial ones.
AI is a commodity. Eventually, where the money will be made (the layer interacting with the software and the user, from Excel to your OS, like Apple Intelligence), it won't matter much which model you use; the average Joe won't even notice.
Unbenchmarked things like their ability to use South Jakartan slang, write jokes, how and when they reject input, how tightly they adhere to the system prompt, how they'd rate a thing from 1-10. They function as part of a complex system and can't just be swapped out. I'm using Claude Sonnet 3.0 for a production app, and I'd need a week to swap it to 3.5 while maintaining the same quality. We've trained our own models, and it's still incredibly hard to compete with $0.075 per million tokens just on things like the cost of talent, hardware, and electricity. And that speed.
The question is why not something like Anthropic?
I'd say OpenAI has other cards up their sleeve. The hardware thing Jony Ive is working on. Sam Altman invests in fusion power and Stripe; guess who's getting a discount? There is a moat, but it lies at a higher level. Other competitors are also playing for other kinds of moats, like small offline AI.
I have also tried the different models in Cursor, and again the differences were negligible, with some projects and questions slightly favoring one or the other.
Which does nothing but confirm that none of them really has any kind of moat. Data and products are eventually what's going to make money, and the model used will likely be an implementation detail, like choosing a database or a programming language.
Related: Microsoft and NVIDIA's revenues increase by a combined $6.6B.
Look at Google, Meta, etc.
They were super stable in leadership when they took off.
Can’t say the same for OpenAI.
Also, as an AI researcher: their converting to a for-profit org after accepting donations in the name of humanity and non-profit status is honestly shameful and will not attract the most talented researchers.
Similar to what happened to Microsoft once they got labelled as “evil”.
As far as I understand it, they're actually underwater on their API and even the $20/month pricing, so we'll either see prices aggressively increase and/or additional revenue streams like ads or product placement in results.
We've witnessed that every time a company's valuation is impossibly high: they do anything they can to improve the outlook in an attempt to meet it. We're currently in the equivalent of Netflix's golden era, where the service was great and they could do no wrong.
Personally I'll happily use it as long as I can, but I know it's a matter of "when", not "if", it all starts to go downhill.
It's largely flown under the radar, but they appear to already be testing this:
Given how picky the ad industry can be about where their ads are being placed, I somehow suspect this is going to be complicated. After all, every paragraph produced is potentially plain untrue.
I've assumed that when AI becomes much more mainstream, we'll see multiple tiers of service.
The cheapest (free or cash-strapped) services will implement several (hidden/opaque) ways to reduce the cost of answering a query by limiting the depth and breadth of its analysis.
Not knowing any better, you likely won't realize that a much more complete, thoroughly considered answer was even available.
This already happens. Many of the cheap API providers aggressively quantize the weights and KV cache without making clear that they do.
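For anyone unfamiliar with what that means in practice, here's a deliberately crude sketch of weight quantization; real providers use per-channel or per-group schemes and also quantize the KV cache, but the trade is the same: less memory and cheaper inference for a little accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4096, 4096)).astype(np.float32)  # stand-in weight matrix

# Symmetric int8 quantization with a single scale for the whole matrix
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = w_q.astype(np.float32) * scale  # what the model actually computes with

print(f"memory: {w.nbytes / 2**20:.0f} MiB -> {w_q.nbytes / 2**20:.0f} MiB")
print(f"mean abs weight error: {np.abs(w - w_hat).mean():.5f}")
```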
It already happens, when your model randomly gets worse all of a sudden for the same price of service.
It's like watching Muhammad Ali or Mike Tyson boxing - they're the most agile large company we've ever seen and they're able to do it on a heavyweight platform.
What discount rate do you use on a cash burning non-profit?
They were in the round up until today.
Not at all - their dropping out leaked last week, who knows how long ago they actually backed away from the table...
https://www.wsj.com/tech/apple-no-longer-in-talks-to-join-op...
Thanks for posting
There has never been a better time to have human intelligence and apply it to fields that are moving towards making critical decisions based on latent space probability maps.
NLP and Human-Computer Interaction, however, are fields that have actually been revolutionized through this wave, so I do expect at least much better voice interfaces/suggestion engines.
Yeah, that bodes well. Led by Jared Kushner's brother's VC firm with the UAE's sovereign wealth fund and Softbank following. If not for Microsoft and NVIDIA, this would be the ultimate dumb money round.
https://www.newcomer.co/p/sequoia-founders-fund-usv-elad-gil
IMO -- this is not a serious company with serious people building an important, long-lived product. This is a group of snake-oil salesmen in the middle of the greatest grift of their careers. That, and some AI researchers who are probably enjoying limitless GPUs.
[1] https://www.timesnownews.com/technology-science/next-gen-cha...
But that’s obviously not a fair description either, because they have the world-leading product in an intensely competitive field that does stuff nobody would have thought possible five years ago.
The marketing is obviously massive hyperbole bordering the ridiculous, but the idea that they haven’t produced something deeply serious and important is also ridiculous, to me.
The only (gigantic, huge, perhaps fatal) problem they have at the moment is that their moat seems to consist only of a short head start over the competition.
https://www.cnet.com/tech/services-and-software/chatgpt-vs-g...
Gemini won this one in September. It won another one I read in March. I just use Gemini for free.
They have produced something impactful, sure, but I don't think their product is a tenth as valuable or "intelligent" as they claim it to be. There are too many domain-specific tools that are far superior to whatever pseudo-intelligence you get from ChatGPT... what proof do we have that any industry has found material value from this chatbot?
For another thing, there is not a single Google search result for the phrase "high school intelligence level task". Unless Google is malfunctioning, it seems you are just making things up?
I don't think this wording needs to be analysed to death; after all, someone will just move the goalposts (typically those who want to place human intelligence on some podium as special).
This is a task I'd entrust a high school student to do (emphasis on student). They said high school intelligence not high school education.
>>>This is a task I'd entrust a high school student to do (emphasis on student). They said high school intelligence not high school education.
I'm sure you don't actually believe all high school students are equally intelligent or on the same coursework track, or that all high schools teach the same courses with the same level of sophistication, so I'm not sure why you are defending the term "high school intelligence" or saying they even made a "claim".
At no point did I say that all students are at the same level.
I can't quite see what issue you have with all of this, other than that you don't like it because you don't like it.
The issue isn't "official terms" the issue is nonsensical framing which is "not even wrong".
>>>At no point did I say that all students are at the same level.
Yet you used the term "high school intelligence level task".
1) "TERM X" is not real but I can see where you are coming from since other people believe in "TERM X".
and
2) You literally just made up "TERM X" on the spot to argue some other thing was true.
Later they replaced it with https://www.youtube.com/watch?v=vgYi3Wr7v_g
The "no revenue" scene... All OpenAI needs to do is start making some revenue to offset the costs!
I could be _very_ wrong though.
Also, if "significantly ahead" just means "a few months ahead" that does not justify the valuation.
Perhaps, but at its most generous, it's three months ahead of competitors, I imagine.
The race is, can OpenAI innovate on product fast enough to get folks to switch their muscle memory workflows to something new?
It doesn't matter how good the model is, if folks aren't habituated to using it.
At the moment, my muscle memory is to go to Claude, since it seems to do better at answering engineering questions.
The competition really is between FAANG and OpenAI: can OpenAI accumulate users faster than Apple, Google, Meta, etc. can layer AI-based features onto their existing distribution surfaces?
If somebody puts a cheaper and better version, then no moat.
They've built a great product and the price is good, but it's entirely unclear to me that they'll continue to offer special sauce here compared to the competition.
Edit: You can downvote me all you want, I have plenty of karma to spare. This is OpenAI's strongest moat, whether people like it or not.
Edit: nvm, I already know.
Those moats are pretty weak. People use Apple Idioticnaming or MS Copilot or Google whatever, which transparently use some interchangeable model in the background. Compared to ChatGPT these might not be as smart, but they have much easier access to OS-level context.
In other words: good luck defending this moat against OS manufacturers with dominant market shares.
Name any other AI company with better brand awareness and that argument could make a little bit of sense.
Armchair analysts have been saying that since ChatGPT came out.
"Anyone could steal the market, anytime" and there's a trillion USD at play, yet no one has, why? Because that's a delusion.
Anecdotally, I used to pay for ChatGPT. Now I run a nice local UI with Llama 3. They lost revenue from me.
I just gave you three of them. Right now a large share of chatgpt customers come from the integration provided by those three.
> "Anyone could steal the market, anytime" and there's a trillion USD at play, yet no one has, why? Because that's a delusion.
Bullshit. It is not about "stealing" but about carving out a significant niche. And that has happened: Apple Intelligence runs in large part on-device using not-ChatGPT, Google's Circle to Search, summaries, etc. use not-ChatGPT, and Copilot uses not-ChatGPT.
The danger to a moat is erosion not invasion.
So it's probably on you to explain it, since you came up with that claim.
But the revenue has flatlined, and you can't raise your existing users' costs by 20x...
It truly is a mystery how anybody throwing other people's money at OpenAI hopes to get it back.
[1] https://www.nbcnews.com/business/business-news/openai-closes...
$300M/mo × 12 mo × x − costs = $7.85B
or
$3.6B × x = $7.85B + costs
I hold costs constant at $8B and get x = 4.4. $8B is probably a slight overestimate of current costs; I just took the losses from the article and discounted last year's revenue to $3B. Users consume inference, which costs money, so in reality costs will scale up with revenue, which is why I note this is a false assumption. But I also don't know how much of that went into training and whether they'll keep training at the current rate, so I can't get to a better guess.
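The same algebra in runnable form, with costs held at the $8B figure flagged above as a simplification:

```python
# Required revenue growth multiple, holding costs constant (a false
# assumption, as noted above, since inference costs scale with usage).
monthly_revenue = 0.3   # $B, the ~$300M/month figure
target = 7.85           # $B, the net figure used upthread
costs = 8.0             # $B, assumed constant

annual_revenue = monthly_revenue * 12        # $3.6B run rate
x = (target + costs) / annual_revenue        # solve: 3.6x - costs = target
print(f"revenue must grow ~{x:.1f}x")        # ~4.4x
```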
Furthermore, training also gets exponentially more expensive as models keep growing and this R&D is not optional. It's absolutely necessary to keep current OpenAI subscribers happy.
OpenAI will lose money, and lots of it, for years to come. They have no clear path to profitability. The money they just raised will last maybe 18 months, and then what? Are they going to raise another 20bn at a 500bn valuation in 2026? Is their strategy AGI or bust?
Where do you get $400M and flatline?
https://tvtropes.org/pmwiki/pmwiki.php/Main/IAmNotLeftHanded
Why not? They're already shopping a $2k/mo subscription option.
That’s someone’s rent.
Spammers.