• paxys · 6 months ago
$6.6B raise. The company loses $5B per year. So all this money literally gives them just an extra ~year and change of runway. I know the AI hype is sky high at the moment (hence the crazy valuation), but if they don't make the numbers make sense soon then I don't see things ending well for OpenAI.
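The runway claim is just the raise divided by the burn rate; a quick sanity check using only the figures quoted above:

```python
# Rough runway estimate (illustrative, using the thread's figures, not audited numbers)
raise_usd = 6.6e9        # reported size of the round
burn_per_year = 5.0e9    # reported annual loss

runway_years = raise_usd / burn_per_year
print(round(runway_years, 2))  # -> 1.32, i.e. "a year and change"
```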

Another interesting part:

> Under the terms of the new investment round, OpenAI has two years to transform into a for-profit business or its funding will convert into debt, according to documents reviewed by The Times.

Considering there are already lawsuits ongoing about their non-profit structure, that clause with that timeline seems a bit risky.

> if they don't make the numbers make sense soon then I don't see things ending well for OpenAI.

This is pretty much obvious just from the valuations.

The wild bull case where they invent a revolutionary superintelligence would clearly value them in the trillions, so the fact that they're presently valued an order of magnitude less implies that it is viewed as an unlikely scenario (and reasonably so, in my opinion).

You don't need science fiction to find the bull case for OpenAI. You just have to think it stands to be the "next" Google, which feels increasingly plausible. Google's current market capitalization is in the trillions.
• paxys · 6 months ago
Google is a digital advertising company. OpenAI hasn't even entered the ads business. In the absolute best case they can take over a large chunk of Google's search market share, sure, but that still doesn't make it anything similar to Google in terms of finances. How do they start making the queries profitable? What do they do when their competitors (Claude, Gemini, Llama, Mistral, Grok and several others) undercut them on price?
Google didn’t start as an ads company. It started as a blank text box that gave you a bunch of good answers from the internet in a list of links.

Were there competitors that did the same thing? AltaVista? Yahoo? Did they undercut on cost? Google was free, I guess. But Google won because it maintained its quality, kept its interface clean and simple, and kept all the eyeballs as a result. Now Google is essentially the entry point to the internet, baked into every major browser except Edge.

Could ChatGPT become the “go-to” first stop on the internet? I think there’s a fair chance. The revenue will find its way to the eyeballs from there.

Well, when you describe it that way, OpenAI also started as a blank text box that gave you a bunch of good answers, and they've already expanded into other services.

I already use ChatGPT as my first go-to stop for certain search queries.

I guess the difference (at least at comparable development stages) is that a single user query cost Google almost nothing, compared with how much money running ChatGPT costs.

I wouldn’t be surprised if OpenAI were still losing money even with the same CPM that Google search has.

• tgma · 6 months ago
I am not bearish on OpenAI, but the analogy is flawed in that Google probably raised, I don't know, less than 50 million, before it was actually profitable.
It quickly moved into ads though. Incorporated late 1998 and started selling ads in 2000.

Surely normal people would need to start using a ChatGPT-owned interface for search to make an ad-based business viable? And there's no real sign of that even beginning to happen.

This subthread is full of people explaining why they don't believe OpenAI could successfully match Google's financial performance. Sure. I'm not investing either. My point isn't that they're going to be successful, it's that there are plausible stories for their success that don't involve science fiction.
People don’t seem to understand that investment portfolios are personal, that just because an investment doesn’t make sense for their portfolio doesn’t mean that it doesn’t make sense for anyone’s. Allocating a tiny fraction of a portfolio to high risk/high reward investments is a sound practice. When those portfolios are, say, large pension funds, the total sum to invest can be hundreds of millions.

Dissenters should consider that there might be short plays; if what they think is true, they could make some money.

Google also offered a free product. OpenAI isn’t offering a free product but a subscription product plus a metered API product, among others. Their economics are structurally better than Google's, assuming they can keep growing their captured market share. Their outrageous costs are also opportunities to optimize, including massive amounts of R&D, etc. They don’t need to be profitable now; in fact, as Bezos demonstrated with Amazon for many years, profit is an indication you’ve run out of uses for capital to grow.
> OpenAI isn’t offering a free product

I encourage you to visit https://chatgpt.com in incognito mode.

That’s demoware.
That's like saying YouTube isn't free because there's a subscription...
It’s like saying that just because there’s a free offering, the product isn’t free. Anyone who has ever built a shareware or demoware product understands that the free offering is a funnel to the paid offering; it doesn’t exist as an independent product but as an advertisement for the paid one.
Yes, but still a free product. Unless you say that in the future the product won't have a free offering anymore, it's a free product.
That’s literally not how people in the business of creating product strategies classify things.
Yes, it is? Usually, products with free offering + subscriptions get classified as "freemium".
No it isn’t. Free tier of YouTube make money from ads. Free tier of ChatGPT is just a funnel for the paid tier. It’s a marketing cost.
Don't you think they will put ads on ChatGPT?

There are several ways to monetize a product (they already make some use of it).

> Their economics are structurally better than googles assuming

Are they? I would guess that the cost per query for Google, even back then, was insignificant compared to how much OpenAI is spending on GPU compute for every prompt. Are they even breaking even on the $20 subscriptions?

During their growth phase Google could make nothing from most of their users and still have very high gross margins.

OpenAI not only has to attract enough new users but to also ensure that they are bringing more revenue than they cost. Which isn’t really a problem Google or FB ever faced.

Of course, presumably more optimized models and faster hardware might solve that long term. However, consumers' expectations will likely keep increasing as well, and OpenAI has a bunch of competitors willing to undercut them (e.g. they have to keep continuously spending enough money to stay ahead of “open/free” models, and then there is Google, which would probably prefer to cannibalize its own search business than let someone else do it).

> in fact as bezos demonstrated with Amazon for many years, profit is an indication you’ve run out of uses for capital to grow.

Was Amazon primarily funding that growth using their own revenue or cash from external investors? Because that makes a massive difference which makes both cases hardly comparable (Uber might be a better example).

Ads are the most likely monetization path for OpenAI. They want to capture as many users as possible right now, and they can pull the trigger on ads whenever they want to start juicing users further. As long as the funding flows, they can delay the ads. Google and Facebook were ad-free for years, only switching to ads for monetization after building up a critical mass of users.
The cost per user for Google and FB was/is almost insignificant (relative to LLMs). So all the ad revenue was almost free cash.

It’s not even clear if OpenAI is breaking even with the $20 subscription just on GPU/compute costs alone (the newer models seem to be a lot faster so maybe they are). So incrementally growing their revenue might be very painful if they keep making the UX worse with extra ads while still simultaneously losing money on every user.

Presumably the idea is that costs will go down as hardware becomes faster and the models themselves more optimized/efficient. But LLMs already seem to be almost a commodity, so it might become tricky for OpenAI to compete with a bunch of random services using open models that offer the same thing (while spending a fraction on R&D).

They’re already monetized in lots of ways. I doubt ads make much sense or are necessary.
• fumar · 6 months ago
I would say they are likely already working on a potential ads experience.
With slipping consumer standards around separating ads from real content, OpenAI is in a position to advertise far more insidiously than Google.
Google started as a useful search service then corrupted itself with ads. This is the same thing that Facebook and Reddit did. It’s not hard to imagine an LLM that provides “sponsored” responses.

So it’s a long term bet but the idea that Google would lose to an LLM isn’t far fetched to me.

The unit economics appear to be substantially different.
• · 6 months ago
Also, Google's ads are not just in its own products; they are absolutely everywhere, a step beyond the other big players. And I don't think OpenAI can beat that moat. It's an entirely different game, and very hard to enter, or someone else would have done it already.
But don't they need a moat? They're running against not only every major tech company with access to training data but also all the open-source models.

The models will have diminishing returns and other players seem better suited to providing value added features.

You don't need a moat during a gold rush. You need scale: the largest number of the biggest shovels, so the stuff to be shoveled gets shoveled the fastest. There is so much money right now to be sucked up from the world. We're talking valuations in AI at $100M+ per employee.

https://finance.yahoo.com/news/uae-backs-sam-altman-idea-095...

>The models will have diminishing returns

Wasn't that the going thinking before ChatGPT? And before AlexNet. Of course, we'll again be having some diminishing returns until the next leap.

> the stuff to be shoveled

They are spending a lot on shovels but it’s not clear that there is that much “stuff” (consumer demand) to be shoveled.

VC money can only take you so far, you still need to have an actual way of making money.

LLMs might effectively replace Google but they are already a commodity. It’s really not clear what moat OpenAI can build when there are already a bunch of proprietary/open models that are more or less on the same level.

That basically means that they can’t charge much above datacenter cost plus a small premium long term, and won’t be able to achieve margins that are high enough to justify the current valuation.

The moat is a first-party integration with Windows, a third-party integration with iOS, and first-mover advantage. The discount rate still isn't very high; ~5% is the risk-free rate. 157 billion is a reasonable valuation.
Integrations are not OAI's moat, as those are primarily UX developments that are kept by Apple/MSFT. Right now, if they want, those companies can change a few lines of code to get up and running with another provider, like Anthropic or whatever.

The only moat OAI has right now is advanced audio mode / the real-time audio API, plus arguably o1 and the new eval tools shown the other day, as those are essentially vertical integrations.

And maybe, like you said, first-mover advantage. But it's not that clear, as even Anthropic got ahead in the race for a while with Claude 3.5 Sonnet.

> Google's current market capitalization is in the trillions.

2 trillion. Approximately 13x OpenAI's current valuation. Google nets almost 100 billion a year. OpenAI grosses 4 billion a year.

Wild numbers.

A private pre-IPO investment is a bet on where OpenAI will be 10 years from now, not where they are now.
Yes, but they generally shoot for 5-10x. So they are betting on OpenAI growing gross revenue like 20x assuming their costs stay the same.
The later the investment stage, the lower the expected multiple, but also I don't know that 20x is a crazy expectation to have about OpenAI? It might not happen, but people really fixated on that "might", which is not what venture investing is about.
20x isn't crazy, but its what they need just to reach parity P/E with Google (assuming their costs remain flat). In order for the investment to make sense they have to grow much much more than that to account for the extra risk, otherwise you're better off buying Google stock.

If we throw out some conservative numbers and assume costs will rise only modestly, you have to believe OpenAI's earnings will grow 50-100x for the investment to make sense. They'd have to maintain their current growth rate for 5+ years, but I wouldn't be surprised if their revenue growth is already slowing.

As someone who uses OpenAI's tools every day and generally finds them to be genius morons, I disagree that it feels increasingly plausible they stand to be the next Google.
You're kind of selling investing in Google instead, given that they're one of OpenAI's competitors.
If you think Google wins and crushes them, sure.
In 2023 Google had $307.39B in revenue, and it made $24B in profit last quarter (suggesting ~$100B in profit this year). Meanwhile, OpenAI is losing money and making nowhere near these sums.
In fairness, whilst Google did reach profitability early (given the VCs had got their fingers burned on internet companies in 1999, they didn't have much choice), its revenues were lower than OpenAI's at the IPO stage. The IPO was both well below Google's original hopes and considered frothy by others in the Valley, because at the time the "impressive and widely-used tech, limited as a business" argument seemed to apply squarely to the company that did search really well and had just settled the litigation over cloning Yahoo's Overture advertising idea. And their moat didn't look any better than OpenAI's does.

And much as AI hype irritates me, the idea that the most popular LLM platform becomes a ubiquitous consumer technology heavily monetised by context-sensitive ads, or a B2B service integrated into everything, doesn't seem nearly as fanciful as some of the other "next Googles". At the very least they seem to have enough demand for their API and SaaS services to be able to stop losing money as soon as VCs stop queuing up to give them more.

Facebook got all the way to an IPO with business fundamentals so bad that after the IPO, Paul Graham wrote a letter to all the then-current YC companies warning them that the Facebook stink was going to foul the whole VC market in the following years. Meta is now worth something like 1.4T.
Facebook grew 100% and had 45% GAAP operating margins the year before their IPO.

Facebook's IPO financials were among the best at IPO ever.

OpenAI has negative 130% adjusted operating margins.

I don't think OpenAI is about to IPO.
FB financials were incredibly good at their IPO.

Revenue: $3,711M (88% YoY growth)

Net income: $1,000M

Cash: $3,908M

Tell me how those are bad?

Facebook made it out by committing click fraud against advertisers on a massive scale, which I don't see as a viable path for sama (even ignoring any legal concerns), considering that OpenAI isn't a platform company.
Look, I don't care. At some point we're just arguing about the validity of big tech investing. I don't invest money in tech companies. I don't have a strong opinion about Facebook, or, for that matter, about OpenAI. I'm just saying, you don't need a sci-fi story about AGI to see why people would plow this much money into them. That's all.
• · 6 months ago
Google has also magnificently shit the bed with Gemini; their ads business is getting raked over the coals in court; they are a twice convicted monopolist; and are driving away top talent in droves.

It reminds me of the old joke:

Heard about the guy who fell off a skyscraper? On his way down past each floor, he kept saying to reassure himself: So far so good... so far so good... so far so good.

People have been talking about the downfall of Google and FB/Meta for years now and yet every single year both of them still grow, still print money, and run the most used products in the world by far.

Google's generative ai models probably are used more in a day than the rest combined. Google is a highly profitable business that still has never not grown YoY in its nearly 3 decade history.

In your mind you might think Google is going down, but in reality they have only been going up for nearly 3 decades now.

> their ads business is getting raked over the coals in court

Only the display-ads business, which is a fraction of total ads revenue.

It’s all wired together.
Display ads are probably wired into Google infra, but Google Search + YouTube + ads can exist on their own.
[flagged]
Product adoption counts, imaginary benchmarks don’t.
Tell me how Search AI Overviews aren't using Gemini?
I used the newly released Gemini live mode. I thought they were positioning it against ChatGPT's live mode. But the voice is hilariously unnatural. It makes you not want to talk to it at all. If this is the best Google can do, they are years behind the competition.
I just tried it after your message. It has like 5-10 voices. Many of them very reasonable, some of them quite good on my pixel phone.
I don't even have any options to choose a different voice. It's just a very mechanical voice. I feel the default google assistant voice was way better than this one.
Uber has not had a profitable year from its core business during its lifetime.
> which feels increasingly plausible.

Not really; ChatGPT may have the brand name, but there are other offerings that are just as good and which can be incorporated into existing apps that have a captive userbase (Apple, Google, Meta). Why should I switch to another app when I can do genAI within the apps I'm already using?

> You just have to think it stands to be the "next" Google, which feels increasingly plausible. Google's current market capitalization is in the trillions.

For this, in addition to the "Google" part, they also need to build the "ads" part for monetization, which is also not a trivial task.

Besides name recognition, what’s special about OpenAI at this point?

I was prototyping some ideas with ChatGPT that I wanted to integrate into an MVP. It basically involved converting a user's query into a well-categorized JSON object for processing downstream. I took the same prompt verbatim and used it with an Amazon Bedrock-hosted Anthropic model.

It worked just as well. I’m sure there will be plenty of “good enough” models that are just as good as OpenAI’s.

> Besides name recognition, what’s special about OpenAI at this point?

nothing

Do you want to know how much traffic OpenAI has pulled off Google in the past two years? Because it's not pretty lol. It's definitely single percentage points if not less than a percent (can't remember the exact numbers). They're a rounding error compared to Google.
Is there a source for this data?

I personally always use ChatGPT over Google.

https://datos.live/predicted-25-drop-in-search-volume-remain...

Love the use of the personal anecdote to refute my point BTW.

It's also plausible they are the next AltaVista.
• · 6 months ago
That assumes that the revolutionary superintelligence is willing to give away its economic value by the trillions. (Revolutionary superintelligences are known to be supergenerous too)
I assume it needs super amounts of hardware and super amounts of energy. No one gets a free ride not even superintelligences or people working for health insurance.
I would put odds on whatever it is generating the revenue having the leverage at the table. 'We pay your salary which allows you to eat' is a poor argument when the opposing one is 'without me your company would be losing money'.
That's true. Thankfully, superintelligences are also poor at negotiating and projecting their own income, so you can still make a handsome profit off of them.
But even if they invent revolutionary superintelligence, big if, what's stopping other companies from following suit? Talent moves fast between these companies, and some start their own.

Hell, even open-source models are nowadays better than the best models this billions-burning company had just 6 months ago.

I'm at a loss as to what the moat is here, because I really don't see it, and I don't believe any of these companies has any advantage at all.

It actually represents the scenario where they invent a revolutionary superintelligence that doesn't kill the VCs investing in the firm, and allows them enough control to take profit. In the top range ASI capacity outcomes, the sand god does not return trillions to the VCs.

This actually represents only the narrow "aligned" range of AI outcomes, so it makes sense it's a small one.

Judging by the ones I have met, the VCs probably believe that any kind of superintelligence would by definition be something that would like them and be like them. If it wasn’t on their side they would take it as incontrovertible proof that it wasn’t a superintelligence.
Thanks, made me giggle because it rings true! But what if... its training actually caused it to acquire similar motivations?

(Don't want to think about that too much but man just imagine... A superintelligence with Sam Altman mindset)

I am not sure who you have met, but I have mostly talked to VCs with the same range of optimism and concerns regarding AI as normal technologists.
OpenAI's finances are a bit tricky, since most of their expenses are cloud costs, while their biggest investor/shareholder, Microsoft, invested mostly in the form of Azure credits. So although their finances seem unsustainable, I think the investors are banking on Microsoft buying them out if things go bad, making even a small ROI, like what they did with Inflection.
I think OpenAI will drop their ambition for AGI and focus on product. They'll never state this, of course, but it's clearly telegraphed in this for-profit move.

Research and safety have to take a backseat to cost reduction. I mean, there are many avenues to profitability for them. One I can think of: they could cut their costs significantly by creating smaller and smaller models that match or nearly match GPT-4, while paid subscribers wouldn't really be able to tell the difference. No one is really challenging them on their benchmark claims.

I think their main challenge is that 5-10 years from now, if their current definition of AGI is still elusive, models of GPT-4 capability or similar (Llama 3 can fool most people, I think) will be running locally and freely on pretty much any OS of choice without a single outside API call. Every app will have access to local inference that costs neither developers nor users anything to use. Especially after the novelty has worn off a bit, it's hard to see consumers or developers paying up for something that's technically better but not significantly enough to justify a $20+/mo subscription or per-token cost. Right now, though, local inference has a huge barrier to entry, especially across platforms.

Honestly, I think Google and Apple can afford to spend the cash to develop these models in perpetuity, while OpenAI needs to worry about massive revenue growth for the next few years, and they probably don't have the personnel to grow revenue aggressively either. It's a research lab. The downside of revenue seeking, too, is that sometimes the pursuit kills the product.

> 5-10 years from now

> models of Gpt4 capabilities or similar

It took Apple how many years to change their base config memory from 8 GB to 16? Somewhere between 8 and 10...

Regardless, I’m not sure running reasonably advanced models locally will become that common anytime soon on mainstream devices. $20 per month isn’t that much compared to the much higher hardware costs; of course, it’s not obvious that OpenAI et al. can make any money long term by charging that.

Check out this more in-depth financial analysis: https://www.wheresyoured.at/oai-business/

> OpenAI's monthly revenue hit $300 million in August, and the company expects to make $3.7 billion in revenue this year (the company will, as mentioned, lose $5 billion anyway), yet the company says that it expects to make $11.6 billion in 2025 and $100 billion by 2029

> For OpenAI to hit $11.6 billion of revenue by the end of 2025, it will have to more than triple its revenue. At the current cost of revenue, it will cost OpenAI more than $27 billion to hit that revenue target. Even if it somehow halves its costs, OpenAI will still lose $2 billion.
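A quick sketch of the arithmetic behind those quoted figures (all inputs are the analysis's estimates, not audited numbers):

```python
# Back-of-envelope check on the linked analysis's 2025 projection
revenue_2024 = 3.7e9                     # expected 2024 revenue
loss_2024 = 5.0e9                        # expected 2024 loss
spend_2024 = revenue_2024 + loss_2024    # ~$8.7B total spend

cost_ratio = spend_2024 / revenue_2024   # ~$2.35 spent per $1 of revenue

revenue_2025 = 11.6e9                    # 2025 revenue target
cost_2025 = revenue_2025 * cost_ratio    # cost at the same cost of revenue
loss_if_halved = cost_2025 / 2 - revenue_2025  # loss even if costs halve

print(round(cost_2025 / 1e9, 1))       # -> 27.3, matching the "$27 billion"
print(round(loss_if_halved / 1e9, 1))  # -> 2.0, matching the "$2 billion" loss
```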

Same thing was said of Netflix, of Uber, etc...

Venture capital loses money to win marketshare, tale as old as time

Were Microsoft, Google, Amazon, Tesla/Twitter (X), and a whole bunch of other gigantic corporations (Chinese ones as well) trying to compete for the same market back then?

I don't think Netflix and Uber had even a fraction of the competition that this field will have.

• rtsil · 6 months ago
Netflix had all the paid TV channels worldwide, physical media sales and video-clubs as competitors. Uber had (still has) taxis. All of these were entrenched competitors with strong moats.

It's easy with hindsight to underestimate the forces of Netflix's competitors, but consider that even today, the revenue of pay TV worldwide is still bigger than the revenue of streaming platforms. And unlike streaming platforms, pay TV makes profits hand over fist and didn't incur crazy debts to gain market share. They may be on the way to extinction, but they'll make tons of profits on their way out.

Netflix and Uber both created new product categories in their respective markets. And both were quite established before there was any real competition in those categories.

Netflix discovered that content was the only moat available to them, after which it was effectively inevitable that they become a major production studio. I think Uber is still trying to figure out its moat, but ironically regulation is probably part of it.

OpenAI also created a new product category, but the competition was very quick to move into that category and has very deep pockets. At some point, they clearly felt regulation might be a moat but it’s hard to see them landing that and winning.

• wslh · 6 months ago
I think the simplified question is whether OpenAI is a natural monopoly like Google was/is, or whether that market has a different structure, like an oligopoly (e.g. the mobile phone market), a "perfect" market, etc. On the technological side it seems obvious (does it?) that we can run good models ourselves or in the cloud. What is less clear is whether there will be a few brands that offer you a great UX/UI, with the last mile winning.
> I don't think Netflix and Uber had even a fraction of the competition that this field will have.

Uber had significantly more competition than all of these companies combined, including from China, which it was forced out of.

They totally did, in their own ways: Blockbuster, YouTube, taxis, Google's self-driving effort.
Neither had Zuckerberg giving free rides and movies to people.
Most of the loss comes from the hefty cost of inference, right? OAI runs on Azure, so everything is expensive at their scale: the data, the compute, the GPU instances, the storage, etc.

I'd venture to guess that they will start building their own data centers with their own inference infra to cut the cost by potentially 75% -- i.e., the gross markup of a public cloud service. Given their cost structure, building their own infra seems cheap.

Given their scale and direct investment from Microsoft, what makes you think this is any cheaper? They’ll be getting stuff at or near cost from Azure, and Azure already exists, with huge scale to amortize all of the investment across more than just OpenAI, including volume discounts (and preference) from Nvidia.
I assumed that Microsoft gave them discounts and credits as investment, but the cost itself is what any other big customer can get.
Of course the cost isn’t near zero, but is the pricing “at Microsoft’s cost”? If it is, then them building a datacenter wouldn’t save anything, plus would have enormous expenses related to, well, building, maintaining, and continuously upgrading data centers.
Good point. I updated the assumption accordingly. I assume the cost of using Azure cloud after discount but before the credit will be on par with Azure's big customers. And given the scale of OAI, I suspect that the only way to be profitable is to have their own infra.
Wouldn't it be foolish for microsoft to give them a discount?

Like, you could discount, say, a $10B investment down to $6B or whatever the actual costs are, but now your share of OpenAI is based on $6B. It seems better to me to give OpenAI a "$10B" investment that internally only costs $6B, so your share of OpenAI is based on $10B.

Plus OpenAI probably likes the 10B investment more as it raises their market cap.

The cost of vertically integrated inference isn’t zero either.
• paxys · 6 months ago
As part of Microsoft's last investment in 2023 OpenAI agreed to exclusively use Azure for all their computing needs. I'm not sure what the time limit on that agreement is, but at least for now OpenAI cannot build their own datacenters even if they wanted to.
Microsoft restricted their computing needs provider, but they could have allowed them to build their own datacenters.
If they only have enough for a year or two of runway at current costs how the heck would they have enough capital to build an AI data center?
I cannot believe they don't already get steep discounts via custom contracts with providers.

Someone has to pay for the hardware and electricity.

Building datacenters will take a significant amount of time; if they don’t have locations secured, then even more so.
Are there enough GPUs available at the scale they need?
All they have to do in order to make the bet pay off is create a heretofore only imagined technology that will possibly lead us into either a techno-utopia or post-apocalyptic hellscape.

I don’t see why that’s so far-fetched?

My hope is that they verge on failure in 2 years, then turn into a non-profit. This in turn gets subsidized by DARPA, who by then realize they're WAY behind the curve on AI and that the investment helps keep the US competitive in the race to AGI that many believe to be a 'first to the goal wins forever' scenario. And the contingency for this prop-up? OpenAI would have to actually be 'open' to receive the money.

Inevitably this will result in this one company's eventual decline but it'll push everyone else in the space forward.

> OpenAI has two years to transform into a for-profit business or its funding will convert into debt

Last I saw, the whole thing was convertible debt with a $150bn cap. I’m not sure if they swapped structures or this is some brilliant PR branding the cap as the heading valuation.

>I don't see things ending well for OpenAI.

I mean, what exactly do you see happening? They have a product people love and practically incalculable upside potential. They may or may not end up the winners, but I see no scenario in which it "doesn't end well". It's already "well", even if the company went defunct tomorrow.

>that clause with that timeline seems a bit risky.

I'm 99% certain that OpenAI drove the terms of this investment round; they weren't out there hat in hand begging. Debt is just another way to finance a company; can't really say it's better or worse.

I don't think it really matters how much people love their product if every person using it costs them money. I'm sure people would love a company that sold US$10 bills for 25c, but it's not exactly a sustainable venture.

Will people love ChatGPT et al just as much if OpenAI have to charge what it costs them to buy and run all the GPUs? Maybe, but it's absolutely not certain.

If they "went defunct" tomorrow then the people who just invested US$6bn and lost every penny probably would not agree with your assessment that it "ended well".

  • lukev
  • ·
  • 6 months ago
  • ·
  • [ - ]
Model training is what costs so much. I would expect OpenAI makes a profit on inference services.
Running models locally brings my beefy rig to its knees for about half a minute per query, even with smaller models. Answering queries has to be expensive too?
The hardware required is the same, just in different amounts.

It's less expensive for inference in gross terms, since it takes less time, but the cost of that time (per second) is the same as for training.

  • lukev
  • ·
  • 6 months ago
  • ·
  • [ - ]
Obviously, that's my point.

We can do the math. GPT-4o can emit about 70 tokens a second. API pricing is $10/million for output tokens and $2.5/million for input tokens.

Assume a workload where input tokens outnumber output tokens 10:1, and that I can generate continuous load (constantly generating tokens). I'll end up paying about $210/day in API fees, or $76,650 a year.

Let's assume the hardware required to service this load is a rack of 8 H100s (probably not accurate, but likely in the ballpark). That costs $240k.

So the hardware would pay for itself in 3 years. It probably has a service life of about double that.

Of course we have to consider energy too. Each H100 draws 700 watts, meaning our rack is 5.6 kilowatts, so we're looking at about 49 megawatt-hours to operate for the year. Assume they pay wholesale electricity prices of $50/MWh (not unreasonable), and you're looking at a ~$2,500 annual energy bill.

So there's no reason to think that inference alone isn't a profitable business.
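For what it's worth, the arithmetic above checks out in a few lines. All figures here are the comment's own assumptions (70 tok/s, the quoted API prices, a $240k rack of 8 H100s, $50/MWh power), not OpenAI's actual numbers:

```python
# Back-of-envelope check of the inference economics sketched above.
OUT_TOK_PER_S = 70          # assumed sustained output rate
PRICE_OUT = 10 / 1e6        # $10 per million output tokens
PRICE_IN = 2.5 / 1e6        # $2.50 per million input tokens
IN_RATIO = 10               # assumed 10 input tokens per output token

SECONDS_PER_DAY = 86_400
out_per_day = OUT_TOK_PER_S * SECONDS_PER_DAY
daily_revenue = out_per_day * PRICE_OUT + out_per_day * IN_RATIO * PRICE_IN
annual_revenue = daily_revenue * 365

RACK_COST = 240_000                     # assumed price of 8x H100
rack_kw = 8 * 0.7                       # 8 GPUs at 700 W each
annual_mwh = rack_kw * 24 * 365 / 1000
annual_power_cost = annual_mwh * 50     # $50/MWh wholesale

print(f"daily revenue:  ${daily_revenue:,.0f}")   # ~$212
print(f"annual revenue: ${annual_revenue:,.0f}")  # ~$77k
print(f"payback: {RACK_COST / annual_revenue:.1f} years")  # ~3.1
print(f"annual energy:  ${annual_power_cost:,.0f}")        # ~$2,453
```

So $210/day, a roughly 3-year hardware payback, and a ~$2,500 power bill all follow from those assumptions.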

That doesn't sound like brilliant margins, to be honest. You've left out the entire "running a business" cost, plus the model training costs. They need to pay their staff, offices, and especially lawyers (for all the lawsuits over the scraped content used to train the models).

It's not unusual for a startup to be unprofitable, and OpenAI obviously isn't profitable, but I'm not sure why isolating one aspect of their business and declaring it profitable would justify the idea that this company is inevitably a good investment "even if the company went defunct tomorrow".

Perhaps you meant "win" in the sense of "being influential" or something, but I'm pretty sure the people who invested billions of dollars use definitions that involve more concrete returns on their investment.

  • lukev
  • ·
  • 6 months ago
  • ·
  • [ - ]
Oh they are 100% losing money hand over fist if you include training costs and the eye-watering salaries they pay some of their employees.

I was responding to someone upthread suggesting that they were running even inference at a loss.

You're missing the fact that requests are batched. It's 70 tokens per second for you, but also for 10s-100s of other paying customers at the same time.
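A toy sketch of why batching changes the economics: 70 tok/s is per request, so a batch of N concurrent requests bills roughly N times the tokens on the same hardware. (The linear scaling here is a simplification; real throughput gains depend on batch size, memory, and model.)

```python
# Hypothetical illustration of batched inference revenue on one rack.
PER_STREAM_TOKS = 70        # assumed per-request output rate
PRICE_OUT = 10 / 1e6        # $10 per million output tokens

for batch in (1, 16, 64):
    toks_per_s = PER_STREAM_TOKS * batch            # simplified: linear in batch size
    daily_out_revenue = toks_per_s * 86_400 * PRICE_OUT
    print(f"batch={batch:3d}: ~${daily_out_revenue:,.0f}/day in output tokens")
```

Even modest batching multiplies the revenue a single rack can bill for, which is why per-customer math understates the margin.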
  • lukev
  • ·
  • 6 months ago
  • ·
  • [ - ]
All these efficiencies just increase OpenAI's margin on inference. Of course it's not "one cluster per customer" and of course a customer can't saturate a cluster by themselves, my illustration was only to point out that the economics work.
Inference alone totally can be. Just look at banana.dev, runpod, lambda labs, or replicate.

The issue is OpenAI is not just selling inference.

Though I wouldn’t be surprised if there were some hidden costs that are hard for us to account for due to the sheer amount of traffic they must be getting on an hourly basis.

Oh, actually banana.dev shut down. Maybe it's not as profitable.
70 tokens per second is slow, so it does take a very significant amount of resources, considering it's running on the same or better hardware. Sustaining 70 tokens per second for thousands of users gets expensive really quickly.
  • lukev
  • ·
  • 6 months ago
  • ·
  • [ - ]
My point is that at current API pricing the users are paying enough to cover inference costs.
  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
> They have a product people love and practically incalculable upside potential

I'm willing to bet that if you swapped out GPT with Claude, Gemini or Llama under the hood, 95% of their users wouldn't even notice. LLMs are fast becoming a commodity. The differentiating factor is simply how many of the latest NVIDIA GPUs the company owns.

And even otherwise, people loving a product isn't what makes a company successful. People loved WeWork as well. Ultimately what matters is the quarterly financial statement. OpenAI is burning an incredible amount of money on training newer models and serving every query, and that's not changing anytime soon.

> I'm willing to bet that if you swapped out GPT with Claude, Gemini or Llama under the hood 95% of their users wouldn't even notice

You could say exactly the same about Google and Bing (or any other search engine), yet Google search is still dominant. Execution, market perception, brand recognition, and momentum are also important factors, not to mention talent and funding.

Not everyone who wanted to invest could get into this round. You may bet the investors are wrong, but they put their money where their mouth is. Microsoft participated, even though they had already invested $13b.

Thing is, when I go onto Google, I know I'm using Google. When my employees use the internal functions chatbot at my company (we're small but it's an enterprise use case), they don't know whether it's OpenAI or Claude under the hood. Nor do they care honestly.
From an API point of view (i.e. developer/corporate usage), I'm quite sure OpenAI now sees less usage than Gemini, and Anthropic's API revenue is around 60% of OpenAI's based on recent reporting. OpenAI is definitely not dominant in this area.

The aspect of corporate usage where OpenAI seems to be ahead is in direct enterprise subscriptions to ChatGPT.

  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
Yeah because no company that attracted funding from lots of top VCs has ever failed...

How is A16Z's massive crypto fund doing again? And what about Softbank's other big bets?

I use LLMs for coding and I would instantly notice. It's GPT-4 or Claude, with Gemini a close third; Llama and the rest are far behind. The harder the question, the better OpenAI performs.
> they have a product people love and practically incalculable upside potential.

They are not the only ones with the product. They don't have a moat. They are marginally better than competing models, at best. There is no moat in LLMs unless you happen to find the secret formula for superintelligence, hope nobody else finds it, and lock all of your R&D in the mines of Moria so they don't go off and build it elsewhere.

There is no moat here. I can't believe so many intelligent people, even on HN cannot grasp it.

the default answer is to love ChatGPT but be unable to use it because of the prohibition on competition. Who wants to chat with something that learns to imitate you while you can't learn to imitate it back? Seems like everyone using ChatGPT is sleepwalking into economic devastation…

also, for my use case I eventually found it’s faster and easier and less frustrating to write code myself (empowering) and not get sucked into repeatedly reminding AI of my intent and asking it to fix bugs and divergences (disempowering)

Plus, you can find alternatives now for the random edge cases where you do actually want to chat with an out of date version of the docs, which don’t train on user input.

I recommend we all "hard pass" on OpenAI, Anthropic, Google, basically anyone who has prohibitions on competition while simultaneously training their competing intelligence on your input. Eventually these things are going to wreck our knowledge work economy, and it seems like a form of economic self-harm to knowingly outsource our knowledge work to external parties…

We already freely distribute entire revision histories with helpful notes explaining everything in the git history.

The git repo and review history for any large project is probably more helpful for training a model than anything, including people using the model to write code.

It sounds like you are happy to not use LLMs. I’m the opposite way; code is a means to an end. If an LLM can help smooth the road to the end result, I’m happy to take the smooth road.

Refusing to learn the new tool won’t keep it from getting made. I really don’t think that code writers are going to influence it that much. The training data is already out there.

It seems like not ending well is the vast majority of outcomes. They don't have a profitable product or business today.

It seems to me the most likely outcome is that they have one replaceable product among many, and few options to get a return commensurate with the valuation.

My guess is that investors are making a calculated bet: a 90% chance the company becomes irrelevant, and a 10% chance it has a major breakthrough and somehow throws up a moat to prevent everyone else from doing the same.

That said, I have no clue what confidential information they are showing to investors. For all we know, they are being shown superhuman intelligence behind closed doors.

  • zeusk
  • ·
  • 6 months ago
  • ·
  • [ - ]
> That said, I have no clue what confidential information they are showing to investors. For all we know, they are being shown superhuman intelligence behind closed doors.

If that were the case, I wonder why Apple passed on this investment.

  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
If that were the case then the valuation would have a couple extra 0s at the end of it. Right now this "superhuman intelligence" is still figuring out how to count the number of Rs in strawberry, and failing.
[flagged]
> being shown super human intelligence behind closed doors

This seems to be the "crypto is about to replace fiat for buying day to day goods/services" statement of this hype cycle. I've been hearing it at least since gpt-2 that the secret next iteration will change everything. That was actually probably most true with 2 given how much of a step function improvement 3 + chatGPT were.

> The have a product people love and practically incalculable upside potential.

...yet they struggle to find productive applications, shamefully hide their training data and can't substantiate their claims of superhuman capability. You could have said the same thing about Bitcoin and been technically correct, but society as a whole moved in a different direction. It's really not that big of a stretch to imagine a world where LLM capability plateaus and OpenAI's value goes down the toilet.

There is simply no evidence for the sort of scaling Sam Altman insists is possible. No preliminary research has confirmed it is around the corner, and in fact tends to suggest the opposite of what OpenAI claims is possible. It's not nuclear fusion or commercial supersonic flight - it's a pipe-dream from start to finish.

They took money from ARKK and SoftBank this round, which suggests that a lot of funds passed.
SoftBank wrote a ticket of $500m, which is only 2x the minimum ticket size of $250m.
  • ·
  • 6 months ago
  • ·
  • [ - ]
The AI hype train is like gasoline for somebody’s car. It’s something people pay for to protect themselves against risk.
How does OpenAI manage to spend $5B in one year?

Is that mostly spend in building their own datacenters with GPUs?

Training and inference costs: Paying for GPU time in Azure datacenters.
  • cs702
  • ·
  • 6 months ago
  • ·
  • [ - ]
Yeah, they're buying a bit of time.

Everyone involved is hoping OpenAI will either

a) figure out how to resolve all these issues before the clock runs out, or

b) raise more money before the clock runs out, to buy more time again.

Valued at $157B, and losing $5B a year. What a price-earnings ratio.
With all new tech you've got to value them on expected future earnings really.
[dead]
  • cs702
  • ·
  • 6 months ago
  • ·
  • [ - ]
Given the high risk, investors likely want a shot at earning at least a 10x return. $157 billion x 10 = $1.57 trillion, greater than META's current market capitalization. Greater returns would require even more aggressive assumptions: a 30x return, for example, would require OpenAI to become the world's most valuable company by a large margin.

All I can say to the investors, with the best of hopes, is:

Good luck! You'll need it!

It's fine, Sam's bulletproof plan is to build AGI (how hard could it be) and then ask the AGI how they can make a return on their investments.

https://www.threads.net/@nixcraft/post/C5vj0naNlEq

If they haven't built AGI yet that just means you should give them more billions so they can build the AGI. You wouldn't want your earlier investments to go to waste, right?

  • giarc
  • ·
  • 6 months ago
  • ·
  • [ - ]
How old is that though? They seem to be making revenue pretty well now, so I suspect this might be quite old?
  • lxgr
  • ·
  • 6 months ago
  • ·
  • [ - ]
Revenue isn't profit. They're burning money at an impressive rate: https://www.nytimes.com/2024/09/27/technology/openai-chatgpt...

I wouldn't even be surprised if they were losing money on paying ChatGPT users on inference compute alone, and that isn't even factoring in the development of new models.

There was an interesting article here (can't find the link unfortunately) that was arguing that model training costs should be accounted for as operating costs, not investments, since last year's model is essentially a total write-off, and to stay competitive, an AI company needs to continue training newer frontier models essentially continuously.

Training costs don't grow with the number of users, which makes them a perfect moat even if the models need continual retraining. Success would mean 10-100x current users, at which point training costs at the current scale just don't matter.

Really their biggest risk is total compute costs falling too quickly or poor management.

  • lxgr
  • ·
  • 6 months ago
  • ·
  • [ - ]
The potential user base seems quite finite to me, even under most optimistic assumptions.
Not everyone is going to pay $20/month, but an optimistic trajectory is that they largely replace search engines while acting as a backend for a huge number of companies.

I don’t think it’s very likely, but think of it like an auction. In a room with 100 people 99 of them should think the winner overpaid. In general most people should feel a given startup was overvalued and only looking back will some of these deals look like a good investment.

  • ac29
  • ·
  • 6 months ago
  • ·
  • [ - ]
WSJ reported today that ChatGPT has 250M weekly users. 10x that would be nearly the majority of internet users. 100x that would be significantly more than the population of Earth.
Someone can be a direct user, be on some company's corporate account, and be an indirect user via third parties using OpenAI on their backend.

As long as we’re talking independent revenue streams it’s worth counting separately from an investment standpoint.

> I wouldn't even be surprised if they were losing money on paying ChatGPT users on inference compute alone

I'd be surprised if that were the case. How many tokens is the average user going through? I'd be surprised if the average user even hit 1m tokens, much less 20m.

  • lxgr
  • ·
  • 6 months ago
  • ·
  • [ - ]
With o1? A lot.

Even for regular old 4o: You’re comparing to their API rates here, which might or might not cover their compute cost.

o1 is about $2-$4 per message over the API. I'm probably costing OpenAI more than my subscription fee within 24 hours of each monthly renewal.

Voice mode is around $0.25 per minute via the API. I don't use that much, but 3 minutes per day would already exceed the cost of a ChatGPT Plus subscription by quite a bit.

> o1 is about $2-$4 per message over the API

I’m not sure I understand this, sorry. I see GPT-4o at $3.75 per million input tokens and $10 per million output tokens, on OpenAI’s pricing page.

That's expensive, and I can't see how they can run Copilot on the standard API pricing. But by my math it makes a message (one interaction?) cost well under $4.

How many tokens are in a typical message for you?
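The gap comes down to token counts. As a rough illustration (the GPT-4o prices are the ones quoted above; the o1 prices and the token counts are assumptions for illustration, not official figures; reasoning models bill their hidden chain-of-thought tokens as output, which is what pushes a single message into the dollars range):

```python
# Hypothetical per-message cost estimate from token counts and per-million prices.
def message_cost(in_tokens, out_tokens, price_in_per_m, price_out_per_m):
    """Cost in dollars of one interaction at the given per-million-token rates."""
    return in_tokens / 1e6 * price_in_per_m + out_tokens / 1e6 * price_out_per_m

# GPT-4o at the prices quoted above ($3.75/M in, $10/M out), typical short chat:
print(message_cost(2_000, 1_000, 3.75, 10))   # ~$0.0175 per message

# A reasoning-heavy reply (assumed $15/M in, $60/M out, ~50k hidden reasoning
# plus visible output tokens) lands in the dollars range:
print(message_cost(5_000, 50_000, 15, 60))    # ~$3.08
```

So both statements can be true: an ordinary 4o message costs pennies, while a long reasoning-model message can plausibly cost dollars.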

I think this might be the article you mentioned

https://benn.substack.com/p/do-ai-companies-work

  • lxgr
  • ·
  • 6 months ago
  • ·
  • [ - ]
That was it, thank you!
Thought of this way, AI companies are remarkably similar to Bitcoin mining companies, which always just barely stay ahead of difficulty increases and often fail.
  • ·
  • 6 months ago
  • ·
  • [ - ]
AGI's answer will be easy: hyperinflation. Kills many birds with one stone.
The answer would need to be "LET THERE BE LIGHT!" for these valuations to make sense.
  • HPMOR
  • ·
  • 6 months ago
  • ·
  • [ - ]
Even before reading your username I could've kissed you <3 Asimov
The Last Question is a wonderful short story. One of my favorites.
Reminds me of Stanislav Lem's The Phools.
The funny thing about that statement is that if it actually does become true, all of those VCs (and Altman himself), whose job is ostensibly to find the optimal uses for capital, would immediately become obsolete. Heck, the whole idea that capitalism could just continue along in its current form if true AGI existed is pretty laughable.
There are so many things wrong in this statement (starting with "immediately"). Let's assume they built a system they claim is AGI. Let's assume its energy consumption is smaller than that of a small country. Let's assume that we can verify it's AGI. Let's assume its intelligence is higher than average human. That's many "ifs" and I omitted quite a few.

Now, the question is: would you trust it? As a human, a manager, a president? With the current generation, I treat it as a dumb but quick typist: it can create code much faster than I can, but the responsibility to verify it all is entirely on me. We would need decades of proof that such an AGI is reliable before we could actually start trusting it, and even then I'm not sure how safe it would be.

Would you settle for a few days of testing in perfect conditions? Just kidding, companies don't care!
  • beAbU
  • ·
  • 6 months ago
  • ·
  • [ - ]
/All Gifts, Bestowed/ by Gayou is a great read that explores this topic.
If anyone in the thread has used o1 or the real-time voice tools, it's pretty clear AGI is here already, so we are really talking about ASI.

You have no option but to trust an ASI as it is all-powerful by definition. If you don't trust ASI, your only option is to prevent it from existing to begin with.

Edit: please note that AGI ≠ "human intelligence," just a general intelligence (that may exceed humans in some areas and fall behind in others.)

> please note that AGI ≠ "human intelligence," just a general intelligence (that may exceed humans in some areas and fall behind in others.)

By this definition a calculator would be an AGI. (Behold -- a man!)

Meh, what I've seen is that we continually move the goalposts for AGI, and even GPT-3.5 would have been considered AGI by our standards from just 5 years ago.

But if I can't convince you, maybe Norvig can: https://www.noemamag.com/artificial-general-intelligence-is-...

  • ·
  • 6 months ago
  • ·
  • [ - ]
Let’s say you gave o1 an API to control a good robot. Could it throw a football? Could it accomplish even the most basic tasks? If not, it’s not generally intelligent.
> Let’s say you gave o1 an API to control a good robot. Could it throw a football?

Maybe.

> Could it accomplish even the most basic tasks?

Definitely: https://youtu.be/Sq1QZB5baNw

I stand corrected
> If you don't trust ASI, your only option is to prevent it from existing to begin with.

I don't understand this sentence. I don't trust Generative AI because it often spits out false, inaccurate or made up answers, but I don't believe my "only option is to prevent it from existing".

ASI by definition will be all-powerful or close to it. What are you gonna do if it comes into existence?

Because if you don't trust it, you're fucked:

- Your attempts to limit its influence on your life will be effective only if the ASI decides to willfully ignore or not notice you.

- If it lies or not, there's nothing you can really do. The outcome it wants is practically guaranteed regardless. It will be running circles around your mental capacity like a human drawing circles around an ant on a piece of paper.

So what option do you really have but to trust it? I mean, sure, you can not trust the all-powerful god, but your lack of trust will never really have any effect on the real world, or even your life, as the ASI will always get what it wants.

Really, your only option is to prevent it from happening to begin with.

All that said - I think ASI will be great and people's concerns are overblown.

> ASI by definition will be all-powerful or close to it.

That's a huge assumption.

AI in SF: all-powerful all-controlling abstract entity.

AI in reality: LLMs on thousands of servers with GPUs, operated and controlled by engineers, gated by a startup masquerading as a non-profit and operating for years at a loss, with a web interface and an API, fragile enough that it becomes unavailable at times.

You are conflating AGI (what we have today) with ASI (what you are referring to in science-fiction.) These are completely different things and my comment refers to ASI, not AGI.
The problems with central planning that capitalism ostensibly solves don't exist because of a lack of intelligence, but due to the impedance mismatch between the planner and the people.

Making the central planner an AGI would just make it worse, because there's no guarantee that just because it's (super)intelligent it can empathize with its wards and optimize for their needs and desires.

I don't think the concern is that an AGI would become a central planner, but that an AGI would be so much better than human investors that the entire VC class would be outclassed, and that the free market would shift towards using AGI to make investment/capital allocation decisions. Which, of course, runs the risk of turning the whole system into a paperclip optimizing machine that consumes the planet in pursuit of profit; but the VC class seems to desire that anyway, so I don't think we can assume that a free market would consider that a bad outcome.
A fair amount of evidence has existed for at least 50 years that a chimpanzee throwing darts at a wall can outperform most active fund managers, yet this has done nothing to reduce their compensation or power.
That VC class appears to really enjoy the frisson of bullshit, elaborate games of guess-what’s-behind-the-curtain, and status posturing. Remove that hedonistic factor and the optimization is likely to be much more effective.
  • baq
  • ·
  • 6 months ago
  • ·
  • [ - ]
The problem isn't AGI becomes the oracle central planner. The problem is AGI becomes the central planner, the government and everybody else who currently has a job.
I think there won't be just one AGI, no central planner. LLM abilities leak, other models can catch up in a few months.
The problem with all such criticisms is that there is an implicit assumption that humans can be trusted.
I know human limitations. I don’t know AGI limtations.

Most humans can not lie all the time. Their true intentions do come out from time to time.

AGI might not have that problem - AGI might hide its true intentions for hundreds of years.

Have you… met humans?
It has been known since the 1920s that capitalism isn't perfectly efficient. The competition has always been between an imperfect market directed by distributed human compute vs a planner directed by politicians directed by human compute.

It is an argument about signal bandwidth, compression, and noise.

  • baq
  • ·
  • 6 months ago
  • ·
  • [ - ]
Honestly, it isn't a bad plan at all.

Assuming money even makes sense in a world with AGI, that is.

In the Star Trek/Culture/Commonwealth equally distributed, benevolent AI, sure. In the I’ve-got-mine reality, I assume only the select few can speak with the AI and use it to control the serfs.
There's no future where OpenAI makes everyone else a "serf" though. In 1948 certain Americans imagined that the US could rule the Earth because it got the atomic bomb first, and they naively imagined that other countries would take a generation to catch up. In reality the USSR had its own atomic bomb by 1949.

That's what the competition with OpenAI looks like to me. There are at least three other American companies with near-peer models plus strong open-weights models coming from multiple countries. No single institution or country is going to end up with a ruling-the-Earth lead in AI.

I am not thinking some better LLM, but a genuine AI capable of original thought. Vastly superior capabilities to a human. A super intelligence which could silently sabotage competitor systems preventing the key breakthrough to make their own AI. One which could manipulate markets, hack every system, design Terminator robots, etc

Fanciful, yes, but that is the AI fantasy.

  • est31
  • ·
  • 6 months ago
  • ·
  • [ - ]
In many ways the USA does rule the earth now. The Grand Area is big.

With AI, I think there are extremely strong power laws that benefit the top-performing models. The best model attracts the most users, which then attracts the most capital, most data, and best researchers to make an even better model.

So while there is no hard moat, one only needs to hold the pole position until the competition runs out of money.

Also, even if no single AI company will rule the earth, if AI turns out to be useful, the AI companies might get a chunk of the profits from the additional usefulness. If the usefulness is sufficiently large, the chunk doesn't have to be a large percentage to be large in absolute terms.

America not using its nuclear advantage to secure its nuclear advantage doesn’t mean it couldn’t have.
  • baq
  • ·
  • 6 months ago
  • ·
  • [ - ]
I mean, it isn't a bad plan for VCs. Never said it's a good plan for us peasants. My opinion of sama is 'selling utopia, implementing dystopia' and that's assuming he's playing clean, which he obviously isn't.

As for a post-money world, if AGI can do every economically viable thing better than any human, the rational economic agent will at the very least let go all humans from all jobs.

This is a company at a $4 billion annual run rate.

In times gone by this would be a public company already. It's just an investment in a company with almost 2,000 employees, revenue, products, brand, etc. It's not an early-stage VC investment; they aren't looking for a 10x.

The legal and compliance regime + depth of private capital + fear of reported vol in the US has made private investing the new public investing.

Another POV is: how many new household names familiar to adults can you think of since 2015? Basically TikTok and ChatGPT. If you include kids you get Fortnite, Snapchat and Roblox. Do you see why this is such a big deal?
Moviepass was a new tech driven household name in my social circle back in 2017. It... didn't end well.
"tech driven" is very generous.
Expected return is set by risk and upside, not offering size. What do you think the risk of ruin is here? I think there is a substantial chance that OpenAI won't exist in 5 years.
There was an implication in there on risk. If you don't believe a company doing $4 bill of revenue is significantly less risky than the average VC investment, you might be in dreamworld.
My understanding is that it is 4B of losses, not revenue.
Both are approximately true. But, anything software has some accounting for things (like training a model) that really should be categorized as capex instead of opex and it distorts the numbers.
My understanding was that total revenue was an order of magnitude lower, in the multi-millions.
No, it's reported to be at a run rate of $4 bill pa.
>You'll need it!

If they can IPO, they will easily hit a $1.5T valuation. All Altman would have to do is follow what Elon did with Tesla. Lots of massive promises marinated in trending hype that tickles the hearts of dumb money. No need to deliver, just keep promising. He is already doing it.

  • mcast
  • ·
  • 6 months ago
  • ·
  • [ - ]
The difference is Tesla had a moat in the electric car market; there were no affordable, practical EVs 10 years ago. OpenAI is surrounded by competition, and Meta is constantly releasing Llama weights to break up any closed-source monopolies.
Tesla is still overvalued today with a moat that is more a puddle than anything. Elon realized that cars weren't gonna carry the hype anymore, so now it's all robotaxi, which will almost certainly be more vaporware.
I think he’s even past robo taxi and onto AI and robots that build robots that build robotaxis. I wish I were joking.
  • smt88
  • ·
  • 6 months ago
  • ·
  • [ - ]
The Nissan Leaf was far more affordable than a Tesla 10 years ago and very practical for anyone living in a city.
While it was an affordable vehicle, saying that it was practical is an overstatement. Charging networks were abysmal and actually still are for non-Tesla compatible vehicles. If you had experience using EVgo and similar small networks you probably wouldn't sound as confident.
People back then didn't use charging networks; they charged at home (or work).
Since I am technically "people", I can assure you that non-Tesla charging stations did indeed exist in 2014. I was living in a medium-sized city in an apartment. Since your original comment is specifically about cities, I would like to point out that cities are often associated with apartment buildings, a lack of individual garages, etc. Even today, saying that EV owners in cities mostly rely on charging at home or at work does not seem valid.
That wasn't my comment, but I will say that lots of people have houses with garages in cities. Those that don't will often choose not to purchase an electric vehicle.
Eh, it was pretty limited. The Leaf (then) couldn't go from my house to the airport in my city (Melbourne) and back on one charge. That always made it a dealbreaker for me.

And that's going by Nissan's claimed range, not even real world. So that's on a 100% charge, when the car is brand new with no battery degradation, and under the ideal efficiency conditions that you never really get.

Surely your city is not 50 miles wide?
What's dogecoins valuation? Cardanos? Bitcoins? There is a nigh-infinite amount of capital ready to get entranced by a sexy story.
If OpenAI hits $100B in revenue, $15B in profit with a 50% CAGR they will likely be worth even more than Tesla was at those numbers.

Tesla has really dropped off on its 50% CAGR number so now it is worth half that.

It took around 20 years for Amazon to get to $15B profit, and over 10 years for Meta/FB. Both had very clear paths to profit: sales and ads. OpenAI did not yet demonstrate how they will be able to consistently monetize their models. And if you consider how quickly similar quality free models are released today, it's definitely raising questions.
Yah, I need a "big if" interjection in my comment or something. I highly doubt they'll get there. But like Tesla & Meta, if they did get there they'd be a trillion dollar company.
  • cs702
  • ·
  • 6 months ago
  • ·
  • [ - ]
Yeah, there's no upper limit to hype and exuberance.

As Isaac Newton once said, "I can calculate the motion of heavenly bodies, but not the madness of people."[a]

---

[a] https://www.goodreads.com/quotes/74548-i-can-calculate-the-m...

Don't some of these investors, such as Microsoft get access to run the models on their own servers as well as other benefits?

I thought Satya said Microsoft had access to everything during the Altman debacle.

  • cs702
  • ·
  • 6 months ago
  • ·
  • [ - ]
My understanding is that Microsoft has already earned a large return, from incremental Azure revenues.
  • tqi
  • ·
  • 6 months ago
  • ·
  • [ - ]
I think later rounds generally have lower return expectations - if you assume the stock market will return ~10%/year, you probably only need it to 2X by IPO time (depending on how long that takes) for your overall fund's IRR to beat the stock market.
You would if it were the fund's only investment. But it won't be. And this is still not a mature company, as their expenses currently vastly exceed revenue, so there's always a chance of failure.

Your general sense that the later stage higher dollar figure raises look for a lower multiple than the earlier ones is correct, but they’d consider 2x a dud.

If they accomplish AGI first, they will be the world’s most valuable company, by far.

If they fall short of AGI there are still many ways a more limited but still quite useful AI might make them worth far more than Meta.

I don’t know how to handicap the odds of them doing either of these at all, but they would seem to have the best chance at it of anyone right now.

If AGI is accomplished, there’s unlikely to be a “secret sauce” to it (or a patentable sauce), and accomplishing AGI won’t by itself constitute a moat.
Maybe. Moats are often surprising. Google’s moat is just that people think of Google when they think of search. Bing could be significantly better than Google, and in fact, a lot of people think it is, and still not get anywhere.

A lot of people said Microsoft’s Windows moat in desktop operating systems was gone when you could do most of the things that a program did inside a browser instead, but it’s been decades now and they still have a 70% market share.

If you establish a lead in a product, it’s usually not that hard to find a moat.

Google’s moat is their search index and infrastructure (which is significantly larger-scale than an LLM), and the fact that non-Google/Microsoft web crawlers are being blocked by most websites.

Windows’ moat is enterprise integration, and the sheer amount of software targeting it (despite appearances, the whole world doesn’t run on the web), including hardware drivers (which, among other things, makes it the gaming platform that it is).

OpenAI could build a moat on integrations, as I mentioned.

Eh, Bing’s index and infrastructure are perfectly adequate and they’ve still got a single-digit market share. One might argue other people don’t have them (others once did) because Google’s brand moat drowned the competition and makes nobody else bother.

OpenAI could build a moat in a lot of different ways including ones that haven’t been thought of yet.

They’ll find several I am sure.

My bet would be on Anthropic or Meta winning the AI race.

Anthropic because their investment in tools for understanding/debugging AI.

Meta because free/open source.

People can't even consistently define what AGI is. Ask 10 different people, you'll get 11 different answers.
Aren't they proving the opposite of your proposed alternative already? A limited AI is not making them money and since every new model becomes obsolete within a year, they can't just stop and enjoy the benefits of the current model.
The fact that it isn’t making money now isn’t indicative it never will. I can think of a lot of very large tech companies who people once said the same about.
It’s an arms race, then, no? Whichever company can survive the burn can sit on their LLaurels and recoup?
That's the thing, nothing points to a world with a single winner in AI models. I get what you are saying, but not sure OpenAI can survive the burn unless they build an unmatchable AGI. And that's pure speculation at this point.
I mean, someone needs to rise to the top, unless society as a whole just says "There's no value here." and frankly there's too much real value right now for that. So someone's surviving, at least at the service level. Maybe they just end up building off of open source models, but I can't see how the best brains in the business don't find a way to get paid to make these models. Am I missing something?
There’s definitely a future for LLMs from an enterprise point of view. Even current capability models will be widely used by companies. But it’s seems that will be highly commoditized space, and OpenAI lacks the deep pockets and infrastructure capabilities of Meta and Google to distribute that commodity at the lowest cost.

OpenAI’s valuation is reliant IMO on 1) AGI being possible through NNs, 2) them developing AGI first, and 3) it being somewhat hard to replicate. Personally I’d probably stick 10%, 40%, and 10% on those, but I’m sure others would have very different opinions or even disagree with my whole premise.

I am not saying that LLMs don't provide value, just that this value might not be captured exclusively by OpenAI in the future. If the idea is that OpenAI will have an unmatched competitive advantage over everyone else in this area, then that has already been proven to be wrong. The rest is speculation about AGI, the genius of Altman, etc.
They accomplished AGI (artificial general intelligence) years ago. What do you think ChatGPT is?

Alternatively, what are you imagining this “AGI” you speak of to be?

OpenAI defines AGI as "autonomous systems that outperform humans at most economically valuable work."

ChatGPT is not autonomous or capable of doubling global GDP.

That’s not the definition of AGI that has been in wide use within the research community for two decades prior to the founding of OpenAI.

The founders of OpenAI were drawn from an intellectual movement that made very specific, falsifiable predictions about the pipeline from AGI (original definition) to superintelligence, predictions which have since been entirely falsified. OpenAI talks about AGI as if it were ASI, because in their minds AGI inevitably leads to ASI in very short order (weeks or months was the standard assumption). That has proven not to be the case.

  • phito
  • ·
  • 6 months ago
  • ·
  • [ - ]
They haven't. Why are you stating lies as facts?
Artificial: man-made.

General: able to solve problem instances drawn from arbitrary domains.

Intelligence: definitions vary, but the application of existing knowledge to the solution of posed problems works here.

Artificial. General. Intelligence. AGI.

As in contrast to narrow intelligence, like AlphaGo or DeepBlue or air traffic control expert systems, ChatGPT is a general intelligence. It is an AGI.

What you are talking about is, I assume, a superintelligence (ASI). Bostrom is careful to distinguish these in his writing. Bostrom, Yudkowsky et al make some implicit assumptions that led them to believe that any AGI would very quickly lead to ASI. This is why, for example, Yudkowsky has a very public meltdown two years ago, declaring the sky is falling:

https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-annou...

(Ignore the date. This was released on April 1st to give plausible deniability. It has since become clear this really represents his view.)

The sky is not falling. ChatGPT is artificial general intelligence, but it is not superintelligence. The theoretical model used by Bostrom et al to model AGI behavior does not match reality.

Your assumptions about AGI and superintelligence are almost certainly downstream from Bostrom and Yudkowsky. The model upon which those predictions were made has been falsified. I would recommend reconsidering your views and adjusting your expectations accordingly.

I appreciate these definitions and distinctions. Thanks for sharing. You've helped me understand that I need a better, more precise vocabulary about this topic. I think on an abstract level I would think of AGI as "the brain that's capable of understanding", but I really then have no way to truly define "understanding" in the context of something artificial. Maybe ChatGPT "understands" well enough, if the output is the same.
It does understand to a certain degree for sure. Sometimes it understands impressively well. Sometimes it seems like a special needs case. Ultimately its understanding is different than that of a human’s.

The issue with the “once OpenAI achieves AGI [sic], everything changes” narrative is that it is based off models with infinite integrals in them. If you assume infinite compute capability, anything becomes easy. In reality as we’ve seen, applying GPT-like intelligence to achieve superhuman capabilities, where it is possible at all, is actually quite difficult, field-specific, and time intensive.

Billion dollars isn't cool, you know what is? A trillion dollars.
  • baq
  • ·
  • 6 months ago
  • ·
  • [ - ]
If everyone is building datacenters, sell nuclear reactors.
You need to consider time and baseline growth. Google tells me the Nasdaq's CAGR for the past 17 years is around 17%, which compounds to just under 5x over 10 years. A 10x over 10 years works out to a CAGR of about 26%. High, but not as crazy as you suggest.
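That compounding arithmetic can be sanity-checked in a few lines (the 17% Nasdaq figure is the commenter's, taken at face value):

```python
def multiple(cagr: float, years: int) -> float:
    """Cumulative return multiple from compounding `cagr` annually."""
    return (1 + cagr) ** years

def required_cagr(target_multiple: float, years: int) -> float:
    """Annual growth rate needed to reach `target_multiple` in `years` years."""
    return target_multiple ** (1 / years) - 1

print(round(multiple(0.17, 10), 2))     # 4.81 -> "just under 5x"
print(round(required_cagr(10, 10), 3))  # 0.259 -> ~26%/year for a 10x
```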
  • senko
  • ·
  • 6 months ago
  • ·
  • [ - ]
At their stage and size, it's probably 3x-5x. Still sky high!
Maybe they view it as at least a sure thing for a 2x return...

Another issue here is that at this value level they are now required to become a public company and a direct competitor to their largest partners. It will be interesting to see how the competitive landscape changes.

  • cs702
  • ·
  • 6 months ago
  • ·
  • [ - ]
My understanding is that the company is burning $0.5+ to $1+ billion each month.

I'd say that's very high risk.

That is also much lower than Uber at its peak.
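Taking those figures at face value (both the $6.6B raise and the reported burn range are numbers from this thread, not audited financials), the runway math is short:

```python
raised = 6.6                    # new round, $B (per the article)
burn_low, burn_high = 0.5, 1.0  # reported monthly burn range, $B/month

# Months of runway at each end of the burn range, ignoring revenue growth.
print(round(raised / burn_high, 1))  # 6.6 months at the high-burn end
print(round(raised / burn_low, 1))   # 13.2 months at the low-burn end
```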
  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
Uber's spending was directly attributed to growth. They were launching in new countries, new cities, new markets every day, and that required burning through an immense amount of money. Of course that growth didn't need to last forever, and once the service was fairly established everywhere the spending stopped.

OpenAI on the other hand has to spend billions to train every new iteration of their model and still loses money on every query you make. They can't scale their way out of the problem – scaling will only make it worse. They are counting on (1) the price of GPUs to come down in the near term or (2) the development of AGI, and neither of these may realistically happen.

  • smt88
  • ·
  • 6 months ago
  • ·
  • [ - ]
Uber was a literally life-changing product with an obvious value for anyone. LLMs have neither benefit.
Every cafe, airport, and school I've been to has people using ChatGPT or its competitors. It's obviously valuable for almost anyone. Just like people can't imagine life before smartphones, people won't be able to imagine life before LLMs became ubiquitous. It's everywhere.
True, but there isn’t really a moat for text LLMs. Llama is open source, Gemini is basically free.
define "moat", Open source search engines and competitive scraping strategies were built in the early 2000s. The Google search moat has always been that they were better for 5-6 years - and then they were the default.

Once they were the default option, they merely had to be the "same" as other options. If someone were to make a 1:1 clone of Google with ~5% fewer ads, I do not believe they would get a substantial market share of web search traffic (see Bing).

Google has decades of people making it their habit, which is a pretty strong moat. They have inertia that chatbots simply do not have yet. People have been using ChatGPT for a year or two now; it’s much easier for them to switch to another one.
  • what
  • ·
  • 6 months ago
  • ·
  • [ - ]
I’m sorry, but what? How can you tell people at the cafe/airport/school are using LLMs?
You just walk around and see chatgpt interface on their screens?
That's just not true. Source: a two-digit division.

Previous to this, they had about ~10B (via MS), and they've been operating for about 2 years at this scale. Unless they got this $$$ like a week away from being bankrupt, which I highly doubt.

Note: I'm not arguing they're profitable.

It is speculated that a majority of that $10B is Azure cloud credits. Basically company scrip. You can't pay Nvidia in scrip, or the city electricity department, or even salaries.
I remember when openai first raised and had the 100x cap and everyone said that was ridiculous and insane and of course they're not going to 100x from 1b... That would require them to become a 100b company!
The 10x return is on the investment amount, not the total valuation. And is a rule of thumb for early stage companies, not late rounds like this.
  • m3kw9
  • ·
  • 6 months ago
  • ·
  • [ - ]
The investor will probably have no say or be told to stfu and leave if they try to do some stuff like forming an activist group
For “an” AI company that can achieve market dominance, a $1.57T market cap is not unrealistic.

I think the question is: is OpenAI that company, and is market dominance possible given all the other players? I believe some investors are betting that it is OpenAI, while you and others are sceptical.

Personally I agree with you, or rather hope that it is not, primarily as I don’t trust Sam Altman and wouldn’t want him to have that power. But so far things are looking good for OpenAI.

  • tux3
  • ·
  • 6 months ago
  • ·
  • [ - ]
OpenAI feels like the most politically active, with its storylines, flashy backstabs, and other intrigue.

But as far as the technology goes, we're drowning in a flood of good AI models, which are all neck and neck in benchmarks. Claude might be slightly stronger for my use, but only by a hair. Gemini might be slightly behind, but it has a natural mass-market platform in Android.

I don't see how a single player sticks their neck out without being copied within a few months. There is — still — no moat.

Investors probably aren't expecting a 10x return on a late stage investment like this.
If I were an investor, I'd be pretty concerned with such a high valuation after the o1 release. It's great, no question, but in my usage so far it's a modest step up from 4o, much smaller than the 3->4 jump. Real world exponential growth is exponential until it's logistic, and this sort of feels like entering that phase of the LLM paradigm.

Talking to friends who are very successful, knowledgeable AI researchers in industry and academia, their big takeaway from o1 is that the scaling hypothesis appears to be nearing its end, and that this is probably the motivation for trading additional training-time compute for additional inference-time compute.

So where does that leave an investor's calculus? Is there evidence OpenAI can pull another rabbit or two out of its hat and also translate that into a profitable business? Seems like a shaky bet right now.

They have evidence that inference time and inference time during training can continue to increase the reasoning abilities.

They do not actually need any further technology development to continue to add profitable products. There are numerous ways they can apply their existing models to offer services with various levels of specificity and target markets. Even better, their API offerings can be leveraged by an effectively infinite variety of other types of business, so they don't even need to do the application development themselves and can still profit from those companies using their API.

anecdotally, I'm flipping back and forth between o1 and GPT-4. o1 is mildly better at editing larger code segments. I worked with it to edit a large ~2k line python file in an unusual domain.

But o1 is also incredibly verbose. It'll respond with 1-2 pages of text which often contains redundant information. GPT-4o is better in its descriptions.

That's exactly what it is and what has been done. It's just an iterative model with a planning step. In my tests you can get nearly the same results with GPT4 by first asking for a plan, and then asking for each step of the plan within the same context window.
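As a rough illustration of that plan-first approach, here is a minimal sketch; `chat` is a hypothetical stand-in for whatever chat-completion call you use, and the prompts are made up:

```python
# Plan-then-execute prompting: ask for a plan first, then have the model
# carry it out within the same context window. `chat(messages)` is a
# hypothetical helper that takes a list of {"role", "content"} dicts and
# returns the reply text.

def plan_then_execute(chat, task: str) -> str:
    messages = [{
        "role": "user",
        "content": f"Task: {task}\nWrite a short numbered plan first. Do not solve yet.",
    }]
    plan = chat(messages)
    # Keep the plan in context so the execution step can follow it.
    messages.append({"role": "assistant", "content": plan})
    messages.append({
        "role": "user",
        "content": "Now carry out each step of your plan in order and give the final answer.",
    })
    return chat(messages)
```

The point of the commenter's observation is that o1's gains look similar to this two-call pattern: the extra quality comes from spending more inference-time tokens, not from a fundamentally different model.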
  • m3kw9
  • ·
  • 6 months ago
  • ·
  • [ - ]
Then you alone will not invest
They haven't even started their own versions of AdWords.

Google's money printing is based on people telling Google what they want in the search bar, and google placing ads about what they want right when they ask for it. Today people type what they want in the ChatGPT search bar.

Compare Google’s cost per query to ChatGPT’s.
Difference is is that I can just switch to another LLM.

I cant really do the same for Google.

Why not? For Google Search specifically, there is no lock-in (obviously yhe productivity suite has more)
  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
There's no lock in but there's also no alternative. Bing is vastly inferior. ChatGPT meanwhile can be directly challenged by Claude, Gemini and several others.
  • nicce
  • ·
  • 6 months ago
  • ·
  • [ - ]
I guess you mean free alternative. Kagi at least provides very competitive results.
Kagi uses Google as one of its search providers, so it's hardly competing with Google.
In terms of quality search results, Kagi is ahead of Google. Unless I misunderstand the situation, inferior search results are a natural product of Google getting paid by advertisers rather than, like Kagi, getting paid by their users. Google won’t be able to get away from using their index to serve up adverts (rather than quality results).
My point is Google gets money when you use Kagi, so they aren't really directly competing with Google.
  • nicce
  • ·
  • 6 months ago
  • ·
  • [ - ]
I haven't used Google for years, and I find better results than ever.
For me, DDG and Brave Search have been fine, with an occasional dose of ChatGPT. I don’t miss Google. I used Kagi for a few months and it was great, but I got tired of them aggressively logging me out because I use a VPN that often changes IPs all over the US to find the speediest server. I understand why they do it, but I could not tolerate it forcing me to log back in every couple of days across devices. I was using 2FA, so it was obviously me and not someone else using my account.
I use private mode on safari as my primary browser experience- so I’m never logged into anything. Kagi has a ‘Session Link’ feature where the URL itself logs you in. It’s pretty easy to get setup and then you never have to log in again unless you want to make account changes. I agree with you that it wouldn’t be worth it if I was constantly having to log in. Anyways, just something to consider, there is a solution.
There are other search engines…

I haven’t used google in years.

There's no chance in the world they are worth even a tenth of it.

Like, literally, some of their best talent could start their own company tomorrow, and all they'd need is data center credits to catch up with OpenAI itself.

What is the company's moat exactly? Being few months at best ahead of the curve (and only on specific benchmarks)?

I've been following them for years, and every year everyone says they're not worth it.

It started with "AI progress is fake." "AI will never beat human creativity." "AI can't code anything past 15k tokens." "AI will always be too expensive." "AI will never sound like a human." "AI can't even understand humor."

And now we're at "AI is only like this because they're stealing copyright." "If everyone uses AI, it will cannibalize its own outputs." "Other people can build LLMs too."

But year after year, they solve these problems. They weren't worth a hundredth of it in 2021. At this rate, they'd be worth a tenth of it in 2026, and maybe all of it in 2030. And that's what the VCs are banking on. If they're not, well, it converts to debt.

Google (the search engine) hasn't really had a moat either. Plenty of competitors, some better ones, but they're still around. ChatGPT is a brand name.

Yet you're ignoring that competitors have done a terrific job of catching up in extremely small time frames.

Hell, even Nvidia 2 weeks ago released their own LLM model which is competitive with the best commercial ones.

AI is a commodity; eventually, where the money will be made (the layer interacting with the software and the user, from Excel to your OS, like Apple Intelligence), it won't matter much which model you use. The average Joe won't even notice.

I don't consider models a commodity though. Coffee and steel are commodities. You need some level of quality and materials, but they can be swapped out. Models are more like engines or CPUs.

Unbenchmarked things like their ability to use south Jakartan slang, writing jokes, how and when they reject input, how tightly they adhere to system input, how they'd rate a thing from 1-10. They function as a part of a complex system and can't be swapped out. I'm using Claude Sonnet 3.0 for a production app and I need a week to be able to swap it to 3.5 while maintaining the same quality. We've trained our own models and it's still incredibly hard to compete with $0.075 per million tokens just on things like cost of talent, hardware, electricity. And that speed.

The question is why not something like Anthropic?

I'd say OpenAI has other cards up their sleeve. The hardware thing Jony Ive is working on. Sam Altman invests into fusion power and Stripe; guess who's getting a discount? There is a moat, but it lies at a higher level. Other competitors are also playing other kinds of moats like small offline AI.

I have also worked on an ai chatbot for legal purposes, and the differences between models were negligible when most of what you're doing is RAG (99% of apps out there).

I have also tried the different models in cursor and again the differences were negligible with some projects and questions slightly favoring one or the other.

Which does nothing but confirm that none of those has any kind of moat really. Data and products are eventually what's gonna make money, and the models used will likely be an implementation detail like choosing a database or a programming language.

To be a commodity, there has to be a constrained supply. So-called AI is just software and the replicable nature of software means it cannot be a commodity. I'd agree on your other point that the competitors have done a great job of catching up to a point where the valuation makes little sense given the limited moat.
After Theranos and WeWork, I'm always skeptical of any Pre-IPO "valuations".
  • kredd
  • ·
  • 6 months ago
  • ·
  • [ - ]
For every Theranos and WeWork, there’s Uber, Coinbase, AirBnb. I know they didn’t raise as much as OpenAI, but it wasn’t insignificant amount of money burning before they became profitable with large market caps. It’s very strange times we’re living in.
Airbnb is down 10% since going public. Coinbase is down over 50%. I think some skepticism around pre-IPO valuation is warranted.
  • _1
  • ·
  • 6 months ago
  • ·
  • [ - ]
Have any of those become profitable?
AirBnB has been profitable for two years. Coinbase’s financials are complicated by them holding a substantial amount of cryptocurrency, but they’ve been profitable for two quarters even with significant losses there.
  • padjo
  • ·
  • 6 months ago
  • ·
  • [ - ]
Uber made a 1.1 billion profit last year.
  • kredd
  • ·
  • 6 months ago
  • ·
  • [ - ]
Yup, I think all three are posting about 500M/quarter profits on average for the past year or so. Might be wrong though, I don’t really keep up with all of them.
  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
Yes, all of them are profitable.
  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
OpenAI raises $6.6B.

Related: Microsoft and NVIDIA's revenues increase by a combined $6.6B.

Seeing the discussions, I would point out that a startup needs stability in leadership to grow that fast.

Look at Google, Meta, etc.

They were super stable in leadership when they took off.

Can’t say the same for OpenAI.

Also, being an AI researcher, them converting to profit org after accepting donations in name of humanity and non-profit is honestly shameful and will not attract most talented researchers.

Similar to what happened to Microsoft once they got labelled as “evil”.

I hope better competition appears before the enshittification begins.

As far as I understand it, they're actually underwater on their API and even on the $20/month pricing, so we'll see prices aggressively increase, additional revenue streams like ads or product placement in results, or both.

We've witnessed that every time a company's valuation is impossibly high: They do anything they can to improve outlook in an attempt to meet it. We're currently in the equivalent of Netflix's golden era where the service was great, and they could do no wrong.

Personally I'll happily use it as long as I can, but I know it is a matter of "when" not "if" it all starts to go downhill.

> additional revenue streams like ads or product placement in results.

It's largely flown under the radar but they appear to already be testing this:

https://news.ycombinator.com/item?id=41658837

"This hallucination was brought to you by the coca cola company."

Given how picky the ad industry can be about where their ads are being placed, I somehow suspect this is going to be complicated. After all, every paragraph produced is potentially plain untrue.

> the enshittification

I've assumed that when AI becomes much more mainstream we'll see multiple levels of services.

The cheapest (free or cash strapped services) will implement several (hidden/opaque) ways to reduce the cost of answering a query by limiting the depth and breadth of its analysis.

Not knowing any better you likely won't realize that a much more complete, thoroughly considered answer was even available.

> The cheapest (free or cash strapped services) will implement several (hidden/opaque) ways to reduce the cost of answering a query by limiting the depth and breadth of its analysis.

This already happens. Many of the cheap API providers aggressively quantize the weights and KV cache without making clear that they do.
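To illustrate what that quantization trades away, here is a toy symmetric int8 example (a deliberate simplification; real providers use per-channel or group-wise schemes, and also quantize the KV cache):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: int8 values plus one float scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, s = quantize_int8(w)

# 4x smaller than fp32, at the cost of up to scale/2 rounding error per weight.
assert q.nbytes == w.nbytes // 4
assert np.abs(w - dequantize(q, s)).max() <= s / 2 + 1e-6
```

The memory and throughput win is real; the catch the comment points at is that the accuracy loss is silent unless the provider discloses it.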

Or an answer that left out the fact that Pepperidge Farm remembers, Coke is life, and yo quiero Taco Bell.
> before the enshittification begins.

It's already happening: your model randomly gets worse all of a sudden for the same price of service.

  • ddxv
  • ·
  • 6 months ago
  • ·
  • [ - ]
I'm surprised there isn't more concern for OpenAI in that open source / open weight models are fast catching up to the plateau that OpenAI is at. Include edge AI models, ie tiny models that fit in apps/extensions etc and you have a LOT of nearly free competition coming for OpenAI.
They've dodged several of these lately. GPT-4o brought the competition to multimodal. o1 brought it back to reasoning/coding. There are products like Assistants and Memories, which aren't easily doable on open source either.

It's like watching Muhammad Ali or Mike Tyson boxing - they're the most agile large company we've ever seen and they're able to do it on a heavyweight platform.

People need to stop looking at the implied valuation of the stock purchase and take into account the rest of the financial terms. If the tiny stock trade values it at 10 and the attached huge preferred debt at 1, I know which valuation to believe.
ChatGPT is valued $157BN?

What discount rate do you use on a cash burning non-profit?

negative times a negative is a positive
It’s just math, duh.
Ask the same to Tesla which has a 700B+ valuation, earlier over 1T. Like it or not, company valuations are about stories, not facts.
  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
Tesla made $12.4B in profit last year. You can argue that the company is overvalued, sure, but there's no case to be made that it isn't a very viable and successful business. OpenAI meanwhile is banking on the fact that it will invent AGI soon and the AGI will figure out how to stop losing money on every query.
I wonder why Apple pulled out
  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
Apple is an extremely conservative minded company. Making a huge gamble on an overhyped overvalued company for a chance at a 10x return isn't in their DNA.
  • dom96
  • ·
  • 6 months ago
  • ·
  • [ - ]
or, after building their own LLM that runs locally on Apple Silicon, they've decided that this technology is crazily overhyped
Apple has survived for 10 years with a Siri that is absolutely useless beyond setting a timer, so they have time to wait and even use open source LLMs in the future.
I asked Siri how much 500 mL of water weighed the other day and she said 0.13 gallons. I should have remembered a mL weighs a gram but still. Siri is dumber than dirt.
It's almost as if people are willing to pay more for an iPhone than an assistant / LLM.
But why wait until the 11th hour to pull out?

They were on the round up until today.

> They were on the round up until today.

Not at all - their dropping out leaked last week, who knows how long ago they actually backed away from the table...

https://www.wsj.com/tech/apple-no-longer-in-talks-to-join-op...

Well that's a good reminder to not believe what I read on X.

Thanks for posting

  • paxys
  • ·
  • 6 months ago
  • ·
  • [ - ]
I assume the valuation got too rich for their liking. $157B is even higher than what was speculated on the news in recent weeks.
  • sub7
  • ·
  • 6 months ago
  • ·
  • [ - ]
Funny that they will call syntactically correct word vectors trained on reddit comments "AGI". There is no "self learning", just chaining outputs trained on different .txt instruction sets.

There has never been a better time to have human intelligence and apply it to fields that are moving towards making critical decisions based on latent space probability maps.

NLP and Human-Computer Interaction, however, are fields that have actually been revolutionized through this wave, so I do expect at least much better voice interfaces and suggestion engines.

Think you need to read up on what transformers actually do (they refine syntactically, semantically, and in whatever additional way you wish) and what emergent properties they have.
  • sub7
  • ·
  • 6 months ago
  • ·
  • [ - ]
will look into emergent transformers again but I did take cs231n at Stanford and know wtf is up here
If it's of any help, it's the multi-head attention mechanism, one of the more distinctive characteristics (although it can be done in other NNs).
I wonder if these investors have a liquidation preference as they would in normal VC rounds. And if it's a 1x preference (as is normal) or if a higher multiple is built in.
> The new fund-raising round, led by the investment firm Thrive Capital, values OpenAI at $157 billion, according to two people with knowledge of the deal. Microsoft, the chipmaker Nvidia, the tech conglomerate SoftBank, the United Arab Emirates investment firm MGX and others are also putting money into OpenAI.

Yeah, that bodes well. Led by Jared Kushner's brother's VC firm with the UAE's sovereign wealth fund and Softbank following. If not for Microsoft and NVIDIA, this would be the ultimate dumb money round.

Microsoft and NVIDIA are guaranteed an ROI since it comes right back as revenue for them.
Thrive is one of the 10 most respected firms in venture capital. They work super hard and have a track record to prove it. Nobody who knows what they’re talking about would consider them dumb money.

https://www.newcomer.co/p/sequoia-founders-fund-usv-elad-gil

Softbank is the one I'd worry about
  • jddj
  • ·
  • 6 months ago
  • ·
  • [ - ]
Microsoft and (indirectly) Nvidia are the real destinations for a bunch of that money anyway, so I think your point stands.
SoftBank you say … boy I wouldn’t touch anything that SoftBank is investing in.
I for one can never get over the fact that Mira Murati was not laughed out of the room when she said -- with a straight face -- that GPT-4 had high school level intelligence and the non-existent GPT-5 will have PhD-level intelligence [1].

IMO -- this is not a serious company with serious people building an important long-lived product. This is a group of snake oil salesmen in the middle of the greatest grift of their careers. That, and some AI researchers who are probably enjoying limitless GPUs.

[1] https://www.timesnownews.com/technology-science/next-gen-cha...

> this is not a serious company with serious people building an important long-lived product. This is a group of snake oil salesmen

But that’s obviously not a fair description either, because they have the world-leading product in an intensely competitive field that does stuff nobody would have thought possible five years ago.

The marketing is obviously massive hyperbole bordering the ridiculous, but the idea that they haven’t produced something deeply serious and important is also ridiculous, to me.

The only (gigantic, huge, fatal—perhaps) problem they have at the moment is that their moat seems to only consist of a short head start over the competition.

But is it world leading?

https://www.cnet.com/tech/services-and-software/chatgpt-vs-g...

Gemini won this one in September. It won another one I read in March. I just use Gemini for free.

The whole industry is full of snake oil salesmen. Do you really think this [2] is not a hype bubble waiting to be turned into a tell-all docuseries?

They have produced something impactful sure, but I don't think their product is a tenth as valuable or "intelligent" as they are claiming it to be. There are too many domain-specific tools that are far superior to whatever pseudo-intelligence you get from chatgpt... what proof do we have that any industry has found material value from this chatbot?

[2] https://x.com/deedydas/status/1841670760705949746

  • piyuv
  • ·
  • 6 months ago
  • ·
  • [ - ]
Nice of her to think these advanced autocomplete models have any intelligence at all
I know right?! Now, all you have to prove is that humans are anything more sophisticated than that :)
  • piyuv
  • ·
  • 6 months ago
  • ·
  • [ - ]
Descartes did it 400 years ago
That doesn't seem far off to me. Having used GPT4 it seems similar to a high school student in abilities and Murati's actual quote was "The next iteration, GPT-5, is expected to reach the intelligence level of a PhD holder in specific tasks" which seems quite plausible. Note the "specific tasks" bit.
I can currently ask GPT-4 to do a high-school-intelligence-level task for me (financial data capture), so what's the issue?
For one thing, high school is a level of education, not an "intelligence level".

For another thing, there is not a single Google search result for the phrase "high school intelligence level task". Unless Google is malfunctioning, it seems you are just making things up?

Don't shoot the messenger, that's someone else's wording not mine, I'm speaking to their claim.

I don't think this wording is something that needs to be analysed to death, after all someone will just move the goalposts (typically by those who want to place human intelligence on some podium as special).

This is a task I'd entrust a high school student to do (emphasis on student). They said high school intelligence not high school education.

A forum user above said they are not a serious company with serious people. Would a serious person with a serious interest in "intelligence" make up terms like "high school intelligence"?

>>>This is a task I'd entrust a high school student to do (emphasis on student). They said high school intelligence not high school education.

I'm sure you don't actually believe all high school students are equally intelligent or on the same coursework track, or that all high schools teach the same courses with the same level of sophistication, so I'm not sure why you are defending the term "high school intelligence" or saying they even made a "claim".

I can't take you as a serious person if you think serious people are limited to talking in official terms. They can also talk generally and make assumptions, yes that is allowed.

At no point did I say that all students are at the same level.

I can't quite see what issue you have with all of this, other than you don't like it because you don't like it.

>>>official terms

The issue isn't "official terms" the issue is nonsensical framing which is "not even wrong".

>>>At no point did I say that all students are at the same level.

Yet you used the term "high school intelligence level task".

you said the same thing twice.
I don't think so? There's a difference between :

1) "TERM X" is not real but I can see where you are coming from since other people believe in "TERM X".

and

2) You literally just made up "TERM X" on the spot to argue some other thing was true.

In April they hyped up being able to talk to something like the AI from the movie Her. Yet it seems all hype and not a reality. They say sign up and you might get access, lol ... startup speak for "give us your money, but we don't have the product we advertised/hyped up!" Won't give them any more of my money!
And the avatar in the demo was from another company.
What avatar?
This is the video they first went with: https://www.youtube.com/watch?v=kO9Jge1z7OU

Later they replaced it with https://www.youtube.com/watch?v=vgYi3Wr7v_g

Who knows in that regard ... all I know is they sold and advertised a product to get people to pay them for hype, dangling it in front of them with tricky language: subscribe and you might get it, lol. They are playing the start-up horseshit marketing game, as it's still not a product on the market, and does it really exist?
Can't help but think about this scene from silicon valley: https://www.youtube.com/watch?v=BzAdXyPYKQo

The "no revenue" scene... All OpenAI needs to do is start making some revenue to offset the costs!

It's too late! They have revenue. Safe Superintelligence Inc is showing the way.
  • eichi
  • ·
  • 6 months ago
  • ·
  • [ - ]
Really good financial engineering, which conjured the valuation. Cheers.
Does OpenAI have a moat?
  • jjice
  • ·
  • 6 months ago
  • ·
  • [ - ]
As a layman outsider, it doesn't seem like it. Anthropic is doing great work (I personally prefer Claude) and now there are so many quality LLMs coming out that I don't know if OpenAI is particularly special anymore. They had a lead at first, but it feels like many others are catching up.

I could be _very_ wrong though.

Agreed. Sonnet 3.5 is still by far the most useful model I've found. o1-mini is priced similarly and nowhere near as useful, even for programming, at which it is supposed to excel. I recently tried o1-mini with `aider` and it would randomly start responding in Russian midway through, despite all input being in English. If anything, I think Anthropic still has a decent lead when it comes to price to performance. Their updates to Haiku and Opus will be very interesting.
They recently released a new model, called "o1-preview", that is significantly ahead of the competition in terms of mathematical reasoning.
Is it? There was some discussion on HN a while ago that it is better than GPT-4o, but nothing about the competition, and that seems quite doubtful compared to e.g. AlphaProof.

Also, if "significantly ahead" just means "a few months ahead" that does not justify the valuation.

On benchmarks where it’s impossible to verify whether there’s contamination?
> that is significantly ahead

Perhaps, but at most generous, it’s three months ahead of competitors I imagine

Hard to say in my opinion. I can say that I still use OpenAI heavily compared to the competition. It really depends though. I do believe they are still leaders in offering compelling apis and solutions.
No.

The race is, can OpenAI innovate on product fast enough to get folks to switch their muscle memory workflows to something new?

It doesn't matter how good the model is, if folks aren't habituated to using it.

At the moment, my muscle memory is to go to Claude, since it seems to do better at answering engineering questions.

The competition really is between FAANG and OpenAI, can OpenAI accumulate users faster than Apple, Google, Meta, etc layer in AI-based features onto their existing distribution surfaces.

It still has the first mover advantage, based on the revenue and usage graphs.

If somebody puts a cheaper and better version, then no moat.

It's called llama. And it's free.
  • m3kw9
  • ·
  • 6 months ago
  • ·
  • [ - ]
Llama sucks, man, vs the best models. Sorry, you cannot really be serious.
I have only tried it with GPT-4. Seems to be doing a pretty good job? What models should I try?
  • neom
  • ·
  • 6 months ago
  • ·
  • [ - ]
Eh, in the b2c play, sure- if they nail the enterprise maybe not.
  • m3kw9
  • ·
  • 6 months ago
  • ·
  • [ - ]
In fact they do, it’s called servers, GPUs, scale. You need them to train new models and to serve them. They also have speed and in AI speed is a non traditional moat. They got crazy connections too because of Sam. All of that together becomes a moat that someone just can’t do a “Facebook clone” on OpenAI
  • fach
  • ·
  • 6 months ago
  • ·
  • [ - ]
Someone certainly can "Facebook clone" OpenAI. Google, Meta and Apple all are more well capitalized than OpenAI, operate at a larger scale and are actively training and publishing their own models.
  • m3kw9
  • ·
  • 6 months ago
  • ·
  • [ - ]
Not anyone, it would be tough. You could also say the same that any one of these companies can do a Facebook clone, but it won’t be easy
I’m building several commercial projects with LLMs at the moment. 4o mini has been sufficient, and is also super cheap. I don’t need better reasoning at this point, I just need commoditization, and so I’ll be using it for each product right up to the point that it gets cheaper to move up the hosting chain a little with Llama, at which point I won’t be giving any money to them.

They’ve built a great product, the price is good, but it’s entirely unclear to me that they’ll continue to offer special sauce here compared to the competition.

OpenAI is dependent on Microsoft for compute, who are in turn dependent on Nvidia for GPUs. It’s nearly the least moat-y version of this out there.
  • m3kw9
  • ·
  • 6 months ago
  • ·
  • [ - ]
Used to be when they had no money
Money doesn't just give you hyperscaler datacenters or custom silicon competitive with Nvidia GPUs. Money and 5 years might, but as this shows, OpenAI only really has a 1.5 year runway at the moment, and you can't build a datacenter in that time, let alone perfect running them at scale, same with chip design.
Aside from vendor lock-in by making their integrations (APIs) as idiosyncratic and multifaceted as possible, I don’t think so.
It does. "ChatGPT", GPT, "OpenAI", etc... are strong brands associated with it.

Edit: You can downvote me all you want, I have plenty of karma to spare. This is OpenAI's strongest moat, whether people like it or not.

Nobody cares, though, really. My experience is that clients are only passingly interested in what LLM powers the projects they need and entirely interested in the deployed cost and how well the end product works.
Which provider do you use for your clients' AI solutions? Be honest.

Edit: nvm, I already know.

https://news.ycombinator.com/item?id=36028029

https://news.ycombinator.com/item?id=41725073

GPT is a generic tool name.

Those moats are pretty weak. People use Apple Idioticnaming or MS Copilot or Google whatever, which transparently use some interchangeable model in the background. Compared to chatgpt these might not be as smart, but have much easier access to OS level context.

In other words: Good luck defending this moat against OS manufacturers with dominant market shares.

>Those moats are pretty weak.

Name any other AI company with better brand awareness and that argument could make a little bit of sense.

Armchair analysts have been saying that since ChatGPT came out.

"Anyone could steal the market, anytime" and there's a trillion USD at play, yet no one has, why? Because that's a delusion.

What you are overlooking is the fact that AI today and especially AI in the future is going to be about integrations. Assisted document writing, image generation for creative work, etc etc. Very few people will look at the tiny gray text saying "Powered by ChatGPT" or "Powered by Claude"; name recognition is not as relevant as eg iPhone.

Anecdotally, I used to pay for ChatGPT. Now I run a nice local UI with Llama 3. They lost revenue from me.

> Name any other ~~AI~~ company with better brand awareness and that argument could make a little bit of sense.

I just gave you three of them. Right now a large share of chatgpt customers come from the integration provided by those three.

> "Anyone could steal the market, anytime" and there's a trillion USD at play, yet no one has, why? Because that's a delusion.

Bullshit. It is not about "stealing" but about carving out a significant niche. And that has happened: Apple Intelligence happens in large part on device using not-chatgpt, Google's Circle to Search, summaries, etc. use not-chatgpt, Copilot uses not-chatgpt.

The danger to a moat is erosion not invasion.

It's the company that's most likely to be the first to develop superintelligence.
Based on… CEO proclamations?
Based on a comparison with DeepMind and Anthropic.
Define super intelligence first maybe?
More intelligent than any human.
What is intelligence and how does one measure that?
What is measurement? What is a definition?
Well the original claim is super intelligence is going to be achieved by OpenAI. So I assume you have defined it and figured out a way to measure in the first place so that you know it has been achieved.

So probably on you to explain it since you came up with that claim.

I think first you have to explain what you mean with definition and measurement since you were the one to ask for those.
We heard for years that Uber was the company that's most likely to be the first to develop self-driving cars. Until they weren't. You can't just trust what the CEOs are hyping.
I think Waymo was always ahead in terms of self-driving, and still is today.
Uber's autonomous division had more hype around it, and company's evaluation was largely based on the idea of replacing human drivers "very very soon". Now the bulk of their revenue comes from food delivery.
If superintelligence happens, then money won't matter anymore anyway.
I don't disagree, but what makes you say this?
  • K0IN
  • ·
  • 6 months ago
  • ·
  • [ - ]
So they are now close to the point where every float in the original GPT-3 model is worth $1.
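Quick back-of-the-envelope on that claim, using GPT-3's published parameter count of 175 billion:

```python
valuation = 157e9        # reported valuation from the round
gpt3_params = 175e9      # parameters in the original GPT-3 (175B)

dollars_per_param = valuation / gpt3_params
print(round(dollars_per_param, 2))  # 0.9
```

About 90 cents per parameter, so "close to $1 per float" holds up.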
I'm wondering... if the rapid development of OpenAI will actually have a deflationary effect on the economy.
I've been very disappointed in recent model releases, to be honest. It seems that o1 is their venture into reasoning; LLMs lack so much, but it's unclear if their approach actually works towards robust reasoning. I do cheer for them and hope they can come up with something. AI research is advancing too slowly!
The closer you get to 100% accurate, the harder it is to improve.
  • gsky
  • ·
  • 6 months ago
  • ·
  • [ - ]
As a genuine user (not a robot), I could not create an account with OpenAI even after solving their puzzles 100 times.
$157B market cap means they need to 20x their current revenue of roughly 400 million dollars by next year...

But the revenue has flatlined, and you can't raise your existing users' cost by 20x...

It truly is a mystery how anybody throwing other people's money at OpenAI hopes to get it back.

They made $300 million in revenue last month, apparently up 17x from last year[1]. To get a P/E ratio of 20, assuming (falsely) that their spending holds constant, they'd need ~4x more revenue.

[1]https://www.nbcnews.com/business/business-news/openai-closes...

For a P/E ratio of 20 they'd need to generate earnings (not revenue) of $7.85 billion, earnings are revenue less all costs.
Right, so

300m $/mo. * 12 mo. * x - costs = 7.85b

or

$3.6b * x = 7.85b + costs

I hold costs constant at $8B and get x = 4.4. $8B is probably a slight overestimate of current costs, I just took the losses from the article and discounted the last year's revenue to $3B. Users use inference which costs money so, in reality, costs will scale up with revenue, which is why I note this is a false assumption. But I also don't know how much of that went into training and whether they'll keep training at the current rate, so I can't get to a better guess.

  • gizmo
  • ·
  • 6 months ago
  • ·
  • [ - ]
If OpenAI starts making a lot of money on each subscription -- implied by your assumption that revenues will 4.4x while expenses stay constant -- the competition will aggressively undercut OpenAI in price. Everybody wants to take market share away from OpenAI, and that means OpenAI has to subsidize their users or sell at break even to prevent that from happening.

Furthermore, training also gets exponentially more expensive as models keep growing and this R&D is not optional. It's absolutely necessary to keep current OpenAI subscribers happy.

OpenAI will lose money, and lots of it, for years to come. They have no clear path to profitability. The money they just raised will last maybe 18 months, and then what? Are they going to raise another 20bn at a 500bn valuation in 2026? Is their strategy AGI or bust?

They NEED to keep training forever. Otherwise free/cheap competitors catch up. Their only edge is being a half year ahead.
> The start-up expects about $3.7 billion in sales this year

Where do you get $400M and flatline?

"That meant OpenAI could provide a return for investors, but those profits were limited. OpenAI has also long been in talks to restructure itself as a for-profit company. But that is not expected to happen until sometime next year..."

https://tvtropes.org/pmwiki/pmwiki.php/Main/IAmNotLeftHanded

I'm not justifying anything here but I think their revenues are expected to triple next year...now that doesn't mean they will of course. But why do you say they've flatlined?
> But the revenue has flatlined and you can't raise your existing users cost by 20x...

Why not? They’re already shopping a 2k/mo subscription option

Who would pay 2k/mo? For what?

That’s someone’s rent.

>Who would pay 2k/mo?

Spammers.
