You can see the very different response from OpenAI: https://openai.com/index/our-approach-to-advertising-and-exp.... OpenAI says it will mark ads as ads and keep answers "independent," but that's not measurable. So we'll see.
Anthropic proactively saying it will not pursue ad-based revenue suggests to me not just that they're "one of the good guys," but that they may be stabilizing on a business model of both seat-based and usage-based subscriptions.
Either way, both companies are hemorrhaging money.
- Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, 1998
Anthropic is mainly focusing on B2B/enterprise and tool-use cases. In terms of active users I'd guess Claude is a distant last, but in terms of enterprise/paying customers I wouldn't be surprised if they were ahead of the others.
> AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce
I don't think they have an accurate model for what they're doing - they're treating it like just another app or platform, using tools and methods designed around social media and app store analytics. They're not treating it like what it is, which is a completely novel technology with more potential than the industrial revolution for completely reshaping how humans interact with each other and the universe, fundamentally disrupting cognitive labor and access to information.
The total mismatch between what they're doing with it to monetize and what the thing actually means to civilization is the biggest signal yet that Altman might not be the right guy to run things. He's savvy and crafty and extraordinarily good at the palace intrigue and corporate maneuvering, but if AdTech is where they landed, it doesn't seem like he's got the right mental map for AI, for all he talks a good game.
It appears they trend in the right direction:
- Have not kissed the Ring.
- Oppose blocking AI regulation that others support (e.g. they do not support banning state AI laws [2]).
- Committing to no ads.
- Willing to risk a defense department contract over objections to its use for lethal operations [1].
The things that are concerning:
- Palantir partnership (I'm unclear about what this actually is) [3]
- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])
It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every opportunity via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.
I'm curious, how do others here think about Anthropic?
[2]https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...
[3]https://investors.palantir.com/news-details/2024/Anthropic-a...
Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.
Anthropic being a PBC probably helps.
And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and maybe not able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course as you say even he may be pushed to take money from unsavory sources.
Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.
And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.
That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.
https://www.anthropic.com/news/anthropic-s-recommendations-o...
Also, Codex CLI and Gemini CLI are open source. Claude Code never will be; it's their moat, even though, as its creator says, it's 100% written by AI. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.
LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do that would magnify the potential damage tenfold.
Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.
This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".
Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.
I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.
There are no good guys; Anthropic is one of the worst of the AI companies. Their CEO is continuously threatening all of the white-collar workers, their engineers play the 100x-engineer game on Xitter, and they work with Palantir and support ICE. If anything, Chinese companies are ethically better at this point.
- Blocking access to others (Cursor, OpenAI, opencode)
- Asking to regulate hardware chips more, so that they don't get good competition from Chinese labs
- Partnerships with Palantir and the DoD, as if it weren't obvious how these organizations use technology and for what purposes.
At this scale, I don't think there are good companies. My hope is in open models, and the only labs doing good on that front are the Chinese labs.
Similar to Oracle vs Postgres, or some obscure closed-source caching product vs Redis. One day I hope we will have very good SOTA open models where the closed models compete to catch up (not saying Oracle is playing catch-up with Pg).
> Asking to regulate hardware chips more
> partnerships with [the military-industrial complex]
> only labs doing good in that front are Chinese labs
That last one is a doozy.
They’re moving towards becoming load-bearing infrastructure, at which point answering specific questions about what you should do about it becomes rather situational.
I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.
This is exactly what ChatGPT 5 was about. By tweaking both the model selector (thinking/non-thinking) and using a significantly sparser thinking model (capping max spend per conversation turn), they massively controlled costs, but did so at the expense of intelligence, responsiveness, curiosity, skill, and all the things I valued in o3. That was the point I dumped OpenAI and went with Claude.
This business model issue is a subtle one, but it's a key reason why an advertising revenue model is not compatible (or competitive!) with "getting the best mental tools": margin maximization selects against businesses optimizing for intelligence.
This is going to be tough to compete against - Anthropic would need to go stratospheric with their (low margin) enterprise revenue.
I use it as codegen too but I easily have 20x more brainstorming conversations than code projects
Most non-tech people I talk to are finding value with it with traditional things. The main one I've seen flourish is travel planning. Like, booking became super easy but full itinerary planning for a trip (hotels, restaurants, day trips/activities, etc) has been largely a manual thing that I see a lot of non-tech people using llms for. It's very good for open ended plans too, which the travel sites have been horrible at. For instance, "I want to plan a trip to somewhere warm and beachy I don't care about the dates or exactly where" maybe I care about the budget up front but most things I'm flexible on - those kinds of things work well as a conversation.
I wish the financial aspects were different, because Anthropic is absolutely correct about ads being antithetical to a good user experience.
I agree with this - I'm not so much worried that ChatGPT is going to silently insert advertising copy into model answers. I'm worried that advertising alongside answers creates bad incentives that then drive future model development. We saw Google Search go down this path.
However, I do think we need to take Anthropic's word with a grain of salt, too. That they're fully working in the user's interest has yet to be proven, and that trust would take a lot of effort to earn. Once a company goes public, or intends to, incentives change: investors expect money, and throwing your users under the bus is a tried and tested way of increasing shareholder value.
https://x.com/ns123abc/status/2019074628191142065
In any case, they draw undue attention to OpenAI rather than themselves. Not good advertising.
Both OpenAI and Anthropic should start selling compute devices instead. There is nothing stopping open-source LLMs from eating their lunch mid-term.
Littering a potentially quality product with ads that one cannot easily separate from the content is what the evil is.
But I’m happy with their position and will cancel my ChatGPT and push my family towards Claude for most things. This taste effect is what I think pushes Apple devices into households: power users making endorsements.
And I think that excess margin is enough to get past lowered ad revenue opportunity.
> ...but including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.
Sadly, with my disillusionment with the tech industry, plus the trend of the past 20 years, this smacks of Larry Page's early statements about how bad advertising could distort search results and Google would never do that. Unsurprisingly, I am not able to find the exact quote with Google.
In this Animal Farm-style Orwellian cycle we’ve been going through, at least they start here, unlike the others.
I for one commend this, but stay vigilant.
I wonder how they can get away without showing ads when ChatGPT has to be doing it. Will the enterprise business be so profitable that ads are not required?
Maybe OpenAI is going for something different: democratizing access for the vast majority of people. Remember that ChatGPT is what people know about and what people use the free version of. Who's to say that showing ads while also providing more access is the wrong choice?
Also, Claude can't hold a candle to ChatGPT in search. In my experience, ChatGPT is just way better at deep searches through the internet than Claude.
It's great that Anthropic is targeting the businesses of the world. But it's a little insincere to then declare "no ads," as if that decision would obviously be the same if the bulk of their users were non-paying.
There are, as far as ads go, perfectly fine opportunities to do them in a limited way for limited things within chatbots. I don't know who they think they're helping by highlighting how to do it poorly.
Great by Anthropic, but I put basically no long term trust in statements like this.
Very diplomatic of them to say "we respect that other AI companies might reasonably reach different conclusions" while also taking a dig at OpenAI on their youtube channel
A lot of people are ok with ad supported free tiers
(Also is it possible to do ads in a privacy respecting way or do people just object to ads across the board?)
(Props for them for doing this, don't know how this is long-term sustainable for them though ... especially given they want to IPO and there will be huge revenue/margin pressures)
sorry but this is silly, nothing suggests this at all.
Obviously it's a play, homing in on privacy/anti-ad concerns, like a Mozilla-type angle, but really it's a huge ad buy just to slag off the competitors. Worth the expense just to drive that narrative?
Ads playlist https://www.youtube.com/playlist?list=PLf2m23nhTg1OW258b3XBi...
Looks like you're picking up LLM speak too!
https://www.theverge.com/openai/686748/chatgpt-linguistic-im...