When you're talking to an LLM about popular topics or common errors, the top results are often just blogspam or unresolved forum posts, so you never get an answer to your problem.
More an indicator that web search is less usable than ever, but it's interesting that it affects the performance of generative systems nonetheless.
LLMs are truly reaching human-like behavior then
They give simple jobs, like cleaning or painting, to people at the bottom of the earnings scale. Most people in that plan have little formal education, like those who left school in their mid-teens.
This seems like a great idea to me! Making it cheaper for businesses to hire people for these jobs would lower prices for everyone, improving accessibility of the services.
How would this help lower prices? The taxes have to be paid for by someone, and that cost should largely end up landing on the consumer.
It seems like we'd be changing whose hands the money moves through, but it still has to be paid for one way or another. If that's the case, we'd risk higher prices, since taxes have to subsidize prices and cover all the costs of running the program in the first place.
In the end, you use money from the rich to pay for socially beneficial jobs. Exactly the sort of thing government is for: ensuring that social goods are provided.
Taxing the rich can have unintended consequences. First you have to change the tax code so they actually get taxed and can't dodge it; those rules alone would be difficult to write effectively and would likely mean changing other parts of our tax code that impact everyone. If the rich do get taxed enough to cover a good chunk of wages, demand for luxury items would go down, and so too would the jobs that make those products and services.
Once subsidized by a UBI, at best workers will continue to work at the same levels they do now. There will be an incentive for them to work less though, potentially driving up the labor costs you are trying to reduce. How do we accurately predict how many workers will reduce their hours or leave the workforce entirely? And how do we predict what that would do to prices?
The idea of taxing the rich to bail out everyone else is too often boiled down to a simple lever that, when pulled, magically fixes everything without any risk of unintended side effects.
There's an obvious wealth gap that's increasing, and the people up top are getting even less oversight as we speak. As you say in your post, you don't know what the effects will be, because it's not simple. But I see no compelling reason to continue with the oligarchy.
My point was that we can change taxes to a system that we think will work better today, but we can't claim to know what the actual results will be years from now.
The claim made earlier in the chain was that taxing the rich to subsidize wages would lower labor costs and lower prices. I don't think we can ever know well enough how a broad-reaching change will land, and claiming to know prices will go down isn't reasonable.
I had to watch this office space clip again just to be sure. https://youtu.be/Fy3rjQGc6lA ah yes, the meaning of life. ha-ha I love the classics https://www.youtube.com/watch?v=ZBdU9v5nLKQ
A far worse issue we already suffer from is that without participating we forget how our civilization works. Having a job gives you at least a tiny bit of insight that may partially map to other jobs.
Very similar to how ultra hard core libertarians assume they’ll be the ones at the top of the food chain calling the shots and not be just another peasant.
But it doesn’t really matter, because there is no way in hell any of these LLMs will uproot all of society. I use LLMs all the time, they are amazing, but they aren’t gonna replace many jobs at all. They just aren’t capable of that.
The available work offers the entire spectrum but we have to divide and plan it.
I watch these historical farm documentary tv shows, and they show how everyone in a town had a purpose and worked together, the blacksmith, the tile maker.
And I do often think the limiting factor to a life like this is the “market.” If you could create these communities, you could be an artist/artisan/builder without strictly having to worry about making enough to live.
I met someone recently who lived in the Galapagos Islands, and she seemed to sort of live this community-oriented, trading, anarcho-capitalist lifestyle. I think most people would be happier if their small capitalist or socialist community involved direct interaction with people rather than dealing with soulless corporations all the time.
I can imagine loads of tasks or jobs that would be quite pleasant if it weren't for stressing over efficiency or business admin.
I mean think about it…when was the last time you heard of charity gutter cleaning services? People would much rather enjoy their leisure time on hobbies or with family/friends.
In terms of charity cleaning services, there are people who clean hoarder's houses or landscape unruly yards for free on YouTube... ;)
If the government gives out free money people will pocket it. Should not be controversial.
As for why: for purpose, for praise, for community, for mental health, for trade/contribution, for skill building, etc. Loads of examples of this already. Maybe none of these things are attractive to you but I don't think that's universal.
Like I said, it's just trying to add to the default UBI, not getting everyone volunteering in their community or else.
I imagine just like with existing benefits, the majority of people wouldn't feel great about being on UBI doing nothing, and they would pursue something that gives them a better social standing, a better sense of purpose, a good challenge, whatever motivates an individual. It's why lots of people do volunteer work, work on important open source software, and so on. Sure, there's outliers that actually proudly slack off, but you don't address specific problems with generic solutions.
But more importantly, having the _option_ to fall back on benefits means people need to take fewer risks to pursue their talents and likely be of more value to society than if they did whatever puts food on the table today. Case in point: People born into a family that can finance them through college are more likely to become engineers than people born into poor households. On the flip side, some people do white collar jobs vs something like being a medic to uphold their standard of living from the higher salary, not out of preference.
I think it would need careful management, but I believe there's every reason to be optimistic.
People work for money. If a job has no pay, you can't expect it to get done.
We need people to actually run hospitals, produce food, construct shelter/infrastructure, provide childcare/education, etc.
It’s a classic economic blunder that dictatorships love to make:
1. Create money & rack up debt.
2. Produce nothing.
3. Create inflationary crisis and exacerbate wealth inequality.
4. Highlight your good intentions and relish your new position as champion of the people.
Also, it’s fascinating that you say “no benefit to the taxpayer” as if the taxpayer not having to work is somehow not a benefit?
A conversation that starts like this is not going to go well.
The vast majority of people's passions are partying, sex, alcohol/drugs, watching sports, gossiping, generally wasting time. Things that mostly
This whole line of thought to me is embarrassingly clueless, naive and basically childish.
It is just mind blowing to me how smart people can't see what a bubble they live in.
I almost suspect, the higher a person's IQ, the more susceptible they are to living in a bubble that basically has nothing to do with the majority of people with an IQ of 100.
And why do you need money at all in that scenario, at least for the basic items the UBI intends to make affordable to all? Why not just make them free and available to everyone?
No UBI proposal I'm aware of proposes UBI replaces salaries or is high enough to satisfy everyone. The "B" is for basic. Most people are not satisfied with earning a basic salary.
I know a few people with small businesses in various manufacturing industries. They all had a really hard time finding enough people to work while stimulus checks were going out.
People wouldn't make quite as much, but they were happy to stay home and have the basics for "free" rather than have a job.
Historically, jobs or professions always existed around the intrinsic motivation of the person working and around the needs of the society around that person.
So you could become a poet, but if you did not write poems that people liked, you would starve. Or you could become a farmer, provide the best apples in your city, and earn a well-deserved income.
That's why free economies have historically developed so much better than any centrally planned economy.
You can do more harm than good by implementing policies like “guaranteed free money”.
If it was voted down, I'm guessing it was because to the extent that it's a fact, it's trivially true, and there's nothing insightful about the defeatist take. It's possible to do more harm than good doing pretty much anything. And the world is littered with problems that are not "fully solvable" but that we've mitigated greatly.
Let's say your car tires pop.
Person A: "I will paint your car tires red. That will fix them."
Person B: "Painting my flat car tires red won't fix them."
Person C: "Well, you're just being defeatist. We have to do something."
Person B: "..."
https://www.mdpi.com/2071-1050/12/22/9459?ref=scottsantens.c...
Spawning money creates nothing.
When everyone in the economy has a minimum of say $3,000 per month the cost of necessities, and everything else, will go up roughly in line with that.
But fine, I'll bite.
> will go up roughly in line with that
Could you at least explain the logic that you believe implies this would occur with such certainty? I've thought about this before and I couldn't see this as a necessary outcome, though (depending on various factors) I do see it as a possible one.
Because we haven’t actually created anything. Supply is the same, demand is WAY up.
As long as we’re in a deficit, spending for this program would directly increase the money supply. Of course there are other factors like the velocity of money and the elasticity of goods/services, but at the end of the day we’re increasing the amount of money (aka cash + credit) with no change to supply AND we’re going into debt to do it.
Any increase in supply over time will eat up some of that price fluctuation, but for most products prices are more flexible than supply, and a majority share of any capital increase will go towards prices rather than supply.
You actually made my point, I think: that the price increase need not necessarily be "roughly in line with that", but could be less.
This distinction is absolutely critical. Like I said in [1], if you put $3k in my pocket, and my expenses increase by $2k, that's a very different situation from if my expenses grow by $3k. It would mean there is a reachable equilibrium.
I forget the exact figure, but there's a general percentage for how much of a cost increase on a company is passed on to consumers. If a company's tax rate goes up by 10%, something like 8% of that is passed on to the consumer through price increases. I'd expect something similar with a UBI.
If so, then explain how you're making the jump from "prices increase some" to "you would need Marx style price controls" or "otherwise UBI will fail to cover the necessities"? If you give me $X and I spend $X * r of it due to price increases, and r < 1, then don't I have (1 - r) * $X left in my pocket, meaning it could be made large enough to cover the basic necessities? This isn't complicated math.
I don't get why "prices increase" is seen as such a mic-drop phrase that shows the system would fall apart. Prices already increase for all sorts of reasons, it's not like the economy falls apart every time or we somehow add Marx style price controls every time. Sure, prices increase some here too. And then what? The sky falls?
With regards to my claim that we'd need strong price controls: a UBI needs prices for the basics to remain stable. I won't go down the road of trying to define what "the basics" are here; that's a huge rabbit hole, so let's just leave it at the broad category in general.
If everyone can afford the basics, there is more demand for those items. Supply will likely increase eventually and eat up part of the demand increase, but the rest goes to prices. When those prices go up, the UBI would have to increase to match. The whole cycle would go on in a loop unless there's some lever for the government to control the prices of anything deemed a basic necessity.
No. Just because something keeps increasing doesn't mean it won't stabilize. Asymptotes, limits, and convergence are also a thing. You're making strong divergence claims that don't follow from your assumptions.
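To make the convergence point concrete, here's a toy model, not a forecast. It assumes (my assumption, purely for illustration) a fixed pass-through ratio r < 1: each extra dollar of UBI raises prices of the basics by r dollars, and the UBI is then topped up to cover that rise, feeding the next round.

```python
# Toy model: each UBI top-up induces a price rise of a fraction r,
# which requires a further top-up of r times the previous one.
def total_ubi_needed(initial=3000.0, r=0.8, rounds=1000):
    total, injection = 0.0, initial
    for _ in range(rounds):
        total += injection
        injection *= r  # next top-up only covers the induced price rise
    return total

# Geometric series: the loop converges to initial / (1 - r)
# instead of diverging, as long as r < 1.
print(round(total_ubi_needed(), 2))  # 15000.0 for initial=3000, r=0.8
```

So under this assumption the "loop" settles at a finite level; it only spirals without bound if the pass-through ratio is 1 or more.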
Say you have a fire department, even though you personally might not be paying anything for it because you are so poor that you don't pay any taxes. You have police protecting you, and the army. You have free primary school, at least.
So I think the question is, would it help for the government to provide more, or less, or the same amount of free services as it does currently?
Would it "increase prices" if healthcare was free? Not necessarily I think. At least not the price of healthcare. Government would be in a much better position to negotiate drug-prices with pharmaceutical companies, than individuals are.
> Would it "increase prices" if healthcare was free?
That depends: who's ultimately footing the bill? If it's paid for with taxes on businesses, yes, most of that would be passed on to consumers in the form of price increases. If it's paid for by consumer taxes, you will ultimately find consumers demanding higher wages, and prices would again go up. If it's paid for with tariffs, well, we'll find out soon, but prices should go up there as well.
They are free for poor people. For instance, basic education must be free, so we can have a productive work-force that can read and write and pay taxes in the future, which will make us even richer.
Finally, we already do price controls and subsidies in many places, like food production. It's just that a big part of the advantage is soaked up by big companies.
But I also disagree with your assertion. Minimum wage increases are a great example. Opponents will constantly claim they will lead to massively increasing prices, but they never do. Moreover, a higher standard of employment rights and payment in first world countries like Norway doesn't seem to correlate well with higher Big Mac prices.
And our food quality in the US is garbage. We can't say if there is causation there since we can't compare against a baseline US food system without subsidies, but there is a correlation in timing between the increase in food subsidies and the decrease in quality.
> Opponents will constantly claim they will lead to massively increasing prices, but they never do.
The only times that really comes up is when an increase is proposed and the whole debate is over-politicized. Claims on both sides at those times are going to be exaggerated.
Prices absolutely go up with minimum wage increases. How could they not? It'd be totally reasonable to argue the timeline that matters, prices aren't going to go up immediately. You could also argue the ratio, maybe wage is increased by 30% and prices are only expected to go up by 20%.
People earning a minimum wage almost certainly have pent up demand, they would buy more if they could afford it. Increasing their wages opens that door a bit, they will spend more which means demand, and prices, will go up in response.
And the point is that the income percentage increase is higher for those with lower incomes. Even if prices go up by 20%, somebody making $20k/year who gets an additional $10k from UBI is going to be much better off.
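A quick arithmetic check of those toy numbers (the figures are the comment's illustration, not empirical data):

```python
# $20k income, $10k UBI, and an assumed 20% rise in prices.
income, ubi, price_rise = 20_000, 10_000, 0.20
real_before = income
real_after = (income + ubi) / (1 + price_rise)  # deflate by the price rise
print(real_after)  # 25000.0 -> a 25% real gain despite 20% inflation
```

In other words, even with the assumed inflation, a low earner's purchasing power rises from $20k to the equivalent of $25k.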
"They had useless make-work jobs and sent 4 emails a week and watched TikToks the rest of the time"
So?
There's FAR too many people and nowhere near enough jobs for a large portion of people to do something that is both "real", and provides actual economic value.
Far more important that people have some form of dignity and can pay to feed their families and live a life with some material standard.
Anyone who's been in a corporate role knows there are loads of people who have dubious utility and value--and people with "tech skills" are NOT exceptions to this rule, at all.
If meaningless jobs are important because its the only way people can make money to pay for all the shit we think we need to pay for, or because they haven't yet been offered the time and freedom to find their own sense of purpose, let's focus on fixing the root cause(s) there.
> If meaningless jobs are important because its the only way people can make money to pay for all the shit we think we need to pay for, or because they haven't yet been offered the time and freedom to find their own sense of purpose, let's focus on fixing the root cause(s) there.
^^^ 100% yes! That! ^^^
Like, if you already got a car, you can drive it for 10-20 years easily, or more if you take good care of it. But advertising makes you think you "need" a new car every few years... because that keeps the economy alive. You buy a car and sell the old one to someone else who can't afford a new car but also wants a newer one, so their old car goes off to Africa or wherever to be repaired until truly unrepairable. But other than the buyer in Africa, who actually needed a car, neither you nor the guy who bought your old car actually needed a new one. And cars are a massive industry that employs many millions of people worldwide - so if you banned advertising for cars, suddenly the bubble would pop and you'd probably have a fifth of the size remaining, most of it from China, because the people in Africa can't afford what a brand new Western-made car costs.
Or Temu, Shein, Alibaba and godknowswhat other dropshipping scammers. Utter trash that gets sold there, but advertising pushes people to buy the trash, wear it two times and then toss it.
A giant fucking waste of resources because our worldwide economy is based on the dung theory of infinite growth. It has worked out for the last two, three centuries - but it is starting to show its cracks, with the planet itself being barely able to support human life any more as a result of all that resource consumption, or with the economy and the public sector being blown up by "bullshit jobs".
We need to drastically reform the entire way we want to live as a species, but unfortunately the changes would hurt too many rich and influential people, so the can gets kicked ever further down the road - until eventually, in a few decades, our kids are gonna be the ones inevitably screwed.
Perhaps sleepy sinecures are more prevalent in the public sector (especially post-FAANG layoffs), but they are not unique to it.
In addition, there's plenty of jobs that are demanding, stressful, and technically difficult but are ultimately towards useless or futile ends, and this is known by parties with a sober perspective.
When i worked as a consultant, I was on MANY projects where everything was pants-on-fire important to deliver projects to clients for POCs and/or overpriced/overengineered junk that they were incapable of maintaining long-term (and in many cases, created more problems than it ostensibly solved)
All that work was pure bullshit; I was never once in denial of that fact. Fake deadlines, fake projects, fake urgency, real stress. Bullshit comes in many forms.
"the economy" = private sector / everything not government; "public sector" = government / fully government owned companies.
And both are horribly blown up due to all the bullshit and onerous bureaucracy that's mostly there because, apparently, you can't trust the people you entrust with a train carriage worth dozens of millions of euros to correctly deal with the cash register of the onboard restaurant.
Some computers from 20 years ago are still in good shape, but...
(You can continue.)
The volume of things we buy but don't need (or necessarily want) drives a huge sector of the global economy. We're working to fill our lives with unnecessary things that bring us no happiness beyond the adrenaline hit when we click "Buy Now" and the second one when the Prime box arrives at our door.
Consumerism masks the underlying problem and it's only going to get worse as more is automated. Producers will have an incentive to convince us we still need more.
Cars are - to me - a red herring in this argument except for the people who do literally trade in for a new car every few years. I drive whatever fairly boring Honda for as long as I can (usually 8-10 years) and don't feel a ton of regret about investing in comfort. But I've been as guilty as anyone about just buying stuff because it pops up in an ad or recommended on Amazon, etc.
At the top you get the people who are true pros: they write the books and the guides, they solve the hardest problems, and everyone looks up to them. But spin the wheel and get a random SWE to do some work? It's not gonna be far off from a random 1v1 lobby.
Continues to apply
Interesting read, but I feel like the author could've spent just one more minute on this sentence. How good you are at a given activity often doesn't matter, because you're mostly going to encounter people around your own level. What I'm saying is, unless you're at the absolute top or the absolute bottom, you're going to have a similar ratio of wins to losses regardless of whether you're a pro or an amateur, simply because an amateur gets paired with other amateurs, while a pro gets paired with other pros. In other words, not being the worst is often everything you need, and being the best is pretty much unreachable anyway.
This can be very well extended to our discussion about SWEs. As long as you're not the worst nor the best, your skill and dedication have little correlation with your salary, job satisfaction, etc. Therefore, if you know you can't become the best, doing bare minimum not to get fired is a very sensible strategy, because beyond that point, the law of diminishing returns hits hard. This is especially important when you realize that usually in order to improve on anything (like programming), you need to use up resources that you could use for something else. In other words, every 15 minutes spent improving is 15 minutes not spent browsing TikTok, with the latter being obviously a preferable activity.
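The pairing argument can be sketched with the standard Elo expected-score formula (my choice of model; the comment doesn't name a rating system): the expected win rate depends only on the rating *difference*, so players matched near their own level hover around 50% at every skill tier.

```python
# Standard Elo expected score: probability player A beats player B.
def expected_score(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

print(expected_score(1200, 1200))            # amateur vs amateur: 0.5
print(expected_score(2400, 2400))            # pro vs pro: also 0.5
print(round(expected_score(2400, 1200), 3))  # pro vs amateur: ~0.999
```

The mismatch case is exactly the author's "I would destroy any casual player" scenario, which matchmaking exists to avoid.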
And it's very easy to forget when you're the guy going to the club just how bad most regular players are.
I'm in a table tennis club, my rating is solidly middle of the pack, and so I see myself as an average player. But the author is correct, I would destroy any casual player. I almost never play casual players, though.
Not sure how applicable this is to software engineering.
Now scale that up 10x, because reality is at least an order of magnitude more complex than a video game.
Overall economic productivity is high enough that a lot of positions could be split into 2 or 3 short shifts, at full pay - IF you don't factor in the various financial boondoggles that we've gotten ourselves wrapped up in. If you made the decision to wipe out a lot of these obligations (mostly to rich people), we could get to that kind of set-up, solvently.
Personally, I think a receptionist at a building is useless, but I would be pretty pissed off if my packages kept getting stolen or I had to go get each one when it came to my place of business.
Big entities are such that if you take it all down, you feel the effect on output (maybe value, maybe something else), but if you take out huge chunks, you might not feel much, because they're so extremely ineffective and value creation doesn't correspond with value received for the individuals who created it.
There are a lot of useless employees out there. So, so many.
And a ton of bullshit jobs as well.
Do you include the private sector?
Why do corporations engage in this kind of charity? Do we need more competition?
Not as appropriate in a government setting where the impact goes far beyond personal profit and loss.
So I ended up posing the question to Claude and the response was “figure out how to work with me or pick a field I can’t do” which was pretty much a flex.
To impact the labor market, they don't have to be correct about AI's performance, just confident enough in their high opinions of it to slow or stop their hiring.
Maybe in the long term, this will correct itself after the AI tools fail to get the job done (assuming they do fail, of course). But that doesn't help someone looking for a job today.
- Ada's LLM chatbot does a good enough job to meet service expectations.
- AgentVoice lets you build voice/SMS/email agents and run cold sales and follow-ups (there are probably better options; it was just the first one I found)
- Dot (getdot.ai) gives you an agent in Slack that can query and analyze internal databases, answering many entry level kinds of data questions.
Does that mean these jobs at the entry level go away? Honestly probably not. A few fewer will get hired in any company, but more companies will be able to create hybrid junior roles that look like an office manager or general operations specialist with superpowers, and entry level folks are going to step quickly up a level of abstraction.
Robotics is the big unlock of AI, since the world is continuous and messy, not discrete. Training a massively complex equation to handle this is actually a really good approach.
For example, robots face several hurdles:
- High energy requirements in varied environments: they need to run all day (and maybe all night too, which MAY be an advantage over humans). In many environments this means much better power sources than current battery technology, especially where power is not provisioned (e.g. many different sites) or where power lines are a hazard.
- Low failure rates. Unlike software, failing fast and iterating are not usually options in the physical domain. Failure sometimes has permanent and far-reaching costs (e.g. resource wastage, environmental contamination, loss of lives, etc.)
- Being lightweight and agile. This goes a little against the first point, because batteries are heavy. Many environments where blue-collar workers go are tight, have limited weight bearing, etc.
- Handling "snowflake" situations. Even in house repair there are different standards over the years, hacks, and differences in age, which mean what is safe to do in one residence isn't in another. The physical world is generally like this.
- Unlike software, iterating on different models of robots is expensive, slow, capital-intensive, and subject to the laws of physics. The rate of change between models will be slower as a result, allowing people time to adapt to the disruption. Think in terms of manufacturing timelines.
- Anecdotally, many tradespeople I know, after talking to many tech people, hate AI and would never let robots on their site to learn how to do things. Given many owners are also workers (more small businesses), the alignment between worker and business owner in this regard is stronger than in a typical large organisation. They don't want to destroy their own moat just because "it's cool", unlike many tech people.
I can think of many, many more reasons. Humans evolved precisely for physical, high-dexterity work requiring hand-eye coordination, much more so than for white-collar intelligence (i.e. Moravec's Paradox). Honestly, I'm wondering whether I should move to a trade at this stage, despite liking my SWE career. Even if robots do take over, it will happen much more slowly, allowing me as a human to adapt at pace.
Before a human physical worker can start being productive, they need to be educated for 10-16+ years, while being fed, clothed, sheltered and entertained. Then they require ongoing income to fund their personal food, clothing and shelter, as well as many varieties of entertainment and community to maintain long-term psychological well-being.
A robot strips so much of this down to energy in, energy out. The durability and adaptability of a robot can be optimized for the kinds of work it will do, and unit economics will find a way to make the capital cost of preparing a robot for service accessible.
Emotional opinions on AI aside, we will I think see many additional high-tech support options in the coming decade for physical trades and design trades alike.
I'm not saying the robots aren't coming - just that it will take longer, and being disrupted last gives you the most opportunity to extract higher income for longer and switch from labor to capital for your income. I wouldn't be surprised if robots don't make any inroads into the average person's life in the coming decade, for example. As intellectual fields are disrupted, purchasing power will transfer to the rest of society, including people not yet affected by the robots, making capital accumulation for them even easier at the expense of AI-disrupted fields.
It is a MUCH safer path to provide for yourself and others, assuming capitalism, in a field that is comparatively scarce with high demand. Scarcity and barriers to entry (i.e. moats) are rewarded through higher prices/wages/etc. Efficiency, while beneficial for society as a whole (output per resource increases), tends to punish the efficient, since their product is comparatively less scarce than others'. This is because, given the same purchasing power (money supply), efficiency makes intelligence goods cheaper and other, less disrupted goods more expensive, all else being equal. I find tech people often don't have a good grasp of how efficiency and "cool tech" interact with economics and society in general.
In the age of AI, the value of education and intelligence per unit diminishes relative to other economic traits (e.g. dexterity, social skills, physical fitness, etc.). It's almost ironic that the intellectuals themselves, from a capitalistic viewpoint, will be the ones who destroy their own social standing and worth relative to others. Nepotism, connections, and skilled physical labor will have a greater advantage in the new world compared to STEM/intelligence-based fields. I will be telling my kids to really think before taking on a STEM career, for example; AI punishes this career path economically and socially, IMO.
AI rewards the skills it does not disrupt. Trades, salespeople, deal makers, hustlers, etc. will do well in the future, at least relative to knowledge workers and academics. There will be disruptors who get rich for sure (e.g. AI developers) for a period of time, until they too make themselves redundant, but on average their wealth gain is more than dwarfed by the whole industry's decline.
Another case of tech workers equating worth with effort and output, when really, in our capitalistic system, worth is correlated with scarcity. How hard you work or how much you produce has little to do with who gets the wealth.
Governments will want to ban them, but there's just too much $$$ to be made from replacing employees, so things will get complicated fast.
But I don't see what governments can really do about it. I mean, sure, they can ban the models, but enforcing such a ban is another matter - the models are already out there, it's just a large file, easy to torrent etc. The code that's needed to run it is also out there and open source. Cracking down on top-end hardware (and note that at this point it means not just GPUs but high-end PCs and Macs as well!) is easier to enforce but will piss off a lot more people.
But there are lots of 'easy' development roles that could be mostly or entirely replaced by it nonetheless. Lots of small companies that just need a boring CRUD website/web app that an AI system could probably throw together in a few days, small agency roles where 'moderately customised WordPress/Drupal/whatever' is the norm and companies that have one or two tech folks in-house to handle some basic systems.
All of these feel like they could be mostly replaced by something like Claude, with maybe a single moderately skilled dev there to fix anything that goes wrong. That's the sort of work that's at risk from AI, and it's a larger part of the industry than you'd imagine.
Heck, we've already seen a few companies replacing copywriters and designers with these systems because the low quality slop the systems pump out is 'good enough' for their needs.
From experience dealing with a few of these companies, there's almost no chance that "vibe coding" whatever thing is going to be anything other than a massive improvement over what they'd otherwise deliver.
Thing is, the companies hiring these firms aren't competent to begin with, otherwise they'd never hire them in the first place. Maybe this actually disrupts those kinds of models (I won't hold my breath).
But honestly, LLMs are here to stay. I don't like them for zero-verification, high-trust requirements - i.e. when the answer HAS to be correct.
But generating viewpoints and ideas, and even code are great uses - for further discussion and work. A good rubber duck. Or like a fellow work colleague that has some funny ideas but is generally helpful.
LLMs also don't have the ego, arrogance and biases of humans.
I’ve spent a career dealing with the complete opposite: people with egos who just cannot bear to admit when they don’t know, and will instead dribble absolute shit just as confidently as an LLM does, until you challenge them enough that they decide to pretend the conversation never happened.
It’s why I, someone fairly mediocre, have been able to excel: despite not being the smartest person in the room, I can at least sniff bullshit.
> average humans understand the limit of their knowledge.
We’ll have to agree to disagree here. I’d call it a minority, not the average.
Which is why we live in a world where huge numbers of people think they know significantly more than they do and why you will find them arguing that they know more than experts in their fields. IT workers are particularly susceptible to this.
And if they don't suck at their job, they get promoted until they do: https://en.wikipedia.org/wiki/Peter_principle .
LLMs themselves don’t choose the top X.
That’s all regular flows written by humans run via tool calls after the intent of your message has been funneled into one of a few pre-defined intents.
I’ve built systems like it.
If it was something brand new, Anthropic would be bragging hard about it.
> You could 100% create the tool to search and chose results, go through links, read more pages, etc.
That’s exactly what I’m saying. _YOU_ could build a tool that does that. The LLM essentially acts as an intent detector, not a web crawler.
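The routing described above can be sketched in a few lines. This is a toy illustration of the pattern, not any vendor's real implementation: every name here (`classify_intent`, `run_web_search`, etc.) is hypothetical, and the `classify_intent` stub stands in for the LLM call that maps a free-form message onto one of a few fixed intents.

```python
from typing import Callable

def classify_intent(message: str) -> str:
    """Stand-in for an LLM call that maps a free-form message to a fixed intent."""
    lowered = message.lower()
    if "search" in lowered or "latest" in lowered:
        return "web_search"
    if "summarize" in lowered:
        return "summarize"
    return "chat"

def run_web_search(message: str) -> str:
    return f"[ran scripted search flow for: {message}]"

def run_summarize(message: str) -> str:
    return f"[ran scripted summarize flow for: {message}]"

def run_chat(message: str) -> str:
    return f"[plain model reply to: {message}]"

# The "regular flows written by humans": a static intent -> handler table.
# The LLM only picks the intent; everything after that is ordinary code.
HANDLERS: dict[str, Callable[[str], str]] = {
    "web_search": run_web_search,
    "summarize": run_summarize,
    "chat": run_chat,
}

def handle(message: str) -> str:
    return HANDLERS[classify_intent(message)](message)

print(handle("search for the latest Claude news"))
```

The point is that the interesting behavior (which results to fetch, how many, how to format them) lives in the handlers, which humans wrote and tested, not in the model.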
Not sure how OpenAI's version works, but Grok's approach is to do multiple rounds of searches, each round more specific and informed by the previous round's results.
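The multi-round approach can be sketched as a simple loop. Everything here is a stand-in: `search` is a hardcoded stub in place of a real search API, and `refine` substitutes a trivial heuristic for the LLM step that would normally craft the next, more specific query from the previous results.

```python
def search(query: str) -> list[str]:
    # Stub corpus: pretend each query surfaces one new, more specific term.
    corpus = {
        "rust postgres": ["crate tokio-postgres", "async drivers"],
        "rust postgres tokio-postgres": ["connection pooling with deadpool"],
    }
    return corpus.get(query, [])

def refine(query: str, results: list[str]) -> str:
    # Stand-in for "informed by previous results": fold a salient term
    # from the results back into the next query.
    for r in results:
        if r.startswith("crate "):
            return f"{query} {r.removeprefix('crate ')}"
    return query

def multi_round_search(query: str, rounds: int = 3) -> list[str]:
    seen: list[str] = []
    for _ in range(rounds):
        results = search(query)
        seen.extend(results)
        new_query = refine(query, results)
        if new_query == query:  # no further refinement possible
            break
        query = new_query
    return seen

print(multi_round_search("rust postgres"))
# → ['crate tokio-postgres', 'async drivers', 'connection pooling with deadpool']
```

Each round accumulates results while narrowing the query, which is the claimed advantage over a single "I'm feeling lucky" shot.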
A lot of people who definitely were not intending to be Nazis are driving swasticars, because they didn't know how Nazi the car company's owner was. But here we are. You definitely know now. What you do now matters.
What the person should have said is "a Nazi made that car".
I’ll start doing what other people say for no good reason the day I switch off my brain.
I did a little experiment when Grok 3 came out, telling it that it has been appointed the "world dictator" and asking it to provide a detailed plan on how it would govern. It was pretty much diametrically opposite of everything Musk is doing right now, from environment to economics (on the latter, it straight up said that the ultimate goal is to "satisfy everyone's needs", so it's literally non-ironically communist).
In Elon's eyes it's probably based because it will happily answer "what are 10 good things about Hitler?" with a list of 10 things and only mention twice that Hitler was evil. With ChatGPT you have about a 50% chance of getting a lecture instead of a list. But that's just a lack of safeties and moral lectures, the actual answers seem fairly unbiased and don't agree with anything Musk currently does
If he had said “SIEG HEIL!” I would totally be on your side. But it was plain old American English, and it was about love.
Extreme political tribalism is absolutely destroying human discourse.
to Musk doing the nazi like gesture https://x.com/iam_smx/status/1881583500991889729
I think there's a difference.
To me, him saying "my heart goes out" after the second one was an attempt to cover his arse, which seems to have fooled few people who've seen the videos.
And I'm not sure about the tribalism thing - I was kind of a Musk fan and initially gave him the benefit of the doubt, but the comparison of the videos, plus his promotion of neo-Nazis in European politics, plus his mum's parents leaving Canada for SA because they were kind of Nazi and Canada was too liberal, all seems to add up. (dad: https://www.youtube.com/watch?v=B6e1ES4MLD0&t=200s)
I think he's been a bit influenced by alt-right tweeters on X/Twitter. I'm in the UK and he comes up with some strange things about the UK that probably come from there. He seems to feel that our alt-rightish anti-immigration party, Reform, run by Farage, which has never been in power, is not anti-immigrant enough, and that Farage should step down for someone who properly hates Muslims, like Tommy Robinson. But it's all a bit odd, based seemingly on misinformation from people who have never been to the UK and make things up to tweet.
I'm guessing the salute thing came from interacting with neo-Nazi types on X and not really realising how negatively that stuff is viewed by many people; now he seems bewildered that people would torch Teslas.
I was thinking a lot of the problems come down to misinformation, even going back to the original Nazis and the stuff about the Jews being influenced by Satan and causing all the problems - which is obviously nonsense, but it kicked everything off.
The party leader of the party he promotes is a lesbian whose wife is from Sri Lanka.
Neo nazis surely have evolved from the angry, militaristic skinheads we normally picture.
Also, Elon Musk’s local bakery is a nazi bakery, mostly on account of selling bread to Elon Musk knowing he’s a nazi. This makes them nazis, and anyone who eats their bread are nazis, too.
In fact, having not given in to calling Elon Musk a nazi makes me a nazi. It is the fastest growing demography by virtue of absolute inflation of what it means.
As a Jew: He didn't. (This argument is absurd.)
He was interviewed about it, and he said he didn't.
How does being a German get you to jump to conclusions?
Are you born with a special ability to detect nazi salutes?
Like, did a mirror neuron and a nerve in your torso twitch?
When I saw it, I recognised him beating his heart, throwing it to the crowd, and immediately thought "This is going to get misunderstood." Here we are.
> I usually don't like cancel-culture but you have to have boundaries and I think the risk of another Holocaust and all the other Nazi cruelties is a boundary a functioning society should be able to agree on.
Assuming he's a nazi, but this narrative is fabricated.
You can argue that allowing free speech on X may risk an increase in extremism.
But that's not the same argument as saying "Elon Musk is the next Hitler, he wants to kill the Jews, and all cars fabricated in his name should be destroyed for the betterment of humanity." There are simply too many emotions involved in this kind of reasoning.
Would it be better if they called Elon a fascist? He did the fascist salute, after all. And as other commenters have said: if it endorses authoritarian far-right parties like a duck, has controversial white-supremacist parents like a duck, and does the fascist salute like a duck, at what point do we start wondering whether he’s actually a duck?
No, you mean to say “nazi salute” because it was used by NSDAP during WWII. The point here is that “nazi” now means “baddie”, and “fascist” is even worse because most people who are called that have nothing to do with Mussolini, either.
> if it endorses authoritarian far-right parties like a duck, has controversial white-supremacist parents like a duck, and does the fascist salute like a duck, at what point do we start wondering whether he’s actually a duck?
Cute. You can wonder, of course. That seems extremely warranted. But you can’t conclude based on the current evidence.
Now, I am not convinced that people of the mentioned religion are any better than others at fighting Nazis. Or even at detecting them. And when you read the recent international news, it’s clear that many of them don’t really mind genocides after all.
Also, you didn't read my comment correctly: the whole point is that you don't have to assume he's a Nazi to condemn a Nazi-like salute.
The gesture was quite different to that he'd used previously for 'giving people his heart'.
He's known to be a white supremacist. That is apparently his heritage too.
He supports far right parties in Europe.
Other 'Republican' politicians have repeated the gesture from the dais; but they seem to have made other excuses.
None of the many videos or photos that supposedly show other politicians doing similar gestures actually pass scrutiny. It's possible to inadvertently end with the same hand position. But the full fascist salute, on video, multiple times in succession. That's no accident.
Someone who hadn't meant it would have come back on stage, when it was pointed out to them, and made an apology. Or at least immediately issued a statement/press release.
I would believe he'd planned it as a joke - 'I bought this election, I'm going to throw a Nazi salute for memes'. But I'm not sure that's ultimately any better.
Perhaps you believe he's just a catastrophically idiotic person with no-one around him helping him?
Hitler's was unlike the general population's, as it had a bend to it.
You can bend reality all you like, but the intent of giving the Hitler salute was not there, as he has said. He's not secretly a nazi, and he's not openly a nazi. He's right-wing, yes. That's not illegal, and it happens to be the majority vote in the US.
The most reasonable criticism is calling it a Roman salute and saying it bears connotations to imperialism, and that it was most recently practiced by Hitler.
I think, if you want to read into his deepest, unspoken intents, he probably compares himself to Caesar more than Hitler. Just like Zuckerberg, and all the other multi-billionaires who want to see themselves as the de-facto leaders of the world.
> He's known to be a white supremacist
No, a bunch of observations leads you to conclude it.
He never showed up at a white supremacist rally.
He lets them speak on his platform.
> He supports far right parties in Europe.
Most right-wing parties in Europe are still socialist by American standards.
For example, the most liberal parliamentary party in Denmark thinks a 40% tax is fine.
If you're a Republican, you're crazy in the eyes of a European.
Specifically, he supports a far-right party in Germany, which is controversial, since there haven't been popular far-right parties (only fringe ones) since the NSDAP.
The big, controversial subject is ending muslim immigration into Europe. The far right becomes the bannermen for this cause, because closing down on immigration is viewed as xenophobic. In the meantime, as this opinion is being suppressed instead of addressed, it continues to grow with the populist movements.
The fact that Elon Musk has opinions on European immigration policy doesn't make him a nazi. Just like being against muslim immigration doesn't make AfD nazis (the German party that he endorsed), just uncannily populist.
> Someone who hadn't meant it would have come back on stage, when it was pointed out to them, and made an apology. Or at least immediately issued a statement/press release.
That's how I read his sentence immediately after the salutes: "My heart goes out to all of you." -- it sounded remarkably like something someone would say when they realize what they did could be viewed as heiling. You don't need to apologize to be a good person.
https://xcancel.com/elonmusk/status/1724908287471272299
https://www.nytimes.com/2025/03/14/technology/elon-musk-x-po...
https://www.nbcnews.com/tech/social-media/elon-musk-x-twitte...
I could keep going but there's really no point
No, just post one good summary or obviously revealing incident. And if you point to the salutes, which triggered the whole thing, they’re obviously not sufficient by themselves. You have to at least hear what he has to say. Did you?
But no. He's The Douche.
It's also a balance of probabilities thing. He's leaning hard into the far-right at the moment, and he's a well known troll, so if you behave like a douchey troll Nazi, then people tend not to give you the benefit of the doubt when shit goes down. Like when they give the benefit of the doubt to absolutely everyone else in the world caught in a photo waving and it looking like a salute.
Either way ... The Douche won't ever get another penny from me. Bye Tesla. Fuck Starlink, glad I'm not in a situation where that's the only choice. SpaceX? That was always Shotwell's bag anyway and I don't plan on hitching a ride anytime soon.
I mean, DOGE.
I guess that makes him not a nazi.
Great.
> The most reasonable criticism is calling it a Roman salute and saying it bears connotations to imperialism, and that it was most recently practiced by Hitler.
For the past >100 years, it’s been the gesture representing the fascist party in Italy and the Nazi party in Germany. You sound like you want to defend the gesture for some reason.
> I think, if you want to read into his deepest, unspoken intents, he probably compares himself to Caesar more than Hitler. Just like Zuckerberg, and all the other multi-billionaires who want to see themselves as the de-facto leaders of the world.
Comparing oneself to Caesar is still a profoundly disturbing thing. He was an oligarch first, then a lifelong dictator, and later a literal deity (according to the Senate).
> He never showed up at a white supremacist rally.
I’m sure you’re smart enough to understand that if he actually showed up to a white supremacy rally, he would be financially destroyed. He’s already lost his public image completely in Europe. So not putting on a KKK hoodie is weak evidence for him not being a white supremacist.
But in any case, none of this matters. Whether or not he personally identifies with fascist ideology is secondary to the effect of his actions. Blurring the line between reasonable discourse and fascist apologism trivializes extremism and hate, and that’s the last thing we need.
Ok.
> You sound like you want to defend the gesture for some reason.
Not at all. I want to defend people who use it and don’t intend to associate with nazism.
> Whether or not he personally identifies with fascist ideology is secondary to the effect of his actions.
That is certainly true. But just because the pitchfork brigade has got riled up, there is no reason to applaud them.
If I didn’t have a principle at stake, I’d have to consider whether the social suicide of doing so is worth it. Musk could have thought of that, but he didn’t.
That still doesn’t make him a nazi. You need to actually believe that the genocide of Jews is worth pursuing. Or anything remotely resembling outright hatred of jews, and an idealisation of The Third Reich.
I also won’t post a dick pic, and this similarly does not discredit the argument I’m making:
Just because I won’t heil in public (I’m polite, and I have no points to make at 45 degrees), I won’t read Hitler into Musk’s arm waving, when he clearly does not follow up by justifying that he did, in fact, acknowledge the great work of Adolf Hitler. He didn’t because he doesn’t think Hitler was that great, because he’s not a nazi.
He’s not a nazi until he apologizes for not distancing himself from Hitler when he never said Hitler was great to begin with.
Otherwise: you’re a nazi until you publicly apologise for not leaving the subject matter unambiguous. And just saying you’re not is not enough, you have to apologise.
I find myself much more often using their "Quick Answer" feature, which shows a brief LLM answer above the results themselves. Makes it easier to see where it's getting things from and whether I need to try the question a different way.
You can simply just pass it a direct link to some data, if you feel it's more appropriate. It works amazingly well in their multistep Ki model.
It's capable of creating code that does the analysis I asked for, with a moderate number of issues (mostly things like using the wrong file extracted from a .zip; its math/code is in general correct). It scrapes URLs, downloads files, unarchives them, analyses the content, creates code to produce the result I asked for, and runs that code.
This is the first time I really see AI helping me do tasks I would otherwise not attempt due to lack of experience or time.
I am always looking for Perplexity alternatives. I already pay for Kagi and would be happy to upgrade to the ultimate plan if it truly can replace Perplexity.
https://kagi.com/lenses/l7mPOuJp7zljHquBjsekFn6dM9Thw1A8
I'm not sure if adding that to your account will include the configuration I have set to access the lens with !guix, but if it does not, you might want to add it. The lens basically just uses this pattern for search result sources:
logs.guix.gnu.org/guix/, lists.gnu.org/archive/html/bug-guix/, lists.gnu.org/archive/html/info-guix/, lists.gnu.org/archive/html/help-guix/, lists.gnu.org/archive/html/guix-devel/*, guix.gnu.org
I don't think I can share the assistant directly, but if you have Kagi Ultimate, you can just go to the Assistant section in the sidebar of the settings page, and add a new assistant. You can set it to have access to web search, and you can specify to use the GNU Guix lens. You can pick any model, but I'm using Deepseek R1, and I set my system prompt to be:
> Always search the web for answers to the user's questions. All answers should respond relating to the GNU Guix package manager and the GNU Guix operating system.
and that seems to work well for me. Let me know if you have trouble getting that set up!
I found Perplexity was slower and delivered lower quality results relative to Kagi. After a week of experimenting, I forgot about Perplexity until they charged me $200 to renew my free year. I promptly cancelled the heck out of it and secured a refund.
Just takes some prompt tweaking, redos, and followups.
It's like having a really smart human skim the first page of Google and give me its take, and then I can ask it to do more searches to corroborate what it said.
It's amazing that the post by Anthropic doesn't say anything about that. Do they maintain their own index and search infrastructure? (Probably not?) Or do they have a partnership with Bing or Google or some other player?
It gets even better. When I first tested this feature in Bard, it gave me an obviously wrong answer. But it provided two references. Which turned out to be AI generated web pages.
Oddly enough in my own Googles I could not even find those pages in the results.
Welcome to the Habsburg Internet.
I’m not sure if Claude does any reranking (see Cohere Reranker) where it reorders the top n results or just relies on Google’s ranking.
But a web search that does re-ranking should reduce the amount of blogspam or incomplete answers. Web search isn’t inherently a lost cause.
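The re-ranking idea is simple to illustrate: take the engine's top-n results, rescore each against the query, and reorder. Real rerankers (e.g. Cohere's) use a trained cross-encoder for the scoring step; the token-overlap `score` below is just a self-contained stand-in, and the example documents are made up.

```python
def score(query: str, doc: str) -> float:
    # Toy relevance score: fraction of query terms the document covers.
    # A production reranker would call a cross-encoder model here instead.
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q)

def rerank(query: str, results: list[str]) -> list[str]:
    # Reorder the engine's top-n by our own relevance score, best first.
    return sorted(results, key=lambda doc: score(query, doc), reverse=True)

results = [
    "10 best gadgets 2025 affiliate roundup",
    "how to choose a rice cooker capacity and features",
    "rice cooker reviews tested fluffy rice",
]
print(rerank("choose rice cooker", results))
```

Even this crude version demotes the off-topic listicle below the pages that actually address the query, which is the mechanism by which reranking can push blogspam down.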
Yeah, this is one of my favorite use cases. Living in Europe, surrounded by different languages, this makes searching stuff in other countries so much more convenient.
Yes, it is that bad.
Website of Nike? Website of Starbucks? Likely position number one.
Every product, category, etc. - e.g. "what rice cooker should I buy?" - is diseased by link and affiliate spam. There is a reason why people add +reddit to search terms.
But bonappetit.com is exactly an example of affiliate link spam. Even their budget option is awful.
Until then, my Zojirushi is very simple to clean.
Expensive sure, but it's only difficult to clean if you're a double amputee.
There are other good rice cookers like Cuckoo, and cheaper options like Tiger or Tatung, or really budget options like Aroma, but you pretty much can’t go wrong with Zojirushi if you can afford it.
This is a case of HN cynicism and contrarianism working against oneself.
BTW - the search you suggested gives you Reddit links first followed by other trusted sites trying to make an affiliate buck. There’s no spam on the first page.
> Reddit · r/google Is Google Search getting worse? Latest research and ...
The whole "click here to find ten reasons why it is bad" style of result I've only come across in HN comments - attacking what may be a bit of a straw man.
To choose the best rice cooker, consider these factors:
Top Brands: Zojirushi is often considered the best brand, with Cuckoo and Tiger as close contenders. Aroma is considered a good budget brand [1].
Types: Basic on/off rice cookers are good for simple white or brown rice cooking and are usually affordable and easy to use [2].
Considerations: When buying a rice cooker, also consider noise levels, especially from beeping alerts and fan operation [3].
Specific Recommendations: The Yum Asia Panda Mini Advanced Fuzzy Logic Ceramic Rice Cooker is recommended for versatility [4]. The Yum Asia Bamboo rice cooker is considered the best overall [5]. The Russell Hobbs large rice cooker is a good budget option [5]. For one to two people, you don't need a large rice cooker unless cost and space aren't a concern [6]. Basic one-button models can be found for under $50, mid-range options around $100-$200, and high-end cookers for hundreds of dollars [6].
References:
[1] What is the best rice cooker brand? : r/Cooking - Reddit (www.reddit.com)
[2] The Ultimate Rice Cooker Guide: How to Choose the Right One for Your Needs (www.expertreviewsbestricecooker.com)
[3] Best Rice Cooker UK | Posh Living Magazine (posh.co.uk)
[4] Best rice cookers for making perfectly fluffy grains - BBC Good Food (www.bbcgoodfood.com)
[5] The best rice cookers for gloriously fluffy grains at home - The Guardian (www.theguardian.com)
[6] Do You Really Need A Rice Cooker? (The Answer Is Yes.) - HuffPost (www.huffpost.com)
With no pins, bon appetit (decent) and nbc news (would be fine if it wasn’t littered with ads) were the top results. For NBC news, Kagi also marked the result with a red shield, indicating that it has too many ads/trackers.
Which really goes to show that Kagi is great if you’re really willing to shell out for better content. Having the ability to mark sources as trusted, or indicate that I’ve paid for premium sources makes a completely different side of the web searchable.
Then, the following two links appear as normal search results: https://www.bonappetit.com/story/best-rice-cookers and https://www.bbcgoodfood.com/review/best-rice-cookers (I don't know those websites, so I can't judge them).
Followed by listicles (short-form writing that uses a list as its thematic structure), one entry each:
- Best rice cooker 2024: Top tried and tested models for perfect results (expertreviews.com)
- 9 Best Rice Cookers | The Strategist - New York Magazine (nymag.com)
- The 8 Best Rice Cookers of 2025, Tested and Approved - The Spruce Eats (thespruceeats.com)
- 6 Best Rice Cookers 2025 Reviewed - Food Network (foodnetwork.com)
- Best rice cookers 2025, tested for perfect grains - The Independent (independent.co.uk)
- 29 Rice cooker meals ideas | rice cooker recipes, cooking recipes... (de.pinterest.com)
- 43 Crockpot ideas | cooking recipes, rice cooker recipes, cooker... (de.pinterest.com)
Followed by Quick Peek (questions with hidden answers that you can display).
Followed by normal search results again: ryukoch.com, reddit/r/Cooking, expertreviewsbestricecooker.com, tiktok, and then many more 'normal' websites.
This search reminded me that I have yet to configure my Kagi account to ignore tiktok.
Quick Answer
To choose the best rice cooker, consider these factors:
Capacity: Rice cookers range from small (1-2 cups) to large (6-8 cups or even 10-cup models) [1][2]. Keep in mind that one cup of uncooked rice yields about two cups cooked [2].
Budget: Basic one-button models can be found for under $50, mid-range options around $100-$200, and high-end cookers can cost more [3].
Features: Many rice cookers include a steaming insert [4]. Some have settings for different types of rice [5][1].
Brand Recommendations:
Zojirushi: Often considered the best brand, but pricier [6][7]. The Zojirushi Neuro Fuzzy 5.5-Cup Rice Cooker is considered best overall [8].
Cuckoo & Tiger: These are the next best brands after Zojirushi [6].
Aroma: Considered the best budget brand [6]. The Aroma ARC-914SBD Digital Rice Cooker is a good option [9].
Toshiba: The Toshiba Small Rice Cooker stands out for innovative features that cater to a variety of cooking needs [5].
References
[1] Five Best Rice Cookers In 2023. More than half of the... | Medium medium.com
[2] Which Rice Cooker Should You Buy? - HomeCookingTech.com www.homecookingtech.com
[3] Do You Really Need A Rice Cooker? (The Answer Is Yes.) - HuffPost www.huffpost.com
[4] The 8 Best Rice Cookers of 2025, Tested and Approved www.thespruceeats.com
[5] The Ultimate Guide to Choosing the Perfect Rice Cooker | Medium medium.com
[6] What is the best rice cooker brand ? : r/Cooking - Reddit www.reddit.com
[7] What are actually good rice cookers? I feel like all the ... - Reddit www.reddit.com
[8] 6 Best Rice Cookers of 2025, Tested and Reviewed - Food Network www.foodnetwork.com
[9] 9 Best Rice Cookers | The Strategist - New York Magazine nymag.com
It should be noted that individual search results on Kagi are likely to be skewed depending on the user because it gives you so many dials to score specific domains up or down. E.g. my setup gives a boost to Reddit while downscoring Quora and outright blocking Instagram and Pinterest.
...if you're blocking ads and/or they're paying big advertisement bucks.
If I were looking for a song, I would type in something like “song used at beginning of X movie indie rock”
He would type in “X songs.”
I basically find everything in Google in one search and it takes him several. I type in my thought straight whereas he seems to treat Google like a dumb keyword index.
Actually, typing out "what a novice normie means" made me realize the probable reason Google turned out the way it did: optimizing for new users. A growing userbase means most users are new to the Internet in general, and (with big enough growth) most queries are issued by people who are trying a search engine for the first time and have no clue how or why it works - and those queries are exactly the kind Google is now good at, queries like the example you provided.
But if you insist on a dumb keyword search, Google still does that fine if you use quotation marks now in addition to the operator (e.g. +"band"). But I just tried +"band" with my band-vs-song example and all I got were worse results that excluded the artist's website because the artist didn't write the word "band" anywhere on the page -- as expected for a dumb keyword search.
There was no easy way to perform my band-vs-song search back then because Google didn’t understand context and the website doesn’t have the correct keywords. But modern Google knows context and I employ this fact regularly, allowing me to find stuff with modern Google like a magician compared to old Google or even Altavista.
I was around as well and my memories do not confirm this. But google search definitely degraded a lot.
Google's index in the past was written by humans because that was really the only option. Once other humans figured out how to automate producing crap, Google went downhill, simply because of the bullshit asymmetry principle. Even if Google were totally customer-focused, it would still be much worse than in the past because of the total amount of crap that now exists.
This is also why no other competitor just completely blows them away either.
That said I also use Perplexity which does things Google never really did.
I've got a theory that people just like to be negative about stuff, especially market leaders, and are a bit in denial as to how Google still holds the majority of search share in spite of the many billions spent trying to compete with it and earnest HN posts saying "Google is crap, use Kagi." For amusement I tried to find their share of search: Google is approx 90%, Kagi approx 0.01% by my calculations.
It used to be SO much less likely to return junk.
First decade of the 2000's if I had to guess.
It's a shame, because Page Rank was a smart idea.
https://web.archive.org/web/20200801000000*/https://www.goog...
https://web.archive.org/web/20200801000000*/https://www.goog...
Actually, it's astounding to me that companies haven't created a more user friendly customization interface for models. The only way to "customize" things would be through the chat interface, but for some reason everyone seems to have forgotten that configuration buttons can exist.
To be fair, LLM technology in its current form is still relatively new. I would also like to see what you're suggesting, though.
Perplexity certainly already approximates this (not sure if it's at the token level, but it can cite sources; I just assumed they were using RAG).
This is ultimately Google's problem: they make money from the fact that the page is now mostly ads and not necessarily going to lead to a good, quick answer, which in turn leads to even more ads. They would probably lose money if they made their search better.
I’m curious why I’m seeing a lot of people thinking this lately. Google definitely made the algorithm worse for customers and better for ads, but I’m almost always able to find what I’m looking for in the working day still. What are other people’s experiences?
For example, when searching for product information, Google's top 50 to 100 results are items titled "the 10 best..." - vapid articles that provide little to no insight beyond what's in the manufacturer's product sheet. Many times I have to add "reddit" to my search to find real opinions about a product, or give up and go to YouTube review videos from trusted sources.
For technical searches like programming questions, AI is basically immediately nailing most basic questions while Google results require scanning numerous somewhat related results from technical discussion forums, many of which are outdated.
RAG was dead on arrival because it uses the same piss-poor results a human would, wrapped in more obfuscation and unwanted tangents.
My question is why the degradation of search wouldn't affect LLMs. These chatbot god-oracle businesses are already unprofitable because of their massive energy footprint, now you expect them to build their own search engine in-house to try to circumvent SEO spam? And you expect SEO spam to not catch up with whatever tricks they use? Come on, people.
top results are blogspam but the LLM isn't? /s
OpenAI is so annoying in this respect. They regularly give rollout timelines that are not met or are simply wrong.
Edit: "Everyone" = Everyone who pays. Sorry if this sounds mean but I don't care about what the free tier gets or when. As a paying user for both Anthropic and OpenAI I was just pointing out the rollout differences.
Edit2: My US-bias is showing, sorry I didn't even parse that in the message.
I have empathy for the engineers in this case. You know it’s a combination of sales/marketing/product getting WAY ahead of themselves by doing this. Then the engineers have to explain why they cannot in fact reach an arbitrary deadline.
Meanwhile, the people not doing the work get to blame those working on the code for not hitting deadlines.
It is for all paid users, something OpenAI is slow on. I pay for both and I often forget to try OpenAI's new things because they roll out so slow. Sometimes it's same-day but they are all over the map in how long it takes to roll out.
- Brave is now listed as a subprocessor on the Anthropic Trust Center portal
- Search results for "interesting pelican facts" from Claude and Brave were an exact match
- If you ask Claude for the definition of its web_search tool one of the properties is called "BraveSearchParams"
If you’re unhappy about something, try to first think of a solution before expressing your discontent.
I don't use the desktop app and I don't want to use the desktop app or jump through a bunch of hoops to support basic functionality without having my data sent to a sketchy company.
I can recommend a Rust crate for accessing PostgreSQL with Arrow support. The primary crate you'll want to use is arrow-postgres, which combines the PostgreSQL connectivity of the popular postgres crate with Apache Arrow data format support. This crate allows you to:
- Query PostgreSQL databases using SQL
- Return results as Arrow record batches
- Use strongly-typed Arrow schemas
- Convert between PostgreSQL and Arrow data types efficiently
Is that how you actually use llms? Like a Google search box?
Some people aren't very good at using tools. You can usually identify them without much difficulty, because they're the ones blaming the tools.
"Answer as if you're a senior software engineer giving advice to a less experienced software engineer. I'm looking for a Rust crate to access PostgreSQL with Apache Arrow support. How should I proceed? What are the pluses and minuses of my various options?"
Think about it, how much marginal influence does it really have if you say OP’s version vs a fully formed sentence? The keywords are what gets it in the area.
To mix clichés, "I'm feeling lucky" isn't compatible with "Attention is all you need."
If I am not careful, and "asking the question" in a way that assumes X, often X is assumed by the LLM to be true. ChatGPT has gotten better at correcting this with its web searches.
I am able to get better results with Claude when I ask for answers that include links to the relevant authoritative source of information. But sometimes it still makes up stuff that is not in the source material.
If you’re having to explain an existing problem with edge cases, then sure, the context window needs the edge cases defined as well.
The problem with this prompt to me is not that it is not in a full sentence but that it isn't exact enough.
Probabilistically, "rust" is not about the programming language but about the corrosion of metal. Then the same goes for "arrow".
Give the model basically nothing to work with then complain it doesn't do exactly what you want. Good luck with that.
I've not yet found much value in the LLM itself. Facts/math/etc. are too likely incorrect; I need them to make some attempt at hydrating real information into the response. And linking sources.
It's still a present issue whenever I go light on prompt details and I _always_ get caught out by it and it _always_ infuriates me.
I'm sure there are endless discussions on front running overconfident false positives and being better at prompting and seeding a project context, but 1-2 years into this world is like 20 in regular space, and it shouldn't be happening any more.
1. Treat it like regular software dev where you define tasks with ID prefixes for everything, acceptance criteria, exceptions. Ask LLM to reference them in code right before impl code
2. “Debug” by asking the LLM to self-reflect on the decision-making process that caused the issue - this can give you useful heuristics to use later to further reduce the issues you mentioned.
“It” happening is a result of your lack of time investment into systematically addressing this.
_You_ should have learned this by now. Complain less, learn more.
I really wish Claude had something similar.
1) would give you more time to pause when you’re talking before it immediately launches into an answer
2) would actually try to say the symbols in code blocks verbatim - it’s basically useless for looking up anything to do with code, because it will omit parts of the answer from its speech.
Pro tip; if you’re preparing for a big meeting eg an interview, tell ChatGPT to play the part of an evil interviewer. Give it your CV and the job description etc. ask it to find the hardest questions it can. Ask it to coach you and review your answers afterwards, give ideal answers etc
after a couple of hours grilling the real interview will seem like a doddle.
[0] https://youtu.be/snkOMOjiVOk 01:30
One of my websites that gets a decent amount of traffic has pretty close to a 1-1 ratio of Googlebot accesses compared to real user traffic referred from Google. As a webmaster I'm happy with this and continue to allow Google to access the site.
If ChatGPT is giving my website a ratio of 100 bot accesses (or more) for every actual user sent to my site, I should very much have the right to decline their access.
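For anyone curious about their own ratio, here's a rough sketch of how you might measure it from an access log. The log lines and the tokens matched (`Googlebot`, `google.com`) are illustrative; adjust them to your actual log format and the bot you care about:

```python
def bot_to_user_ratio(log_lines, bot_token="Googlebot", referrer_token="google.com"):
    """Count crawler hits vs. real visits referred by the same engine.

    Assumes a combined-style log where the referrer and user-agent
    appear somewhere in each line; the tokens are placeholders.
    """
    bot_hits = sum(1 for line in log_lines if bot_token in line)
    referred_users = sum(
        1 for line in log_lines
        if referrer_token in line and bot_token not in line
    )
    return bot_hits, referred_users

# Toy log: two crawler hits, two user visits referred from Google.
log = [
    '1.2.3.4 - - "GET / HTTP/1.1" 200 "-" "Googlebot/2.1"',
    '1.2.3.4 - - "GET /page HTTP/1.1" 200 "-" "Googlebot/2.1"',
    '5.6.7.8 - - "GET /page HTTP/1.1" 200 "https://google.com/" "Mozilla/5.0"',
    '9.9.9.9 - - "GET / HTTP/1.1" 200 "https://google.com/" "Mozilla/5.0"',
]
bots, users = bot_to_user_ratio(log)
print(bots, users)  # 2 2, i.e. the 1:1 ratio described above
```

A real version would parse the quoted fields properly rather than substring-match, but the principle is the same.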
are you trying to collect ad revenue from the actual users? otherwise a chatbot reading your page because it found it by searching google and then relaying the info, with a link, to the user who asked for it seems reasonable
- Ability to prevent their crawlers from accessing URLs via robots.txt
- Ability to prevent a page from being indexed on the internet (noindex tag)
- Ability to remove existing pages that you don't want indexed (webmaster tools)
- Ability to remove an entire domain from the search engine (webmaster tools)
It is really impolite for the AI chatbots to go around and flout all these existing conventions because they know that webmasters would restrict their access because it's much less beneficial than it is for existing search engines.
In the long run, all this is going to lead to is more anti-bot countermeasures, more content behind logins (which can have legally binding anti-AI access restrictions) and less new original content. The victim will be all humans who aren't using a chatbot to slightly benefit the ones who are.
And again, I'm not suggesting that AI chatbots should not be allowed to load webpages, just that webmasters should be able to opt out of it.
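The first of those conventions is trivial for a well-behaved fetcher to honor; a minimal sketch using Python's stdlib (the robots.txt body and the "ExampleAIBot" agent name are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that welcomes a search crawler
# but opts out of an AI bot entirely.
robots_txt = """\
User-agent: Googlebot
Disallow:

User-agent: ExampleAIBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/article"))     # True
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
```

A compliant agent would run this check before every fetch; the complaint in this thread is precisely that some don't.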
> It is really impolite for the AI chatbots to go around and flout all these existing conventions because they know that webmasters would restrict their access because it's much less beneficial than it is for existing search engines.
I agree with you about the long run effects on the internet at large, but I still don't understand the horse you have in it personally. I read you as saying (1) it's less about ad revenue than content control, but (2) content control is based on analysis of benefits, i.e. ad revenue?
Technically you don’t, but there are still laws that affect what you can legally do when accessing the web. Beyond the copyright issues that have been outlined by people a lot more qualified than me, I think you could also make the point that AI crawlers actively cause direct and indirect financial harm.
It's a search engine. You 'ask it to read the web' just like you asked Google to, except Google used to actually give the website traffic.
I appreciate the concept of an AI User-agent, but without a business model that pays for the content creation, this is just going to lead to the death of anonymously accessible content.
Edit: Maybe that's fine, maybe that's bad. Maybe new models will emerge and things will reshape. But I'm just supporting the case that AI agents will pressure the current "free" content economy.
Is that a world we actually want?
As for funding "content creation" itself, you have patronage.
Did all those old sites have “business models”? What did the web feel like back then?
(This is rhetorical - I had niche hobby sites back then, in the same way some people put out free zines, and wouldn’t give a damn about today’s AI agents so long as they were respectful.
The web was better back then, and I believe AI slop and agents brings us closer to full circle)
"What," he was asked, "is the business model for free WiFi?"
"What," he retorted, "is the business model for free washrooms?"
Many of these sites' business model was simply "don't cost too much". The moment the web got big, a lot of these sites died. Then DDoS for fun and profit became a thing, and most people moved to huge advertising-based providers/hosters (think FB).
Simply put, we're never getting the old web back. Now, we may get something new, but it will be different and still far more commercial.
https://abcnews.go.com/Business/story?id=88041&page=1
As were punch the monkey and similar banner ads
https://www.computerworld.com/article/1360466/i-refuse-to-pu...
When was the great age of the web that wasn’t inundated with ads and SEO?
It was really easy on old school search engines like Altavista.
You’d already be blocking me as I’d guess I now search via AI >90% of the time between perplexity, chatgpt, deep research, and google search AI.
If that happens a big majority of websites will go bankrupt and won't exist anymore to be searched. Problem solved!
I think that is funny considering it is likely going to have the exact opposite effect.
Low-effort blog spam is cheap to make. And it is often part of content-marketing strategies where brand visibility is all that matters, so there's not much harm whether that visibility happens directly on your site or in an AI chatbot interface.
Quality content on the other hand is hard to make. And there are two groups of people who make such content:
1. individuals or small groups that like to share for the sake of sharing. They likely won’t care about the AI crawlers stealing their content, although I think there is a big overlap between people who still run blogs and those who dislike AI.
2. small organizations that are dedicated to one specific topic and are often largely ad-financed. These organizations would likely cease to exist in such an AI-search-dominated world.
> Especially since website hosting is close to being free these days.
It is under specific circumstances. The problem is that those AI crawlers don't stop by once in a while like Google does; instead they hit the site very frequently. For a static site this won't be much of an issue except for maybe bandwidth. For more complex sites - say, the GitLab instances of OSS projects - reality paints a different picture.
Another point you're missing is that there's a 3rd group of people sharing content: experts who are there to establish their expertise. Small companies and individuals generate the highest quality content these days. I work on a blog for our SAAS company and it has been a great success in terms of organic growth (even people coming from LLMs) and to simply establish authority and signal expertise in the field. I can imagine a future where this is majority of expert content on the web and it seems quite sustainable imo.
If that's what websites want, they should have that option.
robots.txt is not a security mechanism, and it doesn’t “control bots.” It’s a voluntary convention mainly followed by well behaved search engine crawlers like Google and ignored by everything else.
If you’re relying on robots.txt to prevent access from non human users, you’re fundamentally misunderstanding its purpose. It’s a polite request to crawlers, not an enforcement mechanism against any and all forms of automated access.
So, similarly, LLM companies can see this as a signal to crawl the whole site to add to their training sets and learn from it, if the same URL is hit a couple of times in a relatively short time period.
Doesn't matter. The robots-exclusion-standard is not just about webcrawlers. A `robots.txt` can list arbitrary UserAgents.
Of course, an AI with automated websearch could ignore that, as can webcrawlers.
If they choose to do that, then at some point some server admins might (again, same as with non-compliant webcrawlers) use more drastic measures to reduce the load, by simply blocking these accesses.
For that reason alone, it will pay off to comply with established standards in the long run.
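To make the "arbitrary UserAgents" point concrete, a robots.txt can single out non-crawler agents by name just as easily as classic crawlers (the AI agent names below are illustrative, not real products):

```
# Allow classic search crawlers everywhere
User-agent: Googlebot
Disallow:

# Opt out of on-demand AI fetchers by their advertised user-agents
User-agent: ExampleAI-User
User-agent: ExampleAI-SearchBot
Disallow: /
```

Whether the named agents actually honor it is, as noted, a separate question.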
Absolutely nothing has to obey robots.txt. It’s a politeness guideline for crawlers, not a rule, and anyone expecting bots to universally respect it is misunderstanding its purpose.
And absolutely no one needs to reply to every random request from an unknown source.
robots.txt is the POLITE way of telling a crawler, or other automated system, to get lost. And as is so often the case, there is a much less polite way to do that, which is to block them.
So, the way I see it, crawlers and other automated systems have 2 options: They can honor the polite way of doing things, or they can get their packets dropped by the firewall.
I mean, currently the AI request comes from the datacenter running the AI, but eventually one of two things will happen.
AI models will get small/fast enough to run on user hardware and use the users resources: End result? You lose. The user will set their own headers and sites will play the impossible game of identifying AI.
AI sites will figure out how to route the requests via any number of potential methods so the requests appear to come from the user anyway: End result? You lose. The sites attempting to block will play the cat and mouse game of figuring out what is AI or not AI.
Note, this doesn't mean AI blocking isn't worth doing, if nothing else to reduce load on the servers. It's just not a long term winning strategy.
You may not be able to stop AIs from crawling web sites through technological means. But you can confiscate all the resources of the company that owns the AI.
Where do we stop here? At "please drink a verification can and maintain eye contact at all times"?
This is ridiculous and plain evil.
The agent should respect robots.txt no matter who is using the Robot.
robots.txt is intended to control recursive fetches. It is not intended to block any and all access.
You can test this out using wget. Fetch a URL with wget. You will see that it only fetches that URL. Now pass it the --recursive flag. It will now fetch that URL, parse the links, fetch robots.txt, then fetch the permitted links. And so on.
wget respects robots.txt. But it doesn’t even bother looking at it if it’s only fetching a single URL because it isn’t acting recursively, so robots.txt does not apply.
The same applies to Claude. Whatever search index they are using, the crawler for that search index needs to respect robots.txt because it’s acting recursively. But when the user asks the LLM to look at web results, it’s just getting a single set of URLs from that index and fetching them – assuming it’s even doing that and not using a cached version. It’s not acting recursively, so robots.txt does not apply.
I know a lot of people want to block any and all AI fetches from their sites, but robots.txt is the wrong mechanism if you want to do that. It’s simply not designed to do that. It is only designed for crawlers, i.e. software that automatically fetches links recursively.
Without recursive crawling, it will not be possible for an engine to know which URLs are valid[1]. They would otherwise either have to brute-force, say, HEAD calls for all/common string combinations and see if they return 404s, or, more realistically, have to crawl the site to "discover" pages.
The issue of summarizing a specific URL on demand is a different problem[2], and not related to the issue at hand of search tools crawling at scale and depriving sites of traffic.
Robots.txt absolutely does apply to LLM engines and search engines equally. All types of engines create indices of some nature (RAG, inverted index, whatever) by crawling, and LLM engines have sometimes been very aggressive about ignoring robots.txt limits, as many webmasters have reported over the last couple of years.
---
[1] Unless published in sitemap.xml of course.
[2] You need to have the unique URL to ask the llm to summarize in the first place, which means you likely visited the page already, while someone sharing a link with you and a tool automatically summarizing the page deprives the webmaster of impressions and thus ad revenue or sales.
This is a common usage pattern in messaging apps from Slack to iMessage, and has been for a decade or more, as well as in news aggregators and social media sites, and webmasters have managed to live with it one way or another already.
It does not. It applies to whatever crawler built the search index the LLM accesses, and it would apply to an AI agent using an LLM to work recursively, but it does not apply to the LLM itself or the feature being discussed here.
The rest of your comment seems to just be repeating what I already said:
> Whatever search index they are using, the crawler for that search index needs to respect robots.txt because it’s acting recursively. But when the user asks the LLM to look at web results, it’s just getting a single set of URLs from that index and fetching them – assuming it’s even doing that and not using a cached version. It’s not acting recursively, so robots.txt does not apply.
There is a difference between an LLM, an index that it consults, and the crawler that builds that index, and I was drawing that distinction. You can’t just lump an LLM into the same category, because it’s doing a different thing.
Yes it does. I am the one controlling robots.txt on my server. I can put whatever user agent I want into my robots.txt, and I can block as much of my page as I want to it.
People can argue semantics as much as they want...in the end, site admins decide what's in robots.txt and what isn't.
And if people believe they can just ignore them, they are right, they can. But they are gonna find it rather difficult to ignore when fail2ban starts dropping their packets with no reply ;-)
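As a sketch of what that looks like in practice: a hypothetical fail2ban jail that bans clients matching a custom access-log filter. The jail name, filter, and paths are assumptions for illustration, not a drop-in config:

```ini
; /etc/fail2ban/jail.d/ai-crawlers.local (illustrative)
[ai-crawlers]
enabled  = true
port     = http,https
filter   = ai-crawlers                    ; custom filter matching offending user-agents
logpath  = /var/log/nginx/access.log
maxretry = 10                             ; hits tolerated...
findtime = 60                             ; ...within this many seconds
bantime  = 86400                          ; then drop their packets for a day
```

The filter itself would need a matching regex definition, and aggressive crawlers rotating IPs will still get through, but it raises their cost.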
No it doesn’t. It politely requests that crawlers do not, and if said crawlers choose to honour it then those specific crawlers will not crawl. That’s it. It can be and is ignored without penalty or enforcement.
It’s like suggesting that putting a sign in your front yard saying “please don’t rob my house” prevents burglaries.
> Robots.txt does absolutely apply to LLMs engines and search engines equally
No it doesn’t because again, it’s a request system. It applies only to whatever chooses to pay attention to it, and further, decides to abide by any request within it which there is no requirement to do.
From google themselves:
“The instructions in robots.txt files CANNOT ENFORCE crawler behavior to your site; it's up to the crawler to obey them.”
And as already pointed out, there is no requirement a crawler follow them, let alone anything else.
If you want to control access, and you’re using robots.txt, you’ve no idea what you’re doing and probably shouldn’t be in charge of doing it.
(I noticed Claude, OpenAI and a couple of others whose names were less familiar to me.)
https://github.com/bluesky-social/proposals/tree/main/0008-u...
So they sometimes hit bollards and turnstiles made for other types of code which executes HTTP requests. So they're bots basically, but better (or suitably) behaving ones.
What is the difference if I use a browser or a LLM tool (or curl, or wget, etc) to make those requests?
LLM finds out about it from me, when I ask it to go to the link.
You don’t accuse browsers of “somehow find[ing] the existence of those pages”. How does a browser know what page to visit?
The user tells it to.
If I prompt an LLM “go to example.net and summarize the page” how is that any different from me typing example.net in a browser URL bar?
I have been talking about the latter, agree the former is abusive.
Why would that be an issue?
I thought they were just machine code running on part GPU and part CPU.
There's some gray area though, and the search engine indexing in advance (not sure if they've partnered with Bing/Google/...) should still follow robots.txt.
But if I say, "Search the web for a low-carb chicken casserole recipe that takes squash and cottage cheese," then it's either going to A) send queries to a search engine like Google, in which case robots.txt already should have been respected, or B) check its own repository of information it's spidered before I asked the question, in which case it should have respected robots.txt itself.
The entire web was built on the understanding that humans generally operate browsers, and robots.txt is specifically for scenarios in which they do not.
To pretend that the automated reading of websites by AI agents is not something different…is quite a stretch.
Should I not be able to execute curl to download a webpage because the "understanding that humans generally operate browsers"?
Isn't this a bit of an oversimplification, though? Especially when the tool you're using completely alters the relationship between the content author and the reader?
I hear this argument often: "it's just another tool and we've always used tools". But would you acknowledge that some tools change the dynamics entirely?
> Should I not be able to execute curl to download a webpage because the "understanding that humans generally operate browsers"?
Executing curl to download a webpage is nothing new, and compared to a traditional browser, has about the same impact. This is still drastically different than asking an AI agent to gather information and one of the pages it happens to "read" is the one you were previously navigating to with a browser or downloading with curl.
If you're a content creator who built a site/business based on a pre-LLM understanding of the dynamics of the ecosystem, doesn't it seem reasonable to see these types of "readers" differently?
If the scale bothers you, block it, just like how you would block any other crawlers.
Other than that, we all wanted "ease-of-access" (not me though), and now we have it. It does not change anything.
So not "seems to" or "apparently", but as a matter of fact: robots.txt works for its intended audience.
[1] https://blog.google/technology/ai/an-update-on-web-publisher...
they're literally asking to break laws to train AI for national security. A sentence in a press release from 2 years ago is worthless... look at what they're actually doing
I'm just not sure if legal would love me doing that on our corporate servers...
Hotels would much rather show you the outside, the lobby, and a conference room, so finding what the actual living space will look like is often surprisingly difficult.
For more in-depth stuff, it is LLMs by default, and I only go to Google when the LLM isn't getting me what I need.
I had subscribed to Perplexity for a month to use their deep research. I think it ran out earlier this week but I am really missing it Saturday morning here.
That thing is awesome. Sonnet 3.7 is more in the middle of this to me. It can help me understand all the things I found from my deep research requests.
I am surprised the hype is not more for Sonnet 3.7 honestly.
Do they not care about typical search users? Only developers?
At least in my circle, SWEs are either excited about or completely fearful of the new technology, while to every other profession it feels like pure hype that hasn't really changed anything. They've tried it, sure, but it didn't really have the data to help with even simpler domains than SWE. Anecdotally, I've heard the comment "my easy {insert job here} will last longer than your tech job" from many people I know, both white- and blue-collar workers. It's definitely reduced the respect for SWEs in general, at least where I'm located.
I would like to see improvements in people's quality of life and new possibilities/frontiers from the technology, not just "more efficiencies" and disruption. It feels like there's a lack of imagination with the tech.
I would guess that Anthropic wants developers talking about how good Claude is in their company Slack channels. That's the smart thing to do.
I on the other side reduced my googling by 95%
I’m referring to average people who may not be average users because they’re barely using LLMs in the first place, if at all.
They have maybe tried ChatGPT a few times to generate some silly stories, and maybe come back to it once or twice a month for a question or two, but that’s it.
We’re all colored by our bubbles, and that’s not a study, but it’s something.
A lot of the reasoning model improvements of late are in domains where RL, RLHF and other techniques can be both used and verified with data and training; in particular coding and math as "easy targets" either due to their determinism or domain knowledge of the implementers. Hence it has been quite disruptive to those industries (e.g. AI people know and do a lot of software). I've heard a lot of comments in my circles from other people saying they don't want AI to have the data/context/etc in order to protect their company/job/etc (i.e. their economic moat/value). They look at coding and don't want that to be them - if coding is that hard and it can get automated like that imagine my job.
Any other use of it is a case of "I have a hammer, so that's a nail".
Now I can prompt Claude to ping PubMed and make sure that its suggested references are verified. Each citation/claim should be accompanied by a PMID or a DOI.
I hope this works!
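One cheap guardrail before resolving anything: check that the identifiers the model returns are at least the right shape. A quick sketch; the regexes cover the common PMID/DOI forms, not every edge case:

```python
import re

# PMIDs are plain integers; DOIs start with "10.", a registrant prefix,
# then a slash and a suffix.
PMID_RE = re.compile(r"^\d{1,8}$")
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_citation_id(s):
    """True if the string is shaped like a PMID or a DOI."""
    s = s.strip()
    return bool(PMID_RE.match(s) or DOI_RE.match(s))

print(looks_like_citation_id("31452104"))           # True  (PMID-shaped)
print(looks_like_citation_id("10.1038/nphys1170"))  # True  (DOI-shaped)
print(looks_like_citation_id("not-a-citation"))     # False
```

Of course a well-formed identifier can still be hallucinated, so the real verification step is resolving it against PubMed or doi.org; this just filters obvious junk first.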
It's also fun to ask the same question to multiple AI tools and see how the answers differ. Usually Claude is the most accurate and helpful, though.
They already cost people time, money, and their mental health by using adversarial tactics to evade blocking and ignoring robots.txt
https://drewdevault.com/2025/03/17/2025-03-17-Stop-externali...
the main issue I find with Claude is that it fights you. It refuses so many requests, and I need 3 or 4 replies to get what I want vs deepseek/grok. I've kept the monthly subscription to help Anthropic, but it's trounced by the free options imo.
I have used grok a bit and it did what I needed it to, so I can't really compare. But 3.7 thinking is crazy strong for coding.
Back when it was 3.5 you could actually talk and learn things and it felt humane, but now it sounds like a McKinsey-corpo in a suit who sounds all fancy but is only right half the time.
I’ve switched back (rather regretfully) to chatgpt, and holy hell is its personality much better. For example just try asking it to explain differences between Neo Grotesque and Geometrical Sans Serif fonts/typefaces. One sounds like a friend trying to explain, the other sounds like a soulless bot. (And if you have 3.5 access, try asking it too.)
For general inference I use 4.5
I think OpenAI (and likely others) are on the right track in acknowledging that different model tunings are best for different uses, and they intend to add a discriminator that can direct prompts to the best-tuned model or change model tuning in real time.
"Look what I synthesise is correct and true because when I use the same top 10 priming responses which informed my decision I find these INDEPENDENT RESULTS which confirm what I modelled" type reasoning.
None of us have a problem with an LLM which returns 2+2 = 4 and shows you 10 sites which confirm. What worries me is when the LLM returns 2+2 = 5 and shows 10 sites which confirm. The set of negative worth content sites is semi infinite and the set of useful confirmed fact (expensive) sites is small so this feels like an outcome which is highly predictable (please don't beat me up for my arithmetic)
e.g. "Yes Climate science is bunk" <returns 10 top sites from paid shills in the oil sector which have been SEO'd up the top>"
Perplexity's "Explore" tab translates its news to your local language, and its curated news items are all pretty interesting, but the problem is that there are so few of them. I seem to get maybe a dozen stories in a day. I paid their subscription for a month just to listen to the news on my walk, but didn't renew because of this.
A foreign news site like BBC Mundo (Spanish) on the other hand barely has any stories outside of a few niches. Its tech section only has a few stories per week.
Hmm, maybe I want a sort of RSS reader that AI-translates stories for me. But I don't really want to maintain a feed myself either.
Apple News would probably do it since they also have good curation, but afaict they still don't support foreign news sources (why???).
ground.news includes sources from all sorts of countries, and also auto-translate headline and the intro, while you can still click to access the source article. Not affiliated, just happy user.
Example with sources in English, German and French: https://ground.news/article/accident-on-the-a13-in-the-yveli...
Although I'm not sure how useful it is for language learning, as you cannot (afaik) configure it to only display articles in Spanish or something similar, but if you filter by stories about France, you'll get a lot of French sources (obviously).
I'm surprised that they only expect performance to improve for tasks involving recent information. I thought it was widely accepted that using an LLM to extract information from a document is much more reliable than asking it to recall information it was trained on. In particular, it is supposed to lead to fewer instances of inventing facts out of thin air. Is my understanding out of date?
As an example, I recently travelled abroad to a popular vacationing spot and asked ChatGPT for local recommendations on what to do. When it gave me answers directly, they were pretty solid. But when it “searched the web” instead, the answers were awful. Every single result it suggested had terrible ratings. It did this repeatedly. One of those times I asked it to pick something with better ratings and it sort of improved but not by much.
Of course this is another tool and maybe Claude uses better sources or a better algorithm, but in this case where there was a concrete number tied to the results, that while not perfect, aims to rate the quality of a result, it still did not filter out low quality answers. I’m not sure I trust these LLMs to do any better when there aren’t such ratings available. The available input data is just not very good, and now LLMs are being used to feed that low quality, SEO machine.
"I believe they're usually available from November through March, but I'm not completely certain about the exact timing for this year's crop. Would you like me to search for more current information about the 2025 tangelo season?"
It doesn't just search, it wants me to confirm. This has happened a lot for me.
The results based on giving the source URL directly were better. Still a bit generic and high-level and vague, as LLMs tend to be, but better than the text-download version a couple days ago. And of course much easier to generate!
https://github.com/modelcontextprotocol/servers/tree/main/sr...
The page itself describes a --ignore-robots-txt and customizing the user agent. Guess we can just all copy OpenAI and continue to make SourceHut's life miserable /s
This is a cool tool, thanks for sharing
OpenAI Deep Research
Grok Deep Search
Gemini Deep Research
Grok + Search
Gemini + Search
ChatGPT + Search
These are just my opinions, but I do use this feature all the time. Haven't used Claude enough to get a sense of where it would fit in.
It wasn't long ago that a uni senior who worked for a decade+ on Google Search told me it was hopeless for anyone to try to compete with Google, not because it sees a tonne of signals that help with IR, but because of its in-house AI/ML.
It turns out that the org that built the ultimate AI/ML that runs rings around anything that came before it for NLP (and thus IR) was a sister team at Google Translate.
It isn't inconceivable that a kid might be able to build a Google-quality web search, scalability aside, on Common Crawl data in a weekend. As someone who built re-ranking algorithms for a search engine built atop Yahoo! and Wikipedia (REST/SOAP) APIs back in the late 2000s as a side project (and experienced the launch and subsequent iterations of Echo/Alexa up close at Amazon), the current capabilities (of even the open-weight multi-modal models) seem too good to be true.
Google itself though is saved by its enormous distribution advantages afforded by Chrome (3B to 5B users) and Android (3B+), aside from its search deals with Apple and other browser vendors.
1. I generally prefer that an LLM not search the web. The top N results are often either SEO spam, excessively long articles created solely to rank well, or long-established websites that gained authority years ago, when Google's crawler and ranking algorithms were less sophisticated.
2. Web search by LLMs is likely here to stay, so I'm curious whether there's an agent-friendly web format. For example, when an RSS reader visits a website, the site responds with an RSS feed. I think we need something similar for agents - an open standard that all websites would support. This could reduce processing overhead and potentially improve the accuracy of the information retrieved. Thoughts?
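One hedged sketch of what such a standard could look like: plain HTTP content negotiation, where an agent asks for a machine-friendly representation via the Accept header and the server responds with lean markdown instead of the full HTML page. The handler, the payloads, and the `text/markdown` convention below are all illustrative assumptions, not an existing agent standard.

```python
# Hypothetical content negotiation for agents: if the client sends
# "Accept: text/markdown", serve a lean markdown version of the page;
# otherwise fall back to the usual HTML. All names here are made up.

HTML_PAGE = "<html><body><h1>Docs</h1><p>Install with pip.</p></body></html>"
MARKDOWN_PAGE = "# Docs\n\nInstall with pip."

def negotiate(accept_header: str) -> tuple[str, str]:
    """Return (content_type, body) based on the client's Accept header."""
    # A real implementation would parse q-values per RFC 9110;
    # a substring check is enough to show the idea.
    if "text/markdown" in accept_header:
        return "text/markdown", MARKDOWN_PAGE
    return "text/html", HTML_PAGE

# An agent announces itself and gets the token-efficient representation:
ctype, body = negotiate("text/markdown, text/html;q=0.5")
```

Like RSS auto-discovery, the win would be that agents skip the boilerplate entirely rather than downloading the human page and stripping it.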
https://www.anthropic.com/news/claude-3-family?_hsenc=p2ANqt...
I stopped paying for Perplexity a year ago, but a month ago I started using Perplexity's combined search+LLM APIs - reasonably priced and convenient.
Caveat: the Mistral reasoning model on the free tier is super slow (2-5 tokens/sec).
It just breaks my brain. We've built LLMs that can process millions of pages at a time, yet what we give them is a search engine optimized for humans.
It’s like giving a humanoid robot access to a keyboard with a mouse to chat with another humanoid robot.
Disclaimer: I might be biased as we’re kind of building the fact search engine for LLMs.
Anyone know if there is something better? I was thinking of trying Perplexity maybe.
So this limitation is a bit arbitrary anyway.
Their user-facing product at https://mistral.ai/ seems good to me - it uses Brave for search (same as Claude does) and has a "canvas" feature similar to Claude Artifacts. I've not spent enough time with that to evaluate if it could be a good daily-driver or not though.
My hunch is that Claude 3.7 Sonnet is still _massively_ better for code, based on general buzz online and a few benchmarks I've seen.
For questions about events and problems that arise after 2025, where would LLMs get the information to solve them? And who would be asking them on a forum LLMs can access going forward?
Is the snake eating its own tail?
2. The best LLMs today answer questions better than 90% of the people who comment on forums. So if these LLMs got this good training on all the crap posted on the internet so far, they should only get better as they train on high-quality output from the latest (and future) LLMs.
https://github.com/xemantic/claudine/
It cost roughly 30 lines of code: https://github.com/xemantic/claudine/blob/main/src/commonMai...
""" i need a bashrc command that will map the alias "logg" to open macvim to the file at ~/log.txt, then execute the macro defined by "<leader>z" """
Note: <leader>z ends with the user in insert mode. Claude provides the solution below, but it leaves me in normal mode (I still have to press "i").
alias loggg='mvim ~/log.txt -c "normal \<leader>z"'
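For what it's worth, one tweak that should land you in insert mode (an untested sketch; it assumes the macro leaves the cursor where you want to start typing) is to chain a second -c command that runs Vim's :startinsert:

```shell
# ~/.bashrc -- replay the <leader>z mapping, then enter insert mode.
# ":startinsert" is the step that "normal" alone won't trigger.
alias logg='mvim ~/log.txt -c "normal \<leader>z" -c "startinsert"'
```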
We finally went full circle? LLM is used as a search engine?
E.g., say I want to build an agent to make decisions. Should I write code that inserts the data informing the decision into the prompt, have the model return structured data, and then write code to implement the decision?
Or should I empower the LLM to do those things itself via function calls?
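A minimal sketch of the first option, where your code stays in control and the model only returns structured data. `call_llm` is a hypothetical stand-in for whatever model API you use; here it returns a canned response so the control flow is visible:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; returns a canned
    # structured decision so the surrounding control flow can be shown.
    return json.dumps({"action": "refund", "amount": 25.0})

def decide(order: dict) -> dict:
    # Code inserts the data informing the decision into the prompt...
    prompt = (
        f"Given this order, decide what to do: {json.dumps(order)}\n"
        'Reply as JSON: {"action": ..., "amount": ...}'
    )
    # ...the model returns structured data...
    decision = json.loads(call_llm(prompt))
    # ...and plain code validates and implements the decision.
    assert decision["action"] in {"refund", "escalate", "ignore"}
    return decision

decision = decide({"id": 42, "complaint": "item arrived broken"})
```

The function-calling alternative inverts this: you hand the model tool definitions and let it sequence the steps itself, trading deterministic control flow for flexibility. Which one fits usually comes down to how much you trust the model to drive.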
The energy efficiency of most models has improved by an order of magnitude since the most widely cited CO2 usage papers were published.
(It remains frustratingly difficult to get accurate numbers though: at this point I think more transparency would help rather than hurt the big AI labs)
The internet consumed itself. Telling someone to, "Just Google it," is now terrible general advice.
i hope you're trolling mate, you've had an account for 18 years and never wondered what the [-] button does? :D
I wonder if Claude’s API will match Perplexity’s dynamic answers. Is there API rate limiting? If so, then the older API pricing would be preferable. Can users switch between the two?
Oh, sure, it hallucinates a lot, and in dangerous ways, but even if I have to manually corroborate all the citations, I'm still saving time, especially insofar as it reveals whether or not I'm barking, broadly, up the wrong tree.
It's especially good for comparisons, because the results of two disparate search terms can be collated into the results.
Could this be done without LLMs, using only vector embeddings? Hm, maybe. Algolia is maybe the 80-for-20 solution, but does Algolia have a web index?
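A back-of-the-envelope sketch of the embeddings-only version: embed the query and the documents, rank by cosine similarity, no generation step at all. The toy 3-dimensional vectors below are hand-made stand-ins for real embedding-model output:

```python
import math

# Toy corpus with made-up 3-d "embeddings" (real ones have hundreds of
# dimensions and come from an embedding model, not from hand-tuning).
DOCS = {
    "intro to rust":    [0.9, 0.1, 0.0],
    "python packaging": [0.1, 0.9, 0.1],
    "rust lifetimes":   [0.8, 0.2, 0.1],
}

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec: list[float], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the query vector, return top k.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

top = search([0.85, 0.15, 0.05])
```

The missing piece, as noted, is the index: the ranking math is trivial, but someone still has to crawl and embed the web first.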
https://glama.ai/mcp/servers?searchTerm=search
What's the benefit of bringing native integration?
MCP has the capability to add this functionality.
It would be nice to see MCP getting adoption in their web UI, along with easier UX, rather than more ad hoc features being added natively.