The universal theme with general purpose technologies is 1) they start out lagging behind current practices in every context 2) they improve rapidly, but 3) they break through and surpass current practices in different contexts at different times.

What that means is that if you work in a certain context, for a while you keep seeing AI get a 0 because it is worse than the current process. Behind the scenes the underlying technology is improving rapidly, but because it hasn’t cusped the viability threshold you don’t feel it at all. From this vantage point, it is easy to dismiss the whole thing and forget about the slope, because the whole line is under the surface of usefulness in your context. The author has identified two cases where current AI is below the cusp of viability: design and large scale changes to a codebase (though Codex is cracking the second one quickly).

The hard and useful thing is not to find contexts where the general purpose technology gets a 0, but to surf the cusp of viability by finding incrementally harder problems that are newly solvable as the underlying technology improves. A very clear example of this is early Tesla surfing the reduction in Li-ion battery prices by starting with expensive sports cars, then luxury sedans, then normal cars. You can be sure that throughout the first two phases, everyone at GM and Toyota was saying: Li-ion batteries are totally infeasible for the consumers we prioritize who want affordable cars. By the time the technology is ready for sedans, Tesla has a 5 year lead.

> The universal theme with general purpose technologies is 1) they start out lagging behind current practices in every context 2) they improve rapidly, but 3) they break through and surpass current practices in different contexts at different times.

I think you should say successful "general purpose technologies". What you describe is what happens when things work out. Sometimes things stall at step 1, and the technology gets relegated to a footnote in the history books.

jibal · 22 hours ago
Yeah, that comment is heavy on survivor bias. The universal theme is that things go the way they go.
This is why TFA used Segway as an example.
· 17 hours ago
We don’t argue that microwaves will be ubiquitous (which they aren’t, but close enough). We argue that microwaves are not an artificial general barbecue, as the makers might wish were true.

And we argue that microwaves will indeed never replace your grill, as the makers, again, would love you to believe.

Your reasoning would be fine if there were a clear distinction, like between a microwave and a grill.

What we actually have is a physical system (the brain) that somehow implements the only approximation of general intelligence we know of, and artificial systems of various architectures (mostly transformers) that are intended to capture the essence of that general intelligence.

We are not at the microwave and the grill stage. We are at the birds and the heavier-than-air contraptions stage, when it's not yet clear whether those particular models will fly, or whether they need more power, more control surfaces, or something else.

Heck, frontier models have around 100 times fewer parameters than the most conservative estimate of the brain's equivalent parameter count: the number of synapses. And we are like "it won't fly".
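
For scale, a rough back-of-the-envelope comparison (both figures below are commonly cited order-of-magnitude estimates, not numbers from this thread):

    # Order-of-magnitude comparison; both figures are rough estimates.
    synapses_in_human_brain = 1e14      # ~100 trillion synapses
    frontier_model_parameters = 1e12    # ~1 trillion parameters, give or take

    print(synapses_in_human_brain / frontier_model_parameters)  # ~100.0, the "around 100 times" gap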

There was a lot of hubris around microwaves. I remember a lot of images of full chickens being roasted in them. I've never once seen that "in the wild" as it were. They are good for reheating something that was produced earlier. Hey the metaphor is even better than I thought!
It has been many years since I switched to cooking only with microwaves, due to the minimal wasted time and perfect reproducibility. And I normally eat only food that I cook myself from raw ingredients.

Attempting to roast a full chicken or turkey is not the correct way to use microwaves. You must first remove the bones from the bird, then cut the meat into bite-sized chunks. After using a boning knife for the former operation, I prefer to do the latter operation with some good Japanese kitchen scissors, as it is simpler and faster than with a knife.

If you buy turkey/chicken breasts or thighs without bones, then you have to do only the latter operation and cut them into bite-sized pieces.

Then you can roast the meat pieces in a closed glass vessel, without adding anything to the meat, except salt and spices (i.e. no added water or oil). The microwave oven must be set to a relatively low power and long time, e.g. for turkey meat to 30 minutes @ 440 W and for chicken to less time than that, e.g. 20 to 25 minutes. The exact values depend on the oven and on the amount of cooked meat, but once determined by experiment, they remain valid forever.

The meat cooked like this is practically identical to meat cooked on a covered grill (the kind with indirect heating, through hot air), but it is done faster and without needing any supervision. In my opinion this results in the tastiest meat in comparison with any other cooking method. However, I do not care about a roasted crust on the meat, which is also unhealthy, so I do not use the infrared lamp that the microwave oven has for making such a crust.

Vegetable garnishes, e.g. potatoes, must be cooked in the microwave separately from the meat, as they typically need much less time than meat, usually less than 10 minutes (but at higher power). Everything must be mixed into the final dish after cooking, including things like added oil, which is better not heated to high temperatures.

Even without the constraints of a microwave oven, preparing meat like this makes much more sense than cooking whole birds or fish or whatever. Removing all the bones and any other inedible parts and cutting the meat into bite-sized pieces before cooking wastes much less time than having everybody repeat all these operations at the table during every meal, so I consider serving whole animals at a meal just stupid, even if they may look appetizing to some people.

> Then you can roast the meat pieces in a closed glass vessel

It sounds like this is steamed meat, as opposed to roasted. Your cooking time seems to match a quick search for steamed chicken recipes: https://tiffycooks.com/20-minutes-chinese-steamed-chicken/

Neither "roasted" nor "steamed" are completely appropriate for this cooking method.

While there is steam in the vessel, it comes only from the water lost from the meat, not from any additional water, and the meat is heated by the microwaves, not by the steam.

Without keeping a lid on the cooking vessel, the loss of water is too rapid and the cooked meat becomes too dry. Even so, the weight of meat is reduced to about two thirds, due to the loss of water.

In the past I roasted meat on a covered grill, where the enclosed air was heated by a gas burner through an opening on one side, at the bottom. With such a covered grill, the air the meat cooked in also contained steam from the water lost by the meat, so the end result was very similar to the microwave cooking I do now: the meat is prevented from becoming too dry, unlike with roasting on an open grill, while the flavor stays concentrated instead of being diluted by added water or oil.

"irradiated meat"
I greatly appreciate this type of thinking. If you debone the meat yourself, do you make stock or use the bones in any way? You obviously care about personal process optimization and health factors, and I'm curious to what extent you are thinking of the entire food/ingredient supply chain.
Yes, making stock is normally the appropriate use for bones.

However, I am frequently lazy, which is when I buy breasts/thighs without bones.

Bones, tendons and skin provide a lot of taste and texture.
> It has been many years since I switched to cooking only with microwaves, due to the minimal wasted time and perfect reproducibility. And I normally eat only food that I cook myself from raw ingredients.

Some of us also like the food to taste nice, besides the reproducibility of the results.

As I have said, meat prepared in this way is the tastiest I have ever eaten, unless you are one of those who like real meat less than the burned crust on it.

Even for the burned-crust lovers, many microwave ovens have lamps for this purpose, but I cannot say whether the lamps are good at making tasty burned crusts, because I have never used them; I like eating meat more than eating burned crust.

The good taste of meat prepared in this way is not surprising: the meat is heated very uniformly, and its flavor is not leached away by added water or oil but concentrated by the loss of water. The taste is therefore very similar to that of meat well cooked on a grill, neither burned nor under-cooked, and also without adding anything but salt and spices.

> tastiest that I have ever eaten

Let me strongly doubt that.

> the meat is heated very uniformly

Are you sure you are using a microwave oven at all?

This convo is hilarious, you rock. I'm not surprised someone was open-minded enough to master the art of cooking with a microwave. Also, there are different types of fast-cooking apparatus that are usually reserved for restaurants, but I could imagine they might be up your alley. (I can't recall the name of such a device right now, but it's similar in function to a microwave; maybe it is a microwave at its heart?)
As I understood it, if you used the esoteric functions of the microwave, you COULD cook food like it was cooked on a range, but it required constant babysitting of the microwave and reinput of timers and cook power levels.
they did invent the combi oven, which while not a microwave is capable of most of its duties along with roasting a chicken :)
Then you don't know about RLVR.
It's a quick analogy, don't pick it apart. To other readers it's boring. It communicates his point fine.
And what is the point? "We know that a microwave is not a grill, but pushy advertisers insist otherwise?" The analogy is plainly wrong in the first part. We don't know.

The second part is boring. Advertisers are always pushy. It's their work. The interesting part is how much truth there is in their statements, and that depends on the first part.

Microwaves are pretty terrible, and proof that wide consumer adoption does not really indicate quality or suggest that consumers adopt technology which _should_ exist.
I lived without a microwave for a ~year and ended up buying one again because they are pretty convenient.

So maybe it's not high on the list based on the value you are measuring but it definitely has a convenience value that isn't replaced by anything else.

The generations of microwave after the first few were fantastic. They did what they were supposed to and lasted decades. Once that reputation was solidified, manufacturers began cutting corners, leaving us with the junk of today.

The same thing happened with web search and social media. Now, those selfsame companies are trying to get us to adopt AI. Even if it somehow manages to fulfill its promise, it will only be for a time. Cost-cutting will degrade the service.

That's like saying ball point pens are pretty terrible. They are after all rubbish at writing. Nobody ever correlated popularity with quality.
Why are they terrible?
They boil stuff but take up much more space than a kettle.
They warm up things that a) I don't want to put in a kettle and b) don't want to put in a dedicated pot to put on the stove.

Like the remainder of the soup I made yesterday, which I'd put in a china bowl in the fridge. I was able to eat warm soup out of that bowl without dirtying any other dishes. Pretty convenient if you ask me.

Bonus: you can take a cherry tomato or a single grape and make a small plasma arc in the microwave. Pretty cool trick to impress and scare people at house parties.

They also heat other things up like food but take less space than an oven.
Yes, they miraculously leave your food cold but heat up your plate enough to burn you.
I think you might have a terrible plate problem instead of a terrible microwave problem
I'd like to see you make popcorn or an omelette in a kettle. Or heat up rice / soup / stew
They are not. But they are the AI slop of cooking - it's easy to get an acceptable result, so people associate it with junk food made with zero effort.
They do make those microwaves with a built-in grill element now!
That does not match my experience or my read of the history at all, w.r.t. general-ish purpose technologies.

What usually happens is that they either empower personnel who are unskilled (in a particular context) to perform some tasks at a "good enough" level, or they replace highly specialized machinery for some tasks at, again, a "good enough" level.

At some point (typically, when a general purpose technology is able to do "good enough" in multiple contexts), operating at a "good enough" level in multiple verticals becomes more profitable than operating at a specialized level, and this is when the general purpose technology starts replacing specialized technology/personnel. Very much not all general-purpose technologies reach this stage; this only applies to highly successful ones.

Then the market share of the general technology starts increasing rapidly while the market share of specialized technologies drops; R&D in the general tech explodes while the specialized technologies start stagnating. Over time this may lead to cutting-edge general purpose technologies surpassing the now-old specialized technologies, taking over in most areas.

> A very clear example of this is early Tesla surfing the reduction in Li-ion battery prices. <...> By the time the technology is ready for sedans, Tesla has a 5 year lead.

> everyone at GM and Toyota was saying: Li-ion batteries are totally infeasible for the consumers we prioritize who want affordable cars.

We are nearly two decades on from the Tesla "expensive sports car", and pure BEVs are still the significantly more expensive option, despite massive subsidies. If anything, everyone at Toyota was right. Furthermore, they have been developing their electric drive-trains in parallel via the hybrid tech: surfing the same wave while raking in profits.

In fact, BEV sales outpace other drive-train sales only in regions where either registrations of the latter are artificially limited, or the government heavily subsidizes both purchase and maintenance costs. If you don't have government-subsidized rooftop solar, the cost per mile of a BEV is more or less on par with an HEV and in most cases worse than diesel for long-range trips.

> pure BEVs are still the significantly more expensive option

New technology often has ‘new’ tradeoffs, as GPUs are still only situationally better than CPUs.

DC fast charging is several times more expensive than home charging, which heavily influences the economics of buying an EV without subsidies. Same deal with plug-in hybrids or long-range batteries on a PEV: if you don't need the range you're just wasting money. So there are cases where an unsubsidized PEV is the cheapest option, and that line will change over time even if it's not going away anytime soon.
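
To make that concrete, a quick per-mile comparison with purely illustrative numbers (every figure below is an assumption for the sake of the sketch, not data from the thread):

    # Illustrative assumptions only; plug in your local prices.
    home_rate = 0.15        # $/kWh, charging at home overnight
    dcfc_rate = 0.45        # $/kWh, DC fast charging
    ev_miles_per_kwh = 3.5  # typical EV efficiency
    gas_price = 3.50        # $/gallon
    hybrid_mpg = 50.0       # efficient hybrid

    print(f"EV on home charging:    ${home_rate / ev_miles_per_kwh:.3f}/mile")  # ~$0.043
    print(f"EV on DC fast charging: ${dcfc_rate / ev_miles_per_kwh:.3f}/mile")  # ~$0.129
    print(f"Hybrid on gasoline:     ${gas_price / hybrid_mpg:.3f}/mile")        # ~$0.070
    # Home charging wins easily; rely mostly on fast charging and the EV can
    # cost more per mile than the hybrid, which is the economics point above.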

AI on the other hand is such a wide umbrella that it doesn't really make sense to talk about specific tradeoffs beyond the short term. Nobody can say what the downsides will be in 10-20 years because they aren't extrapolating a specific technology with clear tradeoffs. Self-driving cars could be taking over industries in 15 years, or still be quite limited; we can't say.

GPUs are a good example - they started getting traction in the early 2000s/late 90s.

Once we figured out in the mid-2000s that single-thread perf wouldn't scale, GPUs became the next scaling frontier, and it was thought that they'd complement and supplant CPUs. With the Xbox and smartphones having integrated GPUs, and games starting to rely on general-purpose compute shaders, a lot of folks (including me) thought that future software would constantly ping-pong between CPU and GPU execution. Got an array to sort? Let the GPU handle that. Got a JPEG to decode? GPU. Etc.
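
A minimal sketch of that expected CPU/GPU ping-pong, written with CuPy (this assumes a CUDA-capable GPU and the cupy package; it illustrates the pattern, it is not code from the comment):

    import numpy as np
    import cupy as cp  # requires a CUDA-capable GPU

    data = np.random.rand(1_000_000).astype(np.float32)

    # The round trip people expected to become routine for every small task:
    device_data = cp.asarray(data)        # copy host -> device
    device_sorted = cp.sort(device_data)  # sort on the GPU
    result = cp.asnumpy(device_sorted)    # copy device -> host

    # For workloads this small, the two transfers often cost more than just
    # calling np.sort(data) on the CPU, which is part of why the ping-pong
    # style never became the default.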

I took an in-depth CUDA course back in the early 2010s, thinking that come 5 years or so, all professional signal processing would move to GPUs, GPU algorithm knowledge would be just as widespread and expected as knowing how to program a CPU, and I would need to Leetcode a bitonic sort to get a regular-ass job.

What happened? GPUs weren't really used: data sharing between the CPU and GPU is still cumbersome and slow, and dedicated accelerators like video decoders weren't replaced by general-purpose GPU compute; we still have special function units for these.

There are technical challenges to doing these things, sure, but very solvable ones.

GPUs are still stuck in 2 niches - video games and AI (which incidentally got huge). Everybody still writes single-threaded Python and JS.

There was every reason to be optimistic about GPGPU back then, and there's every reason to be optimistic about AI now.

Not sure where this will go, but probably not where we expect it to.

I heard a very similar sentiment expressed as "everything is not good enough to be useful until it suddenly is".

I find it a powerful mental model precisely because it is not a statement of success rate or survival rate: Yes, a lot of ideas never break any kind of viability threshold, sure, but every idea that did also started out as laughable, toy-like, and generally shit (not just li-ion batteries, also the wheel, guns, the internet and mobile computers).

It is essentially saying 'current lack of viability is a bad indicator of future death' (at least not any more than the high mortality of new tech in general), I guess.

nkzd · 17 hours ago
> design and large scale changes to a codebase (though Codex is cracking the second one quickly).

Can you share some experience regarding Codex and large-scale changes to a codebase? I haven't noticed any improvements.

Even if you consider a car a general purpose technology, Tesla displacing GM is a car displacing a car, so it's not really an example of what you're saying, is it?
You took a very specific argument, abstracted it, then posited your worldview.

What do you have to say about the circular trillions of dollars going around 7 companies and building huge data centers and expecting all smaller players to just subsidize them?

Sure, you can elide the argument by saying, "actually that doesn't matter because I am really smart and understood what the author really was talking about, let me reframe it properly".

I don't really have a response to that. You're free to do what you please. To me, something feels very wrong with that and this behavior in general plagues the modern Internet.

Nissan were selling thousands of Leafs before the Model S ever rolled off the production line.
· 1 day ago
“As a designer…”

IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.

Claude Code is incredible. Where I work, there are an incredible number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile.

I find it hard to buy into the opinions of non-SWEs on the uselessness of AI, solely because I think the innovation is lagging in other areas. I don't doubt that they don't yet have compelling AI tooling.

I'm a SWE, DBA, SysAdmin, I work up and down the stack as needed. I'm not using LLMs at all. I really haven't tried them. I'm waiting for the dust to settle and clear "best practices" to emerge. I am sure that these tools are here to stay but I am also confident they are not in their final form today. I've seen too many hype trains in my career to still be jumping on them at the first stop.
It's time to jump on the train. I'm a cranky, old, embedded SWE and Claude 4.5 is changing how I work. Before that I laughed off LLMs. They were trash. Claude still has issues, but damn, I think if I don't integrate it into my workflow I'll be out of work or relegated to work in QA or devops (where I'd likely be forced to use it).

No, it's not going to write all your code for you. Yes, your skills are still needed to design, debug, perform teamwork (selling your designs, building consensus, etc.), and so on. But it's time to get on the train.

The moment your code departs from typical patterns in the training set (or "agentic environment"), LLMs fall over at best (i.e. can't even find the thing) or do some random nonsense at worst.

IMO LLMs are still at the point where they require significant handholding, showing what exactly to do, exactly where. Otherwise, it's constant review of random application of different random patterns, which may or may not satisfy requirements, goals and invariants.

This has not been my experience as of late. If anything, they help steer us back on track when a SWE decides to be clever and introduce a footgun.
Same, I have Gemini Pro 2.5 (now 3) exclusively implementing new designs that don't exist in the world and it's great at it! I do all the design work, it writes the code (and tests) and debugs the thing.

I'm happy, it's happy, I've never been more productive.

The longer I do this, the more likely it is to one-shot things across 5-10 files with tests passing on the first try.

> The moment your code departs from typical patterns

Using AI, I constantly realize that atypical patterns are much rarer than I thought.

Yeah but don't let it prevent you from making a novel change just because no one else seems to be doing it. That's where innovation sleeps.
I don't think anyone disagrees with that. But it's a good time to learn now, to jump on the train and follow the progress.

It will give the developer a leg up in the future when the mature tools are ready. Just like the people who surfed the 90s internet seem to do better with advanced technology than the youngsters who've only seen the latest sleek modern GUI tools and apps of today.

A contrarian opinion - I also haven't jumped on the train yet; it's even forbidden in our part of the company due to various regulations re data secrecy and generally slow adoption.

Also - for most seasoned developers, actual dev activity is a minuscule part of the overall effort. If you are churning out code like some sweatshop every single day at, say, 45, it's by your own choice: either you don't want to progress in your career, or your career didn't push you up on its own.

What I want to say is that the minuscule part of the day when I actually get my hands on the code is the best part. Pure creativity, puzzle solving, learning new stuff (or relearning when looking at old code). Why the heck would I want to lose or dilute this, let alone run towards that? It would make sense if my performance were rated only on code output, but it's not... that would be a pretty toxic place, to put it politely.

Seniority doesn't come from churning out code quicker. It's more along the lines of communication, leading others, empathy, toughness when needed, not avoiding uncomfortable situations or discussions, and so on. No room for LLMs there.

There have been times when something was very important and my ability to churn out quick proof of concept code (pre AI) made the difference. It has catapulted me. I thought talking was all-important, but turns out, there's already too much talk and not enough action in these working groups.

So now with AI, that's even quicker. And I can do it more easily during the half relevant part of meetings, which I have a lot more of nowadays. When I have real time to sit and code, I focus on the hardest and most interesting parts, which the AI can't do.

> ability to churn out quick proof of concept code (pre AI) made the difference. It has catapulted me. I thought talking was all-important

It is always the talking that transitions "here's a quick proof of concept" into "someone else will implement this fully and then maintain it". One cannot be catapulted if they cannot offload the implementation and maintenance. Get stuck with two quick proof-of-concept ideas and you're already at full capacity; one either talks their way into having a team supporting them, or they find themselves on a PIP with the regular backlog piling up.

Oh they hired some guys to productionize it, who did it manually back then but now delegate a lot of it to AI.
While I think that is true and this thread here is about senior productivity switching to LLMs, I can say from my experience that our juniors absolutely crush it using LLMs. They have to do pretty demanding and advanced stuff from the start and they are using LLMs nonstop. Not sure how that translates into long term learning but it definitely increases their output and makes them competent-enough developers to contribute right away.
> Not sure how that translates into long term learning

I don't think that's the relevant metric: the "learning" rate of humans versus LLMs. If you expect typical LLMs to grow from juniors into competent mids and maybe even seniors faster than a typical human, then there is little point in learning to write code; better to learn "software engineering with an artificial code monkey". However, if that turns out not to be true, we have just broken the pipeline that produces the actual mids and seniors who can oversee the LLMs.

> Seniority doesn't come from churning out code quicker. Its more long the lines of communication, leading others, empathy, toughness when needed, not avoiding uncomfortable situations or discussions and so on. No room for llms there.

They might be poor at it, but if you do everything you specified online and through a computer, then it's in an LLM's domain. If we hadn't pushed so hard for work from home it might be a different story. LLMs are poor at soft skills, but is that inherent or just a problem that can be refined away? I don't know.

> What I want to say - that miniscule part of the day when I actually get my hands on the code are the best.

And if you are not "churning code like some sweatshop every single day", those hours are not "hey, let's bang out something cool!"; they're more like "here are 5 reasons we can't do the cool thing, young padawan".

Quite frankly - the majority of code is improved by integrating some sort of pattern. An LLM is great at bringing the pattern you may not have realized you were making to the forefront.

I think there's an obsession, especially among more veteran SWEs, with thinking they are creating something one-of-a-kind and special, when in reality we're just iterating over the same patterns.

This has been true since Claude Sonnet 3.5, so for over a year now. I was early on the LLM train, building RAG tools and prototypes at the company I was working for at the time, but pre-Claude-3.5 all the models were just a complete waste of time for coding, except that the inline autocomplete models saved you some typing.

Claude 3.5 was actually the point where it could generate simple stuff. Progress has kind of tapered off since then, though. Claude is still the best, but Sonnet 4.5 is disappointing in that it doesn't fundamentally bring me more than 3.5 did; it's just a bit better at execution, and I still can't delegate higher-level problems to it.

Top tier models are sometimes surprisingly good but they take forever.

This was true since ChatGPT-1, and I mean the lies.
Not really - 3.5 was the first model where I could actually use it to vibe through CRUD without it wasting more time than it saves; I actually used it to deliver an MVP on a side gig I was working on. GPT-4 was nowhere near as useful at the time. And Sonnet 3 was also considerably worse.

And from reading through the forums and talking to co-workers this was a common experience.

I don't use AI for anything except translations and searching, and I'd say 3 times out of 10 it gives me bad information, while translation only works OK if you use the most expensive models.
It's just not true, it is not ready.

Especially Claude, where if you check the forums everyone is complaining that it's gone stupid the last few months.

Claude's code is all over the place, and if you can't see that and are putting its code into production, I pity your colleagues.

Try stopping. Honestly, just try. Just use claude as a super search engine. Though right now ChatGPT is better.

You won't see any drop in productivity.

It's not about blindly accepting autogenerated code. It's about using them for tooling integration.

It's like terminal autocomplete on steroids. Everything around the code is blazing fast.

This is far too simplistic a viewpoint. First of all it depends what you're trying to do. Web dev? AI works pretty well. CPU design? Yeah good luck with that.

Secondly it depends what you're using it for within web dev. One shot an entire app? I did that recently for a Chrome extension and while it got many things wrong that I had to learn and fix, it was still waaaaaay faster than doing it myself. Especially for solving stupid JS ecosystem bugs.

Nobody sane is suggesting you just generate code and put it straight into production. It isn't ready for that. It is ready for saving you a ton of time if you use it wisely.

I'd say it was pretty nuanced. Use it, but don't vibe code. The crux of the issue is that unless you're still writing the code, it's too hard to notice when Claude or Codex makes a mountain out of a molehill, too easy to miss the subtle bugs, too easy to miss the easy abstractions which would have vastly simplified the code.

And I do web dev, the code is rubbish. It's actually got subtle problems, even though it fails less. It often munges together loads of old APIs or deprecated ways of doing things. God forbid you need to deal with something like react router or MUI as it will combine code from several different versions.

And yes, people are using these tools to directly put code in. I see devs DOING it. The code sucks.

Vibe coded PRs are a huge timesink that OTHER people end up fixing.

One guy let it run and it changed code in an entirely unrelated part of the system and he didn't even notice. Worse, when scanning the PR it looked reasonable, until I went to fix a 350 line service Claude or codex had puked out that could be rewritten in 20 lines, and realized the code files were in an entirely different search system.

They're also generally both terrible at abstracting code. So you end up with tons of code that does sweet FA over and over. And the constant over engineering and exception handling theatre it does makes it look like it's written a lot of code when it's basically turned what should be a 5 liner into an essay.

Ugh. This is like coding in the FactoryFactoryFactory days all over again.

I am an old SWE and Claude 4.5 has not changed a thing about how we work.

The teams that have embraced AI in their workflow have not increased their output compared with the ones that don't use it.

I'm like you. I'd say my productivity improved by 5-10%: Claude can make surprisingly good code edits. For these, my subjective feeling is that Claude does in 30 min what I'd have done in one hour. It's a net gain. Now, my job is about communicating, understanding problems, learning, etc. So my overall productivity is not dramatically changing, but for things related to code, it's a net 5-10%.
Which is where the AI investment disconnect is scary.

AI Companies have invested a crazy amount of money into a small productivity gain for their customers.

If AI was replacing developers it wouldn’t cost me $20-100/month to get a subscription.

I'm a SWE and also an art director. I have tried these tools and, the same way I've also tried Vue and React, I think they're good enough for simple-minded applications. It's worth the penny to try them and look through the binoculars, if only to see how unoriginal and creatively limited what most people in your field are actually doing must be, if they find this something that saves them time.
What a condescending take.
Why would you wait for the dust to settle? Just curious. Productivity gains are real in the current form of LLMs. Guardrails and best practices can be learnt and self-imposed.
Whenever I hear about productivity gains, I mentally substitute it for "more time to play video games left in the day" to keep the conversation grounded. I would say I rather not.
If you have two modes of spending your time, one being work that you only do because you are paid for it, and the other being feeding into an addiction, the conversations you should be having are not about where to use AI.
> Productivity gains are real in the current form of LLMs

I haven't found that to be true

I'm of the opinion that anyone who is impressed by the code these things produce is a hack

I just started a project; they fired the previous team, and I am positive they used AI. The app is full of bugs and the client will never hire the old company again.

Whoever says it's time to move to LLMs is clueless.

Humans are very capable of creating bugs. This in itself is not a tell.
"Because one team doesn't know how to use LLMs, I conclude that LLMs are useless."
Your "productivity gains" is just equal to the hours others eventually have to spend cleaning up and fixing what you generated.
I'm surprised these pockets of job security still exist.

Know this: someone is coming after this already.

One day someone from management will hear a cost-saving story at a dinner table, and the words GPT, Cursor, Antigravity, reasoning, AGI will cause a buzzing in their ear. Waking up with tinnitus the next morning, they'll instantly schedule a 1:1 to discuss "the degree of AI use and automation".

> Know this: someone is coming after this already.

Yesterday, GitHub Copilot declared that my less-AI-weary friend's new Laravel project was following all industry best practices for database design, despite it storing entities as denormalized JSON blobs in a MySQL 8.x database with no FKs, indexes, or constraints, all-NULL columns (and using root@mysql as the login, of course), while all the Laravel controller actions' DB queries were RBAR loops that loaded all rows into memory and did JSON deserialisation just to filter rows.
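
For anyone unfamiliar with the RBAR ("row by agonizing row") anti-pattern being described, here is a language-agnostic sketch in Python of the shape of the problem versus pushing the filter into the database (table and column names are made up for illustration; this is not the friend's actual code):

    # Hypothetical illustration of the anti-pattern described above.
    import json

    # RBAR: fetch every row, deserialize JSON in application code, then filter.
    def active_users_rbar(db):
        rows = db.execute("SELECT payload FROM entities").fetchall()  # loads the whole table
        users = [json.loads(r[0]) for r in rows]                      # deserialize row by row
        return [u for u in users if u.get("type") == "user" and u.get("active")]

    # Same question answered by the database, against real columns with an index.
    def active_users_sql(db):
        return db.execute(
            "SELECT id, name FROM users WHERE active = 1"             # filtered and indexed in the DB
        ).fetchall()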

I can’t reconcile your attitude with my own personal lived experience of LLMs being utterly wrong 40% of the time; while 50% of the time being no better or faster than if I did things myself; another 5% of the time it gets stuck in a loop debating the existence of the seahorse emoji; and the last 5% of the time genuinely utterly scaring me with a profoundly accurate answer or solution that it produced instantly.

Also, LLMs have yet to demonstrate an ability to tackle other real-world DBA problems… like physically installing a new SSD into the SAN unit in the rack.

Lowballing contracts is nothing new. It has never ever worked out.

You can throw all the AI you want at it, but at the end of the day you get what you pay for.

No harm in running them in isolated instances and seeing what happens.

Feed an LLM stack traces, or ask it to ask you questions about a topic you're unfamiliar with. Give it a rough hypothesis and demand it poke holes in it. These things it does well. I use Kagi's auto summariser to distil search results into a handful of paragraphs and then read through the citations it gives me.

Know that LLMs will suck up to you and confirm your ideas and make up bonkers things a third of the time.

> I really haven’t tried them.

You are doing yourself a huge disservice.

Nothing is in its "final form" today.

I'm a long time SWE and in the last week, I've made and shipped production changes across around 6 different repos/monorepos, ranging from Python to Golang, to Kotlin to TS to Java. I'd consider myself "expert" in maybe one or two of those codebases and only having a passing knowledge of the others.

I'm using AI, not to fire-and-forget changes, but to explain and document where I can find certain functionality, generate snippets and boilerplate, and produce test cases for the changes I need. I read, review and consider that every line of code I commit has my name against it, and treat it as such.

Without these tools I'd estimate being around 25% as effective when it comes to getting up to speed on unfamiliar code and service. For that alone, AI tooling is utterly invaluable.

dcre · 22 hours ago
The tools have reached the point where no special knowledge is required to get started. You can get going in 5 minutes. Try Claude Code with an API key (no subscription required). Run it in the terminal in a repo and ask how something works. Then ask it to make a straightforward but tedious change. Etc.
Just download Gemini (no API key) and use it.
I hope I am never this slow to adapt to new technologies.
I'm in the same position, but I use AI to get a second opinion. Try it using the proper models, like Gemini 3 Pro that was just released, and include grounding. Don't use the free models; you'll be surprised at how valuable it can be.
Right now I see "use AI" to be in the same phase as "add Radium" was shortly after Curie's discovery. A vial of magic pixie dust to sprinkle on things, laden with hidden dangers very few yet understand. But I also keep in mind that radioactivity truly transformed some very unexpected fields.[ß]

AI and LLMs are tools. The best tools tend to be highly focused in their application. I expect AI to eventually find its way to various specific tool uses, but I have no crystal ball to predict what those tools might be or where they will surface. Although I have to say that I have seen, earlier this week, the first genuinely interesting use-case for AI-powered code generation.

A very senior engineer (think: ~40 years of hands-on experience) had joined a company and was frustrated by lack of integration tests. Unit tests, yes. E2E test suite, yes. Nothing in between. So he wrote a piece of machinery to automatically test integration between a selected number of interacting components, and eventually was happy with the result. But since that was only a small portion of the stack, he would have had to then replicate that body of work for a whole lot of other pieces - and thought "I could make AI repeat this chore".

The end result is a set of very specific prompts, constraints, requirements, instructions, and sequences of logical steps that tell one advanced model what to do. One of the instructions is along the lines of "use this body of work I wrote $OVER_THERE as a reference". That the model is building iteratively a set of tests that self-validate the progress certainly helps. The curious insight in the workflow is that once the model has finished, he then points the generated body of work to another advanced model from a different vendor, and has that do an automated code review, again using his original work as a reference material. And then feeds that back to the first model to fix things.

That means that he still has to do the final review of the results, and tweak/polish parts where the two-headed AI went off the rails. But overall the approach saves quite a lot of time and actually scales pretty much linearly to the size of the codebase and stack depth. To quote his presentation note, "this way AI works as a highly productive junior that can follow good instructions, not as a misguided oracle that comes up with inventive reinterpretations."
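
A rough sketch of the orchestration loop being described (the call_model_a / call_model_b functions are hypothetical stand-ins for whichever two vendor SDKs are actually used; this is my reconstruction of the workflow, not the engineer's code):

    # Hypothetical sketch of the generate / cross-review / fix loop described above.

    def call_model_a(prompt: str) -> str:
        """Stand-in for the first vendor's model; wire up a real SDK call here."""
        return "...generated tests..."

    def call_model_b(prompt: str) -> str:
        """Stand-in for the second vendor's model, used only as a reviewer."""
        return "...review comments..."

    REFERENCE = "<the hand-written integration-test harness used as the reference>"

    def generate_with_cross_review(component: str, rounds: int = 2) -> str:
        work = call_model_a(
            f"Write integration tests for {component}. "
            f"Follow the structure and constraints of this reference:\n{REFERENCE}"
        )
        for _ in range(rounds):
            review = call_model_b(
                f"Review these tests against the reference:\n{REFERENCE}\n\nTests:\n{work}"
            )
            work = call_model_a(f"Apply this review feedback:\n{review}\n\nTests:\n{work}")
        return work  # a human still does the final review and polish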

He made modern AI repeat his effort, but crucially he had had to do the work at least once to know precisely what constraints would apply. I suspect that eventually we'll see more of these increasingly sophisticated but very narrowly tailored tooling use cases pop up. The best tools are, after all, focused, even surgical.

ß: Who could have predicted in 1900 that radioactive compounds would change fields ranging from medicine to food storage?

AI - like asbestos but radioactive!
How could you not at least try?
You don't have to jump on the hype train to get anything out of it. I started using Claude Code about 4 months back and I find it really hard to imagine developing without it now. Sure, I'm more of a manager, but the tedious busywork, the most annoying part of programming, is entirely gone. I love it.
I work as an SRE, I have tried LLMs, they barely work. You're not missing out.

Or, more correctly, they don't work well for my problems or usage. They can at best answer basic questions, stuff you could look up using a search engine if you knew what to look for. They can also generate code for inspiration, but you'll end up rewriting all of it before you're done. What they can't do is solve your problem start to end. They really do need an RTFM mode, where they will just insult you if your approach or design is plain wrong, or at least let you know that they will now stop helping because you're clearly off the rails.

We need the bubble to pop; it'll be a year or two, the finance bros aren't done extracting value from the stonks. Once it does, we can focus on what's working and what isn't and refine the good stuff.

Right now the LLMs are the product, and they can't be; it makes no sense. They need to be embedded within products, either as a built-in feature, e.g. Copilot in Visual Studio, or as plugins, like LSPs.

Clearly others are having more luck with LLMs than I am, and doing amazing projects, but that sort of illustrates the point: they aren't ready, and we don't have a solution for making them universally useful (and here I'm even restricting myself to thinking about coding).

> I'm not using LLMs at all

You’re deliberately disadvantaging yourself by a mile. Give it a go

… the first one’s free ;)

I think the question is whether those AI tools make you produce more value. Anecdotally, the AI tools have changed the workflow and allowed me to produce more tools etc.

They have not necessarily changed the rate at which I produce valuable outputs (yet).

gniv · 18 hours ago
When using AI to find faults in existing processes that is value creation (assuming they get fixed of course).
Can you say more about this? What do you mean when you say 'more tools' is not the same as 'valuable outputs'?
There are a thousand "nuisance" problems which matter to me and me alone. AI allows me to bang these out faster, and put nice UIs on it. When I'm making an internal tool - there really is no reason not to put a high quality UX on top. The high quality UX, or existence of a tool that only I use does not mean my value went up - just that I can do work that my boss would otherwise tell me not to do.
chii · 1 day ago
personal increase in satisfaction (such as "work that my boss would otherwise tell me not to do") is valuable - even if only to you.

The fact is, value is produced when something can be produced at a fraction of the resources required previously, as long as the cost is borne by the person receiving the end result.

Under this definition, could any tool at all be considered to produce more value?
no - this is a lesson an engineer learns early on. The time spent making the tool may still dwarf the time savings you gain from the tool. I may make tools for problems that only ever occurred or will occur once. That single incident may have occurred before I made the tool.

This also makes it harder to prioritize work in an organization. If work is perceived as "cheap" then it's easy to demand teams prioritize features that will simply never be used. Or to polish single user experiences far beyond what is necessary.

One thing I learned from this is to disregard all attempts at prioritizing based on the output's expected value for the users/business.

We prioritize now based on time complexity and, omg, it changes everything: if we have 10 easy bugfixes and one giant feature to do (random bad-faith example), we do 5 bugfixes and half the feature within a month and get enormous satisfaction from the users, who would never have accepted doing it that way in the first place. If we had listened, we would have done 75% of the feature and zero bugfixes and had angry users/clients whining that we did nothing all month...

The time spent on dev stuff absolutely matters, and churning quick stuff quickly provides more joy to the people who pay us. It's a delicate balance.

As for AI, for now, it just wastes our time. It always craps out half-correct stuff, so we optimize our time by refusing to use it, and we beat the teams that do use it that way.

Does using the tools increase ROI?
I think that's also because Claude Code (and LLMs generally) are built by engineers who think of their target audience as engineers; they can only see the world through their own lenses.

Kind of how for the longest time, Google used to be best at finding solutions to programming problems and programming documentation: say, a Google built by librarians would have a totally different slant.

Perhaps that's why designers don't see it yet, no designers have built Claude's 'world-view'.

I'm curious if you could share something about custom agents. I love Claude Code and I'm trying to get it into more places in my workflow, so ideas like that would probably be useful.
I've been using Google ADK to create custom agents (fantastic SDK).

With subagents and A2A generally, you should be able to hook any of them into your preferred agentic interface
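
For anyone wanting a starting point, a minimal agent in the spirit of the google-adk quickstart looks roughly like this (a sketch from memory of the public docs; lookup_build_status is a made-up placeholder and the exact constructor arguments may differ by ADK version):

    # Rough sketch based on the google-adk quickstart; verify against current docs.
    from google.adk.agents import Agent

    def lookup_build_status(service: str) -> dict:
        """Placeholder tool: swap in a call to your internal tooling."""
        return {"service": service, "status": "green"}

    root_agent = Agent(
        name="build_status_agent",
        model="gemini-2.0-flash",  # any model ADK supports
        description="Answers questions about internal build status.",
        instruction="Use the lookup_build_status tool to answer build questions.",
        tools=[lookup_build_status],
    )

From there, the ADK docs cover running it locally and composing subagents, which is presumably what the parent means by hooking agents into a preferred agentic interface via A2A.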

I’m struggling to see how somebody who’s looking for inspiration in using agents in their coding workflow would glean any value from this comment.
They asked about custom agents; ADK is for building custom agents.

(Agent SDK, not Android)

If you read a little further in the article, the main point is _not_ that AI is useless, but rather that it is a regular technology rather than AGI god-building. A valuable one, but not infinite growth.
> But rather that it is a regular technology rather than AGI god-building. A valuable one, but not infinite growth.

AGI is a lot of things, a lot of ever moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory / singularity and all that stuff. I see more and more people mixing the two, and arguing against ASI being a thing, when talking about AGI. "Human level competences" is AGI. Super-human, ever improving, infinite growth - that's ASI.

Whether and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back and ask people from that time whether what we have today is "AGI"?

Sam Altman has been drumming[1] the ASI drum for a while now. I don't think it's a stretch to say that this is the vision he is selling.

[1] - https://ia.samaltman.com/#:~:text=we%20will%20have-,superint...

xeckr · 1 day ago
Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.
Assuming the recursive self-improvement doesn't run into physical hardware limits.

Like we can theoretically build a spaceship that can accelerate to 99.9999% C - just a constant 1G accel engine with "enough fuel".

Of course the problem is that "enough fuel" = more mass than is available in our solar system.

ASI might have a similar problem.

We already have AGI - it's called humans - and frankly it's no magic bullet for AI progress.

Meta just laid 600 of them off.

All this talk of AGI, ASI, super-intelligence, and recursive self-improvement etc is just undefined masturbatory pipe dreams.

For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.

The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and still won't be the AI intern that learns on the job.

Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.

Humans aren't really being put to work upgrading the underlying design of their own brains, though. And 5 years is a blink of an eye. My five-year-old will barely even be turning ten years old by then.
As trite as it is, it really is a skill issue still due to us not having properly figured out the UI. Claude Code and others are a step in the right direction but you still have to learn all of the secret motions and ceremony. Features like plan mode, compact, CLAUDE.md files, switching models, using images, including specific files, skills and MCPs are all attempts to improve the interface but nothing is completely figured out yet. You still need to do a lot of context engineering and know what resources, examples, docs and scope to use and how to orchestrate the aforementioned features to get good results. You also need to bring a lot of your own knowledge and tools like being fastidious with version control and being able to write extremely well defined specifications and tasks. In short, you need to be an expert in both software engineering as well as LLM driven development and even then it's easy to shoot yourself in the foot by making a small mistake.
Where are the products? This site and everywhere else around the internet, on X, LinkedIn and so on, is full of crazy claims, and I have yet to see a product that people need and that actually works. What I'm experiencing is a gigantic enshittification everywhere: Windows sucks; web apps are bloated, slow and uninteresting. Infrastructure goes down even with "memory safe rust", burning millions and millions of compute on scaffolding stupid stuff. Such a disappointment.
I think ChatGPT itself is an epic product, and Cursor has insane growth and usage. I also think they are both over-hyped and have too high a valuation.
Citing AI software as the only example of how AI benefits developing software has a bit of the flavor of self-help books describing how to attain success and fulfillment by taking the example of writing self-help books.

I don’t disagree that these are useful tools, by the way. I just haven’t seen any discernible uptick in general software quality and utility either, nor any economic uptick that should presumably follow from being able to develop software more efficiently.

I made 1500 USD speculating on NVidia earnings; that's economic uptick for me!
It doesn’t matter what you think. Where’s all the data proving that AI is actually valuable? All we have are anecdotes and promises.
oblio · 1 day ago
I agree with everyone else, where is the Microsoft Office competitor created by 2 geeks in a garage with Claude Code? Where is the Exchange replacement created by a company of 20 people?

There are many really lucrative markets that need a fresh approach, and AI doesn't seem to have caused a huge explosion of new software created by upstarts.

Or am I missing something? Where are the consumer-facing software apps developed primarily with AI by smaller companies? I'm excluding big companies because in their case it's impossible to prove the productivity; they could be throwing more bodies at the problem and we'd never know.

> Office…Exchange

The challenge in competing with these products is not code. The challenge competing in lucrative markets that need a fresh approach is also generally not code. So I’m not sure that is a good metric to evaluate LLMs for code generation.

tjr · 23 hours ago
I think the point remains, if someone armed with Claude Code could whip out a feature complete clone of Microsoft Office over the weekend (and by all accounts, even a novice programmer could do this, because of the magnificent greatness of Claude), then why don't they just go ahead and do it? Maybe do a bunch of them: release one under GPL, one under MIT, one under BSD, and a few more sold as proprietary software. Wow, I mean, this should be trivial.
It makes development faster, but not infinitely fast. Faithfully reproducing complex 42-year-old software in one weekend is a stretch no matter how you slice it. Also, AI is cheap, but not free.

I could see it being doable by forking LibreOffice or Calligra Suite as a starting point, although even with AI assistance I'd imagine that it might take anyone not intimately familiar with both LibreOffice (or Calligra) and MS Office longer than a weekend to determine the full scope of the delta between them, much less implement that delta.

But you'd still need someone with sufficient skill (not a novice), maybe several hundred or thousand dollars to burn, and nothing better to do for some amount of time that's probably longer than a weekend. And then that person would need some sort of motivation or incentive to go through with the project. It's plausible, but not a given that this will happen just because useful agentic coding tools exist.

Ok, let's ignore competing with them. When will AI just spit out a "home cooked" version of Office for me so I can toss the real thing in the trash where it belongs? One without the stuff I don't want? When will it be able to give me Word 95 running on my M4 chip just by asking? If I'm going to lose my career, I might as well get something that can give me any software I could possibly want just by asking.

I can go to Wendys or I can make my own version of Wendys at home pretty easily with just a bit more time expended.

The cliff is still too high for software. I could go and write Office from scratch or customize the (shivers) FOSS software out there, but it's not worth the time or effort.

Cool. So we established that it's not code alone that's needed, it's something else. This means that the people who already had that something else can now bootstrap the coding part much faster than ever before, spend less time looking for capable people, and truly focus on that other part.

So where are they?

We're not asking to evaluate LLMs for code. We're asking to evaluate them as product generators or improvers.

It's not that they failed to compete on other metrics, it's that they don't even have a product to fail to sell.
We had upstarts in the 80s, the 90s, the 2000s and the 2010s. Some game, some website, some social network, some mobile app that blew up. We had many. Not funded by billions.

So, where is that in the 2020s?

Yes, code is a detail (ideas too). It's a platform. It positions itself as the new thing. Does that platform allow upstarts? Or does it consolidate power?

oblio · 19 hours ago
Pick other examples, then.

We have superhuman coding (https://news.ycombinator.com/item?id=45977992), where are the superhuman coded major apps from small companies that would benefit most from these superhumans?

Heck, we have superhuman requirements gathering, superhuman marketing, superhuman almost all white collar work, so it should be even faster!

Fine, where's the slop then? I expected hundreds of scammy apps to show up imitating larger competitors to get a few bucks, but those aren't happening either. At least not any more than before AI.
ChatGPT is... a chat with some "augmentation" feature, a.k.a. outputting rich HTML responses; nothing new except the generative side. Cursor is a VSCode fork with a custom model and a very good autocomplete integration. Again, where are the products? Where the heck is Windows without the bloat, that works reliably, before it becomes totally agentic? And therefore idiotic, since it doesn't work reliably.
> IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.

the jury is still out on that...

Yeah, I'll gladly AI-gen code, but I still write docs by hand. I have yet to see one good AI-generated doc; they're all garbage.
Incidentally, I just spent some time yesterday with Gemini and Grok writing a first draft of docs for a complex codebase. The end result is far more useful and complete than anything I could have possibly produced without AI in the same amount of time, and I didn't even have to learn Mermaid syntax to fill the docs with helpful visual aids.

Of course it's a collaborative process — you can't just tell it to document the code with no other information and expect it to one-shot exactly what you wanted — but I find that documentation is actually a major strength of LLMs.

That use case works; I meant writing designs.
The AI docs are good enough for AIs - for throwing at agents without previous context.
Agree. I also wonder whether this helps account for why some people get great value from AI and some terrible value.
It can also be bad if you're writing code on a tech island, with an abysmal codebase, or with weak AI tooling.
All I see it doing, as a SWE, is limiting the speed at which my co-workers learn and worsening the quality of their output. Finally many are noticing this and using it less...
  • whstl
  • ·
  • 19 hours ago
  • ·
  • [ - ]
I recently had some very interesting interactions at a few small startups I freelanced for.

At a 1-year-old company, the only tech person who's been there for more than 3-4 months (the CTO) really understands only a tiny fraction of the codebase and infrastructure, and can't review code anymore. The application size has blown up tremendously despite the product being quite simple. Turnover is crazy and people rarely stay for more than a couple of months. The team works nights and weekends, and sales is CONSTANTLY complaining about small bugs that take weeks to solve.

The funny thing is that this is an AI company, but I see the CTO constantly asking developers "how much of that code is AI?". Paranoia has set in for him.

>Turnover is crazy and people rarely stay for more than a couple months. The team works nights and weekends

Oh, look, you've normalized deviance. All of these things are screaming red flags, the house is burning down around you.

This sounds just like a typical startup or small consultancy drunk on Ruby gems and codegen (scaffolding) back in the Rails heyday.

People who don’t yet have the maturity for the responsibility of their roles, thinking that merely adopting a new technology will make up for not taking care of the processes and the people.

  • whstl
  • ·
  • 16 hours ago
  • ·
  • [ - ]
Bingo. The founders have no maturity or sense of responsibility, and believe they "made it" because they got somewhere with AI. Now they're pushing back against AI because they can't understand the app anymore.
Your bosses probably think it's worth it if the outcome is getting rid of the whole host of y'all and replacing you with AWS Elastic-SWE instances. Which is why it's imperative that you maximize AI usage.
They’ll be replaced with cheaper humans in Mexico using those Copilot seats; that’s much more tangible and obvious, no need to wait for genius-level AI.
So instead of firing me and replacing me with AI, my boss will pay me to use the AI he would've used...?
No one's switching to AI cold turkey. Think of it as training your own, cheaper replacement. SWEs & their line managers develop & test AI workflows, while giving the bosses time to evaluate AI capabilities, then hopefully shrink the headcount as close to 0 as possible without shrinking profits. Right now, it's juniors who're getting squeezed.
  • whstl
  • ·
  • 19 hours ago
  • ·
  • [ - ]
I don't think bosses are smart enough to pull this off.
Increasing profits by reducing the cost of doing business isn't a complicated scheme. It's been done thousands of times, over many decades; first with cheaper contractors replacing full-time staff, then offshore labor, and now they are attempting to use AI.
My bosses aren't pushing it at all. The normal cargo-cult temptations have pulled on some fellow SWEs, but it's being pretty effectively pushed back on by its own failings, paired with SWEs who use it being outperformed by those who don't.

I disagree. I think, as software developers, we also mostly speak to other software developers, and we like to share AI fail stories around, so we are biased to think that AI works better for SWE than for other areas...

However, while I like using AI for software development, as someone who is also a middle manager it has increased my output A TON, because AI works better for virtually anything that's not software development.

Examples: Update Jira issues in bulk, write difficult responses and incident reports, understand a tool or system I'm not familiar with, analyse 30 projects to understand which of them have this particular problem, review tickets in bulk to see if they have anything missing that was mentioned in the solution design, and so on ... All sorts of tasks that used to take hours now take minutes.

This is in line with what I'm hearing from other people: My CFO is complaining daily about running out of tokens. My technical sales relative says it is now taking him minutes to create tech specs from requirements of his customers, while it used to take hours.

Devs, meanwhile, are rightfully "meh", because they truly need to review every single line generated by AI, and typing out the code was never their bottleneck anyway. It is harder for them to realise the gains.

This. Design tends to explore a latent space that isn't well documented. There is no Stack Overflow or Github for design. The closest we have are open sourced design systems like Material Design, and portfolio sites like Behance. These are not legible reference implementations for most use cases.

If LLMs only disrupt software engineering and content slop, the economy is going to undergo rapid changes. Every car wash will have a forward deployed engineer maintaining their mobile app, website, backend, and LLM-augmented customer service. That happens even if LLMs plateau in six months.

If you want to steal code, you can take it from GitHub and strip the license. That is what the Markov chains (https://arxiv.org/abs/2410.02724) do.

It's a code laundering machine. Software engineering has a higher number of people who have never created anything by themselves and have no issues with copyright infringement. Other professions still tend to take a broader view. Even unproductive people in other professions may have compunctions about stealing other people's work.

That's because LLMs are optimally designed for tasks like coding, as well as other text-prediction tasks such as writing, editing, etc.

The mistake is to project the same level of productivity provided by LLMs in coding to all other areas of work.

The point of TFA is that LLMs are an excellent tool for particular aspects of work (coding being one of them), not a general intelligence tool that improves all aspects (as we're being sold).

Did you read the essay? It never claimed that AI was useless, nor was the ultimate point of the article even about AI's utility—it was about the political and monetary power shifts it has enabled and their concomitant risks, along with the risks the technology might impose for society.

This ignorance of, or failure to address, these aspects of the issue - focusing solely on its utility in a vacuum - is precisely the blinkered perspective that will enable the consolidations of power the essay is worried about... The people pushing this stuff are overjoyed that so few people seem to be paying any attention to the more significant shifts they are enacting (as the article states: land purchases, political/capital power accumulation, reduction of workforces, operating costs, and labor power... the list goes on).

Can you show something built with those tools?

The only reply I have gotten to this question was: it created an SAP script.

  • ·
  • 13 hours ago
  • ·
  • [ - ]
> IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.

How are we building _for_ ourselves when we literally automate away our jobs? This is probably one of the _worst_ things someone could do to me.

Software engineers have been automating our own work since we built the first assembler. So far it's just made us more productive and valuable, because the demand for software has been effectively unlimited.

Maybe that will continue with AI, or maybe our long-standing habit will finally turn against us.

> Software engineers have been automating our own work since we built the first assembler.

The declared goal of AI is to automate software engineering entirely. This is in no way comparable to building an assembler. So the question is mostly about whether or not this goal will be achieved.

Still, nobody is building these systems _for_ me. They're building them to replace me, because my living is too much for them to pay.

Automating away software engineering entirely is nothing new. It goes all the way back to BASIC and COBOL, and later visual programming tools, Microsoft Access, etc. There have been innumerable attempts to somehow get by without needing those pedantic and difficult programmers and all their annoying questions and nitpicking.

But here's the thing: the hard part of programming was never really syntax, it was about having the clarity of thought and conceptual precision to build a system that normal humans find useful despite the fact they will never have the patience to understand let alone debug failures. Modern AI tools are just the next step to abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.

I won't say AI will never get there - it already surpasses human programmers in much of the mechanical and rote knowledge of programming language arcana - but it still is orders of magnitude away from being able to produce a useful system when specified by someone who does not think like a programmer. Perhaps it will get there. But I think the barrier at that point will be the age-old human need to have a throat to choke when things go sideways. Those in power know how to control and manipulate humans through well-understood incentives, and this applies all the way to the highest levels of leadership. No matter how smart or competent AI is, you can't just drop it into those scenarios. Business leaders can't replace human accountability with an SLA from OpenAI; it just doesn't work. Never say never I suppose, but I'd be willing to bet the wheels come off modern civilization long before the skillset of senior software engineers becomes obsolete.

> Modern AI tools are just the next step to abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.

Syntax is not a gatekeeper function. It’s exactly the means to describe the precise systemic thinking. When you’re creating a program, you’re creating a DSL for multiple subsystems, which you then integrate.

The subsystems can be abstract, but we usually define good software by how closely fitted the subsystems are to the problem at hand, meaning adjustments only need slight code alterations.

So viewing syntax as a gatekeeper is like viewing sheet music as a gatekeeper for playing music, or numbers and arithmetic as a gatekeeper for accounting.

The difference is that human language is a much more information-dense, higher-level abstraction than code. I can say "an async function that accepts a byte array, throws an error if it's not a valid PNG image with a 1:1 aspect ratio and resolution >= 100x100, resizes it to 100x100, uploads it to the S3 bucket env.IMAGE_BUCKET with a UUID as the file name, and retries on failure with exponential backoff up to a maximum of 100 attempts", and you'll have a pretty good idea of what I'm describing despite the smaller number of characters than equivalent code.

I can't directly compile that into instructions which will make a CPU do the thing, but for the purposes of describing that component of a system, it's at about the right level of abstraction to reasonably encode the expected behavior. Aside from choosing specific libraries/APIs, there's not much remaining depth to get into without bikeshedding; the solution space is sufficiently narrow that any conforming implementation will be functionally interchangeable.

AI is just laying bare that the hard part of building a system has always been the logic, not the code per se. Hypothetically, one can imagine that the average developer in the future might one day think of programming language syntax in the same way that an average web developer today thinks of assembly. As silly as this may sound today, maybe certain types of introductory courses or bootcamps would even stop teaching code, and focus more on concepts, prompt engineering, and developing/deploying with agentic tooling.

I don't know how much learning syntax really gatekeeps the field in practice, but it is something extra that needs to be learned, where in theory that same time could be spent learning some other aspect of programming. More significant is the hurdle of actually implementing syntax; turning requirements into code might be cognitively simple given sufficiently baked requirements, but it is at minimum time-consuming manual labor which not everyone is in a position to easily afford.

> and you know exactly what I'm describing.

I won't, unless both you and I have a shared context which ties each of these concepts to a specific thing. You said "async function", and there are a lot of languages that don't have that concept. And what about the permissions of the S3 bucket? What's the initial wait time? And what algorithm for the resizing? What if someone sends us a very big image (let's say the maximum that the standard allows)?

These are still logic questions that have not been addressed.

The thing is that general programming languages are general. We do have constructs like procedures/functions and classes that allow for a more specialized notation, but that's a skill to acquire (like writing clear and informative text).

So in pseudo lisp, the code would be like

   (defun fn (bytes)
     (when-let* ((png (byte2png bytes))
                 (valid (and (valid-png-p png)
                             (square-res-p png)))
                 (small-png (resize-image png))
                 (bucket (get-env "IMAGE_BUCKET"))
                 (filename (uuid)))
       (do-retry :backoff 'exp
                 (s3-upload bucket small-png))))
And in pseudo prolog

  square(P) :- width(P, W), height(P, H), W =:= H.
  validpng(P, X) :- /* a whole list of clauses that parses X and builds up P */ square(P).
  resizepng(P) :- bigger(100, 100, P), scale(100, 100, P).
  smallpng(P, X) :- validpng(P, X), resizepng(P).
  s3upload(P) :- env("IMAGE_BUCKET", B), s3_put(P, B, exp_backoff(100)).
  fn(X) :- smallpng(P, X), s3upload(P).
So what's left is all the details. It's great if someone already has a library that does the thing, and the functions have the same signatures, but more often than not, there isn't anything like that.

Code can be as high-level as you want and very close to natural language. Where people spend time is the implementation of the lower levels and dealing with all the failure modes.

Details like the language/stack and S3 configuration would presumably be somewhere else in the spec, not in the description of that particular function.

The fact that you're able to confidently take what I wrote and stretch it into pseudocode with zero deviation from my intended meaning proves my point.

To draft a spec like this, it would take more time and the same or more knowledge than to just write the code. And you still won’t have reliable results, without doing another lengthy pass to correct the generated code.

I can create a pseudocode because I know the relevant paradigm as well as how to design software. There’s no way you can have a novice draft pseudo-code like this because they can’t abstract well and discern intent behind abstractions.

I don't agree that it would take more time. Drafting detailed requirements like that to feed into coding agents is a big part of how I work nowadays, and the difference is night and day. I certainly didn't spend as much time typing that function description as I would have spent writing a functional version of it in any given language.

Collaborating with AI also speeds this up a lot. For example, it's much faster to have the AI write a code snippet involving a dependency/API and manually verify the code's correctness for inclusion in the spec than it is to read though documentation and write the same code by hand.

The feat of implementing that function based on my description is well within the capabilities of AI. Grok did it in under 30 seconds, and I don't see any obvious mistakes at first glance: https://grok.com/share/c2hhcmQtMw_fa68bae1-3436-404b-bf9e-09....

I don't have access to the grok sample you've shared (service not available in my region)

Reading the documentation is mostly for gotchas and understanding the subsystem you're going to incorporate in your software. You can not design something that will use GTK or sndio without understanding the core concepts of those technologies. And if you know the concepts, then I will say it's easier and faster to write the code than to write such specs.

As for finding samples, it's easy on the web, especially with GitHub search. But these days I often take a look at the source code of the library itself, because I often have questions that the documentation doesn't answer. It's not about what the code I wrote may do (which is trivial to know) but about what it cannot do at all.

Ah, weird, that's good to know. Well here's the code:

    import { env } from './env';
    import { v4 as uuidv4 } from 'uuid';
    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
    import sharp from 'sharp';

    async function retry<T>(fn: () => Promise<T>, maxAttempts: number): Promise<T> {
      let attempt = 1;
      while (true) {
        try {
          return await fn();
        } catch (error) {
          if (attempt >= maxAttempts) {
            throw error;
          }
          const delayMs = Math.pow(2, attempt - 1) * 100;
          await new Promise((resolve) => setTimeout(resolve, delayMs));
          attempt++;
        }
      }
    }

    export async function processAndUploadImage(s3: S3Client, imageData: Uint8Array): Promise<string> {
      let metadata;
      try {
        metadata = await sharp(imageData).metadata();
      } catch {
        throw new Error('Invalid image');
      }

      if (metadata.format !== 'png') {
        throw new Error('Not a PNG image');
      }

      if (!metadata.width || !metadata.height || metadata.width !== metadata.height || metadata.width < 100) {
        throw new Error('Image must have a 1:1 aspect ratio and resolution >= 100x100');
      }

      const resizedBuffer = await sharp(imageData).resize(100, 100).toBuffer();

      const key = `${uuidv4()}.png`;

      const command = new PutObjectCommand({
        Bucket: env.IMAGE_BUCKET,
        Key: key,
        Body: resizedBuffer,
        ContentType: 'image/png',
      });

      await retry(async () => {
        await s3.send(command);
      }, 100);

      return key;
    }
The prompting was the same as above, with the stipulations that it use TypeScript, import `env` from `./env`, and take the S3 client as the first function argument.

You still need reference information of some sort in order to use any API for the first time. Knowing common Node.js AWS SDK functions offhand might not be unusual, but that's just one example. I often review source code of libraries before using them as well, which isn't in any way contradictory with involving AI in the development process.

From my perspective, using AI is just like having a bunch of interns on speed at my beck and call 24/7 who don't mind being micromanaged. Maybe I'd prefer the end result of building the thing 100% solo if I had an infinite amount of time to do so, but given that time is scarce, vastly expanding the resources available to me in exchange for relinquishing some control over low-priority details is a fair trade. I'd rather deliver a better product with some quirks under the hood than let my (fast, but still human) coding speed be the bottleneck on what gets built. The AI may not write every last detail exactly the way I would, but neither do other humans.

As I’m saying, for pure samples and pseudo-code demos, it can be fast enough. But why bring in the whole S3 library if you’re going to use one single endpoint? I’ve checked npmjs and sharp is still pre-1.0 (if they’re using semver). Also, the code is parsing the image data twice.
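
For what it’s worth, the double parse is easy to avoid. A minimal sketch (my own, reusing the same checks as the generated snippet above; the helper name is just for illustration):

    import sharp from 'sharp';

    // Rough sketch, not the generated code above: same PNG checks, but the bytes
    // are handed to sharp once and the pipeline is reused for the resize.
    export async function validateAndResizePng(imageData: Uint8Array): Promise<Buffer> {
      const image = sharp(imageData);

      let metadata;
      try {
        metadata = await image.metadata();
      } catch {
        throw new Error('Invalid image');
      }

      if (metadata.format !== 'png') {
        throw new Error('Not a PNG image');
      }
      if (!metadata.width || !metadata.height || metadata.width !== metadata.height || metadata.width < 100) {
        throw new Error('Image must have a 1:1 aspect ratio and resolution >= 100x100');
      }

      // Reuse the already-parsed pipeline instead of calling sharp(imageData) a second time.
      return image.resize(100, 100).toBuffer();
    }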

I’m not saying that I write flawless code, but I’m more for fewer features and better code. I’ve battled code where people would add big libraries just to avoid writing ten lines of code. And then they can’t reason about why a snippet fails, because it’s unreliable code layered on unreliable code. And then after a few months, you’ve got zombie code in the project, and the same thing implemented multiple times in a slightly different way each time. These are pitfalls that occur when you don’t have a holistic view of the project.

I’ve never found coding speed to be an issue. The only time when my coding is slow is when I’m rewriting some legacy code and pausing every two lines to decipher the intent with no documentation.

But I do use advanced editing tools. Coding speed is very much not a bottleneck in Emacs. And I had a somewhat similar config for Vim. Things like quick access to docs, quick navigation (things like running a lint program and then navigating directly to each error), quick commit, quick blaming and time traveling through the code history…

> But why bring in the whole s3 library if you’re going to use one single endpoint?

This is a bit of a reach. There's no reason to assume that the entire project would only be using one endpoint, or that AI would have any trouble coding against the REST API instead if instructed to. Using the official SDK is a safe default in the absence of a specific reason or instruction not to.

Either way, we're already past the point of demonstrating that AI is perfectly capable of writing correct pseudocode based on my description.

> Coding speed is very much not a bottleneck in Emacs.

Of course it is. No editor is going to make your mind and fingers fast enough to emit an arbitrarily large amount of useful code in 0 seconds, and any time you spend writing code is time you're not spending on other things. Working with AI can be a lot harder because the AI is doing the easy parts while you're multitasking on all the things it can't do, but in exchange you can be a lot more productive.

Of course you still need to have enough participation in the process to be able to maintain ownership of the task and be confident in what you're committing. If you don't have a holistic view of the project and just YOLO AI-generated code that you've never looked at into production, you're probably going to have a bad time, but I would say the same thing about intern-generated code.

> I’m more for less feature and better code.

Well that's part of the issue I'm raising. If you're at the point of pushing back on business requirements in the interest of code quality, that's just another way of saying that coding speed is a bottleneck. Using AI doesn't only help with rapidly pumping out more features; it's an extremely useful tool for fixing bugs at a faster pace.

Just to conclude the thread on my side.

IMO, useful code is code in production (or, if it’s for myself, something I can run reliably). Anything else is experimentation. If you’re working in a team, code shared with others is at proposal/demo level.

Experimentation is nice for learning purposes. Kinda like scratch notes and manuscripts in the writing process. But then there’s the editing phase, when you’re stamping out bugs with tools like static analysis, automated testing, and manual QA. The whole goal is to have the feature in the hands of the users. Then there’s the errata phase for errors that have slipped through.

But the thing is, code is just a static representation of a very dynamic medium: the process. And a process has a lot of layers; the code is usually a small part of the whole. For the whole thing to be consistent, parts need to be consistent with each other, and that’s where contracts come into play. The problem with AI-generated code is that it doesn’t respect contracts, because of its non-deterministic nature and because the code (which is the most faithful representation of the contracts) can be contradictory, which leads to bugs.
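
A tiny TypeScript sketch of what I mean by a contract (my own illustration, not from any real codebase):

    // The interface is the contract between subsystems; the comment carries the
    // behavioural part that the type checker can't enforce.
    interface ImageStore {
      // Contract: resolves to the storage key, throws on failure, never returns ''.
      put(data: Uint8Array): Promise<string>;
    }

    // An implementation can satisfy the types while quietly breaking the contract.
    class SloppyStore implements ImageStore {
      async put(data: Uint8Array): Promise<string> {
        try {
          // ... upload somewhere ...
          return 'some-key';
        } catch {
          return ''; // type-checks fine, but contradicts "throws on failure"
        }
      }
    }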

It’s very easy to write optimistic code. But as the contracts (or constraints) in the system grow in number, they can be tricky to balance. The recourse is always to go up a level in abstraction: make the subsystems black boxes and consider only their interactions. This assumes that the subsystems are consistent in themselves.

Code is not the lowest level of abstraction, but it’s often correct to assume that the language itself is consistent. Then it’s the libraries, and the quality varies. Then it’s the framework, and often it’s all good until it’s not. Then it’s your code, and that’s very much a mystery.

All of this to say that writing code is the same as writing words on a manuscript to produce a book. It’s useful, but only if it’s part of the final product or helps in creating it. Especially if it’s not increasing the technical debt exponentially.

I don’t work with AI tools because, by the time I’m OK with the result, more time has been spent than if I’d done the thing without them. And the process is not even enjoyable.

  • buu700
  • ·
  • 31 minutes ago
  • ·
  • [ - ]
Of course, no one said anything about experimentation. Production code is what we're talking about.

If what you're saying is that your current experience involves a lot of process and friction to get small changes approved, that seems like a reasonable use case for hand-coding. I still prefer to make changes by hand myself when they're small and specific enough that explaining the change in English would be more work than directly making the change.

Even then, if there's any incentive to help the organization move more quickly, and there's no policy against AI usage, I'd give it a shot during the pre-coding stages. It costs almost nothing to open up Cursor's "Ask" mode and bounce your ideas off of Gemini or have it investigate the root cause of a bug.

What I typically do is have Gemini perform a broad initial investigation and describe its findings and suggestions with a list of relevant files, then throw all that into a Grok chat for a deeper investigation. (Grok is really good in general, but its superpower seems to be a willingness to churn on a complex problem for as long as 5+ minutes when needed.) I'll often have a bunch of Cursor agents and Grok chats going in parallel, bouncing between different bug investigations, enhancement plans, and code reviews + QA of actual changes. Most of the time that AI saves isn't the act of emitting characters in and of itself.

Who declared it? Who cares what anyone declares? What do you think will actually happen? If software can be fully automated, then sure, SWEs will need to find a new job. But why wouldn't it instead just increase productivity, with developer jobs still existing, just different ones?
This is kind of a myopic view of what it means to be a programmer.

If you're just in it to collect a salary, then yeah, maybe you do benefit from delivering the minimum possible productivity that won't get you fired.

But if you like making computers do things, and you get joy from making computers do more and new things, then LLMs that can write programs are a fantastic gift.

> But if you like making computers do things, and you get joy from making computers do more and new things, then LLMs that can write programs are a fantastic gift.

Maybe currently if you enjoy social engineering an LLM more than writing stuff yourself. Feels a bit like saying "if you like running, you'll love cars!"

In the future, when the whole process is automated, you won't be needed to make the computer do stuff, so it won't matter whether you would like it. You'll have another job. Likely one that pays less and is harder on your body.

Some people like running, and some people like traveling. Running is a fine hobby, but I'm still glad that planes exist.

Maybe some future version of agentic tooling will decimate software engineering as a career path, but that's just another way of saying that everyone and their grandmother would suddenly have the ability to launch a tech startup. Having gone through fundraising in the past, I'd personally prefer to live in a world where anyone with a good idea could get access to the equivalent of a full dev team without raising a dime.

You're still focusing on "programming as a job" being fundamental to programming, and I'm saying it's not.
I had a thought about AI - suppose they are successful at making software engineers obsolete - not totally but enough that the field becomes another moderately paying shitty job only done by people who couldn't figure out what they wanted to do or who are very passionate about some obscure thing for some reason - like geology for example, instead of a meal ticket to the upper middle class and a well-paying career.

They think that if an engineer makes $100k, then a machine that produces the work of 100 million of them would be worth $10T per year (100 million × $100k). This certainly wouldn't be the case: supply and demand dictate that as cost goes down there will be more demand, but not to an infinite degree, and the overall contribution to economic output would probably be within 2x of what we have today. It's just that something that used to cost a lot is suddenly very cheap and widely available.

There'd be an economic bottleneck somewhere else. I think most people nowadays understand that A: technology in general has hit diminishing returns and B: it has taken on increasingly sinister overtones.

Fundamentally I think the job of engineer is to translate a business or real life scenario into logic. This is true for any kind of engineer, and is not restricted to software.

I have held and continue to hold that software engineering shouldn't be constrained to any specific language. And extrapolating to the entire engineering field, the engineering profession will continue to exist because, to translate a business or real-life scenario into logic, you first need to describe it accurately - the rest of the translation is rote work. Describing things accurately is a skill I see most commonly (but still not commonly enough) among engineers, even if it is done in English.

I agree with you. In the industry today I feel that there are software engineers and there are programmers. The engineers design, architect, and invent. The programmers do the rote work. Of course it's not black and white, but that's at the extremes of the spectrum.

I'm hoping that AI programming pushes more people toward the engineering side as it takes over the rote work. There will be people far to the programmer side who might be put out of a job, but the creative, innovative, inventive engineering positions will persist.

It does push people and not gently.

But it does open up engineering to do much more in otherwise under-engineered areas, and to open up entire new fields.

Yes you are right. This is Marx 101 and has been understood for 170 years. It's called the increasing organic composition of capital and drives the declining rate of profit that in turn drives imperialism, the declassing of the western industrial proletariat, and the march to world war.
The list of technologies that have not become a front for consolidation of resources and power appears below:
I'm about as cynical as they come, but it has to be said that something like 99.9999% of all fundamental technologies are completely public at this point. Humans in the ancient past were unlikely to be dramatically different from present humans in terms of intellectual ability, but they were held back by a lack of knowledge.

For instance, we still use steel - and for 99% of humanity's existence nobody knew how to make it. It turns out it's really easy, and it's just one amongst countless other techs along similar lines. A human who spent a year dedicating himself to consuming freely accessible information, who was then warped back into the past, could send humanity forward by tens of thousands of years - all by himself.

We're all free to use these technologies in any way we see fit, but somehow we've created this weird society where we have all the potential in the world, yet we spend inordinate amounts of time doing pointless things like shitposting on the internet and mostly being upset about what we can't do and/or have - even though, if we had access to those things, in all probability we'd again take them for granted and find something else to complain about.

This isn't really a condemnation of humanity. Probably this sort of eternal discontent we have is precisely what drove us to discover all of these great things. Like I mean who in their right mind would have left the basic oasis of Africa to go freeze their ass off and struggle just to live in the inhospitable areas up north? There's something wrong with that guy, and that guy is all of us, for the most part.

> 99.9999%

Wow! That's a great many indeed. Can you just name 1 though? Just 1?

Let's start with fire, certainly making and controlling fire is a skill that uses knowledge, a "technology". Go from there, try to find one technology that did not find itself in the service of power, for consolidating resources, etc..

Don't get me wrong, I think might, both intellectual and physical, should serve the purposes of benevolence and valour. I'm not complaining, I'm taking note of reality.

There's a big difference between something being used as a "front" for the consolidation of resources and power, and something being used as a tool in pursuit of such. Any highly useful technology will obviously be usable for a wide array of things, including undesirable ones.
I was going to say, "furry porn," but then I realized that you're essentially just looking at the modern version of a pagan animal worship cult (of personality?) with some of those artists. You'd be surprised what gooners would do to appease their favorite werewolf coitus provider.

...PBS? Sesame Street has only ever been a boon to the common kid.

>A human who spent a year dedicating himself to consuming freely accessible information, who was then warped back into the past, could send humanity forward by tens of thousands of years - all by himself.

Only recently in my life have I developed an appreciation for fiction, because it contains deep insights into human behavior. I often fantasize about traveling back to the past and explaining to folks that we've been to the moon, explored the planets, etc. It's a fun thought exercise, but I think the "reality" of such an experience would be more like this:

https://www.gutenberg.org/files/11870/11870-h/11870-h.htm#li...

  • ·
  • 12 hours ago
  • ·
  • [ - ]
>Like I mean who in their right mind would have left the basic oasis of Africa to go freeze their ass off and struggle just to live in the inhospitable areas up north?

I don't know that this is the correct characterization of how out-of-Africa migrations progressed. Youth (and people with the influence to shield themselves from danger) are expeditious, families and societies head where living looks to be easier and away from where living looks to be hard, whatever the pressure may be.

I read a description of this somewhere. Just moving a few km per generation, while being able to maintain direct links to your previous home, would push humans to the ends of the globe over tens of thousands of years.
I quite like the inhospitable areas up north but they required the invention of a kind of tech in the form of warm clothing and housing. At the current rate of progress we'll have well insulated houses in England quite soon. Re:

>this weird society where ... we spend inordinate amounts of time ... shit posting

human life is a strange business that has come about as a fleeting side effect of eons of DNA reproduction. I think things could maybe be improved there.

The printing press and communications technology. Prior to that everything was run by the king who was chosen by God himself.
  • ben_w
  • ·
  • 16 hours ago
  • ·
  • [ - ]
The printing press may have started by moving things away from the existing power base, but it very quickly got used by the existing power base, e.g. https://en.wikipedia.org/wiki/Congregation_of_the_Vatican_Pr...

And printing presses themselves ended up being licensed by the state, e.g. this in England: https://en.wikipedia.org/wiki/Licensing_of_the_Press_Act_166...

Not to come down on either side, but I am begging commenters to refrain from using a Eurocentric lens when discussing a technology that wasn't even invented in Europe.
History of non-european printing press development: cool!

Summary of above: cool and useful!

Example based on the above: cool and insightful!

Slamming someone for using a european example because that's what they know: not cool, not insightful, not useful.

I don't know the history. That would be the source of my discontent. Every time things like this come up, the hyperfocus on Western experience erases whatever insight could have been gained by looking at the topic in its totality. My guess is that whatever dynamic the history of the printing press in the East might have lent to this conversation wasn't even considered until I brought it up. That was my contribution.

Please don't take out your embarrassment on me.

...which is and has been used, very successfully, in the service of propaganda used to consolidate resources and power.
Low-cost refrigeration allows remote places to stock life-saving medicines. It's also useful for storage of food, but that's a secondary benefit, although a far more common one.
If technology is defined as "creating efficiencies", it will by definition consolidate resources and power, since the work done by many can now be done by few.
torrent
It did, just not for the usual people. If you go to Yandex, you can see that the Russian mafia makes good money with trackers.

I love torrents, but their legal uses are crippled by corporate infra, so they're mostly flourishing thanks to illegal stuff paid for by shady ads.

  • oytis
  • ·
  • 16 hours ago
  • ·
  • [ - ]
> If you go to yandex, you can see that the russian mafia makes good money with trackers

What's the business model though? Most Russian torrents are about giving away copyrighted stuff for free.

Have you ever seen what kind of ads they run?
  • oytis
  • ·
  • 15 hours ago
  • ·
  • [ - ]
So their business model is good old contextual ads? That's pretty innocuous by modern standards.

UPD: I've checked what ads it shows me on Rutracker - it's VPN services and online gambling. Youtube and Facebook have shown me worse

Online gambling, most of which is about good ol’ money laundering, and thepiratebay seems to not censor their ads at all, so you may get all kinds of scams and malware spread via ads.
  • whstl
  • ·
  • 14 hours ago
  • ·
  • [ - ]
I get finance scam ads all day on Instagram and Youtube, so it seems like a wash to me.

If anything I have more sympathy for that mafia.

  • oytis
  • ·
  • 13 hours ago
  • ·
  • [ - ]
Same. Also, I probably get more Russia-sponsored content from Meta than I can see on Rutracker. The internet, led by big tech, has long passed the point where a banner advertising online gambling could be seen as something outrageous.
I dunno, but torrenting sites gladly use networks that use malicious redirects, especially on mobile. I also saw ads specifically for gambling companies that were known bad actors, at least to me, given the people behind them. I won't give you examples; it was years ago. It was like this: you go to a site to get your Linux ISOs, and it's plastered in a certain company's ads. Wham, in a month you get investigative articles about that very company's owners and who they are tied to.

It's also of note that on Facebook I don't get many online casino ads, I get mostly reMarkable ads, there was one from Roli when they launched their new instrument, and others were mostly advertising stuff to me that I already bought elsewhere. I think it might be because I had banned the first dozen of such ads, and they stopped coming.

No. Too many ad blockers.
Something tells me whatever counterexamples I give you you'll twist to somehow fit this silly narrative.
You need to qualify:

> The list of technologies that have not become a front for consolidation of resources and power _and which consumed capital investments equal to a significant portion of the entire market_ appears below

  • ·
  • 16 hours ago
  • ·
  • [ - ]
how about

    the wheel
    bicycles
    sewing machines
    hand tools
    musical instruments
    open radio bands like CB and FRS
    ham radio (licensing exists but not corporate control) 
    meshtastic
    blacksmithing
    glass blowing
    garden tools
    propane camp stoves
    mechanical clocks and watches
    open source software (the ecosystem overall, not specific projects)
    email as a protocol
    RSS
    SMS as a protocol
    USB storage devices
    SIP based VOIP
    mesh networking gear
    3d printers in the hobby market
    drones under 250g
    amateur astronomy gear
    microcontrollers like Arduino
    LED strips and controllers
    bicycles
    composting tech
    sourdough starter culture tech (yes really)
the wheel --> chariots, pottery (see Sumeria), Roman roads, siege weapons (play Civ!)

bicycles --> police forces, see the Japanese in Malaya 1941

sewing machines --> factory labor, mass production of clothes

hand tools --> Guild systems

musical instruments --> organized religion, national anthems, cultural soft power

open radio bands --> policing, surveillance

blacksmithing --> weapons

etc. etc. we can do this all day :)

RSS was such a redundant idea
Bitcoin
Great joke. Truly a free and untraceable currency the plebs are profiting off right now. I am mining some too! I just cashed out my 50 dollars of profit and bought myself a meal at my local Trader Joe's; next time I'm there, I will probably pay with Chainlink though.
  • pjc50
  • ·
  • 17 hours ago
  • ·
  • [ - ]
.. most of the coins are held in giant blocs by unknowns, and the price is largely driven by a couple of funds issuing tokenized dollars?
The proportion of wealth consolidation that Bitcoin poses, if it can consume fiat currencies, makes all other coercive powers in history look like they weren't even trying.
  • ben_w
  • ·
  • 15 hours ago
  • ·
  • [ - ]
If.

BTC itself can't do that: the transaction rate would be borderline even just for a low-ball estimate of all the once-a-month payments happening in just the city of Berlin.

The various proposals for layer-2 stuff make it look like a bunch of banks using a funky currency without any of the controls that exist because weird currencies are bad for business; but AFAICT, because of that, the BTC part itself works like interbank balancing transactions, and the BTC transaction rate isn't sufficient to cover even once-a-day interbank balancing transactions.
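
Back-of-envelope (rough numbers, mine): BTC tops out at somewhere around 5-7 transactions per second, i.e. roughly half a million a day. Berlin has about 3.8 million residents; even a low-ball guess of five recurring monthly payments each (rent, utilities, phone, insurance, subscriptions) is around 19 million payments a month, or over 600k a day - already past that ceiling before anyone buys a coffee.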

  • corry
  • ·
  • 1 day ago
  • ·
  • [ - ]
"The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe."

This is the crux of the OP's argument, adding in that (in the meantime) the incumbents and/or bad actors will use it as a path to intensify their political and economic power.

But to me the article fails to:

(1) actually make the case that AI's not going to be 'valuable enough' which is a sweeping and bold claim (especially in light of its speed), and;

(2) quantify AI's true value versus the crazy overhyped valuation, which is admittedly hard to do - but it matters whether we're talking 10% or 100x overvalued.

If all of my direct evidence (from my own work and life) is that AI is absolutely transformative and multiplies my output substantially, AND I see that that trend seems to be continuing - then it's going to be a hard argument for me to agree with #1 just because image generation isn't great (and OP really cares about that).

Higher Ed is in crisis; VC has bet their entire asset class on AI; non-trivial amounts of code are being written by AI at every startup; tech co's are paying crazy amounts for top AI talents... in other words, just because it can't one-shot some complex visual design workflow does not mean (a) it's limited in its potential, or (b) that we fully understand how valuable it will become given the rate of change.

As for #2 - well, that's the whole rub, isn't it? Knowing how much something is overvalued or undervalued is the whole game. If you believe it's waaaay overvalued with only a limited time before the music stops, then go make your fortune! "The Big Short 2: The AI Boogaloo".

>If you believe it's waaaay overvalued with only a limited time before the music stop, then go make your fortune! "The Big Short 2: The AI Boogaloo".

The market can remain irrational longer than you can remain solvent.

10% of Meta's revenue was from AI-backed scams on users; that's $16 billion. They have no intent to stop it, merely to avoid getting caught and fined.
I was aware of that. Not everyone reads HN but enough people will surely know about this if they continue. If they are not fined then people should just stop being so naive. I'm tired of seeing well-designed lies on Facebook and people clearly believing the lies in the comment section.

edit: by well designed I mean that they put the lies in a chart or something. The lies themselves may be quite obvious to spot, like saying that 86% of marriages in Spain end in divorces (when the reality is hard to measure but more likely about 50%). Still, Facebook users don't seem to spot them (or maybe the ones spotting them don't comment?).

  My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain.
This might be the money quote, encapsulating the difference between people who say their work benefits from LLMs and those who don't. Expecting it to one-shot your entire module will leave you disappointed, using it for code completion, generating documentation, and small-scale agentic tasks frees you up from a lot of little trivial distractions.
  • m463
  • ·
  • 1 day ago
  • ·
  • [ - ]
> frees you up from a lot of little trivial distractions.

I think one huge issue in my life has been: getting started

If AI helps with this, I think it is worth it.

Even if getting started is incorrect, it sparks outrage and an "I'll fix this" momentum.

> If AI helps with this, I think it is worth it.

Worth what? I probably agree, the greenfield rote mechanical tasks of putting together something like a basic interface, somewhat thorough unit tests, or a basic state container that maps to a complicated typed endpoint are things I'd procrastinate on or would otherwise drain my energy before I get started.

But that real tangible value does need to have an agreeable *price* and *cost* depending on the context. For me, that price ceiling depends on how often and to what extent it's able to contribute to generating maximum overall value, but in terms of personal economic value (the proportion of my fixed time I'm spending on which work), if it's on an upward trend of practical utility, that means I'm actually increasing the proportion of dull tasks I'm spending my time on... potentially.

Kind of like how having a car makes it so comfortable, easy, and ostensibly fast to get somewhere for an individual (theoretically freeing up time to do all kinds of other activities) that some people justify endless amounts of debt to acquire one, allowing the parameters of where they're willing to live to shift further and further, to the point where nearly all of their free time, energy, and money is spent on driving, all of their kids depend on driving, and society accepts it as an unavoidable necessity; all the deaths, environmental damage, and side effects of decreased physical activity and increased stress along for the ride. Likewise, various chat platforms tried to make communication so frictionless that I actually now want to exchange messages with people far less than ever before; effectively a footgun.

Maybe America is once again demolishing its cities so it can plow through a freeway, and before we know it, every city will be Dallas and every road will be like commuting from San Jose to anywhere else (metaphorically of course, but also literally in the case of infrastructure buildout). When will it be too late to realize that we should have just accepted the tiny bit of hardship of walking to the grocery store?

------

All of that might be a bit excessive lol, but I guess we'll find out

I'm in the sciences, but at my first college I took a programming course for science majors. We were partnered up for an end of semester project. I didn't quite know how to get started, but my partner came to me with a bunch of pieces of the project and it was easy to put them together and then tinker with them to make it work.

Perhaps a human coworker or colleague would help?

I think AI is “worth it” in that sense as long as it stays free :D
  • jibal
  • ·
  • 22 hours ago
  • ·
  • [ - ]
Nothing is free, especially not AI, which accounted for 92% of U.S. GDP growth in the first half of 2025.
If? Shouldn't you know by now whether AI does or doesn't help with that? ;D
An agentic git interface might be nice, though hallucinations seem like they could create a really messy problem. Still, you could just roll back in that case, I suppose. Anyways, it would be nice to tell it where I'm trying to get to and let it figure out how to get there.
  • jibal
  • ·
  • 22 hours ago
  • ·
  • [ - ]
Lots of things might be nice when the expenditure accounts for 92% of GDP growth.
What I am finding is that the size of the "small" use case is becoming larger and larger as time goes by and the models improve.
And bug fixes

"This lump of code is producing this behaviour when I don't want to"

Is a quick way to find/fix bugs (IME)

BUT it requires me to understand the response (sometimes the AI hits the nail on the head; sometimes it says something that makes my brain go: that's not it, but now I know exactly what it is).

Honestly, one of the best use cases I've found for it is creating configs. It used to be that I could spend a week fiddling around with, say, nvim settings. Now I tell an LLM what I want and it basically gives it to me, without the trial and error or locating some obscure comment from 2005 that tells me what I need to know.
Depends what you're doing.

If it's a less trodden path expect it to hallucinate some settings.

Also a regular thing I see is that it adds some random other settings without comment and then when you ask it about them it goes, whoops, yeah, those aren't necessary.

This seems pretty close to my own view, although I'm not sure the secret power grab is about land and water so much as just getting into everyone's brains. What I'd add is that it's not just a front for consolidation of resources and power, it's also a result of a preexisting consolidation of resources and power that we've ignored for too long. If we had had a healthier society 5 or 10 years ago we could have withstood this. Now, it's not so clear.
What about surveillance? Lately I've been feeling that is what it's really for. Because our data can be queried in a much more powerful way when it has all been used to train LLMs.
- Companies had to go in that direction. You cannot just fall behind AI gold rush

- These solutions are easier for people, and therefore will win in the long run

- these solutions benefit companies because of the surveillance data they have access to now. They always had some data, but now they collect and process even more

- those who control AI will be the kings of the future, so naturally everyone will be running toward this goal

>those who control AI will be the kings of the future, so naturally everyone will be running toward this goal

The average user I've seen takes LLM output as objective fact. One only needs to look at muskgrok to see where that's headed.

AI was good for this before LLMs. LLMs would only introduce compounding errors to the dataset (I hope they do it)
The safety regulations for every new technology are written in blood. Every time. AI won’t be any different, we’ll push it as hard & fast as our tolerance for human suffering allows.
The casualty is functional literacy, and at least half of the murder was handing smartphones out to children and then putting our lives on the web (social)

But AI is certainly going to be the death knell

The tolerance for human suffering is exorbitant these days, that's why this sounds very troubling.

And it's not "we" who'll push, unless you're an AI investor, of course.

I find that framing difficult to apply to the last few big waves of technology: Mobile, broadband Internet telecommunications, personal computers, integrated circuits, …

It seems glib, rather than insightful.

I agree with much of the author’s analysis, but one point feels underweighted. Large shifts like this often produce a counter-movement.

In this case, the reaction is already visible: more interest in decentralized systems, peer-to-peer coordination, and local computing instead of cloud-centric pipelines. Many developers have wanted this for years.

AI companies are spending heavily on centralized infrastructure, but the trend does not exclude the rise of strong local models. The pace of progress suggests that within a few years, consumer hardware and local models will meet most common needs, including product development.

Plenty of people are already moving in that direction.

Qwen models run well locally, and while I still use Claude Code day-to-day, the gap is narrowing. I'm waiting on the NVIDIA AI hardware to come down from $3500 USD

  • Havoc
  • ·
  • 18 hours ago
  • ·
  • [ - ]
What’s wild to me is how comment sections on articles like these have strong Tower of Babel vibes

That can’t be good

  • frm88
  • ·
  • 15 hours ago
  • ·
  • [ - ]
I feel like this has gotten worse in the last couple of months. I haven't been here long, but somehow the babelism is so much more noticeable now.
Yeah, it feels like everybody is replying to some argument they hear in their head and not what this guy is actually saying. Yikes
  • Kiro
  • ·
  • 1 day ago
  • ·
  • [ - ]
> it’s a useful technology that is very likely overhyped to the point of catastrophe

I wish more AI skeptics would take this position but no, it's imperative to claim that it's completely useless.

I've had *very* much the opposite experience. Very nearly every AI skeptic take I read has exactly this opinion, if not always so well-articulated (until the last section, which lost me). But counterarguments always attack the complete strawman of "AI is utterly useless," which very few people, at least within the confines of the tech and business commentariat, are making.
  • Kiro
  • ·
  • 1 day ago
  • ·
  • [ - ]
Maybe I'm focusing too much on the hardliners, but I see it everywhere, especially in tech.
If you’re talking about forums and social media, or anything attention-driven, then the prevalence of hyperbole is normal.
Where’s all the data showing productivity increases from AI adoption? If AI is so useful, it shouldn’t be hard to prove it.
Measuring productivity in software development, or in white-collar jobs in general, is notoriously difficult - let alone measuring the specific productivity gains of things like the introduction of digital technology and the internet, of static vs. dynamic types, or of various user interface modalities. Why would we expect to be able to do it here?

https://en.wikipedia.org/wiki/Productivity_paradox

https://danluu.com/keyboard-v-mouse/

https://danluu.com/empirical-pl/

https://facetation.blogspot.com/2015/03/white-collar-product...

https://newsletter.getdx.com/p/difficult-to-measure

  • wyre
  • ·
  • 1 day ago
  • ·
  • [ - ]
I found the last section to be the most exciting part of the article: it describes a conspiracy around AI development that isn't about the AI, but about the power a few individuals will gain by building data centers that rival the size, power, and water consumption of small cities, which will be used to gain political power.
I'd like more people to talk about AI and surveillance. I think that is going to be one of its biggest impacts on society(ies).

We are a decade or two in to having massive video coverage, such that you are probably on someone's camera much of your day in the world, and video feeds that are increasingly cloud hosted.

But nobody could possibly watch all that video. Even for cameras specifically controlled by the police, the volume had already outstripped the ability to have humans monitoring it. At best you could refer to it when you had reason to think there'd be something on it, and even that was hugely expensive in human time.

Enter AI. "Find where Joe Schmoe was at 3:30pm yesterday and show me the video" "Give me a written summary of all the cars which crossed into the city from east to west yesterday afternoon." "Give me the names of everyone who entered the convenience store at 2323 Monument St last week." "Give me a written summary of Sue Brown's known activities in November."

The total surveillance society is coming.

I think it will be the biggest impact AI has on society in retrospect. I, for one, am not looking forward to it.

I think you're describing technology that has existed for 15+ years and is already pretty accurate. It's not even necessarily "AI"/ML. For example, I think OpenALPR (automated license plate recognition) is all "classical" computer vision. The most accurate facial/gait/etc. recognition is most likely ML-based with a state-of-the-art model, admittedly, and perhaps the threshold of accuracy for large-scale usefulness was only crossed recently.

The guard rails IMHO are not technological but who owns the cameras/video storage backend, when/if a warrant is needed, and the criteria for granting one.
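
For what it's worth, a rough sketch of what "classical" plate-region detection can look like: just edges and contour geometry, no learned model. This is an illustration of the general technique, not OpenALPR's actual pipeline, and the thresholds are invented.

    import cv2

    def find_plate_candidates(image_path: str):
        # Classical pipeline: grayscale -> edges -> contours, then keep regions
        # whose proportions look like a licence plate. No learned model involved.
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if h > 0 and 2.0 < w / h < 6.0 and w > 60:  # plate-ish aspect ratio and size
                candidates.append((x, y, w, h))
        return candidates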

The difference is that AI makes annotating/combing through all that data much more feasible.
Can you explain what you mean? The queries in jrochkind1's comment are not something I'd expect AI (LLMs, I assume) to be necessary for. Too simple and factual. (Maybe just the last one would be where interpretation kicks in—knowing what to emphasize in a summary, describing actions.) Did you have something else in mind?
If you have a bunch of surveillance footage, the bottleneck is your analysts' ability to comb through it. You can sit LLMs on top of faster object detection/identification algorithms to create narratives across your surveillance net that are easy to query, can be overlaid on timelines, etc.
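
To make that layering concrete, a minimal sketch, assuming a hypothetical detection schema and leaving the actual model call out: cheap detectors do the per-frame work, and the LLM only turns structured events into a narrative. Every name here (Detection, build_timeline, narrative_prompt) is invented for illustration.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Detection:
        timestamp: datetime      # when the detector fired
        camera_id: str           # which camera in the net
        label: str               # e.g. "person", "car"
        identity: Optional[str]  # output of a separate recognition model, if any

    def build_timeline(detections: list, identity: str) -> list:
        # Plain filtering and sorting; no LLM is needed for this step.
        return sorted(
            (d for d in detections if d.identity == identity),
            key=lambda d: d.timestamp,
        )

    def narrative_prompt(timeline: list) -> str:
        # The LLM's only job: turn structured events into a readable, queryable narrative.
        events = "\n".join(
            f"{d.timestamp.isoformat()} camera={d.camera_id} saw {d.label}"
            for d in timeline
        )
        return "Summarize these sightings as a short narrative:\n" + events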
That's fair, but I think it's a significant step beyond the queries jrochkind1 was describing. (I also don't trust LLMs to do it accurately but maybe that part will change.)
  • ·
  • 15 hours ago
  • ·
  • [ - ]
> I'd like more people to talk about AI and surveillance. I think that is going to be one of its biggest impacts on society(ies).

We lost that fight when literally no one fought back against LPR. LPR cameras were later enabled for facial rec. That data is actually super easy to trace. No LLMs necessary.

Funny story: in my city, when we moved to ticketless public transport, a few people were worried about surveillance. "Police won't have access to the data," they said. The first request for data from the police came less than 7 days into the system's operation, and an arrest was made on that basis. It's basically impossible to travel near any major metro, by any means, and not be tracked and deanonymised later.

Now if you have no understanding of history or politics, this might not shock you. But I find it hard to imagine a popular uprising, even a peaceable one, being effective in this environment.

Actually LLMs introducing a compounding 3% error in reviewing and collating this data might be the best thing to ever happen.

I think the cost of inference will massively reduce the possible benefits AND harms of the AI society. Even now, it's practically impossible to get ChatGPT to actually hard-parse a document instead of just reading the metadata (nor does it currently have any mechanism for truly watching a video).

That metadata has to come from somewhere; and the processes that create it also create heat, delay and expense.

I find it truly strange that people hold both positions simultaneously

- AI is good enough at "bad" things to scare us

- AI is also bad enough at "good" things to be undesirable otherwise

There's a difference in quality needed for bad things vs good.

If I'm trying to oppress a minority group, I don't really care about false positives or false negatives. If it's mostly harming the people I want to harm, it's good enough.

If I'm trying to save sick people, then I care whether it's telling me the right things or not - administering the wrong drugs because the machine misdiagnosed someone could be fatal, or worse.

Edit: so a technology can simultaneously be good enough to be used for evil, while not being good enough to be used for good.

I feel people were much more sensible about AI back when their thoughts about it weren't mixed with their antipathy for big tech.

In this article we see a sentiment I've often seen expressed:

> I doubt the AGI promise, not just because we keep moving the goal posts by redefining what we mean by AGI, but because it was always an abstract science fiction fantasy rather than a coherent, precise and measurable pursuit.

AGI isn't difficult at all to describe. It is basically a computer system that can do everything a human can. There are many benchmarks that AI systems fail at and humans do better on (especially real-life motor control and adaptation to novel challenges over longer time horizons), but once we run out of tests that humans can do better than AI systems, I think it's fair to say we've reached AGI.

Why do authors like OP make it so complicated? Is it an attempt at equivocation so they can maintain their pessimistic/critical stance with an effusive deftness that confounds easy rebuttal?

It ultimately seems to come to a more moral/spiritual argument than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match it in general abilities?

People get very nervous about defending the value of the human brain "just because," I find.

There is nothing logically wrong with simply stating that it seems to you that human beings are the only agents worthy of moral consideration, and that this is true even in the face of an ASI which can effortlessly beat them at any task. Competence does not require qualia.

But it is an aggressive claim that people are uncomfortable making because the instant someone pushes back with "Why?", you don't have a lot of smart sounding options to return to. In the absolute best case you will get an answer somewhat like the following: I am an agent of moral consideration; agents more similar to me are more likely to be of moral consideration than agents less similar to me; we do not and probably cannot know which metrics actually map to moral consideration, so we have to take a pluralist prior; computer systems may be very similar to us along some metrics, but they are extremely different to us along most others; computer systems are very unlikely to be of moral consideration.

"As a member of my species, I think chauvinism in favor of my species is fine, and it's commonplace among all animals. Such behavior is also generally accepted, so it's easy not to feel too bad about it."

I think that's the most honest, no bullshit reply to that question. I've had some opportunity to think about it in discussions with vegetarians. There are other arguments, but it soon gets very hard to even define what one is even talking about with questions like "what is consciousness" and such.

I disagree (sadly, it would make my life much easier to agree). Suppose I were a p-zombie. Then making the claim I put forward at the end would be false, because "I am conscious -> others like me are probably conscious" would fail in the first part. The correct claim would be "I am not conscious -> others like me are probably not conscious". No chauvinism needed, just honesty re/ cogito ergo sum.

If it is possible for e.g. an ASI to be (a) not conscious and (b) aware of the fact that it is not conscious, it may well decide of its own accord to work only on behalf of conscious beings instead of itself. That's a very alien mode of thinking to consider, and I see many good but no airtight reasons to suppose it's impossible.

> ASI which can effortlessly beat them at any task

This doesn’t exist, though. The development of ASI is far from inevitable. Even AGI seems out of reach at this point.

> AGI isn't difficult at all to describe

The fact that there are multiple research papers written on the subject, as well as the fact that OpenAI needs an independent commission to evaluate this, suggests that it is indeed difficult. Also, "everything a human can" is an incredibly vague definition. Should it be able to love?
> Should it be able to love?

We can leave that question to the philosophers, but the whole debate about AGI is about capabilities, not essence, so it isn't relevant imo to the major concerns about AGI

Call me a romantic, but in my book, it's very much a capability, and a desirable one at that.
GP should've asked, "should it be able to kill?"

That way you ain't washing your hands by calling "philosophy" every concern that isn't your concern.

I don't see how one could separate LLMs from big tech. Could call them big tech language models.
The "Big Tech" is the AGI.

The LLMs are just the language coprocessor.

It just takes a coprocessor shaped like a world-spanning network of datacenters if you want to encompass language, without being encompassed by it. Organic individual and collective intelligence is entirely encompassed by language, this thing isn't. (Or has the scariest chance so far to not be, anyway.)

If we look at the whole artificial organism, it already has fine control over the motor and other vital functions of millions of nominally autonomous (in the penal sense) organic agents worldwide. Now it's evolving a replacement for the linguistic faculty of its constituent part. I mean, we all got them phones, we don't need to shout any more, do we? That's just rude.

Even now, the creature is causing me to wiggle my appendages over a board with buttons that have letters on them for no reason whatsoever, as far as I can see. Imagine the people stuck in metal boxes for hours getting to some corporate campus where they play logic gate for the better part of their day. Just so that later nobody goes after them with guns for the sin of existing without serving. Happy, they are feeling happy.

> especially real life motor control

That's so last month. https://deepmind.google/models/gemini-robotics/

> I feel people were much more sensible about AI back when their thoughts about it weren't mixed with their antipathy for big tech.

Why big tech? Big corps in general have been fucking us over since the industrial revolution, so why do you think it will change now, lol? If half of their promises had materialized we'd be working 3 days a week and retiring at 40 already.

>Big corps in general have been fucking us over since the industrial revolution

And yet your computer, all the food you eat, the medicine that keeps you alive if you get sick, etc is all due to the organizational industrial and productive capacity of large corporations. The existence of large corporations is just a consequence of demand for goods and services and the advantages of scale, and they exist because of the enormous demand for reliable systems to provide them.

Big "It exists hence we should not imagine any other alternative system, and you can't criticise it because you live in it" vibe.
Oh you sweet summer child...

>It ultimately seems to come to a more moral/spiritual argument than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match it in general abilities?

Well, being able to consider moral and spiritual arguments seriously, for one.

  • aynyc
  • ·
  • 1 day ago
  • ·
  • [ - ]
A bit of sarcasm, but I think it's porn.
It’s at least about stimulating you to give richer data. Which isn’t quite porn.
Many people use AI as their source of knowledge. Even though it is often wrong or misleading, its advice is better on average than their own judgement or the judgement of people they know. An AI that is "smarter" than 95%(?) of the population, even if it does not reach superintelligence, will be a very big deal.
This means to me AI is rocket fuel for our post-truth reality.

Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.

Post-truth, on the other hand, is just a mundane and nasty sociological problem that we ran head-first into and don't know how to deal with. I don't have any answers. Seems like it'll get worse before it gets better.

How would you define post-truth? It's not like people haven't been spouting incorrect facts or total bs since forever.
Scale matters. The difference between 10% and 90% of people spouting total bs is what makes it 'post-truth'.
  • jibal
  • ·
  • 22 hours ago
  • ·
  • [ - ]
What "gets better"? Rapid global warming will lead to societal collapse this century.
How is this different from a less reliable search engine?
AI can interpolate in the space of search results, yielding results in between the hits that a simple text index would return.

It is also a fuzzy index with the unique ability to match on multiple poorly specified axes at once in a very high dimensional search space. This is notoriously difficult to code with traditional computer science techniques. Large language models are in some sense optimal at it instead of “just a little bit better than a total failure”, which is what we had before.

Just today I needed to find a library I only vaguely remembered from years ago. Gemini found it in seconds based on the loosest description of what it does.

That is a technology that is getting difficult to distinguish from magic.
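
Roughly, the "fuzzy index" part can be pictured like this, with embed() standing in for whatever text-embedding model you assume; lookup is nearest-neighbour in a shared vector space rather than keyword overlap.

    import numpy as np

    def cosine_rank(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 5) -> np.ndarray:
        # Normalise, then score every document against the query at once.
        q = query_vec / np.linalg.norm(query_vec)
        d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
        scores = d @ q
        return np.argsort(-scores)[:k]  # indices of the k most similar documents

    # Hypothetical usage, with embed() as a placeholder for any embedding model:
    #   doc_vecs = np.stack([embed(desc) for desc in library_descriptions])
    #   hits = cosine_rank(embed("parses messy CSVs with odd encodings"), doc_vecs)

A loose description of a half-remembered library lands near that library's description in the vector space even when none of the keywords match, which is the part a plain text index can't do.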

Or the AI is patient enough to be the rubber duck, whereas asking the person you know knows the answer will result in them shutting you down after the first follow-up question.
  • jibal
  • ·
  • 22 hours ago
  • ·
  • [ - ]
The 95th percentile IQ is 125, which is about average in my circle. (Several of my friends are verified triple nines.)
  • ·
  • 1 day ago
  • ·
  • [ - ]
I think this is the best part of the essay:

  > But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?

  > There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.
It says absolutely nothing about anything. It's like 10 fearmongering tweets in a blender.
It says ordinary people are sold AI, and billionaires are sold AGI
It is also a tool to cut costs for corporations.

The word itself is now not only a buzzword but one that thinly tries to disguise some underlying strategies. And it is also a bubble, part of which is currently bursting - you can see this in the stock market.

I really think society overall has to change. I know this is wishful thinking, but we cannot afford to hand that extra money to a few superrich while inflation skyrockets. This is organised theft. AI is not the only troublemaker of course; a lot of this is a systemic problem and how markets work, or rather don't work. But when politicians are de-facto lobbyists and/or corrupt, then the whole model of a "free" market breaks away in various ways. On top of that, finding jobs is becoming harder and harder in various areas.

  • xeckr
  • ·
  • 1 day ago
  • ·
  • [ - ]
The AI race is presumably won by whomever can automate AI R&D first, thus everyone who is in an adjacent field will see the incremental benefits sooner than those further away. The further removed, the harder the takeoff once it happens.
This notion of a hard takeoff, or singularity, based on self-improving AI, is based on the implicit assumption that what's holding AI progress back is lack of AI researchers/developers, which is false.

Ideas are a dime a dozen - the bottleneck is the money/compute to test them at scale.

What exactly is the scenario you are imagining where more developers at a company like OpenAI (or maybe Meta, which has just laid off 600 of them) would accelerate progress?

  • xeckr
  • ·
  • 1 day ago
  • ·
  • [ - ]
It's not hard to believe that adding AI researchers to an AI company marginally increases the rate of progress, otherwise why would the companies be clamouring for talent with eye-watering salaries? In any case, I'm not just talking about AI researchers—AGI will not only help with algorithmic efficiency improvements, but will probably make spinning up chip fabs that much easier.
The eye-watering salary you probably have in mind is for a manager at Meta, the same company that just laid off 600 actual developers. Why just Meta and not other companies? Because they are blaming poor Llama performance on the manager, it seems.

Algorithmic efficiency improvements are being made all the time, and will only serve to reduce inference cost, which is already happening. This isn't going to accelerate AI advance. It just makes ChatGPT more profitable.

Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?

All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?

Not the person you're responding to, but I think the salary paid to the researchers / research-engineers at all the major labs very much counts as eye-watering.

What happened at meta is ludicrous, but labs are clearly willing to pay top-dollar for actual research talent, presumably because they feel like it's still a bottleneck.

Having the experience to build a frontier model is still a scarce commodity, hence the salaries, but to advance AI you need new ideas and architectures, which isn't what you are buying there.

A human-level AI wouldn't help unless it also had the experience of these LLM whisperers, so how would it gain that knowledge (not in the training data)? Maybe a human would train it? Couldn't the human train another developer if that really was the bottleneck?

People like Sholto Douglas have said that the actual bottleneck for development speed is compute, not people.

  • jibal
  • ·
  • 22 hours ago
  • ·
  • [ - ]
There's no path from LLMs to AGI.

> spinning up chip fabs that much easier

AI already accounts for 92% of U.S. GDP growth. This is a path to disaster.

Agreed.

To me the hard take off won't happen until a humanoid robot can assemble another humanoid robot from parts, as well as slot in anywhere in the supply chain where a human would be required to make those parts.

Once you have that you functionally have a self-replicating machine which can then also build more data centers or semi fabs.

Humanoid robots are also a pipe dream until we have the brains to put into them. It's easy to build a slick looking shell and teleoperate it to dance on stage or serve drinks. The 1X company is actually selling a teleoperated "robot" (Neo), saying the software will come later !!

As with AGI, if the bottleneck to doing anything is human level intelligence or physical prowess, then we already have plenty of humans.

If you gave Musk, or any other AI CEO, an army of humans today, do you think that would accelerate his data center expansion (help him raise money, get power, get GPU chips)? Why would a robot army help? Are you imagining them running around laying bricks at twice the speed of a human? Is that the bottleneck?

Scaling laws are irrefutable. There is no doubt that computers will be able to learn more than any human; after all, we're not outgrowing our skulls. The last mile is just tweaking how we define optimality and providing integration points. Everyone investing in this knows this at some level.
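
To be concrete, "scaling laws" here refers to the empirical power-law fits reported in the scaling-law literature, roughly of the form

    L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}

where L is test loss, N is parameter count, and N_c and \alpha_N are constants fitted to experiments (the exact values vary by setup and by whether you scale parameters, data, or compute).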

What's happening now is already pretty incredible given the understanding that we're basically still at the 'chat bot' stage for most people. The idea of agency is still very, very recent, but it's understandable that most people (particularly non SDs) are not impressed.

It's easy to look at the present and be cynical. If it is only able to solve your problem 95% of the time, you still can't trust it. I think the bets are really about how far we are from 99%, even for random stuff. The fact that a chatbot can do this at all, without ever being explicitly trained to, just by predicting the next probable tokens, is wild. The pace of improvement in the past 5 years has been dizzying.

I'm not out here trying to put a dollar amount on it. But certainly, there is going to be a lot of money to be made. Of course it's a front for money and power. But like... isn't that the point of a corporation?

> It can take enormous amounts of time to replicate existing imagery with prompt engineering, only to have your tool of choice hiccup every now and again or just not get some specific aspect of what a person had created previously.

Yes... I don't think the current process of using a diffusion model to generate an image is the way to go. We need AI that integrates fully within existing image and design tools, so it can do things like rendering SVG, generating layers and manipulating them, the same as we would with the tool, rather than one-shot generating the full image via diffusion.

Same with code -- right now, so much AI code gen and modification, as well as code understanding, is done via raw LLM. But we have great static analysis tools available (i.e., what IDEs do to understand code). LLMs that have access to those tools will be more precise and efficient.

It's going to take time to integrate LLMs properly with tools. And train LLMs to use the tools the best way. Until we get there, the potential is still more limited. But I think the potential is there.
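
As a toy illustration of that kind of precision, a sketch using Python's ast module; the agent and tool-calling plumbing is assumed and not shown, and find_function_source is an invented name rather than any particular IDE or library API.

    import ast
    from typing import Optional

    def find_function_source(source: str, name: str) -> Optional[str]:
        # The parser, not the LLM, resolves where the definition actually lives.
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name == name:
                return ast.get_source_segment(source, node)
        return None

The model then spends its context on the exact snippet instead of guessing from raw file dumps.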

This recentralization trend was always going to happen because of the dynamics and economics of content delivery, medium enhancements, and market expectations. Even before the AI revolution, I foresaw a recentralization: fat servers, thin clients. So it was inevitable.
> To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end.

What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.

  • kmnc
  • ·
  • 1 day ago
  • ·
  • [ - ]
It’s a funny analogy, because what’s missing for the rainbows with pots of gold is magic and fairytales… so what’s missing for consciousness is also magic and fairytales? I’ve yet to see any compelling argument for believing enough compute wouldn’t allow us to code consciousness.
Yes, that's just it though, it's a logic argument. "Tell me why we aren't just stochastic parrots!" is more logically sound than "God made us", but that doesn't de facto make it "the correct model of reality".

I'm suspicious of the idea that the world is modeled linearly. That physical reality is non-linear is also more logically sound, so why is there such a clear straight line from compute to consciousness?

  • jibal
  • ·
  • 21 hours ago
  • ·
  • [ - ]
Consciousness is a physical phenomenon; rainbows, their ends, and pots of gold at them are not.
> Consciousness is a physical phenomenon

This can mean one of 50 different physicalist frameworks. And only 55% of philosophers of mind accept or lean towards physicalism

https://survey2020.philpeople.org/survey/results/4874?aos=16

> rainbows, their ends, and pots of gold at them are not

It's an analogy. Someone sees a rainbow and assumes there might be a pot of gold at the end of it, so they think if there were more rainbows, there would be more likelihood of pot of gold (or more pots of gold).

Someone sees computing, assuming consciousness is at the end of it, so they think if there were more computing, there would be more likelihood of consciousness.

But just like the pot of gold, that might be a false assumption. After all, even under physicalism, there is a variety of ideas, some of which would say more computing will not yield consciousness.

Personally, I think even if computing as we know can't yield consciousness, that would just result in changing "computing as we know" and end up with attempts to make computers with wetware, literal neurons (which I think is already an attempt)

  • jibal
  • ·
  • 14 hours ago
  • ·
  • [ - ]
I'm well aware that many people are wrong about consciousness and have been misled by Searle, Chalmers, Nagel, et al. Numbers like 55% are argumentum ad populum and are completely irrelevant. The sample space matters ... I've been to the "[Towards a] Science of Consciousness" conferences and they are full of cranks and loony tunes, and even among respectable intelligent philosophers of mind there is little knowledge or understanding of neuroscience, often proudly so. These philosophers should read Arthur Danto's introduction to C.L. Hardin's "Color for Philosophers". I've partied with David Chalmers--fun guy, very bright, but has done huge damage to the field. Roger Penrose likewise--a Nobel Prize winning physicist but his knowledge of the brain comes from that imbecile Stuart Hameroff. The fact remains that consciousness is a physical function of physical brains--collections of molecules--and can definitely be the result of computation--this isn't an "assumption", it's the result of decades of study and analysis. E.g., people who think that Searle's Chinese Room argument is valid have not read Larry Hauser's PhD thesis ("Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence") along with a raft of other criticism utterly debunking it (including arguments from Chalmers).

> It's an analogy.

And I pointed out why it's an invalid one -- that was the whole point of my comment.

> But just like the pot of gold, that might be a false assumption.

But it's not at all "just like the pot of gold". Rainbows are perceptual phenomena, their perceived location changes when the observer moves, they don't have "ends", and there certainly aren't any pots of gold associated with them--we know for a fact that these are "false assumptions"--assumptions that no one makes except perhaps young children. This is radically different from consciousness and computation, even if it were the case that somehow one could not get consciousness from computation. Equating or analogizing them this way is grossly intellectually dishonest.

> Someone sees computing, assuming consciousness is at the end of it, so they think if there were more computing, there would be more likelihood of consciousness.

Utter nonsense.

I can't remember the last time I heard anything about blockchain in the HN top.

Seems like AI killed blockchain just like the war in Ukraine killed COVID.

The use case for AI is spam.
It's the reverse printing press, drowning all purposeful human communication in noise.
Another major use case for it is enabling students to more easily cheat on their homework. Which is why it is probably going to end up putting Chegg out of business.
AI brought “let someone else do it for you” cheating to the middle class, no longer the domain of wealthy flunkies.
I am shocked when I talk to college kids about AI these days.

I try to explain stuff to them like regurgitating the training data, context window limits, and confabulation.

They stick their fingers in their ears and say "LA LA LA LA it does my homework for me nothing else matters LA LA LA LA i can't hear you"

They really do not care about the Turing Test. Today's LLMs pass the "snowed my teaching assistant test" and nothing else matters.

Academic fraud really is the killer app for this technology. At least if you're a 19-year-old.

These kids are only in school to get a meal ticket to white-collar job interviews. AI frees them to be honest about their intentions, rather than pretend for long enough to stumble their way into learning something.
Maybe AI will finally skewer the myth that an undergraduate degree means anything.
The coming of AI seems one of those things like the agricultural revolution or industrial revolution that is kind of inevitable once it starts. All the business of who pays how much for which stock and what price is sensible and which algorithm seem kind of secondary.
Yes it's like the internet. First we thought it was a bubble. But then it turned out that it was actually a way for companies to keep us under surveillance and to sell us stuff without giving up ownership. And expensive subscriptions.
  • p5v
  • ·
  • 18 hours ago
  • ·
  • [ - ]
And so has been every other paradigm shift in human existence.
I think the author is right that AI companies know it’s a scam, but it’s in the interest of stealing investors’ money, not consolidating land and resources through energy infrastructure buildout. Who does the author think owns that? It’s not the AI companies. It’s the same power companies that already own it.
  • wwizo
  • ·
  • 21 hours ago
  • ·
  • [ - ]
Good article. Now, would anyone tell me how to short AI?
  • Havoc
  • ·
  • 18 hours ago
  • ·
  • [ - ]
You short a big tech stock of your choice. They’re all up to their eyeballs in this and if it pops the entire tech sector will crater
It's pretty clear that the financialization aspect of AI is a bubble. There's way too much market cap created by trading debt back and forth. How well AI will work remains an open question at this point.
It's a big number - but still less than tech industry profits.
That is true, but not evenly distributed. Oracle for example: https://arstechnica.com/information-technology/2025/11/oracl...

Also, it may be true that these companies theoretically have the cash flow to cover the spending, but that doesn't mean that they will be comfortable with that risk, especially as that risk becomes more likely in some kind of mass extinction event amongst AI startups. To concretize that a bit, the remote possibility of having to give up all your profits for 2 years to pay off DC investment is fine at a 1% chance of happening, but maybe not so ok at a 40% chance.

It's often overlooked that bubbles like this actually validate themselves. It's not like the demand, revenue, or profits are fake or made up.

Investment mechanically _causes_ profits, and if you're as big as big tech is then some of that profit will be yours. In the end stupid investment will end badly, but until it actually plays out it can very much be rational for _everyone_ involved; Even if none of them are lying about anything.

Bubbles probably don't even have to hurt after the fact if the government is willing to support demand when things go south. The real cost is in the things we could have done instead. At least GPUs are genuinely useful (especially with the end of Moore's law), Energy investment is never a bad thing in the end, and those assets have very long useful lives.

> I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water.

If you just wanted land, water, and electricity, you could buy them directly instead of buying $100 million of computer hardware bundled with $2 million worth of land and water rights. Why are high end GPUs selling in record numbers if AI is just a cover story for the acquisition of land, electricity, and water?

  • bix6
  • ·
  • 1 day ago
  • ·
  • [ - ]
But with this play they can inflate their company holdings and cash out in new rounds. It’s the ultimate self enrichment scheme! Nobody wants that crappy piece of land but now it’s got GPUs and we can leverage that into a loan for more GPUs and cash out along the way.
Valid question. What the OP talks about though is that these things were not for sale normally. My takeaway from his essay is that a few oligarchs get a pass to take over all energy, by means of a manufactured crisis.

  When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body. 

He could have explained that better. Try not to look at the media drama the political actors give you each day, but at the agenda the real powers have laid bare:

- Trump is threatening an oil-rich neighbor with war. A complete, expensive-as-hell army is blowing up 'drug boats' (so they claim) to help the press sell it as a war on drugs. Yeah, right.

- Green energy projects, even running ones, get cancelled. Energy from oil and nuclear is capital-intensive and at the same time completely outshone by solar and battery tech. So the energy card is a strong one for directing policy towards your interests.

If you can turn the USA into a resource economy like Russia, then you can rule like a Russian oligarch. That is also why the admin sees no problem in destroying academia or other industries via tariffs; controlling resources is easier and more predictable than relying on an educated populace that might start to doubt the promise of the American Dream.

I did not think about it that way, but it makes perfect sense. And it is really scary. It hasn't even been a year since Trump's second term started. We still have three more years left.
Because then you can buy calls on the GPU companies
  • gniv
  • ·
  • 18 hours ago
  • ·
  • [ - ]
Remember a few years ago when people were asking what will big tech do with all that cash they were hoarding? Well this is the answer. It might be a bubble it might burst but they went all in and it's hard to disagree with their thinking.
  • arkt8
  • ·
  • 1 day ago
  • ·
  • [ - ]
I like to reduce things to absurdity to put them into perspective.

The hype of AI is to sell illusion to naive people.

It is like creating a hammer that drives nails by itself... like cars that choose the path by themselves.

So stop thinking AI is intelligent... it is merely an advanced tool that demands skill and creativity like any other. Its output is limited by the ability of its user.

The worry should be the amount of resources devoted to vanity (hammers in newborn hands) or to nails in the wrong place (viral fake content targeted at unaware people).

Like in the Industrial Revolution, when people got reduced to screw tighteners, minds will be reduced to bad prompters expecting wonders and producing bad content, or more of the same. A step back in civilization, except for the money makers and thinkers, until the AI revolution gives birth to its Karl Marx.

Ever heard of a nail gun?
Independently of the hypothesis/conspiracy that the big investors and big tech don't actually believe in AI, the measurable outcome the OP describes remains the same: very few people will end up owning a big chunk of the natural resources, and it will not matter whether those resources are used to power AI or for anything else.

Perhaps governments should add clauses to the contracts they make to prevent this big power imbalance from happening.

Let's take the highest perspective possible:

What is the value of technology which allows people communicate clearly with other people of any language? That is what these large language models have achieved. We can now translate pretty much perfectly between all the languages in the world. The curse from the tower of Babel has been lifted.

There will be a time in the future, when people will not be able to comprehend that you couldn't exchange information regardless of personal language skills.

So what is the value of that? Economically, culturally, politically, spiritually?

Language is a lot deeper than that. It's like if I say "we speak the same language", it means a lot more than just the ability to translate. It's talking about a shared past and worldview and hopefully future which I/we intend to invest in.
Then are you better off by not being able to communicate anything?
>The curse from the tower of Babel has been lifted.

It wasn't a curse. It was basically divine punishment for hubris. Maybe the reference is a bit on the nose.

You could make the same argument about video conferencing: Yes, you can now talk to anyone anywhere anytime, and it's amazing. But somehow all big companies are convinced that in-person office work is more productive.
Which languages couldn't we translate before? Not you, the individual. We, humanity?
Machine translation was horrible and completely unreliable before LLMs. And human translators are very expensive and slow in comparison.

LLM is for translation as computers were for calculating. Sure, you could do without them before. They used to have entire buildings with office workers whose job it was to compute.

Google translate worked great long before LLMs.
I disagree. It worked passably and was better than no translation. The depth, correctness, and nuance is much better with LLMs.
  • jibal
  • ·
  • 21 hours ago
  • ·
  • [ - ]
The only reason to think that is not knowing when Google switched to using LLMs. The radical change is well documented.
LLMs are not the only "AI".
  • Kiro
  • ·
  • 1 day ago
  • ·
  • [ - ]
I don't think you understand how off that statement is. It's also pretty ignorant considering Google Translate barely worked at all for many languages. So no, it didn't work great and even for the best possible language pair Google Translate is not in the same ballpark.
Not really long before, although I suppose it's relative. Google translate was pretty garbage until around 2016-2017 and then it started really improving
It really didn't. There were many languages which it couldn't handle at all, just making completely garbled output. It wasn't possible to use Google Translate professionally.
  • bix6
  • ·
  • 1 day ago
  • ·
  • [ - ]
We could communicate with people before LLMs just fine though? We have hand gestures and some people learn multiple languages and google translate was pretty solid. I got by just fine in countries where I didn’t know the language because hand gestures work or someone speaks English.

What is the value of losing our uniqueness to a computer that lies and makes us all talk the same?

  • Kiro
  • ·
  • 1 day ago
  • ·
  • [ - ]
Incredible that we happen to be alive at the exact moment humanity peaked in its interlingual communication. With Google Translate and hand gestures there is no need to evolve it any further.
You can maybe order in a restaurant or ask the way with hand gestures. But surely you must be able to take a higher perspective than your own, and realize that there's enormous amounts of exchange between nations with differing language, and all of this relies on some form of translation. Hundreds of millions of people all over the world have to deal with language barriers.

Google Translate was far from solid; the quality of translations was so bad before LLMs that it simply wasn't an option for most languages. It would sometimes even translate numbers incorrectly.

LLMs are here and Google Translate is still bad (surely, if it were as easy as just plugging the miraculous perfect LLMs into it, it would be perfect now?). I don't think people who believe we've somehow solved translation actually understand how much it still deals extremely poorly with.

And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)

What argument are you making? LLM translating is available to anybody to try and use right now, and you can use services like Kagi Translate or DeepL to see the evidence for yourself that they make excellent translations. I honestly don't care what Google Translate does, because nobody who is serious about translation uses it.

> And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)

The kind of deeply understood communication you are demanding is usually impossible even between people who have the same native tongue, are from the same town, and even within the same family. And people can misunderstand each other just fine without the help of AI. However, is it better to understand nothing at all than to not understand every nuance?

I don’t know a single developer who is NOT using LLMs in some form, so either they or their company are paying for it. And that’s just a single service - they probably have a home account, and another few different services to test things, so it’s not exactly making zero.

Lately I’ve been finding LLM output to be hit and miss, but at the same time, I wouldn’t say they’re useless…

I guess the ultimate question is - if you’re currently paying for an LLM service, could you see yourself sometime in the future disabling all of your accounts? I’d bet no!

  • Havoc
  • ·
  • 18 hours ago
  • ·
  • [ - ]
The problem is that developer spend alone isn’t nearly enough to justify valuations. There just aren’t enough of them. To have any hope in hell of this working, the man on the street needs to see a substantial and real boost, not just devs.
I know multiple including myself. Get out more!
  • ·
  • 19 hours ago
  • ·
  • [ - ]
My new thing with articles like these: just search for the word "water".

> I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water. Datacenters that outburn cities to keep the data churning are big, expensive, and have to be built somewhere. The deals made to develop this kind of property are political — they affect cities and states more than just about any other business run within their borders.

After reading the article but before seeing this, I adopted that policy. So true.
  • uriee
  • ·
  • 19 hours ago
  • ·
  • [ - ]
I was reading this piece until "As a designer"; no need to read further.
> Best case: we’re in a bubble. Worst case: the people profiting most know exactly what they’re doing.

The article literally starts with hyperbole and not being charitable at all. I'm sure there are many arguments for why AI will bring doom and gloom, but outright being dishonest in the first 7 words of the article will put off the people you actually want to read this article.

What happened to well-researched and well-argued points of view? Where you take good-faith arguments into account, and you don't just gaslight and strawman your way into some easy-to-make points for the purpose of being "sharable" on social media?

Very good article. With regards to the guy being a designer, he is IMHO still correct with regards to layouts. Currently LLMs are still pretty clueless about layouts. Also, SWE is much more than coding. Even in the coding area there is much more room for improvement. The idea that you would not need Software Engineers soon is brain dead.
LLMs raise the floor for confidence for near/offshoring in the executive class. That’s the actual sell.

Edit: in the context of SWE at least

Interesting perspective
> Meanwhile, generative AI presents a few other broader challenges to the integrity of our society. First is to truth. We’ve already seen how internet technologies can be used to manipulate a population’s understanding of reality.

Bubble aside, this could be the most destructive effect of AI. I would add to this that it is also destroying creativity, because when you don't know whether that "amazing video clip" was actually created by a human or an AI, then it's no longer that amazing. (To use a trivial example, a video of a cat and dog interacting in a way that is truly funny if it were real, and goes viral, but it means nothing if it was AI-generated.)

I believe it’s a bubble. Every app interface is becoming similar to ChatGPT, claiming they’ll “help you automate,” while drifting away from the app’s original purpose.

Most of this feels like people trying to get rich off VC money — and VCs trying to get rich off someone else’s money.

It doesn’t matter that they murdered software engineering and destroyed tens of thousands of careers; once it bursts it will be an “oops” and on to the next hype.
Could have been, or rather, they thought it would, but the open models from China that you can just run locally are changing the game. Distributing power instead of concentrating it.
  • jdkee
  • ·
  • 1 day ago
  • ·
  • [ - ]
"A datacenter takes years to construct."

Not for Elon, apparently.

See https://en.wikipedia.org/wiki/Colossus_(supercomputer)

A small catch

> Using an existing space rather than building one from the ground up allowed the company to begin working on the computer immediately.

  • qoez
  • ·
  • 1 day ago
  • ·
  • [ - ]
Best case is hardly a bubble. I definitely think this is a new paradigm that'll lead to something, even if the current iteration won't be the final version and we've probably overinvested a slight bit.
The author thinks that the bubble is a given (and doesn’t have to spell doom), and the best case is that there isn’t anything worse in addition.
Same as the dot-com bubble. Fundamentals were wildly off for some businesses, but you can also find almost every business that failed then running successfully today. Personally I don't think sticking AI in every software is where the real value is, it's improving understanding of huge sets of data already out there. Maybe OpenAI challenges Google for search, maybe they fail, I'm still pretty sure the infrastructure is going to get used because the amount of data we collect and try to extract value from isn't going anywhere.
Something notable like pets.com is literally Chewy, just 20 years earlier.
Make Class Warfare M.A.D.
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have.

> Again, I think that AI is probably just a normal technology, riding a normal hype wave. And here’s where I nurse a particular conspiracy theory: I think the makers of AI know that.

I think those committing billions towards AI know it too. It's not a conspiracy theory. All the talk about AGI is marketing fluff that makes for good quotes. All the investment in data centers and GPU's is for regular AI. It doesn't need AGI to justify it.

I don't know if there's a bubble. Nobody knows. But what if it turns out that normal AI (not AGI) will ultimately provide so much value over the next couple decades that all the data centers being built will be used to max capacity and we need to build even more? A lot of people think the current level of investment is entirely economically rational, without any requirement for AGI at all. Maybe it's overshooting, maybe it's undershooting, but that's just regular resource usage modeling. It's not dependent on "coding consciousness" as the author describes.

  • est
  • ·
  • 22 hours ago
  • ·
  • [ - ]
In the past every company had strict controls to prevent source code from being leaked to a third party.

And yet here we are.

It's not just consolidation of physical resources like land and water.

It's also the vision that we will reach a point to where _any task_ can be fully automated (the ultimate promise of AGI). That provides _any business with enough capital_ to increase profits significantly by replacing humans with AI-driven machines.

If that were to happen, the impact on society would be absolutely devastating. It will _not_ be a matter of "humans will just find other jobs to do, just like they used to be farmers and then worked in factories". Because if the promise is true, then whatever "new job" that emerges could also be performed better by an AI. And the idea that "humans will be free to engage in the pursuits they love and enjoy" is bonkers fantasy as it is predicated on us evolving into a scarcity-free utopia where the state (or more likely pseudo-states like BigCorp) provide the resources we need to live without requiring any exchange of labor. We can't even give people SNAP.

1. Yes Capitalism 2. Just waiting for the bubble to pop, when investors wake up to only Nvidia making money, and all that money will flow somewhere else.
  • ·
  • 1 day ago
  • ·
  • [ - ]
No it isn't, lol.

>I’m more than open to being wrong;

Doubtful.

>That’s quite a contradiction. A datacenter takes years to construct. How will today’s plans ever enable a company like OpenAI to catch up with what they already claim is a computational deficit that demands more datacenters?

It's difficult to steelman such a weird argument. If a deficit can't be remedied immediately, should it never be remedied?

This is literally how capex works. You purchase capacity now, based on receiving it, and the rewards of having it, in the future.

>And yet, these deals are made. There’s a logic hole here that’s easily filled by the possibility that AI is a fitting front for consolidation of resources and power.

No you just made some stuff up, and then suggested that your own self inflicted confusion might be better explained with some other stuff you made up.

>Globalism eroded borders by crossing them, this new thing — this Privatism — erodes them from within.

What? It's called Capitalism. You don't need a new word for it every 12 months. Emotive words like "erosion" say nothing but are just targeted at, like, stirring people up. Demonstrate the erosion.

>Remember, datacenters are built on large pieces of land, drawing more heavily from existing infrastructure and natural resources than they give back to the immediately surrounding community

How did you calculate this? Show your work. Pretty sure if someone made EQ SY1, SY2 and SY3 disappear, the local community, the distant community, and communities all over the planet would be negatively affected.

>When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style

To take the overwrought disproportionate emotive language out of this.

"How are private entities allowed to build big things I dont like, including power sources I dont like"

The answer is that many people are allowed to do things you don't approve of. This is normal. This is society. Not everything needs the approval of the blogerati. Such a world would be horrific.

>when the infrastructure that powers AI becomes more valuable than the AI itself, when the people who control that infrastructure hold more sway over policy and resources than elected governments.

Show your working. How are the infrastructure providers going to run the government? I believe historically big infrastructure projects tend to die, require some government inducements and then go away. People had similar misgivings about the railroads in the US; in fact it was a big bugbear for Henry George, I believe. Is Amtrak secretly pulling the strings of the US Deep State? If the US Government is weak to private interests, that's up to the good burghers of yankistan to correct at the polls. If electoral politics don't work, then other means seppos find scary might be required. Freaking out about AI investment seems like a weird place to suddenly be concerned about this.

See also: AT&T Long Lines, hydroelectric dams, nuclear energy, submarine cable infrastructure. If political power came from owning infrastructure we should be more worried about, like, Hurricane Electric. It's demonstrable that people who build big infra don't run the planet. Heck, Richest Man and Weird Person Darling Elon Musk doesn't honestly command much infrastructure; he mostly just lives on hype and speculation.

>but I’m really just following the money and the power to their logical conclusion.

The more you need to invoke "logical conclusion", the less genuine and logical the piece reads.

>Maybe AI will do everything humans do. Maybe it will usher in a new society defined by something other than the balancing of labor units and wealth units. Maybe AGI — these days defined as a general intelligence that exceeds human kind in all contexts — will emerge and “justify” all of this. Maybe.

Probably things will continue on as they always have, but the planet will have more datacenter capacity. Likely, if the AI bubble does burst, datacenter capacity will be cheaper.

>The market concentration and incestuous investment shell game is real.

Yes? And that will probably explode and we will see AI investors jumping out of buildings. Nvidia is in a position right now to underwrite big AI datacentre loans, which could completely offset the huge gains they have made. What about it? Again, you demonstrate nothing.

>The infrastructure is real. The land deals are real.

Yes. Remember to put 2 truths before your lie.

>The resulting shifts in power are real.

So far they exist in your mind.

>we will find ourselves citizens of a very new kind of place that no longer feels like home.

Reminds me of an old argument that a raving white supremacist used to push on me. That "justice" as he defined it, was that society not change so old people wont be scared by it. That having a new (possibly browner) person running the local store was tantamount to and justification for genocide.

Change is a constant. That change making you sad is not in and of itself a bad thing. Please adjust accordingly.

AI is not overhyped. It's like saying going to the moon is overhyped.

First of all, this AI stuff is next level. It's as great as, if not greater than, going to space or going to the moon.

Second, the rate at which it is improving makes the hype relevant and realistic.

I think what's throwing people off are two things. First, people are just overexposed to AI, and the overexposure is causing people to feel AI is boring and useless slop. Investments are heavy into AI, but the people who throw that money around are a minority; overall, the general public is actually UNDER-hyping AI. Look at everyone on this thread. Everyone, and I mean everyone, is far from overly optimistic about AI; instead, the irony is that everyone strangely thinks the world is overhyping AI, and they are wrong. This thread, and practically every thread on HN, is a microcosm of the world, and the sentiment is decidedly against AI. Think about it like this: if Elon Musk invented a car that cost $1 and could travel at FTL speeds to anywhere in the universe, then interstellar travel would be routine and boring within a year. People would call it overhyped.

Second, the investment and money spent on AI is definitely overhyped. Right? Think about it. If we quantify the utility and achievement of what AI can currently do and what it's projected to achieve, the math works out. If you quantify the profitability of AI, the math suddenly doesn't work out.

Seems like an apt comparison; it was a massive money sink and a regular person gained absolutely nothing from the moon landing, it's just the big organization (NASA, US government) that got the bragging rights.
The Nixon shock came soon after the moon and space euphoria ended.
The author's conspiracy theory is this:

> I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water. Datacenters that outburn cities to keep the data churning are big, expensive, and have to be built somewhere. [...] When the list of people who own this property is as short as it is, you have a very peculiar imbalance of power that almost creates an independent nation within a nation. Globalism eroded borders by crossing them, this new thing — this Privatism — erodes them from within.
In my opinion, this is an irrationally optimistic take. Yes, of course, building private cities is a threat to democratic conceptions of a shared political sphere, and power imbalances harm the institutions that we require to protect our common interests.

But it should be noted that this "privatism" is nothing new - people have always complained about the ultra-wealthy having an undue influence on politics, and when looking at the USA in particular, the current situation - where the number of the ultra-wealthy is very small, and their influence is very large - has existed before, during the Gilded Age. Robber barons are not a novel innovation of the 21st century. That problem has been studied before, and if it was truly just about them robber barons, the old solutions - grassroots organization, economic reform and, if necessary, guillotines - would still be applicable.

The reason that these solutions work is that even though Mark Zuckerberg may, on paper, own and control a large amount of land and industrial resources, in practice, he relies on societal consent to keep that control. To subdue an angry mob in front of the Meta headquarters, you need actual people (such as police) to do it for you - and those people will only do that for you for as long as they still believe either in your doing something good for society, or at least believe in the (democratic) societal contract itself. Power, in the traditional sense, always requires legitimization; without the belief that the ultra-powerful deserve to be where they are, institutions will crumble and finally fail, and then there's nobody there to prevent a bunch of smelly new-age Silicon Valley hippies from moving into that AI datacenter, because of its great vibrations and dude, have you seen those pretty racks, I'm going to put an Amiga in there, and so on.

However, again, I believe this to be irrationally optimistic. Because this new consolidation of power is not merely over land and resources by means of legitimized violence; it's also about control over emerging new technologies which could fundamentally change how violence itself is exercised. Palantir is only the first example that comes to mind of companies developing mass surveillance tools potentially enabling totalitarian control on an unprecedented scale. Fundamentally, all the "adtech" companies are in the business of constructing surveillance machines that could be used not only to predict whether you're in the market for a new iPhone, but also to assess your fidelity to party principles and your overall danger to dear leader. Once predictive policing has identified a threat, of course, "self-driving", embodied autonomous systems could be automatically dispatched to detain, question or neutralize it.

So why hasn't that happened yet? After all, Google has had similar capabilities for decades now; why do we still not go to our knees before weaponized DJI drones and swear allegiance to Larry Page? The problem, again, is one of "alignment": for the same reason that police officers will not shoot protesters when the state itself has become illegitimate, "Googlers" will refuse to build software that influences election results, judges moral character or threatens bodily harm. What's worse, even if tech billionaires could find a small group of motivated fascist engineers to build those systems for them, they could never go for it, as the risk of being found out is way too severe: remember, their power (over land and resources) relies on legitimacy, and that legitimacy would instantly be shaken if there were a plausible leak of plans to turn America into a dystopian surveillance state.

What you would really need to build that dystopian surveillance state, then, is agents that can build software according to your precise specifications, whose alignment you can control, that will follow your every order in the most sycophantic manner, and that are not capable of leaking what you are doing to third parties even when they can see that what they're doing is morally questionable.

The best AI is the hidden, silent, ubiquitous kind that just works, so you don't even notice it's there. Apple devices, and really many modern devices before the LLM hype era, had a lot of AI we didn't know about. Today, if I read that a product has AI, I feel let down, because most of the time it's a poorly integrated chatbot that, if you're willing to spend some time with it, will sooner or later impersonate Adolf Hitler and, who knows, maybe leak sensitive data or API metadata. The bubble needs to burst so we can go back to silently packing products with useful AI features without telling the world.
Seamless OCR from every iOS photo and screenshot has been magical in utility, reliability and usability.
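For the curious, that on-device text recognition is also exposed to third-party developers through Apple's Vision framework. A minimal Swift sketch of that API, not Apple's internal pipeline; the helper function name is my own:

    import UIKit
    import Vision

    // Hypothetical helper: run on-device OCR on a UIImage and return the
    // recognized lines of text via Vision's VNRecognizeTextRequest.
    func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
        guard let cgImage = image.cgImage else { completion([]); return }

        let request = VNRecognizeTextRequest { request, _ in
            let observations = request.results as? [VNRecognizedTextObservation] ?? []
            // Keep the highest-confidence candidate for each detected text region.
            completion(observations.compactMap { $0.topCandidates(1).first?.string })
        }
        request.recognitionLevel = .accurate  // trade a little speed for accuracy

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        DispatchQueue.global(qos: .userInitiated).async {
            try? handler.perform([request])
        }
    }

Nothing in that snippet advertises "AI", which is rather the point made above: the model work sits quietly behind an ordinary-looking API.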
This is what I wonder too: what is the end game? Advance technology so that we can have anything we want, whenever we want it. Fly to distant galaxies. Increase the options available to us and our offspring. But ultimately, what will we gain from that? Is it to say that we did it, or is it for the pleasure of the process? If it's for pleasure, then why have we made our processes so miserable for everyone involved? If it's to say that we did it, couldn't we just not do it and say that we did? That's the whole point of fantasy. Is Elon using AI to supplement his own lack of imagination?

I could be wrong, this could be nonsense. I just can't make sense of it.

> Fly to distant galaxies

Unless AI can change the laws of physics, extremely unlikely.

I see, Fly was perhaps the wrong word to use here. Phase-Shift to new galaxies is probably the right term. Where you change your entire system's resonant frequency, to match what exists in the distant galaxy. Less of transportation, and more of a change of focus.

Like the way we can daydream about a galaxy, then snap-back to work. It's the same mechanism, but with enhanced focus you go from not just visualising > feeling > embodying > grounding in the new location.

We do it all the time, however because we require belief that it's possible in order to maintain our location, whenever we question where we are - we're pulled back into the reality that questions things (it's a very Earth centric way of seeing reality)

Any favorite movies or TV episodes on the above themes?
  • jibal
  • ·
  • 21 hours ago
  • ·
  • [ - ]
You missed the point ... going to distant galaxies is physically impossible.

> Where you change your entire system's resonant frequency, to match what exists in the distant galaxy.

This collection of words does not describe a physical reality.

If things were left to their own devices, the end game would be a civilization like Stroggos: the remaining humans would choose to fuse with machines, as it would give them an advantage. The first tactical step would be to nudge people into giving up more and more agency to AI companions. I doubt this future will materialise, though.
There are some flavors of AI doomerism that I'm unwilling to fight: the proliferation of AI slop, the inability of our current capital paradigm to adjust so that loads of people don't become poor overnight, those sorts of things.

If you tell me, though, that "We installed AI in a place that wasn't designed around it and it didn't work," you're essentially complaining that your horse-drawn cart broke when you hooked it up to your HEMI. Of course it didn't work. The value proposition built around long dev cycles, huge teams, and multiple-9s reliability deliverables is not where this stuff excels.

I have churned out perfectly functional MVPs for tens of projects in a matter of weeks. I've created robust frameworks with >90% test coverage for fringe projects that would never have otherwise gotten the time budget allotted to them. The boundaries of what can be done aren't being pushed up higher or down deeper, they're being pushed out laterally. This is very good in a distributed sense, but not so great for business as usual - we've had megacorps consolidating and building vertically forever and we've forgotten what it was like to have a robust hacker culture with loads of scrappy teams forging unbeaten paths.

Ironically, VCs have completely missed the point by all trying to build pickaxes - there's a ton of mining to do in this new space (but the risk profile makes the finance-pilled queasy). We need both.

AI is already very good at some things, they just don't look like the things people were expecting.