With open models like DeepSeek-R1, Llama, and Qwen, closed-source models like o1, o3, and Claude, and even OpenAI's upcoming agentic Operator, knowledge work is going to be white hot, with automation in focus this year and the next.
Can we just be real and say we aren't advancing humanity with AI? It is focused on automating away our jobs.
OpenAI even admitted that their definition of AGI is:
"By AGI we mean highly autonomous systems that outperform humans at most economically valuable work." (0)
I've known this for the longest time, but I still see and talk to researchers at AI companies who believe they are furthering humanity with AI, full of optimism about the future. While I'd like to be optimistic too, let's tell the truth plainly: most people haven't actually thought about what happens when AGI comes.
The sooner we admit this to ourselves, the more people can prepare for when AGI inevitably comes.
Otherwise everybody will be preparing to fail.
(0) https://openai.com/index/how-should-ai-systems-behave/
The outcome of further automation will be to move even more capital under the control of an even smaller number of hands. It's that increasing inequality which is the problem, not AGI.
1) Mental preparation - understanding that your current career is finite (be it 5 more years or 20, it's not gonna last forever) and broadening and deepening other things in your life, such as family, friends and hobbies. I've contemplated the loss of my career hundreds of times already, if not thousands; I do it almost daily (not for hours, just a few moments of a passing thought). It's kind of the opposite of denial, and it seems helpful for me to do it.
2) Financials - an obvious one. We should all accept that quite possibly our standard of living will be lower 10 years from now. It's not a certainty, but the likelihood is high enough that we need to be more frugal - which means both trying to save money as much as we can and also learning to enjoy things that don't cost a lot of money.
3) Possible career pivot - there will be opportunities. Some parts of the labour market are starving for employees and I don't mean just the trades - there's a constant need for nurses, teachers, care etc. The problem is these jobs are notorious for burning their workers out; I don't have a solution for that.
These jobs are also not scalable and not directly profitable; they require money from other sources to even exist at a large scale. In this potential AI-dominated world, where less money is being earned by individuals and consequently fewer taxes are being paid, who exactly is going to pay for nurses, teachers, care, etc.?
It's not in the current sociopolitical/economic zeitgeist for governments to step in and create jobs; since the 80s we've moved towards privatisation, and even very socially oriented governments in Western/Northern Europe are in on it. To undo this flow in the other direction will take another generation or two of people voting for it under a new ideology.
We've been fed neoliberal policies for way too long; the contemporary Western world has been molded by them. I don't believe such a huge shift will happen fast enough if AI really does develop fast enough to replace jobs. We will live through a limbo of pain until newer generations fight against it, and usually that only happens when our collective pain is way above the uncomfortable threshold.
> Even now at least half of economic activity is government created in many developed countries
I wasn't aware of this fact; where could I find statistics about it? (If you have sources handy - if not, I can do my own research.)
Look at countries like France, quite crazy.
I assume we're currently seeing a lot of smoke and mirrors on the AI controlling those robots, but the mere physical form factor isn't enough any more.
Nurses may have longer than plumbers because many of us will want the human connection, and servos behind rubber masks are currently deeply in uncanny valley — at least, they are for me, but I've noticed from the breadth of responses to GenAI that uncanny valley is in a different place for different people.
Robots that not only have AGI (running locally, under battery power?!) but also the dexterity, flexibility, etc. to match a plumber seem a long, long way off, and would of course require something a lot more advanced than today's LLM-based AI - it would need full-blown brain-like capabilities, obviously including the mechanisms to learn a craft.
They've improved a lot since the ones decades ago. When ASIMO was state of the art, that meant being able to walk on a flat surface and climb stairs.
Now, it's more like this: https://www.youtube.com/watch?v=WlUFoZstcWg
(Though again, as this is a sales video, I expect it to be implying more than it can actually do, in the same vein as the Tesla Optimus robot bartenders but not necessarily the same specifics)
> Robots that not only have AGI (running locally, under battery power?!)
Assuming batteries are a must and we can't e.g. use mains power, that means the gap between AI reaching the power-performance envelope needed for level-5 self-driving cars and the envelope needed for such robots would be about 5 years: get the former x years from now, the latter at x+5 years.
> but also the dexterity, flexibility, etc to match that of a plumber seem a long, long way off
Single humanoid hand, solving a Rubik's Cube, real time video from 5 years ago: https://www.youtube.com/watch?v=kVmp0uGtShk
> and would of course require something a lot more advanced that today's LLM based AI
The future existence of a sufficiently more advanced AI is baked into the assumptions in the question at the top of this thread. It's the goal and driving purpose of the field.
Dunno when it comes; if I did, that would make it easier to prepare for.
> would need to have full blown brain-like capabilities, obviously including the mechanisms to learn a craft.
How confident are you about what specifically they need and don't have, and why?
I've seen people confidently say AI would need brain-like capabilities to play Chess, then when Deep Blue beat Kasparov they were saying it about Go, then when AlphaGo beat Lee Sedol they were saying it about natural language and creating pictures, then ChatGPT happened and various Transformer and Diffusion models got so good that collectively they've become a fraud risk.
Based on the observation that LLMs could get up to the level of interns or mid-university students, I suspect we've already got AI capable of learning a craft to that (mediocre) level just by training on all the YouTube DIY videos.
Based on all the sim2real work, I think it's at least plausible (not certain, only plausible) that some mediocre plumber AI bootstrapped by YouTube would also be able to improve substantially in this fashion: https://arxiv.org/search/?query=sim2real&searchtype=all&sour...
I don't think we need to solve, e.g., the question of why humans seem to be able to learn from so much less data than our AI requires to reach any given skill level. Likewise, we don't need to agree on what "consciousness" is, nor do we need to then give that to the machines.
If we're relying on pre-training for skill/knowledge acquisition, then the training set would need to include robot-POV training data for every task and scenario it was expected to succeed at, which seems essentially impossible even given a potential world simulator in which to train them.
Without continual learning, or being pre-trained for every task, past or future, it'd be perpetual groundhog day, where you coach your robo-plumber to do a task one day and have to coach it again every time the same task comes up. Of course most consumers aren't expert plumbers, so there is really no alternative but to have the robo-plumber come pre-trained for all eventualities or be able to learn on the job the way an inexperienced plumber would.
There are many other things needed to replicate animal intelligence besides continual learning (e.g. traits like curiosity and boredom, to drive an autonomous system to experiment and learn), but continual learning is a big one. We've been stuck on "whole-dataset SGD-train, then deploy" since the advent of neural networks, despite many smart folks like Hinton trying to find something better.
As far as things like flexibility go, the bar is pretty high. The robo-plumber needs to be able to lie on its back in a pool or spray of water while contorting itself in the cabinet under your kitchen sink to fix a leak when the water couldn't be shut off 100%... the real world is infinitely more messy and challenging than any simulation is going to be, and the simulation isn't going to prepare the robot for the wet/greasy/slippery/etc. physical environment in which it'd be working.
Never mind figuring out how to build human-level AGI (with learning, etc.); having it operate in real time and be battery powered is a massive challenge. A car at least has the battery being charged continuously by the engine/alternator. Real-time response by a multi-modal LLM currently requires multiple H100s or similar - probably a few kilowatts. There's no reason to suppose that even in theory it's possible to build a compact battery (or super-capacitor) capable of delivering that sort of power output for an 8-hour shift. There's more hope that future cognitive architectures and realizations (dataflow vs synchronous?) might reduce the power needed to what's available from a battery, but that'd be many decades away.
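For scale, a quick back-of-envelope in the spirit of that claim. The specific numbers (a ~3 kW sustained draw for a handful of H100-class GPUs, ~250 Wh/kg for a lithium-ion pack) are my own ballpark assumptions, not the commenter's:

```python
# Back-of-envelope: battery mass to run a multi-GPU inference rig for a shift.
# All figures below are ballpark assumptions, not measurements.
power_draw_kw = 3.0                # assumed sustained draw of the compute
shift_hours = 8.0                  # one working shift
pack_wh_per_kg = 250.0             # typical lithium-ion pack energy density

energy_kwh = power_draw_kw * shift_hours            # 24 kWh for the shift
battery_kg = energy_kwh * 1000.0 / pack_wh_per_kg   # ~96 kg of cells

print(f"Energy for one shift: {energy_kwh:.0f} kWh")
print(f"Battery mass needed:  {battery_kg:.0f} kg")
```

Call it roughly a hundred kilograms of cells just to power the compute for one shift, before a single joint moves.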
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
GPT-3 is over 4 years old. The writing was on the wall then. FAANG status-strivers had plenty of warning to maybe stop blowing their TC on impressing other people and choose instead to sock something away.
Most chose not to, because living the "right" lifestyle was far more important than prudence. It is for these people that just deserts are in store.
The Social Network and its consequences have been a disaster for the software engineering race. It marked the Eternal September when this profession was inundated with greedy normies who don't care about software, but rather their wealth and status. The end of this era could not come quickly enough.
Going back to the industrial revolution the people who control capital and production have openly sought to replace human labor with automation. Nothing new, just that this time it threatens “knowledge workers” and the techie HN crowd. If you hear someone claim that they want to “advance humanity” with automation they profit from you can safely assume they don’t mean that sincerely.
Are you ready for your Brave New World? :^)
It may be more like SMBC: https://www.smbc-comics.com/comic/ai-13
It seems to me, that the path we are on will lead us closer to a Terminator-like future, and less likely to lead to a positive future.
Our leadership are greedy fools.
Why? Because our government hasn't seized the assets of the AI companies, like intellectual eminent domain? This is brand-new technology whose eventual impact on society nobody knows (hence the discussion in this thread). And if the government did take it over, who would continue to build it, earning a govt salary?
It is said that AI will make 200 million jobs redundant. China's workforce alone shrunk by 80 million during the previous decade according to their 2020 census.
Overall I think we need to increase efforts with AI, particularly to make it more energy efficient, as it's currently simply expensive to run, if we don't wish to bear the consequences of a globally ageing literate population.
They're going to change. Where humans sit in the loop and how they program will change, probably into a good blend of procedural control and object-oriented processes, where the agents are treated as objects and process flow is defined in the procedural methods. But don't freak out; there's still a ton to do.
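As a minimal sketch of what that blend could look like (all names here, `Agent` and `review_pipeline`, are hypothetical illustration, not any real framework):

```python
# Minimal sketch of "agents as objects, procedural flow".
# Hypothetical names throughout; run() stands in for a real model call.
class Agent:
    def __init__(self, role: str):
        self.role = role

    def run(self, task: str) -> str:
        # Placeholder: a real implementation would call an LLM here.
        return f"[{self.role}] handled: {task}"

def review_pipeline(change: str) -> str:
    """Procedural control: a human-authored method defines the process flow,
    while the agents are just objects that the flow invokes."""
    coder = Agent("coder")
    reviewer = Agent("reviewer")
    draft = coder.run(f"implement {change}")
    return reviewer.run(f"review {draft}")

print(review_pipeline("add retry logic to the upload job"))
```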
There are many design principles we can leverage to enhance human participation and fulfillment instead of just replacing people. Lights-out manufacturing hasn't taken off; why would we think a lights-out society would work any better?
Edited for formatting.
It's harder to do than it looks. Offshoring to places with lower wages has kept wages for many white- and blue-collar workers in the West from growing. But it is hard to do correctly.
I suspect AI or AGI or AI with agents will take a longer time to actually work reliably than many suspect.
Not sure how you (as a worker) want to prepare for that - maybe by becoming depressed, quiet and atomized in advance. US leaders (political and big tech) seem to be preparing exactly for that now, but my assumption is you are not one of them.
Between those two points is a period of instability. Judging from history, many people will be impacted, and those people may not transition to new jobs. Though in my case, if I were to lose my software engineering job because AI had become good enough to replace me completely, then my ego thinks we'd be much closer to superhuman AI than I thought, and all bets are off. Software developers don't just write code. They think, and think, and think, and second-guess, and look at the larger stack impact, and scaling, and so on. AI may well get to that level, but IMO there is a reason we still don't see robots fixing plumbing, and it is that the reasoning part of these jobs is much deeper than some people realise.
Once we get to the point that huge numbers of white-collar reasoning workers can be replaced, then I optimistically think we'll also be closer to reducing scarcity. What I'd hope for is that we would adjust to other types of work that are hard to predict at the moment, just as every other leap in technology has resulted in. I know the strong narrative is that AI can simply do those too, but try to imagine a world where we have superhuman ability to produce but no consumers, because no-one has the means to pay for the produce. That just doesn't work. These so-called "elites" will have no-one to sell to. So it seems self-evident to me that a balance will be found. If it isn't, people will elect governments who force that balance. Maybe a rough period lies ahead, but I don't subscribe to the doom theories on the economic side.
What I do unfortunately subscribe to is doom of another type. Once near superhuman AI is possible, then it feels impossible to stop it from being widely available, eventually. And given how many humans there are on the planet, I feel there will be some who are deranged enough to use the technology for full planet wide destruction of some sort. I think this feels almost close to inevitable.
I think the people who will be most easily displaced are those that don't have an additional combined skillset. I see a lot of software engineers with a CS background whose skillset is only writing software. I think people from mixed backgrounds are likely to do better in a big disruption since they've got another thing to jump to, perhaps in a different industry.
If you could actually get a respectable degree, or skills on a CV that matter for anything other than the junior-most positions (if at all), you would see higher completion rates.
The quality of the material itself is often pretty good. That varies from course to course, of course, but I must say I only started understanding electronics after an edX course, despite having had it at university.
but if the cost of doing so is so much less than the in-person experience, then it will become the norm, while the in-person experience becomes relegated to the wealthy like those who can afford private colleges today. And not that many professors will be needed, so good luck!
Every ~5 yrs I've had to reinvent myself.
So, what about the disruption from AI? Just suck it up, ok? You have to reinvent yourself anyway if you are in the tech sector. Start moving already. Every tech disruption is a challenge and opportunity.
Sure, this won't be true for every role and organization, but for many this will definitely be true.
There was a time when cutting grass was done with scythes, specially shaped blades on long handles. All day back and forth swinging the blade to get the grass just the perfect length. Swing, take a step, swing. It was backbreaking labor in the sun. So lawns were probably quite expensive, and having your own was probably pretty flashy, or maybe they were public, provided by the state to the people, or maybe both.
Along comes the string of inventions that led to lawnmowers. Now, anyone can mow grass in an afternoon. Gone are the lawn-scything jobs! Did the sky fall then?
Of course we know how this played out. Some lawn-scythers lamented the loss of their work, but they’re forgotten to history. Other clever lawn-scythers went and trained as lawn-mowers, while a few even cleverer ones went and became lawnmower mechanics, a few became engineers so they could start to design lawnmowers, and some started lawnmower factories and employed plenty of workers making way more lawnmowers than anyone thought possible.
Lawnmowers aren’t free but they’re super cheap and getting cheaper all the time, they’re abundant, and the real cost now is labor, which is going up all the time. So what do you do? You pay a lawnmowing person to take care of your lawn while you work your high paid office job.
Whenever someone says AI is coming for the jobs, ask them which exact AI model is coming for the jobs; which tool built by which startup it is that’s coming for the jobs, and if so why are they hiring?
These lawnmower/textile-worker analogies fall flat because this time might actually be different.
Some knowledge work has more leverage than other knowledge work. We don’t have a term for this distinction right now. But the low-leverage work will get automated, leaving us with more pleasant, higher-level, more creative jobs. This is just a continuation of the trend that saw most people leave the factory and sit down at air conditioned desks.
Knowledge workers can keep up if they stop thinking of themselves as workers and start thinking of themselves as automators of knowledge work.
Some office Karen who calls the Google homepage "the internet" and struggles to use the shift key to capitalize (true story) is not going to automate anything whatsoever in a world where even tech people struggle to keep up.
Or, put differently: yes, jobs, but at a different level entirely. So we're still going to be stuck with a huge societal problem of displaced people with no place to go.
> But the low-leverage work will get automated
Is this accurate?
In my mind, whether something can/will get automated is not linked to the level of leverage it has.
For a relatable present-day example of the structural unemployment you're alluding to, consider the typical HN comment by the person who sent 500+ resumes only to get no response and/or be ghosted. Whatever skills they're plying aren't needed by employers - even though they're software skills. That implies there exist sub-categories of software skills that are not needed in today's economy. It's possible for anyone to get left behind if they don't upskill. And yet unemployment in the US is insanely low: https://www.cbo.gov/publication/59431
How do you square that circle? I don’t think there is or will be structural unemployment, but there’ll always be anecdotes to support fearmongering about it.
> They’re delivering your food and keeping the streets swept
And when we automate those? If you keep automating stuff eventually you run out of places to put the displaced people. Does that not seem intuitively logical to you?
Upskilling doesn’t magically solve that problem. It just saturates a different level. Bit like going to Uni used to be a certain win now less so because everyone is going to Uni.
How many mass AIs are there going to need to be?
Who’s deciding what to use it for??
Who’s auditing the mass AI’s output?
Who will you call when the AI inevitably go off the rails?
Who will debug the mass AI that went off the rails so the lessons can be learned and integrated into the next generation of mass AI?
Who will build, deploy, and supervise that next generation of mass AI?
These and other questions, attempted thoughtfully one by one, should make it harder and harder to hold on to a belief that we can ever run out of work for humans. The work will keep getting better as more is done for us - more creative, and less drudgerous; but it’ll never stop being directed by us because we are the interface to the human world.
The question remains: what will the other 90% who aren't needed anymore do in order to survive economically?
My point is there is even less of a pressure relief valve this time.
Please just think about it a bit. Everyone seems to be thinking "I'll lose my job!" but what you need to be thinking is "everyone loses their job".
People realize the end goal, but it's naive to think that will happen overnight.
The road there could get really messy, with a lot of wealth redistribution while a few still hang on to their jobs. Not to mention the truly dystopian scenario where a few companies hold all the compute (and power).
They have, and they will reap the benefits as the "alphas"; what happens to the rest of humanity is someone else's problem.
IT, doctors, lawyers will be mostly gone. Probably most office worker jobs that I’m not thinking of.
Best thing to do is become a nurse, that’ll be bulletproof until AGI makes progress on robotics.
You go to 90% of doctors, you give them your symptoms, they ask you questions based on your symptoms and vitals, possibly order tests and imaging, and come up with a diagnosis based on those, using knowledge from their university days and the medical cases they've seen so far.
AI would be hundreds of times better than any doctor, since it'll base its diagnosis on thousands of years of collective medical cases and texts, instead of the couple of decades that individual doctors can muster up.
There's still the surgery part of it, where some doctors will stick around for a while, but surgical robots have been getting really good at that too; they'll likely be the first robots to put humans out of jobs.
It's where the doctor goes in and breaks the bones in your foot and resets them.
Here is the description of it.
https://www.ncbi.nlm.nih.gov/books/NBK551713/
Would you trust a robot to do that?
Have you known anyone who had to deal with diabetes? That’s the only major, constant care I know about from relatives besides more minor things like eye surgery.
Genuinely asking: do you see LLMs creating jobs? I see them taking 10 jobs to create one.
Sure there might be a world down the line where one does NOT need jobs and everything is practically free because AI makes infinite of it. But the transition period could be quite painful
Well put. And what is this broader process? Those who say new jobs will always be created as others become obsolete think that the broader process is standing still: job titles change, average wealth and comfort increase, but the structure of civilisation never changes. That we'll all still have jobs and won't become unemployable. Eternal stasis sounds absurd to me.
But I don't assume that the change that happens will be bad. And I don't think all jobs will go. I think we'll end up with a fraction of people employed and well off, such as tradespeople, while the majority will busy themselves with unpaid labour. I already do.
As a society? No. For those individuals that were caught up in the transition? Probably.
Sure, we'll have work for people taking care of children and the elderly (and even this might start to fade away as humanoid robots become cheaper and more capable; staff losing it and beating up children is a frequent occurrence where I'm from), but what about the tens of millions of other jobs we all do?
I'm talking about the theoretical case where AGI is in fact achieved. For now, we're not there at all.
Both OpenAI and Anthropic think it's a question of a couple of years at most. Gwern, who has a good track record of predictions, thinks so too. We're pretty much there.
I arrived at the same conclusion independently. Not because I'm a genius like Gwern and Could Work at a Big Lab if I Wanted, but because it is in fact pretty obvious. People focus way too much on the current limitations of current models, and not enough on their strengths. The core strengths of current LLMs are already superhuman (speed, memorization, ability to navigate long context) by a significant margin. Their overall ability is heavily constrained by their weaknesses (mainly planning; hallucinations are a non-story). There are known solutions to this. AlphaZero is "training to plan" and predates GPT-2; you just have to adapt it to the LLM paradigm. What did you think AlphaProof was? An idle experiment just for fun?
The only hope now is that there's some unidentified and unforeseen weakness, where LLMs currently sit well below human level but it has been obscured by the lack of planning, that significantly limits the capabilities of the planned models in the same way planning limits current models, and that has no obvious solution. But at that point that's just copium.
Or that we collectively fucking wake up, realize that "we're pretty much there" and "we don't want this", and do something. But at that point that's just copium too: the governing elites got the memo and are okaying this (cue Andreessen in the Trump camp, and the last Biden EO ordering the fast-tracking of AI-scale datacenters in the Democrats' camp).
The big issue is that the people who don't want this (the vast majority of people, I believe) think we're not there, and the people who think we're there want this (the labs, the two big parties).
It does not help that the usual human irrationality kicks in heavily on this topic. "I don't like AI and its consequences" => "I enjoy disparaging AI and consuming content disparaging AI" (look at those 10 epic fails of ChatGPT!) => "I'm going to inflate their weaknesses and downplay their strengths" (I can't believe it can't count the r's in strawberry!) => "Yes, AGI is Very Bad News, but it's not there; have you looked at where we're at?". Expect a lot of Pikachu faces in the following months/years.
2028 was the initial timeline given by, I believe, Shane Legg. He recently said he's on track. You'd better give some credence to that, or wake up to some very, very nasty surprises very soon.
This tech is not expected to be that expensive; it will most likely become widely deployed and available to the masses. Sure, you still need MRI machines and nurses to give you meds etc., but getting a diagnosis will be improved, and possibly we won't need as many doctors, so costs will go down. Only today I read Demis Hassabis saying that in 2025 A.I-developed drugs will go through clinical trials. If we have AGI we can accelerate real breakthroughs in treatments for cancer, diabetes, etc.
> or your healthcare costs exceed what you're willing and able to pay
I know the U.S is different, but in most of the developed world you get decent basic health care regardless of your employment status / economic value. We've decided collectively that some things should be available to everyone.
> And who's going to spend time learning anything when the AI is deployed to do all the possible work anyway?
That's a good point. If we do reach true AGI there won't be much point in traditional learning. I think a greater emphasis will be put on emotional development. I think A.I assistants can do a great job at that, actually.
There are a lot of reasons. Not in order of importance, just in the order of what comes to mind first:
1. While Real World Interactions (robotics, autonomous driving, factory automation, ...) are somewhat parallelizable with Purely Digital AGI (games, text, videos, programming, ...), it is way easier to do AGI first and Real World Interactions second. This is why you see the Big Brains and the Big Money going to Anthropic/DeepMind/OpenAI. If you have AGI you have Waymo. So, predictably, OpenAI/DeepMind/Anthropic will go faster than Waymo.
2. The source of the difficulty gap is easy to understand. It is hard to parallelize and scale experiments in the real world. It is trivial in the digital world; it just takes More Money. AlphaZero is an AI engine playing dozens of millions of games against itself, eventually reaching superhuman capability in chess and Go (see the toy sketch after this list). Good luck doing that with robotics/cars.
3. "I learned to drive faster" : It is unknown how much bits of priors evolution have put in the human brain (we don’t even know how genes encode priors — a fascinating question). It is certainly not zero. Evolution did that hard work of parallelizing/scaling the "learning to interact with the world" before you were even born. Hell, most of the work on this problem was probably already completed by the start of the mammalian line. No wonder you find this easy and Waymo find this hard. It is not that the problem is inherently easy and "how bad are AI are to fail this simple promble ?" It is that you are custom-tailored-built for it.
4. We have higher standards for AI than for humans, and regulation reflects that.
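To make point 2 concrete, here is a toy sketch (mine, not the commenter's): an epsilon-greedy learner on a 10-action game standing in for self-play. The environment is pure code, so a million "games" is just a compute bill, which is exactly what you cannot do with physical pipes or cars:

```python
import random

# Toy stand-in for learning-by-playing -- nothing like real AlphaZero,
# just an epsilon-greedy learner on a 10-action game.
def play(action: int) -> float:
    # Hidden rule of the toy game: action 7 wins 80% of the time.
    return 1.0 if random.random() < (0.8 if action == 7 else 0.2) else 0.0

def train(num_games: int = 100_000, epsilon: float = 0.1, lr: float = 0.05):
    value = [0.0] * 10                       # value estimate per action
    for _ in range(num_games):               # each game is independent, so
        if random.random() < epsilon:        # this loop parallelizes trivially:
            a = random.randrange(10)         # explore a random action
        else:
            a = max(range(10), key=lambda i: value[i])  # exploit the best
        value[a] += lr * (play(a) - value[a])            # incremental update
    return value

print(train())  # value[7] converges toward ~0.8; the others stay near ~0.2
```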
> con: losing your job. pro: the best health care you can imagine. the best education for your kids. etc etc
The con is that humanity is going to lose pretty much any influence on the future. "Losing your job" is a pretty bad way of picturing it.
It is a frustrating topic. Let me try to explain the stakes to you in a few words, starting with this image:
https://en.wikipedia.org/wiki/German_revolution_of_1918%E2%8...
It's a communist militia in Berlin in the early stages of the Weimar Republic. The specifics don't matter; you don't have to judge who was right or wrong. I could have taken a picture of the proto-Nazis, or the SPD, or anyone really. The story is the same.
Why are those humans here? In the cold, in a potentially dangerous situation? What's going on in their heads?
"This is an important moment. I have to be the Best Person I can be, take the Best Actions I can take. If I am right, and I do this right, my actions will help better my future. It will help my family. My neighbor. My community. The World. My actions and my choices here Matter. I Matter."
Those two words, "I Matter", are I believe a fundamental requirement of what it is to be human. To my great surprise, there are people who actually actively disagree with that. "Mattering does not matter very much." Maybe you are one of them, I don't know, I don't know you. Those people should indeed accept and welcome the AGI. No human will matter anymore, but who cares? Great healthcare, great education, great entertainment.
But AGI being way better in all cognitive domains (Business/Economy, Policy/Politics/Governance, Science, Arts, ...) means exactly this: humans will no longer have any place in those domains, and this "I Matter" feeling will be lost forever.
EDIT: I forgot a point:
> and I'm not sure 'trend predictions' work that well
It's not trend prediction. It's engineering. Roadblocks have been identified. Solutions to those roadblocks have been identified. Now we are just in the phase of implementing those solutions. Whether those solutions are sufficient to go all the way to AGI is a bit more speculative, but the odds are clearly on the "yes" side.
Yes, there will be a crisis of meaning; in fact, in most secular societies there already is one (how much meaning can you derive from preparing a balance sheet or handling customer support tickets?). Some societies will deal much better with unemployment - mostly religious societies. If we can create societies of abundance (where you get most services pretty much for free thanks to A.I), I think we will solve the crisis of meaning with family, friends, hobbies and really good next-generation T.V and computer games.
In the grand scheme of things most of us understand we don't matter at all (at least I don't think I matter in any significant way, nor does humanity as a whole imo), but we do need a reason to get up in the morning, somewhere to go and interact with society. We matter to our families and close friends if it makes you feel better.
This is the most likely reason that we won’t see widespread adoption of autonomous vehicles in the next 30 years.
Around 120 people die in automobile accidents every day and people shrug. But if 12 people died in accidents caused by autonomous vehicles in a year (yes, I changed units), they would quickly be banned.
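Making the unit mismatch explicit (using the commenter's roughly US-scale figure; a fair comparison would also adjust for miles driven, which this deliberately ignores):

```python
# The scale gap in the comparison above, spelled out.
human_deaths_per_year = 120 * 365        # 43,800 at 120 per day
av_deaths_per_year = 12                  # the hypothetical AV figure

ratio = human_deaths_per_year / av_deaths_per_year
print(f"{human_deaths_per_year:,} vs {av_deaths_per_year}: "
      f"~{ratio:,.0f}x fewer deaths, and they'd still get banned")
```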
I think this is my main disconnect with the pessimists, I don't see "stop AI progress" as a valid option anyway.
> If we are to survive and thrive long term we need to become true masters of our environment, and that means we need to be smarter, stronger and more productive.
Yes. I fundamentally agree with that vision. We want this for us. We/Us humans.
If we build AGI we won’t thrive. We won’t be smarter. We won’t be stronger. We won’t be more productive. We won’t be masters of our environment. The AI will be all of this. We’ll just be relegated to passive, helpless spectators.
The world is chaotic. We have no way to predict that this is what it will end up like.
One possibility is that AGI remains a tool, without a mind or direction other than what we give it. We may manage to constrain it in that manner, and it is used to do "work" in the sense of solving mathematical problems, engineering problems, etc. We could end up with a reduced scarcity (if not post scarcity) scenario, where people do other things for meaning (come on, use your imagination here, plenty to imagine).
I could even imagine neural implants and regenerative technology. Meaning suddenly we're not so constrained by our physical limitations. What possibilities does that unlock?
I sometimes feel that the doomers lack imagination.
"Solve for the equilibrium" is the name of the game. "AGI remain a tool" is nice, until you notice that some people will use it lightly (solve some day-to-day problems I encounter when running my business, but I still do most of the work and decisions), some people will use it less lightly (I give you complete access to my mail box and my bank account — now run my business for me), and the first one will get absolutely crushed by the second in any market that is remotely close to free.
Even if you can ensure that AGIs remain tools, you just get to the same result, "AGI controls the future", with an additional step of nominally-in-charge humans rubber-stamping decisions made by AGIs.
> I sometimes feel that the doomers lack imagination.
It’s really the opposite.
I have asked a lot of people, "doomers" or not: "what is the positive vision of a future where there are AGIs around that dominate humans in all cognitive tasks?". All the answers feel superficially good and correct, until you actually use your imagination to poke and probe at them. It always falls apart. The only exception is those accepting: "humans not being in control of the future is good, actually; I don't mind having infinite entertainment and zero responsibility". I have nothing to reply to that except: I personally abhor that picture.
Did you really think I don't have enough imagination to generate by myself the hypothesis "what if AGIs stay tools?". Of course I have. It's just that I have used my imagination to dig deeper into that image. And the result is still not the pretty happy story it initially looks like.
Once you generate a dozen such hypotheses, dig, and find the same outcome, you start to get the feeling that it is not exactly an accident.
It is not a lack of imagination. The problem is fundamentally extremely hard, possibly impossibly hard.
> The world is chaotic. We have no way to predict that this is what it will end up like.
I strongly disagree with that. Chaotic behavior on the micro-scale is not incompatible with predictability on the macro-scale. Almost all science is like that, actually. Even though you can't predict when and in which individuals a particular mutation will arise, or the exact life paths of those individuals, you can still have laws relating relative fitness and fixation rates, and the prediction that, eventually, a beneficial mutation will fixate in a population.
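To make the fixation claim concrete, here is the standard result (my addition, Kimura's diffusion approximation; the commenter cites no formula): for a new mutant with selection coefficient $s$ in a diploid population of effective size $N$,

$$P_{\text{fix}} = \frac{1 - e^{-2s}}{1 - e^{-4Ns}} \approx 2s \qquad (0 < s \ll 1,\ N \text{ large}).$$

A macro-level law like this holds regardless of which individual carries the mutation or when it appears: exactly the micro-chaos / macro-predictability split being described.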
Everything you know works like that. Electromagnetism does not require knowledge of individual photon/electron interactions to write down Maxwell's equations. Thermodynamics does not require detailed knowledge of molecular dynamics to relate quantities like pressure and temperature.
Today we summarize the natives/settlers conflicts as "Eventually the settlers displaced the natives from their lands". And while it leaves some nuances out, it is not a bad summary. If you ask a human living through that period, he will say what you say: "the world is chaotic, who can predict what the outcome will be?". Actually, the overall outcome was pretty obvious given the significant technological gap. Details may be surprising; the big picture isn't.
Edit: I focused too much on AGI in this message, not enough on humans. The problem is in the end mostly a human problem. Accepting diversity - true diversity, "I abhor your vision for the future, but I agree you should have a place to realize it with other like-minded people, as long as you don't interfere with other people realizing their vision of the future" - is the cure. If humans can accept genuine diversity, I believe there are pretty straightforward solutions that are robust to at least my level of poking and probing. I do not expect humans to be able to accept genuine diversity, though, and the more I reach out to other humans, the more I despair on this particular axis. It's very easy to make a human say "Yes, Diversity is Good" in the abstract. Then you start to paint some concrete pictures, and the mask falls off: "actually, the diversity I like means accepting some people slightly, superficially different from me".
I'm not all that bothered about stopping AI progress, but if capital is overwhelmingly directed towards replacing blue and white collar workers with machines while climate change and every other existential threat is starved of capital then as a species we are doomed.
It's not like this is being done for the benefit of humanity either. Ask those same profit-obsessed oligarchs (or their media outlets), who are effusive about AI's ability to create a better future, what will happen if the retirement age is raised in response, or even kept the same. They do a 180 on their techno-jobs-killing optimism so fast you will get whiplash: https://www.forbes.com/sites/dandoonan/2024/01/30/demographi...
My sense is that there is a lot of compute capacity, data acquisition, etc., required to create any dent in this type of automation.
Likely the next 2 years are focused on more "compute", "data center capacity", and "energy". That allows for a) better training and b) more inference load, and hence prepares for more automation.
Much like the Y2K era of 30 years back, we are entering the new era of YNA, an era whY Not Automate.
We are doing both!
You are assuming people like to work, and jobs are something good. Without jobs, people will have less stress and more free time. They will spend more time on leisure activities, with their families, and so on.
Also, jobs contribute to carbon emissions: we have to maintain cities and large offices, and cars make CO2 for commutes...
And how will they pay for rent, food, etc.? __That's__ the question.
Not all bad.
The second half feels like wish fulfillment.
I honestly don't see how you get there from here.
But from an individual perspective I don't think that is the case. Since AlphaGo was first released and beat world-class players, have all those players gone? Not really; it has even encouraged more people to study Go with AI instead.
As a software engineer myself, do I enjoy using AI for coding? Yes, for a lot of trivial and repetitive work. But do I want it to take my coding work fully away, such that I just type natural language? My answer is no. I still need the dopamine hit from coding myself, whether for work or for hobby time, even if I am rebuilding wheels other folks have already built, and I believe many of us are the same.
The guy who got beaten literally decided to retire immediately afterwards, explicitly because AI displaced him:
> On 19 November 2019, Lee announced his retirement from professional play, stating that he could never be the top overall player of Go due to the increasing dominance of AI.
I do get your point, though, that the overall player count is still fine.
1) Land
2) Natural Resources
3) Energy
4) Weapons
5) Physical/in-person Labour
That means that, unless there are big societal changes, you better have at least a few of those available to you. A huge percentage of Americans own no land nor investments in durable assets. That means at best, they are thrown into a suddenly way-oversupplied physical labour market in order to make a living.
At the geopolitical level, a country like the United States, with so much of its current wealth tied to amorphous things like IP, the petro dollar, and general stability, certainly has all 5 of the above categories covered. However, there's a lot of vulnerable perceived value in the stock market and housing market which might vanish if the world is hit with a sudden devaluation of labour. Even if the US is theoretically poised to capitalize off of AGI, a sudden collapse of the middle class and therefore the housing market and therefore the entire financial sector might ruin that.
UBI is the bare minimum I can imagine preventing total societal collapse. Countries with existing strong safety nets or fully socialist systems might have a big advantage in surviving this. I certainly feel a certain sense of comfort living in Quebec, where although many aspects of government are broken, I can at least count on a robust and cheap hydroelectric-supplied grid and reasonable social safety nets. Between AI and climate change, I feel like there are worse places to be this century.
This is not just for businesses but for everything in life.
For example, we all buy robot vacuums, right? People even rearrange furniture to make sure it is compatible with the robot vacuum. Everyone wants it to do more and be more reliable.
Seems to me that AI (today) allows amateurs to generate low quality professional work. Cheaply. Disrupting the careers of many creative professionals. Where does that lead?
Some people believe AGI is imminent; others believe AGI is here now. Observe their behavior, calm your anticipation, and satisfy your curiosity rather than seeking to confirm a bias on the matter.
The tech is new in my experience, and accepting claims beyond my capacity to validate such a grand assertion would require me to take on faith the word of someone I don't know and have never seen, who likely generated such a query in the first place beyond the context length of ChatGPT mini.
We could hide behind "we can't predict the future" but it would be wise to get ready for that inevitability.
One day you will ask your computer to "open a word processor" and it will pull a fully-featured Word 2013 out of thin air. What will developers do then?
That day could be March 1st, 2025 or 2050. Many of us will likely still be in the jobs pool either way.
In hunter/gatherer days within small bands, and also to a large extent within families (but perhaps a bit less now) the method was Generalized Reciprocity[1], where people basically take care of each other and share. This was supported by the extremely fertile and bountiful forests and other resources that were shared between early people.
Later, the large "water monopoly" cultures like ancient Egypt had largely Redistributive economies, where resources flowed to the center (the Pharaoh, etc.) and were distributed again outwards.
Finally we got Market Exchange, where people bargain in the marketplace for goods, and this has been efficiently able (to a greater or lesser degree) to distribute resources for hundreds of years. Of course we have some redistributive elements like Social Security and Welfare, but these are limited.
Market Exchange now relies on basically everyone having a Job or another means of acquiring money. But with automation this breaks down, because jobs will more and more be taken by the AIs and then by robotic AIs.
So only a few possibilities are likely: either we end up with almost everyone a pauper and dole payments increased just to the point where there's no revolution, or we somehow end up with everyone owning the AI resources and their output, which would play the role the forests and other ancient resources played in hunter-gatherer days, and everyone can eventually be wealthy.
It looks as if, at least in the USA we are going down the path of having a tiny oligarch class with everyone else locked out of ownership of the products of the AI, but this may not be true everywhere, and perhaps somehow won't end up being true here.
[1] Stone Age Economics by Marshall Sahlins https://archive.org/details/stoneageeconomic0000sahl/page/n5...
We are already headed towards that in every major economy, and all the others I know of too. Wealth and power started concentrating before AI, so it's not just AI.
The group that has the wealth and power has the power to block change. They can distract the hoi polloi with scapegoats (e.g. immigrants), culture wars, doomscrolling, day-to-day survival and a lot more.
The current trajectory is towards a global oligarchic class.
That is a huge political change that will be strongly resisted. Do you recognise the quote "to each according to his needs"? It is what you are suggesting, and the very opposite of the thinking that dominates the world (not just the US, these days).
> We will live in interesting times.
Certainly, but it will be very turbulent, and very likely violent. Even previously stable societies are clearly a lot less stable if you look at the political changes and the hostility between different political groups (the move from those on the other side being just "wrong" to being "evil" - "worse than Nazis" to quote a prominent British politician). I would say we live in scary times.
Yes, of course I did see the parallel, but I think raking over old quotes doesn't help keep a clear mind. Wikipedia's article on the February Revolution describes Russia's failure to
...modernise its archaic social, economic, and political structures while maintaining the stability of ubiquitous devotion to an autocratic monarch.
Sounds familiar?
Of course communism is not the only value system. I remember a Yes Minister episode (I cannot recall which one) in which someone quotes that and Hacker guesses the source of the quote is Jesus. Not an unreasonable guess.
The problem is that whatever values you base the justification of the change on, they have to be radically different from that of the current political consensus.
> Wikipedia - February Revolution describes Russia's failure to
The problem is what that led to. Not something I want to live through, or want my children to live through.
I am definitely not suggesting communism. What I laid out was the problem and how, without insurrection, it could be mitigated.
No because that's not real. AI will automate jobs but also advance humanity.
It's like saying: can't we be real and admit that tractors and farm machinery were not advancing humanity, they were just replacing farm jobs? They replaced some farm jobs, but they also provided food abundance and let people go off and work as personal trainers and the like to help people lose the weight. And work as scientists, artists, etc.
Simplistic take: a major impact today is that amateurs using AI can generate tons of low-quality professional work for peanuts. Overall quality suffers, but profits soar, or rise enough. Where does that lead?
Hang on there! The industrial revolution brought people into _worse_ labor conditions -- it wasn't technology that created better work environments, it was worker protests that did so, and it took decades.
These are the same thing.
Automation does advance humanity. The reason for the current world prosperity has been lots of automation.
(There is the separate and much more concerning risk of humans going the way of the horse, but that does not seem to be what you are concerned about)
Cheap AI labor will depress wages and therefore reduce consumer demand. This is not a recipe for GDP growth or a vibrant economy, any more than outsourcing to reduce labor costs has been.
The people trying to sell AI as good for the economy will no doubt tell you that companies don't want to reduce your salary or lay you off - that they will be happy to keep payroll the same, or increase it (increased consumer spending = GDP growth!) by employing you as a highly paid AI whisperer. Yeah, no, companies are looking to reduce payroll.
I believe the goal is to advance humanity by automating out jobs.
I don't know if that is wishful thinking or whether it will actually work (Nash equilibria and politics make me suspect the former), but the idea that it was possible to do both at the same time was already a widespread meme in the spaces that culturally drive the sort of AI research we now see.
The idea that we can have both is also why optimistic people sometimes bring up (amongst other things) UBI in this context.
I'm really not sure what an individual can do to "prepare" for AGI; it's like trying to guess in 1884 if you should bet on Laissez-faire USA, the British Empire, or that manifesto you've heard of from Marx a few decades ago. And then deciding that, no matter what, the world will always need more buggy-whips and humans will never fly in heavier-than-air vehicles.
Nope. People will prepare as much as engineers care. But engineers don't care. Educating the people is tedious. It's easier to "manipulate"/direct them, which is the job of representatives, who are about status and money and being groomed by the industries, who are exclusively about money.
People are fine without AGI until they are not. That's another 15 years at least.
If you want to worry, worry about local solutions to climate change mitigation where you need old school manpower, big machines, shovels and logistics.
Then robots will become a thing, and the same goes for manual labor. I think the non-manual-labor working class was chosen because they have demonstrated problematic sentiments to the powers that be.
A lot of people might die, or UBI may become a thing. Migration to less developed areas might happen, although I doubt the powers that be will allow cross-border movement; they love their fiefdoms.
The barons will always be there, unless there is a Skynet-type event that wipes out humanity. Same shit, different day. If you're a peasant like me, and not a good dev, without any dev education or connections, you'll most probably die. It is what it is.