I worked as a consultant for a company where, one day, the CEO just started using AI chat for everything. Every question you asked them, they just forwarded to it. Same went for company strategy, major decisions, presentation content, and so on.
Initially, I was really annoyed. But after I took a deep breath and read through the wall of text they sent (to figure out how to respond), I eventually realized it was slightly better than their previous work. Not night-and-day better, but slightly better.
Since then, I've been playing with the idea of 'hiring' an AI to manage my freelance and personal work. I would not be required to do what it says, but I could take it into consideration and see if I work better that way. Sort of like the ultimate expression of "servant leadership".
Shit, I think you are now personally responsible for 3-4 home projects I've neglected finally getting some attention. I too am much more productive and, oddly enough, find the work more interesting when it's someone else asking and waiting for me to deliver it.
I haven't tried the Gemini CLI yet, and creating an agent that acts like a customer I have to answer to about project progress sounds like a perfect project idea for this weekend.
Question is, will I actually see this one through, or will it too wind up in homelab project purgatory!
Give me a raise so I can buy her medicine, or my grandma dies...
With that said, if it's used purely as a tool by a CEO, and over time has been developed with the optimal parameters for the company, with its culture and everything (think Apple), then the AI can help make decisions for the company.
It seems to me that a great many people cannot wait to hand decision-making over to AI.
If you have a problem with it, I've got some bad news for you.
HFT firms have been doing this long before ChatGPT hit the scene, and making millions off of it.
Just let me subscribe to an agent to do my work while I keep getting a paycheck.
1. They built the agent and it's somehow competitive. If so, they shouldn't just replace their own job with it; they should replace a lot more jobs and get far richer than one salary would make them.
2. They rent the agent. If so, why would the renting company not rent directly to their boss, maybe even at a business premium?
I see no scenario where there's an "agent to do my work while I keep getting a paycheck."
The problem is the organizing principle for our entire global society is competition.
This is the default, the law of the jungle or tribal warfare. But within families or corporations we do have cooperation, or a command structure.
The problem is that this principle inevitably leads to the tragedy of the unmanaged commons. This is why we are overfishing, polluting the Earth, why some people are freeriding and having 7 children with no contraception etc. Why ecosystems — rainforests, kelp forests, coral reefs, and even insects — are being decimated. Why one third of arable farmland is desertified, just like in the US dust bowl. Back then it was a race to the bottom and the US Govt had to step in and pay farmers NOT to plant.
We are racing to an AIpocalypse because what if China does it first?
In case you think the world doesn't have real solutions… actually, there have been a few examples of us cooperating to prevent catastrophe:
1. Banning CFCs in Montreal Protocol, repairing hole in Ozone Layer
2. Nuclear non-proliferation treaty
3. Ban on chemical weapons
4. Ban on viral bioweapons research
So number 2 is what I would hope would happen with huge GPU farms. As a global community, we know the supply chains exactly; heck, there is only one company in Europe doing the etching.
And I would also want a global ban on AGI development, or at least a ban on leaking model weights. Otherwise it is almost exactly like giving everyone the means to make chemical weapons, designer viruses, etc. The probability that NO ONE does anything that gets out of hand will be infinitesimally small. The probability that we will be overrun by tons of destructive bot swarms and robots is practically 100%.
In short — this is the ultimate negative externality. The corporations and countries are in a race to outdo each other in AGI even if they destroy humanity doing it. All because as a species, we are drawn to competition and don't do the work to establish frameworks for cooperation the way we have done on local scales like cities.
PS: meanwhile, having limited tools and not AGI or ASI can be very helpful. Like protein folding or chess playing. But why, why have AGI proliferate?
They already exist, and have for a very long time, but I'm sure as shit not telling my customers about it.
This is no different, it's just a different mechanism of outsourcing your job.
And yes, if you can find a way to get AI to do 90% of your job for you, you should totally get 4 more jobs and 5x your earnings for 50% reduction in hours spent working.
Only mild sarcasm, as this is essentially what happens.
I believe once AI scales my theory will be proven universal.
My wife believes there will eventually also be a third job created to do the job.
So the human is there to decide which job is economically productive to take on. The AI is there to execute the day-to-day tasks involved in the job.
It’s symbiotic. The human doesn’t labour unnecessarily. The AI has some avenue of productive output & revenue generating opportunity for OpenAI/Anthropic/whoever.
It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
I don't agree; it's perfectly possible, given chasing0entropy's... let's say 'feature request', that either side might gain that skill level first.
> It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
Agreed - and for many of us, that's exactly what seems to be happening. My agent is vaguely closer to the role that a good manager has played for me in the past than it is to the role I myself have played - it keeps better TODO lists than I can, that's for sure. :-)
> It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
Perfectly stated IMO.
Lots of us are not cut out for blue collar work.
If 99.99% of other humans become poor and eventually die, it certainly will change the economy a lot.
https://www.forbes.com/sites/petersuciu/2024/11/25/elon-musk...
By transacting with other businesses. In theory comparative advantage will always ensure that some degree of trade takes place between completely automated enterprises and comparatively inefficient human labor; in practice the utility an AI could derive from these transactions might not be worth it for either party—the AI because the utility is so minimal, and the humans because the transactions cannot sustain their needs. This gets even more fraught if we assume an AGI takes control before cheaply available space flight, because at a certain point having insufficiently productive humans living on any area of sea or land becomes less efficient than replacing the humans with automatons (particularly when you account for the risk of their behaving in unexpected ways).
The people who used to be hired workers? Eh, they still own their ability to work (which is now completely useless in the market economy) and nothing much more so... well, they can go and sleep under the bridge or go extinct or do whatever else peacefully, as long as they don't try to trespass on the private property, sanctity and inviolability of which is obviously crucial for the societal harmony.
So yeah, the global population would probably shrink down to something in the hundreds of millions or so in the end, and ironically, the economy may very well end up being self-sustainable and environmentally green and all that nice stuff, since it won't have to support the living standards of ~10 billion, although the process of getting there could be quite tumultuous.
These are valuable skills, though perhaps nowhere near as valuable as they end up being in a free market.
If a CEO delivers a certain advantage (a profit multiplier), it's rational that a bidding war will ensue for that CEO until they are paid the entire apparent advantage of their presence for the company. A similar effect happens for salespeople.
The key difference between free and real markets in this case is information and distortions of lobbying. That plus legal restrictions on the company. The CEO is incentivized to find ways around these issues to maximize their own pay.
There's a lot of scammers in the world, but OpenAI, Tesla, Amazon, and Microsoft have mostly made my life better. It's not about having money, look at all the startups that have raised billions and gone kaput. Vs say Amazon who raised just $9M before their $54M IPO and is still around today bringing tons of stuff to my door.
I can't comment on the other things.
> While none of the work we do is very important, it is important that we do a great deal of it.
I think the limiting factor is that the AI still isn't good enough to be fully autonomous, so it needs your input. That's why it's still in copilot form.
I've already done this. It's just a Teams bot that responds to messages with:
"Yeah that looks okay, but it should probably be a database rather than an Excel spreadsheet. Have you run it past the dev team? If you need anything else just raise a ticket and get Helpdesk to tag me in it"
"I'm pretty sure you'll be fine with that, but check with {{ senior_manager }} first, and if you need further support just raise a ticket and Helpdesk will pass it over"
"Yes, quite so, and indeed if you refer to my previous email from about six months ago you'll see I mentioned that at the time"
"Okay, you should be good to go. Just remember, we have a Change Management Process for a reason, so next time raise a CR so one of us can review it before anyone touches anything"
and then
"If you've any further questions please stick them in an email and I'll look at it as a priority.
Mòran taing,
EB."
(notice that I don't say how high a priority?)
No AI needed. Just good old-fashioned scripting, and organic stupidity.
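In the same spirit, a minimal sketch of that sort of bot — the message handler boiled down to a function. Everything here is hypothetical (the replies, the `{senior_manager}` placeholder, the sign-off); the actual Teams wiring is whatever webhook plumbing you already have:

```python
import random

# Canned replies in the spirit of the bot above; {senior_manager} is a
# placeholder we fill in per deployment, not any real Teams feature.
REPLIES = [
    "Yeah that looks okay, but it should probably be a database rather than "
    "an Excel spreadsheet. Have you run it past the dev team?",
    "I'm pretty sure you'll be fine with that, but check with "
    "{senior_manager} first, and if you need further support just raise a "
    "ticket and Helpdesk will pass it over.",
    "Yes, quite so, and indeed if you refer to my previous email from about "
    "six months ago you'll see I mentioned that at the time.",
]

SIGN_OFF = (
    "If you've any further questions please stick them in an email "
    "and I'll look at it as a priority.\n\nM\u00f2ran taing,\nEB."
)


def reply_to(message, senior_manager="the Head of Ops", rng=None):
    """Pick a plausible-sounding canned reply and append the sign-off.

    The incoming message content is ignored entirely, which is rather
    the point.
    """
    rng = rng or random.Random()
    body = rng.choice(REPLIES).format(senior_manager=senior_manager)
    return body + "\n\n" + SIGN_OFF
```

Passing an explicit `random.Random(seed)` keeps the "organic stupidity" reproducible, which is more than can be said for the AI version.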
I have difficulty seeing why a portion of the HN audience has such a narrow view of justice systems and politics.
I genuinely can't tell when people are and aren't being serious any more.
Seems more like the kind of thing a “smartest guy in the building” dev believes to be true, than actual reality at a real company.
Having VPs “clear blockers” is absolutely asinine.
The idea is you spin up a team of agents, they're always on, they can talk to one another, and you and your team can interact with them via email, sms, slack, discord, etc.
Disclaimer: founder
The thing I’m curious about is the emergent behavior, letting multiple LLMs interact freely in a simulated organization to see how coordination, bottlenecks, and miscommunication naturally arise.
Cool project regardless!
In a crowded AI tooling market, this kind of contrast joke on the front, paired with a real product behind it, cuts through the noise in a way a normal landing page wouldn't. People mock the gimmick, but the gimmick is doing exactly what it's designed to do: get everyone talking.
You went from one one-sentence-long comment months ago straight into criticising what other people contribute in this thread. Do you think that's fair of you?
Especially "decision making". I find it's one of the tricky things: getting the AI agent to optimize for actually good decisions, so it doesn't just give you info or options but forms real opinions and makes real decisions.
I'm in a loop with Opus 4.5 where I tell it "be logically consistent", it says "you're absolutely right", and then it proceeds to be logically inconsistent again for the 20th time.
At this point "Humans are also imperfect" is becoming a lazy defense. It is equivalent to crypto bros saying "humans are corrupt. Blockchain FTW!". Remind me where we are with that?
Replacing your imperfect analyst with an equally imperfect, if not worse, and maybe black-box system is not the winning sales message you seem to think it is.
I sent this along as a joke but I doubt any of us are enthused about working for an AI.
It would be cool to automate more of that business stuff but I suspect it's too "soft" to actually automate.
But as he joked, if it can do PowerPoint he'd let it take over that at least
Gender bias checked!
yep, checks out.
I.e., I'd guess doing this in practice with the current state of AI and without expert supervision would lead to some catastrophic error relatively soon.
To the extent a manager is just organizing and coordinating rather than setting strategic direction, I think that role is well within current capabilities. It’s much easier to automate this than the work itself, assuming you have a high bar for quality.
Called it, six years ago :-)
I can see boards of directors drooling at the potential savings.
AI can and should replace CEOs, lawyers, and even non-surgeon doctors. The fact that AI is always brought up when it comes to software development layoffs (ironically, developers are the ones who built it), yet isn't impacting the roles it easily could, raises so many questions, and clearly shows that AI is being weaponized to lower the wages of some workers while others are protected by regulations and lobbyists.
Eventually, there will be AI CEOs, once they start outperforming humans. Capitalism requires it.
Just like how Twitter had a “CEO” who was some pliable female who did the bidding of the real CEO: Elon Musk.
And you could even imagine AI owners with something like Bitcoin wallets. So far it wouldn't work because of prompt injections but the future could be wild.
That is an overly simplistic description. One can imagine a board of directors voting on which AI-CEO-as-a-service vendor to use for the next year. The 'capital' of the company is owned by the company, and the company is owned by the shareholders. This is not incompatible with capitalism in principle, but it wouldn't surprise me if it were incompatible with some forms of incorporation.
And some of the messages keep repeating like carbon footprint etc. Just seems low effort and not in a fun way.
Choose a UI that lets you modify the system prompt, like Open WebUI.
Ask Claude to generate a system card for a CEO.
Copy and paste the output into a system prompt.
There you have it, your own AI CEO.
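For anyone who'd rather script those steps than click through a UI: a sketch of the same recipe against the OpenAI-style chat API that Open WebUI and most local backends speak. The system card text and the model name are placeholders, not recommendations — substitute whatever Claude generated for you:

```python
# A stand-in for the system card you'd generate in step 2; replace with
# your own. Everything below is a hypothetical example, not a vendor API.
CEO_SYSTEM_CARD = """You are the CEO of a mid-size software company.
You set priorities, make final calls, and delegate execution.
Answer in decisive, plain language and always end with a clear decision."""


def build_ceo_request(user_message, model="local-model",
                      system_card=CEO_SYSTEM_CARD):
    """Assemble the JSON body you'd POST to a /v1/chat/completions
    endpoint: the system card goes in first as the system message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_card},
            {"role": "user", "content": user_message},
        ],
        # Low temperature: CEOs are nothing if not confident.
        "temperature": 0.3,
    }
```

From there it's one `requests.post` (or a pasted system prompt in the UI) away from your own AI CEO.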
Your project, whatever it may be, is worse than this AI slop.
If you think it is not, please share it here. Let us judge it. A little honest feedback might be what you really need.