stackskipton makes a good point about authority. SRE works at Google because SREs can block launches and demand fixes. Without that organizational power, you're just an on-call engineer who also writes tooling.

The article's premise (AI makes code cheap, so operations becomes the differentiator) has some truth to it. But I'd frame it differently: the bottleneck was never really "writing code." It was understanding what to build and keeping it running. AI helps with one of those. Maybe.

As an SRE, I can tell you AI can't do everything. I've done a little software development too, and AI can't do everything there either. What we're likely to see is operational engineering becoming the consolidated role between the two: someone who knows enough about software development and enough about site reliability... blamo, operational engineer.
For those who were oblivious to what SRE means, just like me: SRE is _site reliability engineering_.
I knew what an SRE was and found the article somewhat interesting, with a slightly novel (if throwaway) and more realistic take on the "why do you need Salesforce when you can vibe-code your own Salesforce" conversation.

But not defining what an SRE is feels like a glaring, almost suffocating, omission.

Seemingly Random Engineering
Sales Recovery Engineering
As someone who works in an Ops role (SRE/DevOps/Sysadmin): SRE is something that only works at Google, mainly because for devs to do SRE they need the ability to reject code or demand fixes. That means you need someone who, even if they're acting as a prompt engineer, actually understands the code, and now they're back to being a developer.

As for the more dedicated Ops side, it's garbage in, garbage out. I've already had too many outages caused by AI slop being fed into production; calling all developers SREs won't change the fact that AI can't program right now without a lot of experienced people controlling it.

But there is bad code and good code, and SREs can't tell you which is which, nor fix it.
What? Maybe in OP's future. SWE is just going to replace QA and maybe architects if the industry adopts AI more, but there are a lot of holdouts. There are plenty of projects out there that are 'boring' and won't bother.
CRE - Code Reliability Engineering

AI will not get much better than what we have today, and what we have today is not enough to totally transform software engineering. It is a little easier to be a software engineer now, but that’s it. You can still fuck everything up.

> AI will not get much better than what we have today

Wow, where did this come from?

Just off the top of my head, based on recent research, I'd expect at least the following this year or next:

* Continuous learning via an architectural change like Titans or TTT-E2E.

* Advancement in World Models (many labs focusing on them now)

* Longer-running agentic systems, with Gas Town being a recent proof of concept.

* Advances in computer and browser usage. Tons of money is being poured into this, and RL with self-play is straightforward here.

* AI integration into robotics, especially when coupled with world models.

Until you find out there are 40 - 80 startups writing agents in the SRE space :/
ikiris · 16 seconds ago
And I wish them luck, because the thought of current AI bots doing SRE work effectively is laughable.
It only matters if any of those can promise reliability and either put their own money where their mouth is or convince a bigger player to insure them (and actually get that player to pay up).

Ultimately hardware, software, QA, etc is all about delivering a system that produces certain outputs for certain inputs, with certain penalties if it doesn’t. If you can, great, if you can’t, good luck. Whether you achieve the “can” with human development or LLM is of little concern as long as you can pay out the penalties of “can’t”.

This says nothing about why, if AI can write software, it cannot do these other things.