I've noticed a huge gap between AI use on greenfield projects and brownfield projects. On the first day of a greenfield project I can accomplish a week of work. By the second day I can accomplish only a few days of work. By the end of the first week I'm down to a 20% productivity gain.

I think AI is just letting everyone speed-run the innovator's dilemma. Anyone can create a small version of anything, while big orgs will struggle to move as quickly as before.

The interesting bit is going to be whether we see AI being used to mature those small systems into big complex ones that account for the edge cases, meet all the requirements, scale as needed, etc. That's hard for humans to do, particularly while still moving. I've not seen any of this from AI yet outside of either a) very directed small changes to large complex systems, or b) plugins/extensions/etc. along a well-defined set of rails.

Enterprise IT dinosaur here, seconding this perspective and the author’s.

When I needed to bash out a quick HashiCorp Packer buildfile without prior experience beyond a bit of Vault and Terraform, local AI was a godsend, getting me 80% of the way there in seconds. I could read it, edit it, test it, and move much faster than Packer's own thin "getting started" guide allowed. The net result: from zero prior knowledge to a hardened OS image and a repeatable pipeline in under a week.

On the flip side, asking a chatbot about my GPOs? Or trusting it to change network firewalls and segmentation rules? Letting it run wild in the existing house of cards at the core of most enterprises? Absolutely hell no the fuck not. The longer something exists, the more likely a chatbot is to fuck it up by simple virtue of how they’re trained (pattern matching and prediction) versus how infrastructure ages (the older it is or the more often it changes, the less likely it is to be predictable), and I don’t see that changing with LLMs.

LLMs really are a game changer for my personal sales pitch of being a single dinosaur army for IT in small to medium-sized enterprises.

>LLMs really are a game changer for my personal sales pitch of being a single dinosaur army for IT in small to medium-sized enterprises.

This is essentially what I'm doing too but I expect in a different country. I'm finding it incredibly difficult to successfully speak to people. How are you making headway? I'm very curious how you're leveraging AI messaging to clients/prospective clients that doesn't just come across as "I farm out work to an AI and yolo".

Edit - if you don't mind sharing, of course.

I interpreted his statement as LLMs being valuable for the actual marketing itself.
vages · 21 minutes ago:
Which local AI do you use? I am local-curious, but don’t know which models to try, as people mention them by model name much less than their cloud counterparts.
I let Claude configure and set up entire systems now. It requires some manual auditing and steering once in a while, but managing bare-bones servers without any management software has become pretty feasible and cheap. I managed to configure a 50+ Debian server cluster simultaneously with just ssh and Claude. Yes, it's cowboy 3.0. But so are our products/sites.
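(For scale reference, fanning a command out to a fleet like that is only a few lines of Python. A minimal sketch, not the commenter's actual setup; it assumes key-based ssh auth is already configured and a hypothetical hosts.txt listing one server per line.)

    # fanout.py - hypothetical sketch: run one command on every host in parallel.
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor

    def run(host: str, command: str) -> str:
        # BatchMode=yes makes ssh fail fast instead of prompting for a password.
        proc = subprocess.run(["ssh", "-o", "BatchMode=yes", host, command],
                              capture_output=True, text=True)
        return f"{host}: exit {proc.returncode}"

    command = " ".join(sys.argv[1:])  # e.g. python fanout.py sudo apt-get -y upgrade
    hosts = [line.strip() for line in open("hosts.txt") if line.strip()]
    with ThreadPoolExecutor(max_workers=16) as pool:
        for result in pool.map(lambda h: run(h, command), hosts):
            print(result)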
somat · 1 hour ago:
Isn't this true of any greenfield project, with or without generative models? The first few days are amazingly productive, and then features and fixes get slower and slower. And you get to see how good an engineer you really are, as your initial architecture starts straining under the demands of changing real-world requirements and you hope it holds together long enough to ship something.

"I could make that in a weekend"

"The first 80% of a project takes 80% of the time, the remaining 20% takes the other 80% of the time"

dust42 · 42 minutes ago:
From personal experience I'd like to add that the last 5% takes 95% of the time - at least if you are working on a makeover of an old legacy system.
I find AI great for just greasing the wheels, like if I’m overthinking on a problem or just feel too tired to start on something I know needs doing.

The solutions also help me combat my natural tendency to over-engineer.

It’s also fun getting ChatGPT to quiz me on topics.

It seems to be fantastic up to about 5k loc and then it starts to need a lot more guidance, careful supervision, skepticism, and aggressive context management. If you’re careful, it only goes completely off the rails once in a while and the damage is only a lost hour or two.

Still a 4x productivity gain overall though, so I’m not complaining for $20 a month. It’s especially good at managing the complicated aspects of C, so I can focus on the bigger picture rather than the symbol contortions.

orwin · 4 hours ago:
Yeah, my observation is that for my usual work I can maybe get a 20% productivity boost, probably closer to 10% tbh, and for the whole team's overall productivity it feels like it has done nothing, as seniors use their small productivity gains to fix the tons of issues in PRs (or in prod when we miss something).

But last week I had two days with no real work to do, so I created CLI tools to help with organisation and cleanup. I think AI boosted my productivity at least 200%, if not 500%.

Similar experience. I love using Gemini to set up my home server, it can debug issues and generate simple docker compose files faster than I could have done myself. But at work on the 10 year old Rails app, I find it so much easier to just write all the code myself than to work out what prompt would work and then review/modify the results.
This makes me think about how AI turns software development upside down. In traditional development we write code, which is the answer to our problems. With AI we write questions and get the answers. Neither is easy: finding the correct questions can be a lot of work, whereas if you have some existing code you already have the answers, but you may not have the questions (= "specs") written down anywhere, at least not very well, typically.
I find that setting up proper structure while everything still fits in a single context window of Claude Code, as well as splitting as much as possible into libraries, works pretty well for staving off that moment.
It’s fantastic to be able to prototype small-to-medium-complexity projects, figure out which architectures work and which don’t, then build on a stable foundation.

That’s what I’ve been doing lately, and it really helps get a clean architecture at the end.

I’ve done this in pure Python for a long time. Single file prototype that can mostly function from the command line. The process helps me understand all the sub problems and how they relate to each other. Best example is when you realize behaviors X, Y, and Z have so much in common that it makes sense to have a single component that takes a parameter to specify which behavior to perform. It’s possible that already practicing this is why I feel slightly “meh” compared to others regarding GenAI.
I have experienced much of the opposite. With an established code base to copy patterns from, AI can generate code that needs a lot less iteration to clean up than on greenfield projects.
I solve this problem by pointing Claude at existing code bases when I start a project, and tell it to use that approach.
That's a fair observation, there's probably a sweet spot. The difference I've found is that I can reliably keep the model on track with patterns through prompting and documentation if the code doesn't have existing examples, whereas I can't document every single nuance of a big codebase and why it matters.
My observations match this. I can get fresh things done very quickly, but when I start getting into the weeds I eventually get too frustrated with babysitting the LLM to keep using it.
The "upside" description:

  On the other you have a non-technical executive who's got his head round Claude Code and can run e.g. Python locally.

  I helped one recently almost one-shot converting a 30 sheet mind numbingly complicated Excel financial model to Python with Claude Code.

  Once the model is in Python, you effectively have a data science team in your pocket with Claude Code. You can easily run Monte Carlo simulations, pull external data sources as inputs, build web dashboards and have Claude Code work with you to really integrate weaknesses in your model (or business). It's a pretty magical experience watching someone realise they have so much power at their fingertips, without having to grind away for hours/days in Excel.
almost makes me physically sick.

I've a reasonably intense math background, corrupted by application to geophysics and real-world numerical implementations.

To be fair, this statement alone:

* 30 sheet mind numbingly complicated Excel financial model

makes my skin crawl and invokes a flight reflex.

Still, I'll concede that a Claude Code conversion to Python of a 30 sheet Excel financial model is unlikely to be significantly worse than the original.
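(To be fair to the quoted claim, the Monte Carlo part really is cheap once the model is an ordinary function. A minimal stdlib sketch; model() here is a made-up stand-in, since the real spreadsheet logic obviously isn't public, and both distributions are assumptions:)

    # Hypothetical sketch: Monte Carlo over a ported model, stdlib only.
    import random
    import statistics

    def model(revenue_growth: float, margin: float) -> float:
        # Stand-in for the converted spreadsheet: next year's profit.
        return 1_000_000 * (1 + revenue_growth) * margin

    random.seed(0)
    outcomes = sorted(
        model(random.gauss(0.05, 0.10),      # assumed growth distribution
              random.uniform(0.10, 0.25))    # assumed margin distribution
        for _ in range(100_000)
    )
    print(f"median profit: {statistics.median(outcomes):,.0f}")
    print(f"5th percentile: {outcomes[len(outcomes) // 20]:,.0f}")

The hard part, as the rest of the thread points out, is trusting the ported model() in the first place.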

One of the dirty secrets of a lot of these "code adjacent" areas is that they have very little testing.

If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output? Or maybe you'll be too worried about getting chided for "not being data driven" enough.

If an exec tells an intern or temp to vibecode that thing instead, then you definitely won't have any checkpoints in the process to make sure the human-language prompt describing the process was properly turned into the right simulation. But unlike in coding, you don't have a user-facing product that someone can click around in, or send requests to, and verify. Is there a test suite for the giant Excel doc? I'm assuming no, maybe I'm wrong.

It feels like it's going to be very hard for anyone working in areas with less black-and-white verifiability or correctness, like that sort of financial modeling.

This has had tremendous real-world consequences. The European austerity wave of the early 2010s was largely downstream of an Excel spreadsheet error that changed the result of a major study on the impact of debt/GDP.

https://www.newscientist.com/article/dn23448-how-to-stop-exc...

This is a pet peeve of mine at work.

Any statistic, and I mean any statistic, someone throws at me I will try to dig into. And if I'm able to, I will usually find that something is very wrong somewhere. As in, the underlying data is usually just wrong, invalidating the whole thing, or the data is reasonably sound but the person doing the analysis is making incorrect assumptions about parts of the data and then drawing incorrect conclusions.

It seems to be an ever-present trait of modern business. There is no rigor, probably partly because most business professionals have never learned how to properly approach and analyze data.

Can't tell you how many times I've seen product managers making decisions based on a few hundred analytics events, trying to glean insight where there is none.

gyomu · 2 hours ago:
If what you're saying 1) is true and 2) does matter in the success of a business, then wouldn't anyone be able to displace an incumbent trivially by applying a bit of rigor?

I think 1) holds (as my experience matches your cynicism :), but I have a feeling that data minded people tend to overestimate the importance of 2)...

Rigor helps for better insights about data. That can help for entrepreneurship.

What also can help for entrepreneurship is having a bias for action. So even if your insights are wrong, if you act and keep acting, you will partially shape reality to your will, and bend to its will where you can't.

So there are certain forces that can compensate for a lack of rigor.

The best companies have both of those things by their side.

> does matter in the success of a business

In my experience, many of the statistics these people use don't matter to the success of a business --- they are vanity metrics. But people use statistics, and especially the wrong statistics, to push their agenda. Regardless, it's important to fix the statistics.

I've frequently found, over a few decades, that numerical systems are cyclically 'corrected' until results and performance match prior expectations.

There are often more errors. Sometimes the actual results are wildly different in reality to what a model expects .. but the data treatment has been bug hunted until it does what was expected .. and then attention fades away.

Or the company just changes the definition of success, so that the metrics (that used to be bad last quarter) are suddenly good
> If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output?

The local statistics office here recently presented salary statistics claiming that teachers' salaries had unexpectedly increased by 50%. All the press releases went out, and it was only questions raised by the public that forced the statistics office to review and correct the data.

I did a fair amount of data analysis, and deciding when or if my report was correct was a huge adrenaline rush.

A huge test for me was to have people review my analyses and poke holes. You feel good when your last 50 reports didn’t have a single thing anyone could point out.

I’ve been seeing a lot of people try to build analyses with AI who haven’t been burned by the “just because it sounds correct doesn’t mean it’s right” dilemma, and who haven’t realized what it takes before you can stamp your name on an analysis.

I'm almost certain it will be significantly worse.

The Excel sheet will have been tuned over the years by people who knew exactly what it was doing and fixed countless bugs along the way.

The Claude Code copy will be a simulacrum that may behave the same way with some inputs, but is likely to get many of the edge cases wrong, and, when you're talking about 30 sheets of Excel, there will be many, many of these sharp edges.

I won't disagree - I suffered from insufficient damning praise in my last sentence above.

IMHO, earned through years of bleeding eyeballs, the first will be riddled with subtle edge cases curiously patched and fettled such that it'll limp through to the desired goal .. mostly.

The automated AI assisted transcoding will be ... interesting.

My assumption is that with the right approach you can create a much better and more reliable program using only Claude Code. You are referring to YOLO-coding results.
The thing is, when you use AI, you're not really doing things, you're having things done. AI isn't a tool, it's a service.

Now, back in the day, IBM designed and built an "executive data terminal". It wasn't really a computer terminal in the sense that you and I understand it. Rather, it was a video and two-way-audio feed to a room with a team of underlings, whom an executive could ask for business data and analyses, which could be called up on a computer display (also routed to the executive's office). This allowed the executive to ask questions so he (it was the 1960s, it was almost invariably a he) could make informed decisions, and the team of underlings to call up data or crunch numbers on the computer and show the results on the display.

So because executives are used to having things done for them, I can totally see AI being used by executives to replace the "team of underlings" in this setup—in principle. The fact is that were I in that CEO's chair, I'd be thinking twice before trusting anything an LLM tells me, and double-checking those results—perhaps with my team of underlings.

Discussed on Hacker News: https://news.ycombinator.com/item?id=42405462 IEEE article: https://spectrum.ieee.org/ibm-demo

Obligatory xkcd: https://xkcd.com/1667/
Some years ago, I was at a conference and attended a very interesting talk. I don't remember the title of the talk, but what stuck with me was: "It's no longer the big beating the small, but the fast beating the slow". This talk was before all the AI hype. Working at a big company myself, I think this has never been more true. I think the question is, how to stay fast.
And, to add to that, how to know when to slow down. Also, having worked at a big company myself, I think the question shifts towards "how to get fast" without compromising security, compliance etc.
swyx · 11 minutes ago:
this is generic startup advice (doesn't mean it's not true). you level up a bit when you find instances where slow beat fast (see: Teams vs Slack)
One of the most reliable BS detectors I've found is when you have to try to convince other people of your edge.

If you have found a model that accurately predicts the stock market, you don't write a blog post about how brilliant you are, you keep it quiet and hope no one finds out while you rake in profits.

I still can't figure out quite what motivates these "AI evangelist" types (unlike crypto evangelists who clearly create value for themselves when they create credibility), but if you really have a dramatically better way to solve problems, you don't need to waste your breath trying to convince people. The validity of your method will be obvious over time.

I was just interviewing with a company building a foundation model for supposedly world changing coding assistants... but they still can't ship their product and find enough devs willing to relocate to SF. You would think if you actually had a game changing coding assistant, your number one advantage would be that you don't need to spend anything on devs and can ship 10x as fast as your competition.

> First, you have the "power users", who are all in on adopting new AI technology - Claude Code, MCPs, skills, etc. Surprisingly, these people are often not very technical.

It's not surprising to me at all that these people aren't very technical. For technical people code has never been the bottleneck. AI does reduce my time writing code but as a senior dev, writing code is a very small part of the problems I'm solving.

I've never had to argue with anyone that using a calculator is a superior method of solving simple computational math problems to doing it by hand, or that using a stand mixer is more efficient than using a wooden spoon. If there was a competing bakery arguing that the wooden spoon was better, I wouldn't waste my time arguing about the stand mixer, I would just sell more pastry than them and worry about counting my money.

> I helped one recently almost one-shot[3] converting a 30 sheet mind numbingly complicated Excel financial model to Python with Claude Code.

I'm sure Claude Code will happily one-shot that conversion. It's also virtually guaranteed to have messed up vital parts of the original logic in the process.

It depends on how easily testable the Excel is. If Claude has the ability to run both the Excel and the Python with different inputs, and check the outputs, it's stunningly likely to be able to one-shot it.
Something being simultaneously described as a "30 sheet, mind-numbingly complex Excel model" and "testable" seems somewhat unlikely, even before we get into whether Claude will be able to test such a thing before it runs into context length issues. I've seen Claude hallucinate running test suites before.
It compacted at least twice but continued with no real issues.

Anyway, please try it if you find it unbelievable. I didn't expect it to work like it did, FWIW. Opus 4.5 is pretty amazing at long-running tasks like this.

I think the skepticism here is that without tests or a _lot_ of manual QA how would you know that it did it correctly?

Maybe you did one or the other, but “nearly one-shotted” doesn’t tend to mean that.

Claude Code more than occasionally likes to make weird assumptions, and it’s well known that it hallucinates quite a bit more near the context length, and that compaction only partially helps this issue.

If you’re porting some formulas from one language to another, “correct” can be defined as “gets the same answers as before.” Assuming you can run both easily, this is easy to write a property test for.

Sure, maybe that’s just building something that’s bug-for-bug compatible, but it’s something Claude can work with.
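(For what it's worth, that property test is short to write once both sides are callable from Python; in practice the legacy side might be driven through a library like openpyxl or xlwings rather than being hand-written. A toy sketch with stand-in models:)

    # Hypothetical sketch: check the port against the original on random inputs.
    import math
    import random

    def legacy_model(principal: float, rate: float, years: int) -> float:
        return principal * (1 + rate) ** years        # the trusted original

    def ported_model(principal: float, rate: float, years: int) -> float:
        value = principal
        for _ in range(years):                        # the new translation
            value *= 1 + rate
        return value

    random.seed(42)  # reproducible failures
    for _ in range(10_000):
        p, r, y = random.uniform(0, 1e9), random.uniform(-0.5, 0.5), random.randint(0, 50)
        old, new = legacy_model(p, r, y), ported_model(p, r, y)
        assert math.isclose(old, new, rel_tol=1e-9), f"mismatch at {(p, r, y)}"
    print("10,000 random cases agree")

As the parent says, that only buys bug-for-bug compatibility, but it's a concrete bar the port has to clear.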

I generally agree with you, but I tried to get it to modernize a fairly old SaaS codebase, and it couldn't. It had all the code right there, all it had to do was change a few lines, upgrade a few libraries, etc, but it kept getting lots of things wrong. The HTML was wrong, the CSS was completely missing, basic views wouldn't work, things like that.

I have no idea why it had so much trouble with this generally easy task. Bizarre.

rk06 · 2 hours ago:
Where exactly have you seen Excel formulas that have tests?

I have, early in my career, gone knee-deep into Excel macros and worked on C# automation that would create an Excel sheet, run Excel macros on it, and then save it without the macros.

In the entire process, I saw dozens of date-time mistakes in the VBA code, but no tests that would catch them...

And also - who understands the system now? Does anyone know Python at this shop? Is it someone’s implicit duty to now learn Python, or is the LLM now the de facto interface for modifying the system?

When shit hits the fan and execs need answers yesterday, will they jump to using the LLM to probabilistically make modifications to the system, or will they admit it was a mistake and pull Excel back up to deterministically make modifications the way they know how?

That's exactly what it did (author here).
I'm having trouble reconciling "30 sheet mind numbingly complicated Excel financial model" and "Two or three prompts got it there, using plan mode to figure out the structure of the Excel sheet, then prompting to implement it. It even added unit tests to the Python model itself, which I was impressed with!"

"1 or 2 plan mode prompts" to fully describe a 30-sheet complicated doc suggests a massively higher level of granularity than Opus initial plans on existing codebases give me or a less-than-expected level of Excel craziness.

And the tooling harnesses have been telling the models to add testing to things they make for months now, so why's that impressive or surprising?

No, it didn't make a giant plan of every detail. It made a plan of the core concepts, and then when it was in implementation mode it kept checking the Excel file to get more info. It took around 30 mins in implementation mode to build it.

I was impressed because the prompt didn't ask it to do that. It doesn't normally add tests for me without asking, YMMV.

Ah, I see.

Did it build a test suite for the Excel side? A fuzzer or such?

It's the cross-concern interactions that still get me.

80% of what I think about these days when writing software is how to test more exhaustively without build times being absolute shit (and not necessarily actually being exhaustive anyway).

You touched on Kolmogorov complexity there :)
Doesn't it help you sleep at night that your 401k might be managed by analysts #yoloing their financial modeling tools with an LLM?
having worked in large financial institutions, this would be a step improvement

the largest independent derivatives broker in Australia collapsed after it was discovered the board were using astrology and magicians to gamble with all the clients' money

https://www.abc.net.au/news/2016-09-16/stockbroker-used-psyc...

Well that would do it. Astrology and magic stop working once they are scrutinized. That is their only weakness.
It sounds like a step sideways, not a step up. LLMs are akin to a Ouija board.
I'd be very interested in seeing some statistics on what could be considered confidential material pasted into ChatGPT's chat interface.

I think the results would be pretty shocking, mostly because the integrations to source services are abject messes.

https://www.theregister.com/2025/10/07/gen_ai_shadow_it_secr...

"With 45 percent of enterprise employees now using generative AI tools, 77 percent of these AI users have been copying and pasting data into their chatbot queries, the LayerX study says. A bit more than a fifth (22 percent) of these copy and paste operations include PII/PCI."

Terrifying that people are creating financial models with AI when they don’t have the skills to verify the model does what they expect
All we need is one major crash caused by AI to scare the capital owners. Then maybe us white-collar workers can breathe a bit for at least another few years (maybe a decade+).
> All we need is one major crash caused by AI to scare the capital owners.

All the previous human-driven crashes didn't change anything about capital owners' approach to money, so why would an AI-driven crash change things?

ktzar · 1 minute ago:
Because with human-driven crashes we have an alternative that we humans can fix. The problem with AI is that it creates without leaving a trace of understanding.
The scapegoating is different. Using an LLM makes them more culpable for the failure, because they should have known better than to use a tech that is well known to systematically lie.
A decade+ is wishful copium.
They have an excel sheet next to it - they can test it against that. Plus they can ask questions if something seems off and have it explain the code.
I'm not sure being able to verify that it's vaguely correct really solves the issue. Consider how many edge cases inhabit a "30 sheet, mind-numbingly complicated" Excel document. Verifying equivalence sounds nontrivial, to put it mildly.
Consider how many edge cases it misses. Equivalence probably shouldn't be the top priority here.
Equivalence here would definitely be the worst test, except for all the alternatives.
lmm · 6 hours ago:
> They have an excel sheet next to it - they can test it against that.

It used to be that we'd fix the copy-paste bugs in the excel sheet when we converted it to a proper model, good to know that we'll now preserve them forever.

You would be surprised at the volume of money made by businesses supported by Excel.
Yes. I suspect there are thousands of Excel files that "process" >$1bn/yr out there.
Allow me to introduce to you: ACH. It is truly fascinating.
I’m trying to learn Rust coming from Python (for fun). I use various LLMs for Python and see them stumble.

It is a beautiful experience to realize wtf you don’t know, and how far over their skis so many will get trusting AI. The idea of deploying a Rust project at my level of ability with an AI at the helm is terrifying.

taneq · 4 hours ago:
If they have the skills to verify the Excel model then they can apply the same approach to the numbers produced by the AI-generated model, even if they can’t inspect it directly.

In my experience a lot of Excel models aren’t really tested, just checked a bit and then deemed correct.

It's not terrifying at all; some shops will fail and some will succeed, and in the aggregate it'll be no different for the rest of us.
Business as usual.
wrs · 5 hours ago:
Some minor editing to how this would have been written in the mid-1980s:

“The real leaps are being made organically by employees, not from a top down [desktop PC] strategy. Where I see the real productivity gains are small teams deciding to try and build a [Lotus 123] assisted workflow for a process, and as they are the ones that know that process inside out they can get very good results - unlike a [mainframe] software engineering team who have absolutely zero experience doing the process that they are helping automate.”

The embedded “power users” show the way, then the CIO-friendly packaged software follows much later.

The power is in the tails
> On one hand, you have Microsoft's (awful) Copilot integration for Excel (in fairness, the Gemini integration in Google Sheets is also bad). So you can imagine financial directors trying to use it and it making a complete mess of the most simple tasks and never touching it again.

Microsoft has spent 30 years designing the most contrived XML-based format for Excel/Word/PowerPoint documents, so that it cannot be parsed except by very complicated bespoke applications with hundreds of developers involved.

Now, it's impossible to export any of those documents into plain text that an LLM can understand, and Microsoft Copilot literally doesn't work no matter how much money they throw at it. My company is now migrating Word documents to Markdown because they're seeing how powerful AI is.

This is karmic justice imo.
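(As an aside, the migration itself doesn't strictly need an LLM. A crude sketch with the python-docx library, keeping only headings and body text; tables, images, and styling need real tooling such as pandoc, and the file name here is made up:)

    # Hypothetical sketch: crude Word-to-Markdown, headings and paragraphs only.
    # Requires: pip install python-docx
    from docx import Document

    def docx_to_markdown(path: str) -> str:
        lines = []
        for para in Document(path).paragraphs:
            style = para.style.name                   # e.g. "Heading 1", "Normal"
            if style.startswith("Heading") and style.split()[-1].isdigit():
                lines.append("#" * int(style.split()[-1]) + " " + para.text)
            elif para.text.strip():
                lines.append(para.text)
        return "\n\n".join(lines)

    print(docx_to_markdown("report.docx"))            # path is an assumption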

Tim Berners-Lee thought pages would become machine-readable long ago, with "obvious" benefits, and that idea partly drove XML, RDF and HTML5. Now the benefits of doing so seem even bigger (but are they?), and the time spent making existing documents AI-readable seems to keep growing.
Totally agree, though ironically Claude code works way better with Excel than I expected.

I even tried telling Copilot to convert each sheet to a CSV first, THEN do the calculations, on one attempt. It just ignored that and failed miserably, ironically outputting a list of files that it should have made, along with the broken Python script. I found this very amusing.
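(The sheet-to-CSV step Copilot fumbled is, for the record, a few lines with the openpyxl library. A minimal sketch, workbook name made up; data_only=True returns the last cached formula results rather than the formulas themselves, so the file must have been saved by Excel at least once:)

    # Hypothetical sketch: dump every sheet of a workbook to its own CSV.
    # Requires: pip install openpyxl
    import csv
    from openpyxl import load_workbook

    wb = load_workbook("model.xlsx", data_only=True)   # cached values, not formulas
    for sheet in wb.worksheets:
        with open(f"{sheet.title}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            for row in sheet.iter_rows(values_only=True):
                writer.writerow(row)                   # None cells become empty fields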

> Microsoft has spent 30 years designing the most contrived XML-based format for Excel/Word/Powerpoint documents, so that it cannot be parsed except by very complicated bespoke applications with hundreds of developers involved.

I had interns use C++ to unzip, parse, and repackage to JSON a standardized Visio doc. I had no say in the standard, but specific blocks meant specific things, etc. The project was successful. The XML was parseable... at least for our needs. The overall project died a swift death and this tidbit will probably be forgotten forever in the depths of the repo hierarchy.

what would you have used?
I don't see a divergence, from what I can tell a lot of people have only just started using agents in the past 3-4 months when they got good enough that it was hard to say otherwise. Then there's stuff like MCP, which never seemed good and was entirely driven by people who talked more about it than used it. There also used to be stuff like langchain or vector databases that nobody talks about anymore, maybe they're still used but they're not trendy anymore.

It seems way too soon to really narrow down any kind of trends after a few months. Most people aren't breathlessly following the next twitter trend, give it at least a year. Nobody is really going to be left behind if they pick up agents now instead of 3 months ago.

While I agree that the MCP craze was a bit off-putting, I think that came mostly from people thinking they can sell stuff in that space. If you view it as a protocol and not much else, things change.

I've seen great improvements with just two MCP servers: context7 and playwright. The first is great on planning sessions and leads to better usage of new-ish libraries, and the second is giving the model a feedback loop. The advantage is that they work with pretty much any coding agent harness you use. So whatever worked with cursor will work with cc or opencode or whatever else.

The only people I see talking about MCP are managers who don't do anything but read LinkedIn posts and haven't touched a text editor in years, if ever.
neom · 5 hours ago:
Not sure how much falling behind there is even going to be. I'm an old-school Linux type with D- programming skills, yet getting going building things has been ridiculously easy. The swarms thing makes it so fast. I've churned out 2 small but tested apps in 2 weekends just chatting with Claude Code; the only thing I had to do was configure the servers.
_1tan · 2 hours ago:
What's used instead of MCP in reality? Just REST or other existing API things?
> Microsoft itself is rolling out Claude Code to internal teams

Seems like Nadella is having his Ballmer moment

Code red moment
fdsf2 · 6 hours ago:
Nothing but ego, frankly. Apple had no problem settling for a small market share back in the day... look where they are now. It didn't come from make-believe and fantasy scenarios of the future based on an unpredictable technology.
> look where they are now.

Still with a small market share. They only figured out how to extort the maximum amount of money from a smaller user base, and app developers, really anyone they can.

I'm still trying to wrap my head around the past decade: useful AI, self-driving vehicles, real AI robots, immersive VR, catching reusable rockets with chopsticks, and of course the flying cars.

What will be the expected work output for the average future worker?

with · 5 hours ago:
> The bifurcation is real and seems to be, if anything, speeding up dramatically. I don't think there's ever been a time in history where a tiny team can outcompete a company one thousand times its size so easily.

Slightly overstated. Tiny teams aren't outcompeting because of AI, they're outcompeting because they aren't bogged down by decades of technical debt and bureaucracy. At Amazon, it will take you months of design, approvals, and implementation to ship a small feature. A one-man startup can just ship it. There is still a real question that has to be answered: how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.

mhink · 1 hour ago:
> how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.

Ultimately, it's the same way you ship human-generated code at scale without causing catastrophic failure: by only investing trust in critical systems to people who are trustworthy and have skin in the game.

There are two possibilities right now: either AI continues to get better, to the point where AI tools become so capable that completely non-technical stakeholders can trust them with truly business-critical decision making, or the industry develops a full understanding of their capabilities and is able to dial in a correct amount of responsibility to engineers (accounting for whatever additional capability AI can provide). Personally, I think (hope?) we're going to land in the latter situation, where individual engineers can comfortably ship and maintain about as much as an entire team could in years past.

As you said, part of the difficulty is years of technical debt and bureaucracy. At larger companies, there is a *lot* of knowledge about how and why things work that doesn't get explicitly encoded anywhere. There could be a service processing batch jobs against a database whose URL is only accessible via service discovery, and the service's runtime config lives in a database somewhere, and the only person who knows about it left the company five years ago, and their former manager knows about it but transferred to a different team in the meantime, but if it falls over, it's going to cause a high-severity issue affecting seven teams, and the new manager barely knows it exists. This is a contrived example, but it goes to what you're saying: just being able to write code faster doesn't solve these kinds of problems.

I swear that in a month at a startup I used to build what takes a year at my current large-corp job. AI agents don't seem to have sped up the corporate process at all.
> AI agents don't seem to have sped up the corporate process at all.

I think there's a parallel here between people finding great success with coding agents vs. people swearing it's shit. But when prodded it turns out that some are working on good code bases while others work on shit code bases. It's probably the same with large corpos. Depending on the culture, you might get such convoluted processes and so much "assumed" internal knowledge that agents simply won't work ootb.

Thought this was going to be more about programmers, but it was actually about non technical users and Microsoft’s product development failure.

One tidbit I’d disagree with is that only those using the bleeding edge AI tools are reaping the benefits. There seem to be a lot of highly specialized tools and a lot of specific configurations (and mystical incantations) to get them to work, and those are constantly changing and being updated. The bleeding edge is a dangerous place to be if you value your time (and sanity).

Personally, as someone working on moderate-to-highly complex software (live inference of industrial IoT data), I can’t really open a merge / pull request for my colleagues to review unless I 100% understand what I’ve pushed, and can explain to them as well.

My killer app for AI would just be a CLI that gets me to a commit based on moderately technical input:

“Add this configuration variable for this entry point; split this class into two classes, one for each of the responsibilities that are currently crammed together; update the unit tests to reflect these changes, including splitting the tests for the old class into two different test classes; etc”

But, all the hype of the bleeding edge is around abstracting away the entire coding process until you don’t even understand what code is being generated? Hard to see it as anything but a pipe dream. AI is useful, but it’s not a panacea - you can’t fire it and replace it when it fucks up.

“Add this configuration variable for this entry point; split this class into two classes, one for each of the responsibilities that are currently crammed together; update the unit tests to reflect these changes, including splitting the tests for the old class into two different test classes; etc”

Granted I'm way behind the curve, but is this not how actual engineers (and not influencers) are using it? I heavily micro-manage the implementation because my manager still expects me to know the code

Microsoft's failure around Copilot in Excel gave my partner a very poor impression of AI's ability to help with financial tasks.

It took a lot of convincing, but I finally got her to start using ChatGPT to help her write SQL and walk her through setting up some SaaS accounting software formulas.

It worked so well now she's trying to find more applications at work. Claude code is too scary for her though. That will need to be in some Web UI before she feels comfortable giving it a try.

> sandboxing agents is difficult

I use this amazingly niche and hipster approach of giving the agent its own account, which, through inconceivably complex arcane tweaking and configuration, can lock down what it can and can't do.

---

Can somebody for the love of god tell me why articles keep bringing up why this is so difficult?

I have antigravity in its own account and that has worked pretty well so far. I also use devcontainers for the cli agents and that has also worked out well. It's one click away in my normal dev flow (I was using this anyway before for python projects).
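(For anyone who wants the snark unpacked: the one-time setup is roughly `sudo useradd -m agent` plus tightening permissions on your own home directory, and then the agent is launched under that user so filesystem permissions, not prompts, bound what it can touch. A hypothetical sketch of the launch step; the paths and CLI name are assumptions:)

    # Hypothetical sketch: launch a coding agent CLI as a low-privilege user.
    # One-time setup (manual): sudo useradd -m agent && sudo chmod 700 "$HOME"
    import subprocess

    result = subprocess.run(
        ["sudo", "-u", "agent", "--", "claude"],   # any agent CLI works here
        cwd="/home/agent/project",                 # a workspace the agent owns
    )
    print(f"agent exited with {result.returncode}")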
It's a bunch of work, that takes a bunch of time, and I want it nowwwww-owwwww!

...is how I imagine that conversation goes.

what is the source data? the author says they've seen "far more non-technical people than I'd expect using Claude Code in terminal" so like, 3 people? who are these people?
> To really underline this, Microsoft itself is rolling out Claude Code to internal teams, despite (obviously) having access to Copilot at near zero cost, and significant ownership of OpenAI. I think this sums up quite how far behind they are

I think it sums up how thoroughly they've been disrupted, at least for coding AIs (independent of like-for-like quality concerns rightly mentioned elsewhere in this thread re: Excel/Python).

I understand ChatGPT can do like a million other things, but so can Claude. Microsoft deliberately using competitors internally is the thing their customers should pay attention to. Time to transform "Nobody gets fired for buying Microsoft" into "Nobody gets fired for buying what Microsoft buys", for those inclined.

Havoc · 6 hours ago:
The copilot button in excel at my work can’t access the excel file of the window it’s in. As in “what’s in cell A1” and it says I can’t read this file. Not even sure what the point is then frankly.

I’m happily vibe coding at work but yeah article is right. MS has enterprise market share by default not by merit. Stunning contrast between what’s possible and what’s happening in big corp

Meanwhile the people I know who work at Microsoft say there's a constant whip-cracking to connect everything they're doing to "AI" and prove that's what they're doing.
yeah I actually use AI a lot, but copilot is... useless. When microsoft adds copilot to their various apps they don't seem to put any thought/effort behind it beyond sticking a copilot button somewhere.

And if the copilot button does nothing but open a chat window without any real integration with the app, what the hell is the point of that when there's already a copilot button in the windows taskbar?

>You can easily run Monte Carlo simulations

Ah yes, Monte Carlo simulations, regular part of a finance team's objectives.

doom2 · 4 hours ago:
I guess this is as good a thread as any to ask what the current meta is for agentic programming (in my case, as applied to data engineering). There are all these posts that make it to the front page talking about productivity gains but very few of them actually detail the setup that's working for the author, just which model is best.

I guess it's like asking for people's vim configs, but hey, there are at least a few popular posts mainly around git/vim/terminal configs.

I push most work into chat interface (attach full codebase as a single file, paste in specs, describe what I want), then copy the tasklist from chat into codex. This is to reduce codex token usage to avoid breaching weekly limits. I'd use a more agent-heavy process if I didn't care about cost.
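(In case "attach full codebase as a single file" sounds exotic: it's one small packing script, and tools like repomix exist for the fancier version. A minimal sketch; the extension list and output name are assumptions:)

    # Hypothetical sketch: concatenate a repo's source files into one pasteable file.
    from pathlib import Path

    EXTENSIONS = {".py", ".md", ".toml"}             # adjust for your codebase
    SKIP = {".git", "node_modules", ".venv"}

    with open("codebase.txt", "w") as out:
        for path in sorted(Path(".").rglob("*")):
            if (path.is_file() and path.suffix in EXTENSIONS
                    and not SKIP.intersection(path.parts)):
                out.write(f"\n===== {path} =====\n")  # file boundary marker for the model
                out.write(path.read_text(errors="replace"))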
There's more stuff in mine, but at the top of my ~/.claude/CLAUDE.md file, I have:

    ## Important Instructions
    
    - update todo.md as items are completed
    
    **Commit to git after making code changes.** Check `git status` first - only commit if there are actual changes:
    ```bash
    # If not in a git repository, initialize it first:
    git init
    
    # Then commit changes:
    git add <FILES_UPDATED>
    # Be surgical - add only the changes you just made.
    git commit -m "Description of changes"
    ```

This lets me have bite-sized git commits that I can marshal later, rather than having to wrangle git myself.
Three kinds: those who do not use it.
Generally speaking, if you're using your coding agent as your assistant inside your IDE, you're missing out on 80% of its benefits... If anything, you should ask it how to do something and then act as its assistant in implementing it.
I know it's fun to bash Microsoft, but--while Claude is better, Microsoft's Copilot is far from "awful". I've used it productively with the VS Code integration for some esoteric projects: PIC PIO programming and Verilog.
The argument seems to be that having a corporation restrict your ability to present arbitrary text directly to the model and only being able to go through their abstract interface which will integrate your text into theirs (hopefully) is more productive than fully controlling the input text to a model. I don't think that's true generally. I think it can be true when you're talking about non-technical users like the article is.
The usefulness of specialized interfaces is apparent if you compare Photoshop with Gemini Pro/Nano Banana for targeted image editing.

I can select exactly where I want changes and have targeted element removal in Photoshop. If I submit the image and try to describe my desired changes textually, I get less easily-controllable output. (And I might still get scrambled text, for instance, in parts of the image that it didn't even need to touch.)

I think this sort of task-specific specialization will have a long future, hard to imagine pure-text once again being the dominant information transfer method for 90% of the things we do with computers after 40 years of building specialized non-text interfaces.

One reasonable niche application I've seen of image models is in real estate, as a way to produce "staged" photos of houses without shipping in a bunch of furniture for a photo shoot (and/or removing a current tenant's furniture for a clean photo). It has to be used carefully to avoid misrepresenting the property, of course, but it's a decent way of avoiding what is otherwise a fairly toilsome and wasteful process.
This sort of thing (not for real estate, but for "what would this furniture actually look like in this room") is definitely an area where the open-ended interface is fantastic versus targeted removal in Photoshop (though it could also easily be integrated into a Photoshop-like tool to let me be more specific about placement and such).

I was a bit surprised by how it still resulted in gibberish text on posters in the background, in an unaffected part of the image that at first glance didn't change at all. So even just the "masking" ability of a GUI, like "anything outside of this range should not be touched," would be a godsend.

fdsf2 · 6 hours ago:
It baffles me that Gemini et al. don't have these standard video editing tools. Do the engineers seriously think prompting by text is the way people want videos to be generated? Nope. People want to customise. E.g. check out CapCut in the context of social media.

I've been trying to create a quick and dirty marketing promo via an LLM to visualise how a product will fit into the world of people - it is incredibly painful to 'hope and pray' that by refining the prompt via text you can make slight adjustments come through.

The models are good enough if you are half-decent at prompting and have some patience. But given the amount invested, I would argue they are pretty disappointing. I've had to chunk the marketing promo into almost a frame-by-frame play to make it somewhat work.

Speaking as someone who doesn't like the idea of AI art, so take my words with a grain of salt, but my theory is that this input-method exclusivity is intentional on their part, for exactly the reason you want the change. If you only let people making AI art communicate what they want through text or reference attachments (the latter of which they usually won't have), then they have to spend time figuring out how to put it into words. It IS painful to ask for those refinements, because any human would clearly understand them. In the end, those people get to say that they spent hours, days, or weeks refining "their prompt" to get a consistent and somewhat-okay-looking image; the engineers get to train their AI to better understand the context of what someone is saying; and all the while the company gets to further legitimize a false art form.
tl;dr: If you are trying to protect your IP from AI you probably use Copilot or nothing. If you have no IP to protect you are free to mess about.