Developers, Operations, and Security used to be dedicated roles.
Then we made DevOps and some businesses took that to mean they only needed 2/3 of the headcount, rather than integrating those teams.
Then we made DevSecOps, and some businesses took that to mean they only needed 1/3 the original roles, and that devs could just also be their operations and appsec team.
That's not a knock on shift-left and integrated operations models; those are often good ideas. It's just the logical outcome of those models when execs think they can get a bigger bonus by cutting costs by cutting headcounts.
Now you have new devs coming into insanely complex n-microservice environments, being asked to learn the existing codebase, being asked to learn their 5-tool CI/CD pipelines (and that ain't being taught in school), being asked to learn to be DBAs, and also to keep up a steady code release cycle.
Is anyone really surprised they are using ChatGPT to keep up?
This is going to keep happening until IT companies stop cutting headcounts to make line go up (instead of good business strategy).
I think that what the MBAs miss is this phenomenon of overconstraint. Once you have separated the generic role of "developer" into "developer, operations, and security", you've likely specified all sorts of details about how those roles need to be done. When you combine them back into DevSecOps, all the details remain, and you have one person doing 3x the work instead of one person doing the work 3x more efficiently. To effectively go backwards, you have to relax constraints, and let that one person exercise their judgment about how to do the job.
A corollary is that org size can never decrease, only increase. As more employees are hired, jobs become increasingly specialized. Getting rid of them means that that job function is simply not performed, because at that level of specialization, the other employees cannot simply adjust their job descriptions to take on the new responsibilities. You have to throw away the old org and start again with a new, small org, which is why the whole private equity / venture capital / startup ecosystem exists. This is also why Gall's Law exists.
And then you evangelize this approach and every other company wants to follow suit but they don’t really have top talent in management or engineering or both (every company claims to hire top talent which obviously cannot be true). So they make a poor copy of what the best organizations were doing and obviously it doesn’t go well. And the thing is that they’ve done it before. With Agile and with waterfall before that, etc. There is no methodology (organizational, programming, testing, etc.) that can make excellence out of mediocrity.
This is a thought provoking phrase, and I liked that. Thinking a bit deeper, I'm not sure if it's accurate, practical or healthy though.
I've seen mediocrity produce excellent solutions, and excellence churn out rubbish, and I suspect most people with a few years of tech jobs under their belt have too.
You could argue that if they turned out excellent things, then by definition, they're excellent, but that then makes the phrase uselessly tautological.
If it's true, then what's the advice to mediocre people and environments - just give up? Don't waste your time on excellent organisation, programming, testing, because it's a waste and you'll fail?
I think there's no 1 thing that can make excellence out of mediocrity for everyone. But I like to think that for every combination of work, people and environment, there's an optimal set of practices that will lead to them producing something better than average. Obviously a lot don't find that set of practices due to the massive search space.
Lots of people like rock climbing but you can’t expect an average climber to win a major competition or climb some of the toughest mountains. It doesn’t mean they shouldn’t enjoy climbing, but if they start training like they are about to free solo El Capitan they are very likely to get seriously hurt.
Average developers, operators, and security people can do plenty of good work. But put them in a position that requires them to be doing things way out of their area of expertise, and no matter how you structure that role and what tools you give them, you are setting them up for failure.
Another thing I was thinking about was not individual people being average but rather organizations: an average, mediocre organization cannot reliably expect excellence even out of its excellent employees. The environment just isn't set up for success. I am sure most people here are familiar with having talented people working in an organization that decided to poorly adopt Agile because it's the thing that will fix everything.
I think it's helpful to consider excellence in the context of principles and practice. Everyone should be aiming for excellent principles for their domain and environment. But the practice that implements the principles needs to be specific to the capabilities.
E.g. I think in principle, any company with any tech should have backups, and ideally the ability to rebuild after a disaster. If you have a team of unicorn engineers ("company A"), the principles might be put into practice by a meticulously tight and efficient SDLC and accompanying pipelines, tooling and other scripts and automations. If you have only one very competent engineer ("company B"), but a huge bunch of traditional ops people, that principle might have to be put into practice with a more manual, basic process (Bob from accounting once a week makes a copy of the important spreadsheets on a CD and stores it in a fireproof safe at home.)
Both businesses can deliver excellent and reliable service for their customers because they implement excellent principles in a way that their employees can deal with.
Swap company A and company B's practice, and even though it's for the same excellent principle, everything falls apart. I think we're in agreement here.
It's an uncomfortable truth that - on average - 50% of environments are company B. There's a problem that 90% of companies want their practices to look like company A. Nobody wants to feel mediocre.
1. Optimal practices are highly context specific.
2. The search space is very high dimensional and more often than not we are blind to most of it.
Care to elaborate on these points with anecdotes?
And yet we keep trying. We continually invent languages, architectures, and development methodologies with the thought that they will help average developers build better systems.
That’s a pretty toxic statement. No one was born writing perfect code, every one of us needed to learn. An organizational culture that rewards learning is the way to produce an organization that produces excellent software.
It doesn’t matter how excellent someone is if the organization puts them to work on problems that nobody needs solved. The most perfect piece of software that nobody uses is a waste of bytes no matter how well it’s written.
You can certainly structure your organisation around that and achieve great results. There’s no need for excellent tech to solve most problems. That’s ostensibly true for almost everything, because our society wouldn’t be able to survive if everything needed to be excellent.
Sometimes you want to learn, but your talents fail you.
Sometimes you want to learn, and you learn from someone who isn't good at the thing.
Sometimes you want to learn, and your environment punishes effectiveness.
Sometimes you want to learn, and you learn correctly for the wrong context.
Yes, people who give a damn about doing their job well are often good employees. But the converse is not true: many people who are not good employees do in fact care a great deal. There's no need to convert a descriptive statement about results into a moral judgement of a person's work ethic, and it's often just not factually correct to do so.
> Sometimes you want to learn, and you learn from someone who isn't good at the thing.
> Sometimes you want to learn, and your environment punishes effectiveness.
> Sometimes you want to learn, and you learn correctly for the wrong context.
Should you just blame not having learned on that and call it a day?
Sure, some people learn without much effort, but I've seen an equal amount of people that had to put in a massive amount of effort to be considered 'smart' (whatever that means).
I think I consider “well, I tried, it didn’t work, so it’s forever impossible” to be the definition of mediocre. That’s actually still pretty good, as many don’t even try, they’ll give something up as impossible before even starting.
I know people have reasons for that, but I don’t like it.
Your framing is someone making the immediate assumption that any failure derives from barriers beyond their control. And I agree, that's just as incorrect as assuming that no failure is ever due to barriers beyond their control. (It's also really demotivating.) But I think your framing is overly judgmental, because in fact most people who struggle at something have tried many times not to, they just haven't succeeded.
When speaking of other people, you often do not have good context on what they have or haven't tried. They have enormous context that you don't, often context that would be difficult or impossible for you to understand from the outside even if you did have it. And to confidently judge that they just don't care without that context, without any awareness of what they have or haven't done, assumes way too much.
For me, doing advanced mathematics is easier than not feeling self-conscious when I talk to a grocery store clerk. Founding a company is easier than reliably keeping the clutter off of my desk. Walking many miles is easier than doing a push-up. I could explain to you why these things make sense within my particular context, but it would take many, many pages of trying to tell you who I am to do that. If you were to judge that I don't care about not being socially awkward, or that I don't care about having a clean room, you would be wrong - those things are just harder for me than they are for the average person, and I choose (implicitly or explicitly) to put my efforts elsewhere.
I don't think that recognizing that is accepting mediocrity. I believe in excellence a great deal! But I think that if you want to seek excellence, you have to do it in ways that recognize what you are. Most of the time, you have to work with what you are and figure out ways to make that work. Once in a great while, something about what you are is so fundamentally at odds with what you want that you have to, at tremendous effort, change yourself. But the latter isn't something you can do every day or in every way.
"I've tried, so it's forever impossible" isn't the position I'm arguing for. The position I'm arguing for is: "I've tried, and it was very difficult, maybe more difficult for me than it is for you. I'm a human being whose resources and motivation (and judgment!) are finite, so I had to make a choice between working on this very hard thing and working on something else or taking some time to recuperate, and this time I decided some other option was better."
While we might not like it to be true, no amount of process can turn the average person into a Michael Jordan, a Jimi Hendrix or a Leo Tolstoy. Assuming that software engineering talent is somehow a uniform distribution and not a normal one like every other human skill is likely incorrect. Don't conflate "toxic" with "true".
Startup architecture and a 500+ engineer org's architecture are fundamentally different. The job titles are the same, but won't reflect the actual work.
Of course that's always been the case, and applies to the other jobs as well. What a "head of marketing" does at a 5 person startup has nothing to do with what the same titled person does at Amazon or Coca Cola.
I've also seen many orgs basically retitling their infra team members as "devops" and calling it a day. Communication between devs from different parts of the stack has become easier, and there will be more "bridge" roles in each team, with an experienced dev also making sure it all works well, but I don't see any company that cares about their stack and performance firing half of their engineers because of a new fancy trend in the operations community.
Certainly. The startup architecture is often better. I don't know what exactly leads to those overcomplicated enterprise things, but I suspect it's a lack of limitations.
Someone mentioned DevSecOps upthread. The breaking down of organizational barriers really was more of a smaller company sort of thing. Platform Engineering and SREs are a better model of how this has evolved at scale.
And all these people have been given fodder by the microservices revolution.
Well, no, the difference in quantity of business functions can be enormous due to duplication. A small business will have an accounting business function, but a large business may have 57 separate accounting business functions due to having operations in different countries, having subsidiaries and mergers & acquisitions (possibly multiple concurrently ongoing ones) where you will merge accountings (multiple!) from the new acquisition into yours, but that has not yet happened.
If you have 5 people in a startup you have 10 connections between them, 20 people = 190 connections, 100 = 4950 connections, 1000 people = 499500 connections.
Sure, you split them into groups with managers and managers' managers etc. to break down the connections to less than the max, but it's still going to be orders of magnitude more communication and coordination needed than in a startup.
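(If anyone wants to sanity-check those numbers, it's just the pairwise-connections formula n(n-1)/2; a throwaway sketch:)

    # Pairwise communication paths between n people: n * (n - 1) / 2
    def connections(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 20, 100, 1000):
        print(n, connections(n))   # 10, 190, 4950, 499500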
In a larger company, literally 80% of your job is meetings, Slack, and distractions.
I think it entirely has to do with a generation of software people getting into the field (understandably) because it makes them a lot of money, rather than because they're passionate about software. These, by-and-large, are mediocre technical people and they tend to hire other mediocre technical people.
When I look around at many of the newer startups that are popping up, they're increasingly filled with really talented people. I chalk this up largely to the fact that people that really know what they're doing are increasingly necessary to get a company started in a more cash constrained environment, and those people are making sure they hire really talented people.
Tech right now reminds me so much more of tech around 2004-2008, when almost everyone that was interested in startups was in it because they loved hacking on technical problems.
My experience with Cursor is that it is excellent at doing things mediocre engineers can do, and awful at anything more advanced. It also requires the ability to very quickly understand someone else's code.
I could be wrong, but my suspicion is this will allow a lot of very technical engineers, that don't focus on things like front-end or web app development, to forgo needing to hire as many junior webdev people. Similar to how webmasters disappeared once we had frameworks and tools for quickly building the basic HTML/CSS required for a web page.
I don't know. Curiosity, passion, focus, creative problem solving seem to me much more important criteria for an engineer to have, rather than bitwise operations. An engineer that has these will learn everything needed to get the job done.
So it seems like we all got off the main road, and started looking for shibboleths.
I can see having to refresh on various bit shifting techniques for fast multiplication etc (though any serious programmer should at least be aware of this), but XOR is fundamental to even knowing the basics of cryptography. Bitwise AND, NOT, and OR are certainly something every serious programmer should have a very solid understanding of, no?
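To make the XOR point concrete, here's a toy sketch (a repeating-key XOR, purely illustrative and nothing like real cryptography) of why it shows up everywhere: applying the same key twice gives the plaintext back.

    # Toy repeating-key XOR: encryption and decryption are the same function,
    # because (p ^ k) ^ k == p. Illustrative only - not real cryptography.
    def xor_bytes(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    msg = b"hello"
    ct = xor_bytes(msg, b"\x5a\xa5")
    assert xor_bytes(ct, b"\x5a\xa5") == msg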
I do know how bitwise AND, NOT, OR, XOR work but I don't solve problems on day to day basis that use those.
If you give me two bytes to do the operations manually it is going to be a piece of cake for me.
If you give me a problem where the most optimal solution uses a combination of those operations, I most likely will use another solution that won't be optimal, because in my day to day work I use totally different things.
A bit of a tangent is that this is also why we have "loads of applicants willing to do tech jobs" and at the same time a "skilled workers shortage" - if a company needs someone who can optimally solve problems with bitwise operations "right here right now", I am not the candidate; given a couple of months working on those problems I would get proficient rather fast - but no one wants to wait a couple of months ;)
Yet they'll take a couple of months interviewing folks, and generally not find the unicorn they're after.
Can you tell me how this knowledge makes one a serious programmer? In which moments of the development lifecycle they are crucial?
At the same time, that might be a good indication of passion: a useless but foundational thing you learn despite having zero economic pressure to do so.
In the domains you have worked on, what are examples of such things?
But then I started investigating type systems and proof languages and discovered them through Boolean algebras. I didn’t work in that space, it was just interesting. I later learned more practically about bits through parsing binary message fields and wonder what the deal with endianness was.
I also recall that yarn that goes around from time to time about Carmack’s fast inverse square root algorithm. Totally useless and yet I recall the first time I read about it how fun a puzzle it was to work out the details.
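(If anyone wants to replay that puzzle, here's a rough Python transcription of the idea - reinterpret the float's bits as an integer, apply the commonly cited magic constant and shift, then one Newton step. A sketch, not the original C:)

    import struct

    def fast_inv_sqrt(x: float) -> float:
        # Reinterpret the 32-bit float's bits as an unsigned integer,
        # do the magic-constant shift trick, reinterpret back,
        # then refine with one Newton-Raphson step.
        i = struct.unpack('<I', struct.pack('<f', x))[0]
        i = 0x5f3759df - (i >> 1)
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        return y * (1.5 - 0.5 * x * y * y)

    print(fast_inv_sqrt(4.0))   # roughly 0.5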
I’ve encountered them many times since then despite almost never actually using them. Most recently I was investigating WASM and ran across Linear Memory and once again had an opportunity to think about the specifics of memory layout and how bytes and bits are the language of that domain.
I first got to enjoy low level programming when reading Fabien Sanglard breaking down Carmack's code. Will look into WASM, sounds like it could be a fun read too.
The developer that implemented them could have used a few bools but decided to cram it all into one byte using bitwise operators because they were trying to seem smart/clever.
This was a web app, not a microcontroller or some other tightly constrained environment.
One should not have to worry about solar flares! Heh.
Maybe. Every time I write something I consider clever, I often regret it later.
But young people in particular tend to write code using things they don’t need because they want experience in those things. (Just look at people using kubernetes long before there is any need as an example). Where and when this is done can be good or bad, it depends.
Even in a web app, you might want to pack bools into bytes depending on what you are doing. For example, I’ve done stuff with deck.gl and moving massive amounts of data between webworkers and the size of data is material.
It did take a beat to consider an example though, so I do appreciate your point.
Coming from a double major including EE though, all I have to say is that everyone's code everywhere is just a bunch of NOR gates. Now if you want to increase your salary, looking up why I say "everything is NOR" won't be useful. But if you are curious, it is interesting to see how one Boolean operator can implement everything.
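(For anyone curious enough to skip the web search, a minimal sketch of the claim: NOT, OR, and AND can all be built from NOR alone, and from those you get everything else.)

    # NOR is functionally complete: NOT, OR and AND fall out of it directly.
    def nor(a: int, b: int) -> int:
        return 1 - (a | b)

    def not_(a: int) -> int:
        return nor(a, a)

    def or_(a: int, b: int) -> int:
        return not_(nor(a, b))

    def and_(a: int, b: int) -> int:
        return nor(not_(a), not_(b))

    bits = (0, 1)
    assert all(or_(a, b) == (a | b) and and_(a, b) == (a & b) for a in bits for b in bits)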
Nobody that uses bit flags does it because they think it makes them look clever. If anything, they believe it looks like using the most basic of tools.
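For anyone who hasn't run into the pattern, a minimal sketch of what those bit flags look like in practice (the flag names here are made up for illustration):

    # Hypothetical flags packed into a single int instead of separate bools.
    IS_ACTIVE = 1 << 0
    IS_ADMIN  = 1 << 1
    HAS_2FA   = 1 << 2

    flags = 0
    flags |= IS_ACTIVE | HAS_2FA       # set two flags
    flags &= ~IS_ADMIN                 # clear one (a no-op here, shown for completeness)

    print(bool(flags & HAS_2FA))       # True: test a flag
    print(bool(flags & IS_ADMIN))      # False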
> One should not have to worry about solar flares!
Do you legitimately believe that a bool is immune to this? Yeah, I get this a joke, but it's one told from a conceit.
This whole post comes off as condescending to cover up a failure to understand something basic.
I get it, someone called you out on it at some point in your life and you have decided to strike back, but come on... do you really think you benefit from this display?
I made a concerted effort to understand the code before I made any effort to adapt it to the repo I was working on. I'm glad I did (although honestly, it wasn't in the remotest bit necessary to solve the task at hand!)
> someone who has "curiosity, passion, focus, creative problem solving" regarding programming
would find this on their journey. Whereas you are describing someone who merely views programming as a job.
It's perfectly fine to view programming as "just work" but that's not passion. All of the truly great engineers I've worked with are obsessed with programming and comp sci topics in general. They study things in the evening for fun.
It's clear that you don't, and again, that's fine, but that's not what we're talking about.
Software for you is a job. I work in software because it's an excuse to get paid to do what I love doing. In the shadow of the dotcom bust most engineers were like this, and they were more technical and showed more expertise than most software engineers do today.
When the curious engineer is reading a book about their chosen programming language. Most likely in the part which describes operators.
Or when they parse something stored in binary.
The thing is there are an unfathomable amount of such things to learn, and if somebody doesn't stumble upon them or won't spend time with them, it doesn't indicate a lack of curiosity.
Bitwise operators are not a particularly complex topic and extend from the basics of how logic is implemented in a computer system.
Personally I think everyone interested in programming should have, at least once, implemented a compiler (or at least an interpreter). But a compiler is a substantial amount of work, so I understand that not everyone can have this opportunity. But understanding bitwise operators requires a minimal investment of time and is essential to really understanding the basics of computing.
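If "implement a compiler" sounds intimidating, the interpreter end of that spectrum can be tiny. A toy sketch (recursive descent over +, -, *, / and parentheses, integer-only, no error handling) just to show the shape of the exercise:

    import re

    # Toy arithmetic interpreter: tokenize, then recursive-descent evaluate.
    def tokenize(src: str):
        return re.findall(r"\d+|[-+*/()]", src)

    def parse_expr(tokens):
        value = parse_term(tokens)
        while tokens and tokens[0] in "+-":
            op = tokens.pop(0)
            rhs = parse_term(tokens)
            value = value + rhs if op == "+" else value - rhs
        return value

    def parse_term(tokens):
        value = parse_atom(tokens)
        while tokens and tokens[0] in "*/":
            op = tokens.pop(0)
            rhs = parse_atom(tokens)
            value = value * rhs if op == "*" else value // rhs
        return value

    def parse_atom(tokens):
        tok = tokens.pop(0)
        if tok == "(":
            value = parse_expr(tokens)
            tokens.pop(0)          # consume the closing ")"
            return value
        return int(tok)

    print(parse_expr(tokenize("2 + 3 * (4 - 1)")))   # 11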
This weird elitism drives me mad, but maybe truly I haven't done enough to prove my worthiness in this arena. Maybe PHP or JavaScript is not true programming.
What other elementary topics are considered too nerdy by the cool people in IT today? Binary numbers, pixels...? What are the cool kids passionate about these days? Buzzwords, networking, social networks...?
I think he's right; nearly every engineer needs a basic understanding of IP routing more today than in the 2000s, with how connected the cloud is. Few engineers do, however.
Every time you need to ask yourself, "Where is this packet going?" you're using bitwise operators.
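Concretely, the routing decision is just a mask-and-compare; a rough sketch (addresses made up):

    import ipaddress

    # "Where is this packet going?" boils down to: (dest & netmask) == network
    dest    = int(ipaddress.IPv4Address("192.168.1.77"))
    network = int(ipaddress.IPv4Address("192.168.1.0"))
    netmask = int(ipaddress.IPv4Address("255.255.255.0"))

    print((dest & netmask) == network)   # True: stays on the local /24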
There is absolutely truth to what you're saying. But like most obvious observations that aren't playing themselves out, there's more to it than that.
-----
One: curiosity, passion, and focus matter a lot. Every company that hires through us is looking for them in one form or another. But they still have to figure out a means by which to measure them.
One company thinks "passion" takes the form of "wanting to start a company someday", and worries that someone who isn't that ambitious won't be dedicated enough to push past the problems they run into. But another thinks that "passion" is someone who wants to tinker with optimizing some FPGA all night long because the platonic ideal of making a 15% more efficient circuit for a crypto node is what Real Technical Engineers do.
These companies are not just looking for passion in practice, but for "passion for".
And so you might say okay, screen for that. But the problem is that passion-for is easily faked - and is something you can easily fail to display if your personality skews the wrong way.
When I interviewed at my previous company many years ago, they asked me why I wanted to work there. I answered honestly: it seemed like a good job and that I'd be able to go home and sleep at night. This was a TERRIBLE answer that, had I done less well on other components of the interview or had they been interviewing me for more than a low-level job, would likely have disqualified me. It certainly would not have convinced them I had the passion to make something happen. But a few years later I was in the leadership of that company, and today I run a successor to it trying to carry the torch when they could not.
If you asked me the same question today about why I started a company, an honest answer would be similar: I do this business because I know it and because I enjoy its challenges, not because it's the Most Important Thing In The World To Me. I'm literally a startup founder, and I would not pass 90% of "see if someone's passionate enough to work at a startup" interview questions if I answered them honestly.
On the flip side, a socially-astute candidate who understands the culture of the company and the person they're interviewing with can easily fake these signals. There is a reason that energetic extraverts tend to do well in business - or rather, there are hundreds of reasons, and this is one of them. Social skills let you manipulate behavioral interviews to your advantage, and if you're an interviewer, you don't want candidates doing that.
So in effect, what you're doing here is replacing one shibboleth that has something to do with technical skill, with another that is more about your ability to read an interviewer and "play the game". Which do you think correlates better with technical excellence?
-----
And two: lots of people are curious, passionate, energetic, and not skilled.
You say that a person with those traits "will learn" everything needed. That might even be true. But "will learn" can be three years, five years, down the line.
One of the things we ask on our interview is a sort of "fizzbuzz-plus" style coding problem (you can see a similar problem - the one we send candidates to prep them - at https://www.otherbranch.com/practice-coding-problem if you want to calibrate yourself on what I'm about to say). It is not a difficult problem by any normal standard. It requires nothing more than some simple conditional logic, a class or two, and the basic syntax of the language you're using.
Many apparently-passionate, energetic, and curious engineers simply cannot do it. The example problem I linked is a bit easier than our real one, but I have reliable data on the real one, which tells me that sixty-two percent of candidates who take it do not complete even the second step.
Now, is this artificial? Yeah, but it's artificially easy. It involves little of the subtlety and complexity of real systems, by design. And yet very often we get code that (translated into the example problem I linked) is the rough equivalent of:
print("--*-")
with no underlying data structure, or if (row == 1 && col == 3)
where the entire board becomes an immutable n^2 case statement that would have to be wholly rewritten if the candidate were to ever get to later steps of the problem. Would you recommend someone who wrote that kind of code, no matter how apparently curious, passionate, or energetic they were?
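To make the contrast concrete, here's a hypothetical sketch (not our actual problem, just the shape of an answer that survives later steps): the bar is nothing more than a tiny mutable data structure behind the printed output.

    # Hypothetical board in the spirit of the example above: a trivial data
    # structure that later steps can mutate, instead of hardcoded output.
    class Board:
        def __init__(self, rows: int, cols: int):
            self.cells = [["-" for _ in range(cols)] for _ in range(rows)]

        def mark(self, row: int, col: int, symbol: str = "*"):
            self.cells[row][col] = symbol

        def render(self) -> str:
            return "\n".join("".join(row) for row in self.cells)

    board = Board(1, 4)
    board.mark(0, 2)
    print(board.render())   # --*-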
If I was at work, that wouldn't be a problem, take 3 minutes and come back to it. You don't have that in an interview.
I'm thinking of the observation from the speed climbing event at the Olympics. They had set up the course wrong, and one of the holds in one of the lanes was like 2cm out of position-- and it wasn't even a hold that was used by the climbers. But that was enough that people in that lane consistently lost.
It also goes back to if you're passionate about the technology, you'll be willing to spend a weekend learning something new vs just doing the bare minimum to check off this week's sprint goals.
Think of your "DevSecOps" people doing 3x the work they should. Do you know what else are they doing? Booking their own business travels. Reconciling and reporting their own expenses. Reporting their own hours, broken down by business categories. Managing their own vacation days. Managing their own meetings. Creating presentations on their own, with graphics they made on their own. Possibly even doing 80% of work on lining up purchases from third parties. And a bunch of other stuff like that.
None of these are part of their job descriptions - in fact, all of these are actively distracting and disproportionally compromise the workers' ability to do their actual jobs. All of these also used to have dedicated specialists, that could do it 10x as efficiently, for fraction of the price.
My hypothesis is this: those specialists like secretaries, internal graphics departments, financial staff, etc. they all were visible on the balance sheet. Eliminating those roles does not eliminate the need for their work to be done - just distributes it to everyone in pieces (in big part thanks to self-serve office software "improving" productivity). That slows everyone down across the board disproportionally, but the beancounters only see the money saved on salaries of the eliminated roles - the slowdown only manifests as a fuzzy, generic sense of loss of productivity, a mysterious costs disease that everyone seems to suffer from.
I say it's not mysterious; I say that there is no productivity gain, but rather productivity loss - but because it turns the costs from legible, overt, into diffuse and hard to count, it's easy to get fooled that money is being saved.
It feels like you are working at the same company as me.
Companies complain all the time about how difficult it is to find competent developers, which is their excuse for keeping most of the teams understaffed. Okay, then how about increasing the developers' productivity by letting them focus on, you know, development?
Why does the paperwork I need to do after visiting a dentist during the lunch break take more time than the visit itself? It's not enough just to bring the receipt to HR; I need to scan the paper, print the scan, get it signed by a manager, start a new process in a web application, ask the manager to also sign it in the web application, etc. I need to check my notes every time I am doing it, because the web application asks me a lot of information that in theory it should already know, and the attached scan needs to have a properly formatted file name, and I need to figure out the name of the person I should forward this process to, rather than the application figuring it out itself. Why??? The business travels were even worse, luckily my current company doesn't do those frequently.
My work is defined by Jira tickets that mostly contain a short description like "implement XY for Z", and it's my job to figure out wtf is "Z", who is the person in our company responsible for "Z", what exactly they meant by "XY", where is any specification, when is the deadline, who am I supposed to coordinate with, and who will test my work. I miss the good old days when we had some kind of task description and the definition of done, but those were the days when we had multiple developers in a team, and now it's mostly just me.
I get invitations to meetings that do not concern me or any of my projects, but it's my job to figure that out, not the job of the person who sent the invitations. Don't get me started on e-mails, because my inbox only became manageable after I wrote dozen rules that put various junk in the spam folder. No, I don't need a notification every time someone in the company made an edit on any Confluence page. No, I don't need notifications about people committing code to projects I am not working on. The remaining notifications often come in triplicate, because first I get a message in Teams, then an e-mail saying that I got a message in Teams, and finally a Windows notification saying that I got a new e-mail. When I return from a vacation, I spend my first day or two just sorting out the flood in my e-mails.
On some days, it is lunchtime before I had an opportunity to write my first line of code. So it's the combination of being Agile-Full-Stack-Dev-Sec-Ops-Cloud-Whatever and the fact that everything around me seems designed to make my work harder that is killing me. This is a system that slows down 10x developers to 1x developers, and us lesser mortals to mere 0.1x developers.
So, either capitalism doesn't work, or your thesis isn't quite right...
I have two other counters to offer, first we have seen GDP per capita gradually increasing in major economies for the last 50 years (while the IT revolution has played out). There have been other technical innovations over this time, but I believe that GDP per capita has more than quadrupled in G8 economies. The USA and Canada have, at the same time, enjoyed a clear extra boost from fracking and shale extraction, and the USA has arguably enjoyed an extra extra boost from world dominance - but arguably.
The second one is simple anecdote. Hour for hour, I can now do far more in terms of development than I did when I was a hard core techie in the 90's and 2000's. In addition I can manage and administer systems that are far more complex than those it took teams of people to run at that time (try running a 10gb size db under load on oracle 7 sitting on top of spinning rust and a 64mb ram store for fun). I can also manage a team of 30's expenses, timesheets, travel requests and so on that again would have taken a person to do. I can just do these things and my job as well, and I do it mostly in about 50 hrs a week. If I wasn't involved in my people's lives and happy to argue with customers to get things better I could do it in 40 hrs regularly, for sure. But I put some discretion in.
My point is - we are just more productive. It is hard to measure, and anecdote / "lived experience" is a bad type of evidence, but I think it's clearly there. This is why the accountants have been able to reorganise modern business organisations to use fewer people to do more. Have they destroyed value while doing this - totally - but they have managed to get away with it because 7/10 times they have been right.
Personally I've suffered from the 3/10 errors. I know many of us on here have, but we shouldn't shut our eyes because of that.
That’s really not how competition works in practice. Verizon and AT&T are a mess internally but their competitors were worse.
GDP per capita has a lot more to do with automation than individual worker productivity. Software ate the world, but it didn’t need to be great software to be better than no software.
At large banks you often find thousands of largely redundant systems from past mergers all chugging along at the same time. Meanwhile economies of scale still favor the larger bank because smaller banks have more systems per customer.
So sure you’re working with more complex systems, but how much of that complexity is actually inherently beneficial and how much is legacy of suboptimal solutions? HTML and JavaScript are unbelievably terrible in just about every way except ubiquity thus tools / familiarity. When we talk about how efficient things are, it’s not on some absolute scale it’s all about the tools built to cope with what’s going on.
AI may legitimately be the only way programmers in 2070 deal with ever more layers of crap.
Yes, it's definitely this.
Btw, the key skill you're leaving out is to understand the business your company is in.
If you can couple even moderate developer ability with a good understanding of business objectives, you may stay relevant even while some of the pure developers are replaced by AI.
I went through this myself early in my career. I did ML at insurance companies and got branded as an insurance ML guy. Insurance companies don't pay that well and there are a limited number of them. After I got out of that lane and got some name-brand tech experience under my belt, job hunting was much easier and my options opened up. I make a lot more money. And I can always go back if I really want to.
If you're an "ML insurance guy" outside of the US, it may be quite lucrative compared to other developers. It's really only in the US (and maybe China) that pure developers are demanding $200k+ salaries.
In most places in Europe, even $100k is considered a high salary, and if you can provide value directly to the business, it will add a lot to your potential compensation.
And in particular, if your skills are more about the business than the tech/math of your domain, you probably want to leverage that, rather than trying to compete with 25-year-olds with a strong STEM background, but poor real-life experience.
I think it's worth adding here that US developers can have a much higher burden for things like health insurance. My child broke his arm this year, and we hit our very high deductible.
I would like to see numbers factoring in things like public transportation, health insurance, etc., because I personally feel like EU vs US developers are a lot closer in quality of life after all the deductions.
I am in my early 50s, and having worked in tech for the last 24 (with sane hours and never at the FAANG salaries), I own my condo in a nice town, my kids' college is paid for, and my personal accounts are well into 7 digits.
This is not all roses: schools in the US stink (I have grown up on the other side of the pond and was lucky to get a great science education in school so I can see the difference), politics are polarized, supermarket produce is mediocre, etc.
The biggest issue for me though is that I suspect that the societies on both sides of the pond are going to go through major changes in the next 10-15 years and many social programs will become unaffordable. I see governments of every first world country making crazy financial and societal choices so rather than depending on government to keep me alive and well I much prefer US salaries allowing me to have money in my own hands. I can put some of that into gold or Bitcoin, or buy an inexpensive apartment in a quiet country with decent finances and reasonable healthcare. Not being totally beholden to the government is what helps me sleep well at night. My 2c.
In similar places in the US it may not even be the risk of criminals that is the largest threat. It may simply be that the road network is built for cars only, with few safe ways to cross roads without a vehicle.
By comparison, where I live, parents are expected to act as a kind of traffic police a couple of mornings every year. That means that every place where the kids have to cross the road will have an adult blocking all cars from passing even if a kid is merely getting close (even if the speed limit is only 30km/h or 20mph)
In other words, pedestrians get the highest priority while motorists are treated as second class.
Nation wide, about 50-60% of the kids will walk or ride a bike to school in my country (and those who don't tend to either live far from the school or in a higher crime area).
Compared to ~10% in the US.
Also, while in the US kids of low income households are more likely to (have to) walk to school.
In my country, it's possible that the relationship is, if anything, inversed. Having the kids walk to school is seen by many resourceful families as healthy, both from the physical activity in a screen-rich world and to teach them to be independent and confident.
That means that in neighborhoods with a large percentage of such parents the parents are likely to ensure that the route to school is safe and walkable for kids.
However, some US areas have competitively priced housing and jobs that would make the balance tilt in favor of America. In EU, affordable spots with lots of desirable local jobs are becoming increasingly rare. Perhaps Vienna, Wrocław and a few other places in Central/Eastern EU.
"Water supply issues are already holding back housing development around the city. In its previous draft water resources management plan, Cambridge Water failed to demonstrate that there was enough to supply all of the new properties in the emerging local plan without risk of deterioration.
The Environment Agency recently confirmed that it had formally objected to five large housing developments in the south of the county because of fears they could not sustainably supply water. It has warned that planning permission for more than 10,000 homes in the Greater Cambridge area and 300,000 square metres of research space at Cambridge University are in jeopardy if solutions cannot be found.
A document published alongside the Case for Cambridge outlines the government’s plan for a two-pronged approach to solving the water scarcity issue, to be led by Dr Paul Leinster, former chief of the Environment Agency, who will chair the Water Scarcity Group.
In the long term, supply will be increased, initially through two new pieces of infrastructure: a new reservoir in the Fens will deliver 43.5 megalitres per day, while a new pipeline will transfer 26 megalitres per day from Grafham Water, currently used by Affinity Water.
But, according to Kelly, a new reservoir would only solve supply requirements for the existing local plan and is “not sufficient if you start to go beyond that” – a point that is conceded in the water scarcity document. "
https://www.building.co.uk/focus/a-vision-for-150000-homes-b...
Same thing with transport. "We can't build new houses because it would increase car traffic", meanwhile putting up every barrier they can think of to stop East-West Rail.
The pharma & biotech sector is blooming in the Golden Triangle, which also includes London.
edit: Yes they do employ a ton of people, but most people I know don’t make those salaries.
I live in a location that wouldn't have public transportation even in Europe. And my healthcare, while not "free," was never expensive outside of fairly small co-pays and a monthly deduction that wasn't cheap but was in the hundreds per month range. Of course, there are high deductible policies but that's a financial choice.
If that's worth more than $50k or so, anyone living in the US who's not making significantly more than the median wage would be in a pretty horrible spot financially.
> hit our very high deductible.
Isn't the maximum deductible that's allowed "only" $16k?
Also taxes are usually significantly higher in Europe with some exceptions (e.g. Switzerland vs California, though you need to pay for your health insurance yourself in Switzerland ).
I have never seen this happen. All the new grads I've ever worked with (from Ivy League schools as well) are pretty much incapable of doing anything without a lot of handholding; they lack so much of everything: experience, context, business knowledge, political awareness, technical knowledge (I still can't believe "Designing Data-Intensive Applications" isn't a required read in college for people that want big tech jobs) and even the STEM stuff (if they're so good, why do I have to explain why we don't use averages?).
Which is fine, they are new to the market, so they need to be educated. Other than CRUD apps, I doubt these people can do anything complex without supervision; we're not even in the same league to compete with each other.
I got branded as a compiler guy. My salary is $0.
Pay up. Pay up. Pay up. That's why!
By 'stay relevant' you mean run your own? Ability to build + align to business objectives = $$$, no reason to be an Agile cog at that point.
In my banking corp, in the past 13 years I've seen a massive rise of complexity, coupled with an absolutely madly done bureaucracy increase. I still could do all the stuff that is required, but - I don't have access. I can't have access. A simple task became 10 steps of negotiating with an obscure Pune team that I need to chase 10x and escalate until they actually recognize there is some work for them. Processes became beyond ridiculous: you start something and it could take 2 days or 3 months, who knows. Every single app will break pretty quickly if not constantly maintained - be it some new network stuff, an unchecked unix update, or any other of a trillion things that can and will go wrong.
This means - paper pushers and folks at best average at their primary job (still IT or related) got very entrenched in processes and won, and the business gets served subpar IT, projects over time and thus budget, perpetuating the image of shitty tolerated evil IT.
I stopped caring, work to live is more than enough for me, that 'live' part is where my focus is and life achievements are.
- my AC's app takes 45 s to load even if I just used it, because it needs to connect. Worse, I'll bring the temp down in my house and in the evening raise it, but it'll come on even when 5F below my target value, staying on for 15+ minutes leaving us freezing (5F according to __its thermometer__!)
- my TV controls are slow. Enough that I buffer inputs and wait 2-3 seconds for the commands to play. When pressing the exit button in the same menu (I turn down brightness at night because auto settings don't work, so it's the exact same behavior), idk if I'm exiting to my input, exiting the menu, or just exiting the sub menu. It's inconsistent!
There's so much that I can go on and on about, and I'm sure you can too. I think one of the worst parts about being a programmer is that I'm pretty sure I know how to solve many of these issues, and in fact sometimes I'll spend days to tear apart the system to actually fix it. Of course, only to be undone by updates that are forced (app or whatever won't connect because why is everything done server side ( ┛ ◉ Д ◉ ) ┛ 彡 ┻ ━ ┻ ). Even worse, I'll make PRs on open projects (or open issues another way and submit solutions) that have been working for months and they just go stale, while I see other users reporting the same problems and devs working on other things in the same project (I'll even see them deny the behavior or just respond "works for me" and close the issue before the opener can respond).
I don't know how to stop caring because these things directly affect me and are slowing me down. I mean how fucking hard is it to use sort? It's not even one line!
What the fuck is wrong with us?
Fwiw, my TV just ends up being a big monitor because all the web apps are better and even with all the issues jellyfin has, it's better than many of those. I just mostly use a mouse or kdeconnect.
Speaking of which, does anyone have a recommendation for an android keyboard that gives me things like Ctrl and super keys? Also is there a good xdotool replacement for Wayland? I didn't find ydotool working as well but maybe I should give it another try.
I can suggest this setup and think it'll work for many. My desktop sits behind my TV because it mostly does computational work, might run servers, or gaming. I'm a casual and so 60fps 4k is more than enough even with the delay. Then I just ssh from my laptop and do most of the work from there. Pretty much the same as my professional work, since I need to ssh into hpc clusters, there's little I need to do on the machine I'm physically in front of (did we go back to the 70's?)
This is simple: we can't just trust each other. When programming started, people were mostly interested in building things, and there was little incentive to spoil other peoples work. Now there is money to be made, either through advertising, or through malpractice. This means that people have to protect their code from others. Program code is obfuscated (compiled and copyright enforced) or stored in a container with a limited interface (cloud).
It's not a technical issue, it's a social issue.
Applying a social mindset to technical issues (asking your compiler to be your partner, and preparing them a nice dinner) is equally silly as applying a technical mindset to social issues.
> When programming started, people were mostly interested in building things, and there was little incentive to spoil other peoples work. Now there is money to be made, either through advertising, or through malpractice
Yeah, I lean towards this too. Signals I use now to determine good software usually are things that look auxiliary, because I'm actually looking for things that tell me the dev is passionate and "having fun." Like easter eggs, or little things where it looks like they took way too much time to make something unimportant pretty (keep doing this, devs. I love it and it's appreciated. Always makes me smile ^_^). But I am also sympathetic, because yeah, I also get tons of issues opened that should have been a google search or are wildly inappropriate. Though I try to determine if these are in good faith, because we don't get wizards without noobs, and someone's got to teach them.

But it all makes me think we forgot what all of this is about, even "the economy." Money is supposed to be a proxy for increasing quality of life. Not even just on a personal level. I'm happy that people can get rich doing their work and things that they're passionate about, but I feel that the way current infrastructures are, we're actively discouraging or handcuffing people who are passionate. Or that we try to kill that passion. Managers, let your devs "have fun." Rein them in so that they don't go too deep down rabbit holes and pull too far away, but coding (like any engineering or any science) is (also) an art. When that passion dies, enshittification ensues.
For a concrete example: I'm wildly impressed that, 99.9% of the time I'm filling out a form that includes things like a country or timezone, my country isn't either autoselected or a copy placed at the top (not moved! copied!). It really makes me think that, better than chasing leetcode questions for interviews, you ask someone to build a simple thing and what you actually look for is the details and little things that make the experience smoother (or the product better). Because it is hard to teach people about subtlety, much harder than teaching them a stack or specific language (and if they care about the little things they'll almost always be quicker to learn those things). Sure, this might take a little more work to interview people and doesn't have a precise answer, but programs don't have precise answers either. And given how long and taxing software interviews are these days, I wouldn't be surprised if slowing down actually ends up speeding things up and saving a lot of money.
Do they have a place to mail your complaints? Who forces you to use the app? Annoy them. Annoy their boss. Write a flier pointing out how long it takes to use the app and leave those fliers in every laundry room you visit (Someone checks on those machines, right?). Heck, send an organization-wide email pointing out this problem. (CC your mayor, council-member, or congressional representative.) (You don't have to do all of these things, but a bit of gentle, non-violent, public name-and-shame can get results. Escalate gently accordingly as you fail to get results.)
> my AC's app takes 45 s to load even if I just used it, because it needs to connect
If I were in your shoes, assuming I had time, I might (a) do the above "email the company with a pointed comment about how their app sucks" or (b) start figuring out how to use Home Assistant as a shim / middleman to manage the AC, and thus make Home Assistant server and its phone app the preferred front-end for that system (c) write a review on your preferred review site indicating the app is a pile of garbage
> Even worse, I'll make PRs on open projects (or open issues another way and submit solutions) that having been working for months and they just go stale while I see other users reporting the same problems and devs working on other things in the same project (I'll even see them deny the behavior or just respond "works for me" closes issue before opener can respond)
Admittedly, the heavy-handed solution for this is to make a software fork, or a mod-pack, or "godelski's bag of fixes" or whatever, and maintain that (ideally automating the upkeep on that) until people keep coming to you for the best version, rather than the primary devs.
---
No, I don't do this to everyone I meet or for every annoyance (it's work, and it turns people away if you complain about everything), but emails to the top of the food chain pointing out that basic stuff is broken sometimes gets results (or at least a meeting where you can talk someone's ear off for 30 minutes about stuff that is wrong), especially if that mail / email also goes to other influential people.
I'm pretty chill and kind and helpful, but when something repeatedly breaks and annoys several people every day, you might hear about it in several meetings that I attend, possibly over the next year, until I convince someone to fix it (even if it's me who eventually is tasked with fixing it).
I can't hack everything.
I can't fix everything.
I need time for my own things I need time to be human.
But no one is having it. My impression is that while everyone else is annoyed by things there's a large amount of apathy and acceptance for everything being crud.
I know you're trying to help, but this is quite the burden on a person to try to fix everything. And things seem to be getting worse. So I'm appealing to the community of people who create the systems. I do not think these are things that can be fixed by laws. You can't regulate that people do their best or think things through.
I appeal here because if I can convince other developers that slowing down will speed things up, then many more will benefit (including the developers and the companies they work for). Even convincing a few is impactful. Even more when the message spreads.
Small investments compound and become mighty, but so do "shortcuts".
Presumably the building he lives in contracted out laundry service to some third party company, which is in charge of the machines and therefore shitty app. In this situation there really isn't any real choice to vote with your wallet. They can tell you to pound sand and there's nothing you can do. Your only real option is to move out. Maybe if OP is lucky, owns the property, and is part of the HOA he might be able to lobby for the vendor to be replaced.
Ummmm…
In my anecdotal experience, many New York City landlords don’t see their tenants as human beings, just a revenue source. Tenants complain? Maybe a city inspector shows up, a day or days later, so the landlord can turn the heat/water back on, and the inspector reports “no issue found.” People get mad and move out? New tenants pay an even higher rent! Heard horror stories about both individual and management company landlords. Can’t be the only city like this.
I’m pretty sure the long histories of social unrest under feudalism, the French Revolution, the mere existence of Marxism and Renter’s Rights Law, strongly beg to differ with your contention.
To put something in perspective a few months ago they changed mailing policy so that only the lease holder could pick up packages (citing safety. This is grad student housing, so already need to get your ID scanned to pick up a package or specifically authorize someone. Note that family and children can and do get university IDs -- not just for this). I wrote an email explaining that this was illegal (citing the relevant laws [0]) and explained how this decreases safety since not everyone is living with a responsible or even kind lease holder (luckily I'm the lease holder but I've been in that situation before. Student housing...). I got an annoyed letter back doubling down on the safety issue, so I escalated the issue twice (including reporting to the post office). I assume some lawyer finally saw it and freaked out. Now the policy is anyone in the house can pick up a package ( - _ - ; ). I've had no such success with the laundry app (they took away any other payment[1])
But my overall point is that 95% of these issues could be resolved by people taking a little more time to understand the consequences of their decisions. People will spend more time and energy defending bad decisions than resolving them. We all make mistakes, so that's not an issue. Especially since doing "good" is incredibly hard. But I'm pissed when we deny cracks in the system or that problems exist. I'm even okay with acknowledging it is low priority. But denial and gaslighting is what I more often experience, and that's when I get angry.
[0] which my local postman confirmed and even double checked for me
[1] fun fact: they once double charged me. I sent them the logs from the app, a screenshot, and a screenshot from my bank showing the charges. They claimed to have no record. So I issued a clawback. They didn't seem to care and tbh, there's lots of businesses like this because they have such dominance over the market.
I've seen clear illegal behavior from entire industries and reporting it does nothing. If you want an unambiguous example, pull up archive and pick a 3d printer manufacturer and read https://www.ecfr.gov/current/title-16/chapter-I/subchapter-B... (hell, I even tried getting this on the radar of YouTubers. No one cares). The false advertising they do is worse than the example the fucking FCC gives (when I emailed a few companies trying to let them know in case they weren't aware, they said what they did was legal. My email to them was really just "hey, I noticed this. Figured you didn't know, so maybe this will help so someone doesn't try to sue you". Not asking for anything in return because all I want is to stop playing these stupid games that eat up so much of my time).
I need to stress so much -- especially because of how people think about AI and intelligence (my field of research[0]) -- that language is not precise. Language is not thoughts. Sure, many think "out loud" and most people have inner speech, but language (even inner speech) is compressed. You use language to convey much more complex and abstract concepts in your head to someone who is hopefully adapting their decompression to uncover your intended meaning. This isn't a Chinese room where you just look in the dictionary (which also shows multiple definitions for anything). I know it's harder with text and with integrating disconnected cultures, but we must not confuse language with thought, and I think this is ignored far too often. Ignoring it seems to just escalate the problems.
> Add some game-theory to the thinking on that?
And please don't be dismissive by hand waving. I'm sure you understand game theory. Which would mean you understand what a complex problem this is to actually model (to a degree of accuracy I doubt anyone could achieve). That you understand perturbation theory and chaos theory and how they are critical to modeling such a process, meaning you only get probability distributions as results.[0] I add this context because it decreases the number of people who feel the need to nerdsplain things to me that they don't research.
Like Rome, corruption, laziness and barbarians will tear it all down.
But to explain the point of frustration: I found the comment indistinguishable from public intellectual masturbation. I chastised pojzon for this because I believe my op demonstrates that I'm quite aware of the issue, and I think a reasonably intelligent person can infer that my understanding is deeper than what can be conveyed in a simple HN comment. I found your responses similar, being short quips.
But if you pay close attention to my op, you'll find that I'm making a call to other developers to join me in pushing back. To not create the problems in the first place.
The reason I do not take kindly to responses like those is that they are defeatist attitudes. Identifying problems is the easy part -- although not always easy in itself. But there's little value in that if the information is not used to pursue a means to resolve them. I appreciate this a bit more -- though I still find it lacking -- as at least you're suggesting references that one may wish to read to understand the issues more deeply (though I feel similarly about much of this class of sources).
I don't mind adding comments to help refine understanding, but short quips don't do that. We can do better than a mic drop. And clearly I'm not shying away from verbosity. In today's age we're too terse to be meaningful. To be truly terse and effective requires high skill and time to refine. It is not the nature of a typical conversation.
And I actually wholeheartedly disagree with many of these sources (despite agreeing with some points, I find the solutions either non-existent or severely lacking). The problem isn't that we need to remove complexity. That's a false notion. One of the many issues is that we need to recognize that complexity is unavoidable. Yes, make things as simple as possible, but no more. It's important to remember that truth and accuracy are bounded by complexity, while lies are not. Shying away from complexity is turning away from growth. Late stage capitalism and socialism have a similar fatal flaw: bureaucrats who believe everything can be understood from numbers on a spreadsheet.
We're people, not machines. The environment evolves with time. Our understanding becomes more nuanced and complex as we grow. But the fuzziness never goes away because we can never measure anything directly, even if it stayed the same. The most important thing stressed in experimental science is understanding your error and limitations of your model (at least that's what I was taught and what I teach my students). Because you only have models, and all models are wrong (though some are better).
There's so much I can say on this topic, and there's so much that has been said before. But no one wants to talk about the (Von Neumann's) elephant in the room.
Before your laundry app, your AC app, and your TV being smart, it may not have been "seamless" to do those chores or work with those appliances, but you can't say your experience is seamless now, based on your anecdotes. A hammer is not made better by putting a computer chip in it. Simpler. Is. Better.
I wouldn't even say it's bureaucrats who think everything can be understood from spreadsheets. It's Pichai, it's Nadella, it's Altman, saying software will eat the world and data will lead us to a singularity but they're the same crowd trying to tell us Juicero is the future.
Your problem is a systems problem. The only way you will solve this systems problem is by changing the purpose of the system when you don't have any control over the inputs, interactions, or motivations of the system, because the tech brahmins that license the software and demand more money for minimal effort hold all the levers, and the AI craze is their final gambit to accrue all the power, irreversibly, forever. That's why it's a predicament.
why are you doing the chasing? unless you're the project manager, comment "blocked by obscure Pune team" on the ticket and move on
In my experience the people who solely focus on the code end up being significantly less effective.
ah, see, so your job description includes, besides being a dev, also being a project manager. that's fine, there's nothing bad about it, it's just that your job requires a bit more from you than other places.
That's across the board, from the startups whose business plan is to be acquired at all costs, to the giant tech companies, whose business plan is to get monopoly power first, then figure out how to extract money later.
The field is wide open for a startup to do it right. Why not start one?
If it's the latter (number 2), then we need to start asking why American companies "doing about as well as possible" are incapable of producing secure and reliable software. Being able to produce secure and reliable software, in general and en masse, seems like a useful ability for a nation.
Reminds me of our national ability to produce aircraft; how's the competition in that market working out? And are we getting better or worse at producing aircraft?
If customers are willing to pay for X, and no companies make X available, you have a great case to make to a venture capitalist.
BTW, in every company I've worked for, the employees thought management was stupid and incompetent. In every company I've run, the employees thought I was stupid and incompetent. Sometimes these people leave and start their own company, and soon discover their employees think they're stupid and incompetent.
It's just a fact of life in any organization.
It's also a fact of life that anyone starting a business learns an awful lot the hard way. Consider Senator George McGovern (D), who said it best:
George McGovern's Mea Culpa
"In retrospect, I wish I had known more about the hazards and difficulties of such a business, especially during a recession of the kind that hit New England just as I was acquiring the inn's 43-year leasehold. I also wish that during the years I was in public office, I had had this firsthand experience about the difficulties business people face every day. That knowledge would have made me a better U.S. senator and a more understanding presidential contender. Today we are much closer to a general acknowledgment that government must encourage business to expand and grow. Bill Clinton, Paul Tsongas, Bob Kerrey and others have, I believe, changed the debate of our party. We intuitively know that to create job opportunities we need entrepreneurs who will risk their capital against an expected payoff. Too often, however, public policy does not consider whether we are choking off those opportunities."
https://www.wsj.com/articles/SB10001424052970203406404578070...
It would be nice if being smart and competent was the key to success in our society, but you hint at the real key to success in your own comment--getting favor and money from those who already have it.
You didn't really engage with the other half of my comment, but I'll say it again, in general our society seems to be crumbling and our ability to get things done efficiently and with competence is waning. Hopefully the right people can get the blessing of venture capitalists to fix this (/s).
If you've never run a business, it can sure seem that way.
> the real key to success in your own comment--getting favor and money
Nobody is going to invest in your startup unless you convince them that you're capable of making money for them.
What I have realized is that most employees never do even a basic analysis of their industry vertical, the key players and drivers etc. Even low-level employees who will never come face-to-face with customers can benefit from learning about their industry.
The flip side is that a lot of business people (I exclude people who start their own companies or actively take an interest in a vertical) are also mostly the same. They care about rising from low-level business/product role to a senior role, potentially C-suite role, and couldn't care less about how they make this happen. Many times, it is hard to measure a business person's impact (positive or negative) - think about Boeing. All their pains today were seeded more than 20 years ago with a series of bad moves but the then CEO walked off into the sunset with a pile of cash and a great reputation. OTOH, there was a great article yesterday on HN from Monica Harrington, one of the founders of Valve whose business decisions were crucial to Valve's initial success, but had to sell her stake in the company early on.
I think business, despite its outsize role in the success/failure of a company, follows the same power law of talent that most other professions carry. Most people are average, some really good, some real greedy etc.
Look at the explosion of browsers and their capabilities after the IE monopoly was broken.
If you think there's a problem with this model (and based on your wording of "doing it right", this seems to be the case), it's largely in the incentive structure, not the actors.
If you manage to secure the bag that way and exit that class altogether, good for you, but that solves nothing for the rest. There's no be all end all threshold that everyone can just stay above and stay ahead of inflation.
Who regulates the regulators?
It wasn't Seattle laws that did that.
It's that those companies needed to go from Growth-Mode to Margin-Mode. They could no longer sell VC dollars for dimes.
Presumably it rose in Seattle higher/faster than it would otherwise. The source that he provided in a sibling comment says sales dropped "immediately", which seems to corroborate this. It's lazy to argue "well prices rose elsewhere too so the minimum wage law couldn't possibly have had an impact"
In a healthy democracy, the civic-minded voters do.
It still seems to be better than all the alternatives.
And besides, what's the alternative here? Don't protect employees? Let them work below minimum wage?
As I said in my original comment, show me something that isn't from the first couple of weeks after it took effect. Preferably something scholarly, not just anecdotes from a newspaper
The voters and their representatives, if they put their minds to it.
I have received e-mails like "hey, that's DB stuff, don't touch it, you are not a DBA" or "hey, developers shouldn't do QA". While the premise might be right, lots of things could be done much quicker.
I have seen stuff thrown over the wall ("hey, I am a developer, I don't care about some server stuff", "hey, my DB is working fine, it must be your application") and months spent fixing an issue because no one would take responsibility across the chain.
DevSecOps works very well when you have your coding specialists, operations specialists (including DBAs), and Security specialists all on the same team together, rather than being different silos with different standups and team meetings, etc. But it doesn't work at all well if you just ask the devs to also be Ops and Security, and lay off the rest.
Well the DevOps grandfathers (sorry, Patrick & Kris, but you're both grey now) certainly wanted to tear down the walls that had been put up between Devs & Ops. Merging Dev & Ops practices has been a fundamentally good change. Many tasks that used to be dedicated Ops/Infra work are now so automated that a Dev can do them as part of their daily work (e.g. spinning up a test environment or deploying to production). This has been, in a sense, about empowerment.
The current "platform engineering"-buzz builds on top of that.
> - not some business people scheme to reduce headcount and push more responsibilities
I imagine that many business people don't understand tech work well enough to deliberately start such schemes. Reducing toil could probably result in lower headcount (no one likes being part of a silo that does the same manual things over and over again just to deploy to production), but by the same token, the automations don't come free. They have to be set up and maintained. Once one piece of drudgery has been automated, something else will rear its ugly head. Automating the heck out of boring shit is not only more rewarding work, it's also a force multiplier for a team. I hope business people see those benefits and aim for them, instead of the aforementioned scheming.
The nerds who were into programming based on personal interest were really not affected.
Those who have tech as a passion will generally outperform those who have it as a job, by a large margin.
But salary structures tend to ignore this.
What has changed is the micromanaging of the daily standup, which reshapes work into neat packages for conversation but kills any non-linear flow and limits exploration, making things exactly as requested instead of what could be better.
We now have containers that run in VMs that run on physical servers. And languages built on top of JavaScript and backed by the Shadow DOM and blah blah. Now, sure, I could easily skip all that and stick a static page on a CDN that points to a lambda and call it a day. But the layers are still there.
I'm afraid only more of this is coming with AI in full swing and fully expect a larger scale internet outage to happen at some point that will be the result of a subtle bug in all this complexity that no single person can fix because AI wrote it.
There's too much stuff in the stack. Can we stop adding and remove some of it?
A CGI script, even, which Lambda is the cloudified version of.
There are still people who do that at smaller companies, but you wouldn't call them a webmaster anymore.
In fairness as well, the frontend tooling landscape has become so complex that while I'm capable of jumping in, I am in no way an efficient frontend developer off the rip.
I used to “make websites” in the 2000s, then stopped for about 15 years to focus on backend and infrastructure.
Have been getting back into front end the last few months — there was a good bit of learning and unlearning, but the modern web stack is amazingly productive. Next.js, Tailwind, and Supabase in particular.
Last time I checked in to frontend dev, CSS directives fit on an A4 sheet, valid XHTML was a nerd flex and rounded corners were painstakingly crafted from tables and gifs.
Frontend is a dream now :)
I'm only 36, but you're making me feel extremely old.
To me, "developers of the past" were the people working on COBOL and JCL and FORTRAN and DB2, on z/OS or System 390/370/360, to whom "RPG" was only a 4GL[1], not a type of game, and there was no webmaster or graphic designer involved... not some dotcom era dev in the 90s when "webmasters" became a widespread thing.
Here's an interesting article on webmasters and their disappearance below[2].
1: https://en.wikipedia.org/wiki/IBM_RPG 2: https://thehistoryoftheweb.com/postscript/what-happened-to-t...
So many companies no longer think about quality or design. “Just build this now and ship it, we can modify it later”, not thinking about the ramifications.
No thinking about design at all anymore, then having tech debt but not allocating any sprints to mitigate it.
"But developers hate to write documentation" they say. Okay genius, so why don't you hire someone who is not a developer, someone who doesn't spend their entire time sprinting from one Jira task to another, someone who could focus on understanding how the systems work and keeping the documents up to date. It would be enough to have one such person for the entire company; just don't also give them dozen extra roles that would distract them from their main purpose. "Once we hired a person like this, but they were not competent for their job and everyone complained about docs, so we gave up." Yeah, I suspect you hired the cheapest person available, and you probably kept giving them extra tasks that were always higher priority than this. But nice excuse. Okay, back to having no idea how anything works, which is not a problem because our agile developers can still somehow handle it, until they burn out and quit.
Ultimately, mature developers just have to write docs. There's no alternative or lifestyle hack managers can come up with.
Definitely not.
First, if the architects properly document what they designed, there is no need to reverse engineer it. I don't know how frequent this is, but it's my experience that the architects often don't bother to have a model of the entire system -- they only care about the feature they are designing currently. It is then the developer's job to reverse engineer the entire design by looking at the code. And I would like this to stop.
It is so much easier to deploy now (and for the last 5-10 years) without managing an actual server and OS
It just gets easier, with new complexities added on top
In 2014 I was enamored that I didn’t need to be a DBA because platforms as a service were handling all of it in a NoSQL kind of way. And exposed intuitive API endpoints for me.
This hasn’t changed, at worst it was a gateway drug to being more hands on
I do fullstack development because it’s just one language, I do devops because it’s not a fulltime job and CloudFormation scripts and further abstraction are easyish, I can manage the database and I haven’t gotten vendor locked
You don’t have to wait 72 hours for DNS and domain assignments to propagate anymore it’s like 5 minutes, SSL is free and takes 30 minutes tops to be added to your domain, CDNs are included. Over 10 years ago this was all so cumbersome
There are also no platform engineers, but IaC has gotten so good that arguably they've become redundant. Architecture decisions get made on the fly by team members rather than by the Software Architect, who only shows up now and again to point out something trivial. No Product Owner, so again the team works out the requirements and writes the tickets (ChatGPT can't help there).
You can get people who are average at several things and categorize them as DevOps. But you will not get someone with a deep background and understanding in all the fields at the same time.
I come from a back-end background, and I am appalled at how little the DevOps people I have worked with know about even SQL.
Having teams with people who are experts at different things will give a lot better output. It will be more expensive.
Most DevOps people I have met, with a couple of exceptions, are front-end devs who know a couple of languages, JavaScript and TypeScript. When it comes to the back end, it's all about getting everything possible from npm and stringing it together.
It's capital intensive, high risk, hard work with low margins. Not at all like stardew.
BigAg is HARD WORK
I'll never run a farm.
Closest thing I might come to is a florist's greenhouse, but that's probably still a no go
As a farmer, if you think programming, CI/CD pipeline management, and database administration being consolidated into one job is a line too far... Brace yourself!
If fewer people need to undertake more roles, I think the simplest things you can get away with should be chosen, yet for whatever reason that's not what's happening.
Need a front end app? Go for the modern equivalent of jQuery/Bootstrap, e.g. something like Vue, Pinia and PrimeVue (you get components out of the box, you can use them, you don't have to build a whole design system, if needed can still do theming). Also simpler than similar setups with Vuex or Redux in React world.
Need a back end app? A simple API only project in your stack of choice, whether that's Java with Dropwizard (even simpler than Spring Boot), C# with ASP.NET (reasonably simple out of the box), PHP with Laravel, Ruby with Rails, Python with Flask/Django, Node with Express etc. And not necessarily microservices but monoliths that can still horizontally scale. A boring RESTful API that shuffles JSON over the wire, most of the time you won't need GraphQL or gRPC.
Need a database? PostgreSQL is pretty foolproof, MariaDB or even SQLite can also work in select situations. Maybe something like Redis/Valkey or MinIO/SeaweedFS, or RabbitMQ for specific use cases. The kinds of systems that can both scale, as well as start out as a single container running on a VPS somewhere.
Need a web server? Nginx exists, Caddy exists, as does Apache2.
Need to orchestrate containers? Docker Compose (or even Swarm) still exist, Nomad is pretty good for multi node deployments too, maybe some relatively lightweight Kubernetes clusters like K3s with Portainer/Rancher as long as you don't go wild.
CI/CD? Feed a Dockerfile to your pipeline, put the container in Nexus/Artifactory/Harbor/Hub, call a webhook to redeploy, let your monitoring (e.g. Uptime Kuma) make sure things remain available.
Architectures that can fit in one person's head. Environments where you can take every part of the system and run it locally in Docker/Podman containers on a single dev workstation. This won't work for huge projects, but very few actually have projects that reach the scale where this no longer works.
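To make the "boring RESTful API that shuffles JSON" idea above concrete, here's a minimal sketch using only the JDK's built-in com.sun.net.httpserver.HttpServer (the class name, port and endpoint are made up; the frameworks named above would add routing, validation and JSON mapping on top, this just shows how small the core of such a service can be):

    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Minimal "boring JSON over HTTP" service: one endpoint, no framework.
    public class TinyJsonApi {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

            // GET /health -> {"status":"ok"}
            server.createContext("/health", exchange -> {
                byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });

            server.start();
            System.out.println("Listening on http://localhost:8080/health");
        }
    }

The point isn't to skip frameworks; it's that the whole service fits in one head, one repo, and one container.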
Yet this is clearly not what's happening, and that puzzles me. If we don't have 20 different job titles involved in a project, then the complexity covered by the "GlassFish app server configuration manager" position shouldn't be there in the first place (I once had a project like that: there was supposed to be a person involved who'd configure the app server for the deployments, but once people just went with embedded Tomcat inside deployable containers, that complexity suddenly dissipated).
I’m an experienced engineer. Copilot is worse than useless for me. I spend most of my time understanding the problem space, understanding the constraints and affordances of the environment I’m in, and thinking about the code I’m going to write. When I start typing code, I know what I’m going to write, and so a “helpful” Copilot autocomplete is just a distraction for me. It makes my workflow much, much worse.
On the other hand, AI is incredibly useful for all of those steps I do before actually coding. And sometimes getting the first draft of something is as simple as a well crafted prompt (informed by all the thinking I’ve done prior to starting). After that, pairing with an LLM to get quick answers for all the little unexpected things that come up is extremely helpful.
So, contrary to this report, I think that if experienced developers use AI well, they could benefit MORE than inexperienced developers.
But Claude Sonnet 3.5 w/ Cursor or Continue.dev is a dramatic improvement. When you have discrete control over the context (ie. being able to select 6-7 files to inject), and with the superior ability of Claude, it is an absolute game changer.
Easy 2-5x speedup depending on what you're doing. In an hour you can craft a production ready 100 loc solution, with a full complement of tests, to something that might otherwise take a half day.
I say this as someone with 26 yoe, having worked in principal/staff/lead roles since 2012. I wouldn't expect nearly the same boost coming at less than senior exp. though, as you have to be quite detailed at what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.
Agreed. I feel like coding with AI is distilling the process back to the CS fundamentals of data structures and algorithms. Even though most of those DS&As are very simple it takes experience to know how to express the solution using the language of CS.
I've been using Cursor Composer to implement code after writing some function signatures and types, which has been a dream. If you give it some guardrails in the context, it performs a lot better.
I don't know if I'm losing or improving my skillset. This exercise of development has become almost entirely one of design and architecture, and reading more than writing code.
Maybe this doesn't matter if this is the way software is developed moving forward, and I'm certainly not complaining while working on a 2-person startup.
For example, writing IaC especially for AWS, I have to look up tons of stuff. Asking AI gets me answers and examples extremely fast. If I'm learning the IaC for a new service I'll look over the AWS docs, but if I just need a quick answer/refresher, AI is much faster than going and looking it up.
Search is awful when you can't remember the exact term with your language/framework/technology - but highlighting code and asking AI helps out a ton.
Before, I'd search over and over fine-tuning my search until I get what I want. Tools like copilot make that fine-tuning process much shorter.
The more you lean into functional patterns (design some monads, do I/O only at the boundaries, use fluent programming), the more effective it is.
This is all in Java, for what it’s worth. Though, I’ll admit, I’m 3.5y into Java, and rely heavily on Java 8+ features. Also, heavy generic usage in my library code gives a lot of leash to the LLM to consistently make the right choice.
I don’t see these gains as much when using quicker/sloppier designs.
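For what it's worth, here's a minimal sketch of that style in plain JDK Java (all names are invented, and the record needs a recent JDK): a pure, fluent core with Optional/Stream doing the monad-ish plumbing, and I/O only at the boundary:

    import java.util.List;
    import java.util.Optional;

    // Sketch of "functional core, I/O at the edges"; all names are made up.
    public class Invoicing {

        record LineItem(String sku, int quantity, double unitPrice) {}

        // Pure, fluent core: no I/O, no mutation, easy to reason about and review.
        static Optional<Double> totalFor(List<LineItem> items) {
            return items.isEmpty()
                    ? Optional.empty()
                    : Optional.of(items.stream()
                            .mapToDouble(i -> i.quantity() * i.unitPrice())
                            .sum());
        }

        // I/O only at the boundary: build input, call the pure core, print the result.
        public static void main(String[] args) {
            List<LineItem> items = List.of(
                    new LineItem("widget", 3, 9.99),
                    new LineItem("gadget", 1, 24.50));

            System.out.println(totalFor(items)
                    .map(total -> "Total: " + total)
                    .orElse("Nothing to invoice"));
        }
    }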
Would love to hear more from true FP users (Haskell, OCaml, F#, Scala).
I found the tool to be extremely valuable when working in unfamiliar languages, or when doing rote tasks (where it was easy for me to identify if the generated code was up to snuff or not).
Where I think it falters for me is when I have a very clear idea of what I want to do, and its _similar_ to a bog standard implementation, but I’m doing something a bit more novel. This tends to happen in “reduce”s or other more nebulous procedures.
As I’m a platform engineer though, I’m in a lot of different spaces: Bash, Python, browser, vanilla JS, TS, Node, GitHub actions, Jenkins Java workflows, Docker, and probably a few more. It gives my brain a break while I’m context switching and lets me warm up a bit when I move from area to area.
I think you have nailed it with this comment. I find copilot very useful for boilerplate - stuff that I can quickly validate.
For stuff that is even slightly complicated, like simple if-then-else, I have wasted hours tracking down a subtle bug introduced by copilot (and me not checking it properly)
For hard stuff it is faster and more reliable for me to write the code than to validate copilots code.
it really feels like people building the product do not care about the UX.
A psychology professor I know says this holds in general. For any new tool, who will be able to get the most benefit out of it? Someone with a lot of skill already, or someone with less skill? With less skill, there is even a chance that the tool has a negative effect.
But recently using Claude Sonnet + Haiku through OpenRouter also with aider, and it is like a new dimension of programming.
Working on new projects in Rust and a separate SPA frontend, it just ... implements whatever you ask like magic. Gets it about 90-95% right at the first prompt. Since I am pretty new to Rust, there are a lot of idiomatic things to learn, and lots of std convenience functions I don't yet know about, but the AI does. Figuring out the best prompt and context for it to be effective is now the biggest task.
It will be crazy to see where things go over the next few years... do all junior programmers just disappear? Do all programmers become prompt engineers first?
The issue in the first case is that you have no idea if it tells you good stuff or garbage.
Also in simple projects it shines, when the project is more complex - it becomes mostly useless.
A junior writing simple code is the exact recipe for disaster when it comes to these tools.
Does it have (some of) the other files of the project in its context when you use it in a test file?
Cursor has that plus whatever files you want to specifically add. Or it has a mode where you can feed it the entire project and it searches to decide which files to add
Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
This was done because we asked for a minor change to be done (talking maybe 5 lines of code) and tested. So now not only are we dealing with new debt, we are dealing with code that no one can explain why it was completely changed (and some of the changes were changes for the sake of change), and we are dealing with those of us that manage this code now looking at completely foreign code.
I keep seeing this with people that are using these tools and they are not higher level engineers. We finally got to the point of denying these PR's and saying to go back and do it again. Losing any of the time that was theoretically gained from doing it in the first place.
Not saying these tools don't have a place. But people are using it without understanding what it is putting out and not understanding the long term effects it will have on a code base.
It is worse than that. We're all maintaining in our heads the mental sand castle that is the system the code base represents. The abuse of the autocoder erodes that sand castle because the intentions of the changes, which are crucial for mentally updating the sand castle, are not communicated (because they are unknowable). It's the same with poor commit messages, or poor documentation around requirements/business processes. With enough erosion, plus expected turnover in staff, the sand castle is actually gone.
Ultimately, not adopting them will relegate you to the same fate as assembly programmers. Sure, there is a place for it, but you won't be able to get nearly as much functionality done in the same amount of time, and there won't be as much demand for it.
i suspect you are seeing 900% productivity gains on certain narrow tasks (like greenfield prototype-quality code using apis you aren't familiar with) and incorrectly extrapolating to programming as a whole
Thank you, this says what I have been struggling to describe.
The day I lost part of my soul was when I asked a dev if I could give them feedback on a DB schema, they said yes, and then cut me off a few minutes in with, “yeah, I don’t really care [about X].” You don’t care? I’m telling you as the SME for this exactly what can be improved, how to do it, and why you should do so, but you don’t care. Cool.
Cloud was a mistake; it’s inculcated people with the idea that chasing efficiency and optimization doesn’t matter, because you can always scale up or out. I’m not even talking about doing micro-benchmarks (though you should…), I’m talking about dead-simple stuff like “maybe use this data structure instead of that one.”
At the root of it, there's a profound arrogance in putting someone else in a position where they are compelled to tell you you're wrong[1]. Curious, careful people don't do this very often because they are aware of the limits of their knowledge and when they don't know something they go find it out. Unfortunately this is surprisingly rare.
[1] to be clear, I'm speaking here as someone who has been guilty of this before, now regrets it, and hopes to never do it again.
in my defense, though, it's possible the option didn't exist when i first read the objdump manual
They are/will be the management's darling because they too are all about delivering without any interest in technology either.
Well designed technology isn't seen as foundation anymore; it is merely a tool to just keep the machine running. If parts of the machine are being damaged by the lack of judgement in the process, that shouldn't come in the way of this year's bonus; it'll be something to worry about in the next financial year. Nobody knows whats going to happen in the long-term anyway, make hay while the sun shines.
The age of short-term is upon us.
A lot of people who entered the field in the past 6 or so years are here for the money, obviously.
Nothing wrong with that at all, but as someone with a long time programming and technology passion, it’s sad to see that change.
With a few more years under my belt I realized theres nothing wrong with doing good work and providing yourself/your family a decent living. Not everyone needs the passion in their field to become among the best or become a "10x"er to contribute. We all have different passions, but we all need to pay the bills.
Off-the-cuff, three groups:
1. There are people who are motivated by having a solid income, yet they take the professionalism seriously, and do skilled rock-solid work, 9-5. I'd be happy to work with these people.
2. There are other people who are motivated by having a solid or more-than-solid income, and (regardless of skill level), it's non-stop sprint performance art, gaming promotion metrics, resume-driven development, practicing Leetcode, and hopping at the next opportunity regardless of where that leaves the project and team.
3. Then there's those weirdos who are motivated by something about the work itself, and would be doing it even if it didn't pay well. Over the years, these people spend so much time and energy on the something, that they tend to develop more and stronger skills than the others. I'd be happy to work with these people, so long as they can also be professional (including rolling up sleeves for the non-fun parts), or amenable to learning to be professional.
Half-joke: The potential of group #3 is threatening to sharp-elbowed group #2, so group #2 neutralizes them via frat gatekeeping tactics (yeah-but-what-school-did-you-go-to snobbery, Leetcode shibboleth for nothing but whether you rehearsed Leetcode rituals, cliques, culture fit, etc.).
Startups might do well to have a mix of #3 and #1, and to stay far away from #2. But startups -- especially the last decade-plus of too many growth investment scams -- are often run by affluent people who grew up being taught #2 skills (for how you game your way into prestigious school, aggressively self-interested networking and promoting yourself, etc.).
The hardest part of any job I’ve had is doing the not so fun parts (meetings, keeping up with emails, solidly finishing work before moving on to something new)
0 autonomy, the only people who got recognition were those who self promoted on internal blogs, etc.
In the olden days, we used to throw it over to "Ops" and say, "your problem now."
And Junior developers have always been overwhelmed with the details and under pressure to deliver enough to keep their job. None of this is new! I'm a graybeard now, but I remember seniors having the same complaints back then. "Kids these days" never gets old.
I’m more of a solo full stack dev and don’t really have first hand experience building software at scale and the process it takes to manage a codebase the size of the Windows OS, but these are the kinds of issues I see regularly these days and wouldn’t in the past. I also use macOS daily for almost as long and the Apple software has really tanked in terms of quality, I hit bugs and unexpected errors regularly. I generally don’t use their software (Safari, Mail, etc) when I can avoid it. Also have to admit lack of features is a big issue for me on their software.
Similarly, Docker is an amazing technology, yet it enabled the dependency tower of Babel that we have today. It enabled developers who don't care about cleaning up their dependencies.
Kubernetes is amazing technology, yet it enabled the developers that don't care to ship applications that constantly crash, but who cares, kubernetes will automatically restart everything.
Cloud and now AI are similar enabler technologies. They could be used for good, but there are too many people that just don't care.
How many developers do we imagine even know the difference between SIMD and SISD operators, much less whether their software stack knows how to take advantage of SIMD? How many developers do we imagine even know how RAM chips store bits or how a semiconductor works?
We're just watching the bar of "Don't need to care because a reliable system exists" move through something we know and care about in our lifetimes. Progress is great to watch in action.
hopefully some of those that did a computer science degree?
(all of what you've said was part of mine)
We use it internally and the technical debt is an enormous threat that IMO hasn't been properly gauged.
It's very very useful to carpet bomb code with APIs and patterns you're not familiar with, but it also leads to insane amounts of code duplication and unwieldy boilerplate if you're not careful, because:
1. One of the two big biases of the models is that the training data is StackOverflow-type data: examples that don't take context and constraints into account.
2. The other is the existing codebase, and it tends to copy/repeat things instead of suggesting you to refactor.
The first is mitigated by, well, doing your job and reviewing/editing what the LLM spat out.
The second can only be mitigated once diffs/commit history become part of the training data, and that's a much harder dataset to handle and tag, as some changes are good (refactorings) but others might not be (bugs that get corrected in subsequent commits), and there's no clear distinction, because commit messages are effectively lies (nobody ever writes: bug introduced).
Not only that, merges/rebases/squashes alter/remove/add spurious meanings to the history, making everything blurrier.
Either you will go bust, OR you will be able to hire enough people to pay those debts, once you get traction in the market.
it's usually written "feature: ...
Bingo, this, so much this. Every dev I know who loves AI stuff was a dev that I had very little technical respect for pre-AI. They got some stuff done but there was no craft or quality to it.
In terms of directly generating technical content, I think he mostly used gen AI for more mechanical stuff such as drafting data schemas or class structures, or for converting this or that to JSON, and perhaps not so much for generating actual program code. Maybe there's a difference to someone who likes to have lots of program logic generated.
I do think there is a difference between a skilled engineer using it for the mechanical things, and an engineer that OFFLOADS thinking/designing to it.
Theres nuance everywhere, but my original comment was definitely implying the people that attempt to lean on it very hard for their core work.
I wrote a Kotlin IDEA plugin in a day; I’ve never used Kotlin before and the JetBrains UI framework is a dog’s breakfast of obscure edge cases.
I had no skills in this area, so I could happily lean into the assistance provided to get the job done. And it got done.
…but, I don’t use coding assistants day to day in languages I’m very familiar with: because they’re flat out bad compared to what I can do by hand, myself.
Even using Python, generated code is often subtly wrong and it takes more time to make sure it is correct than to do it by hand.
…now, I would assume that a professional kotlin developer would look at my plugin and go: that’s a heap of garbage, you won’t be able to upgrade that when a new version comes out (turns out, they’re right).
So, despite being a (I hope) competent programmer I have three observations:
1) the code I built worked, but was an unmaintainable mess.
2) it only took a day, so it doesn’t matter if I throw it away and build the next one from scratch.
3) There are extremely limited domains where that’s true, and I personally find myself leaning away from LLM anything where maintenance is a long term goal.
So, the point here is not that developers are good/bad:
It’s the LLM generated code is bad.
It is bad.
It is the sort of quick, rubbish prototyping code that often ends up in production…
…and then gets an expensive rewrite later, if it does the job.
The point is that if you’re in the latter phase of working on a project that is not throw away…
You know the saying.
Betty had a bit of bitter butter, so she mixed the bitter butter with the better butter.
But it made the better butter bitter.
.. the exact same content for a screensaver with Todd Rundgren in 1987 on the then-new color Apple Macintosh II in Sausalito, California. A different screensaver called "Flow Fazer" was more popular and sold many copies. The rival at the time was "After Dark" .. whose founder had a PhD in physics from UC Berkeley but also turned out to be independently wealthy, and then one of the wealthiest men in the Bay Area after the dot-com boom.
Thanks for reading my post. It’s nice to know I’m not writing to a complete void!
More stuff will get done, the barrier of entry will be lower etc.
The craft of programming took a significant quality/care hit when it transitioned from "only people who care enough to learn the ins and outs of memory management can feasibly do this" to "now anyone with a technical brain and a business use case can do it". Which makes sense, the code was no longer the point.
The C++ devs rightly felt superior to the new java devs in the narrow niche of "ability to craft code." But that feeling doesn't move the needle business wise in the vast majority of circumstances. Which is always the schism between large technology leaps.
Basically, the argument of "its worse" is not WRONG. Just, the same as it did not really matter in the mid 90s. Does not matter as much now, compared to the ability to "just get something that kinda works."
There are two interpretations of this:
1) Those people are imposters
2) It's about you, not them
I've been personally interested in AI since the early 80's, neural nets since the 90's, and vigilant about "AI" since Alexnet.
I've also been in a tech lead role for the past ~25 years. If someone is talking about newer "AI" models in a nonsensical way, I cringe.
In the scheme of things however, that status hardly matters compared to the "ability to get something shipped quickly" which is what the vast majority of people are paid to do.
So while I might judge those people for not meeting my personal standards or bar. In many cases that does not actually matter. They got something out there, thats all that matters.
However, it does look like LLM's are racing to make these junior devs unnecessary.
The main utility of "junior devs" (regardless of age) is that they can serve as an interface to non-technical business "users". Give them the right tools, and their value will be similar to good business controllers or similar in the org.
A salary of $100-$150k is really low for someone who is really a competent developer. It's kept down by those "junior devs" (of all ages) that apply for the same jobs.
Both kinds of developers will be required until companies use AI in most of those roles, including the controllers, the developers and the business side.
I found this too. But I also found the opposite, including here on HN; people who are interested in technology often have an aversion to using AI. I personally love tech and I would and do write software for fun, but even that is objectively more fun for me with AI. It makes me far more productive (very much more than what the article states) and, more importantly, it removes the procrastination; whenever I am stuck or procrastinating about getting started, I start talking with Aider, and before I know it, another task is done that I probably wouldn't have done that day otherwise.
That way I now launch open and closed source projects every couple of weeks, while before that would take months to years. And the cost of having this team of fast, experienced devs sitting with me is at most a few $ per day.
Personally, I don't use LLMs. But I don't mind people using them as interactive search engines or for code/text manipulations, as long as they're aware of the hallucination risks and take care with what they're copying into the project. My reason is mostly that I'm a journey guy, not a destination guy. And I love reading books and manuals, as they give me an extensive knowledge map. Using LLMs feels like taking guidance from someone who has never ventured 1km outside their village, but heard descriptions from passersby. Too much vigilance required for the occasional good stuff.
And the truth is, there are a lot of great books and manuals out there. And while they teach you how to do stuff, they often also teach you why you should not do it. I strongly doubt Copilot imparts architectural and technical reminders alongside the code.
For my never finishing side projects I am too; I enjoy my weekends tinkering on the 'final database' system I have been building in CL for over a decade and will probably never really 'finish'. But to make money, I launch things fast and promote them; AI makes that far easier.
Especially for parts like frontend that I despise; I find zero pleasure in working with CSS magic that even seasoned frontenders have to try/fail in a loop to create. I let Sonnet just struggle until it's good enough instead of me having to do that annoying chore; then I ask Aider to attach it to the backend and done.
I wonder if this is an age thing, for many people. I'm old enough to have started reading these discussions on Slashdot in the early 90s.
But between 2000 and 2010, Slashdot changed and became much less open to new ideas.
The same may be happening to HN right now.
It feels like a lot of people are stuck in the tech of 10 years ago.
Like - I presume almost everyone - somewhere in the middle?
That was a helluva dichotomy to offer me...
> how do you keep your AI generated content from turning the whole thing into an incomprehensible superfund site of tech debt?
By reading it, thinking about it and testing it?
Did I somehow give the impression I'm cutting and pasting huge globs of code straight from ChatGPT into a git commit?
There's a weird gulf of incomprehension between people that use AI to help them code and those that don't. I'm sure you're as confused by this exchange as I am.
In my own experience I've worked on repos with <10 other devs where I spent far more effort on consistency and maintainability than on getting the thing to work.
I certainly didn't intend it that way, more like a continuum. O(1) --> O(10) --> O(100) --> ...
> Did I somehow give the impression I'm cutting and pasting huge globs of code straight from ChatGPT into a git commit?
Yes, a little. It seemed to me like you were advocating using the LLM to generate large amounts of tedious output.
I use AI either as an unblocker to get me started, or to write a handful of lines that are too complex to do from memory but not so complex that I can't immediately grok them.
I find both types of usage very satisfying and helpful.
But it is far more useful on verbose 'team written' corporate stuff than on more reuse-intensive tech: in CL or Haskell, the community is far more DRY than in Go or JS/TS; you tend to create and reuse many things, and much of your end result is (basically) a DSL. Current AI is not very good at that in my experience; it will recreate or hallucinate functions all over the place (when you press it to reuse previously created things, if there are too many, even though they fit in the context window). But many people have the same issue; they don't know, cannot search, or forget, and will just redo things many times over; AI makes that far easier (as in, often no work at all), so that's the new reality.
> I wonder if the study includes the technical debt that more experienced developers had to tackle after the less experienced devs have contributed their AI-driven efforts.
It does not. You may also find this post from the other day more illuminating[0], as I believe the actual result strongly hints at what you're guessing. The study is high schoolers doing math. While GPT only has an 8% error rate for the final answer, it gets the steps wrong half the time. And with coding (like math), the steps are the important bits.
But I think people evaluate very poorly when the metrics are ill defined but some metric exists. They overinflate its value since it's concrete. Completing a ticket doesn't mean you made progress. Introducing technical debt would mean taking a step back. A step forward in a very specific direction, but away from the actual end goal. You're just outsourcing work to a future person, and I think we like to pretend this doesn't exist because it's hard to measure.
Is this a bad thing? Maybe I'm misunderstanding it, but even when I'm working on my own projects, I'm usually trying to solve a problem, and the technology is a means to an end to solving that problem (delivering). I care that it works, and is maintainable, I don't care that much about the technology.
For them programming is a means to an end, and I think that is fine, in a way. But you cannot just ask an AI to write you a TikTok clone and expect to get the finished product. Writing software is an iterative process, and the LLMs currently used are not good enough for that, because they need not only to answer questions but, at the very minimum, to start asking them: "why do you want to do that?", "do you prefer this or that?", etc., so that they can actually extract all the specification details that the user happily didn't even know were needed before producing an appropriate output. (It's not too different from how some independent developers have to handle their clients, is it?) Probably we will get there, but not too soon.
I also doubt that current tools can keep a project architecturally sound long-term, but that is just a hunch.
I admit though that I may be biased because I don't much like tools like Copilot: when I write software, I have in my mind a model of the software that I am writing/want to write, the AI has another model "in mind", and I need to spend mental energy understanding what it is "thinking". Even if 99 times out of 100 it is what I wanted, the remaining 1% is enough to hold me back from trusting it. Maybe I am using it the wrong way, who knows.
The AI tool that would work for me is a "voice controlled, AI powered pair programmer": I write my code, then from time to time I ask it questions about how to do something, and I get either a contextual answer depending on the code I am working on, or the actual code generated if I wish. Are there already plugins working that way for vscode/idea/etc?
It is very nice in that it gives you a handy diffing tool before you accept, and it very much feels like it puts me in control.
So, like before AI then? I haven't seen AI deliver illogical nonsense that I couldn't even decipher like I have seen some outsourcing companies deliver.
I have. If you're doing niche-er stuff it doesn't have enough data and hallucinates. The worst is when it spits out two screens of code instead of 'this cannot be done at the level you want'.
> that I couldn't even decipher
That's unrelated to code quality. Especially with C++, which has become as write-only as Perl.
This is one of the challenges of being a tech lead. Sometimes code is hard to comprehend.
In my experience, AI delivered code is no worse than entry level developer code.
AI doesn't do stuff like that because it could not, which, to me, is a good thing. When it gets better, it might start to do it, I don't know.
People here live in a bubble where they think the world is full of people who read 'beautiful code', make tests, use git or something instead of zip$date and know how DeMorgan works; by far, most don't, not juniors, not seniors.
"Back to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. "
https://www.joelonsoftware.com/2000/04/06/things-you-should-...
Of course, you can try to split the 15000 line function in logical blocks or something, but don't assume the ifs in it are useless.
So, pretty similar to how it was before. Except that motivated junior developers will improve incredibly fast. But that's also kind of always been the case in software development these past two decades?
https://news.ycombinator.com/item?id=41465827
(the two comments might have moved apart by the time you read this).
Edit: yep, they just did.
The few attempts I've made at using genAI to make large-scale changes to code have been failures, and left me in the dark about the changes that were made in ways that were not helpful. I needed suggestions to be in much smaller chunks. paragraph sized. Right now I limit myself to using the genAI line completion suggestions in Pycharm. It very often guesses my intentions and so actually is helpful, particularly when laboriously typing out lots of long literals, e.g. keys in a dictionary.
You get what you measure. Nobody measures software quality.
Those kinds of safeguards should instead be part of the framework you're using. If you need to prevent SQL injection, you need to make sure that all access to the SQL database passes through a layer that prevents it. If you are worried about the security of your point of access (like an API facing the public), you need to apply safeguards as close to the point of entry as possible, and so on.
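As a rough sketch of what such a layer can look like, assuming standard JDBC and hypothetical class/table names: every query goes through one repository that only issues parameterized statements, so untrusted input is always bound as data rather than spliced into the SQL text:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Optional;

    // Hypothetical data-access layer: the only way callers can query users is
    // through this class, and it only issues parameterized statements.
    public class UserRepository {
        private final Connection connection;

        public UserRepository(Connection connection) {
            this.connection = connection;
        }

        public Optional<String> findEmailByUsername(String username) throws SQLException {
            String sql = "SELECT email FROM users WHERE username = ?";
            try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                stmt.setString(1, username); // bound as data, never concatenated into SQL
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? Optional.of(rs.getString("email")) : Optional.empty();
                }
            }
        }
    }

If only this layer ever touches the Connection, individual features can't accidentally reintroduce string-built queries.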
I'm a big believer in AI generated code (over a long horizon), but I'm not sure the edge case robustness is the main selling point.
You are not a fucking priest in the temple of engineering; go to the fucking CS department at the local uni and preach it there. You are a worker at a company with customers, which pays you a salary from customers' money.
If I don't deliver my startup burns in a year. In my previous role if I didn't deliver the people who were my reports did not get their bonuses. The incentives are very clear, and have always been clear - deliver.
Here's a real example of delivering something now without worrying about being the best engineer I can. I have 2 CSRs, They are swamped with work and we're weeks away from bringing another CSR on board. I find a couple of time consuming tasks that are easy to automate and build those out separately as one-off jobs that work well enough. Instantly it's a solid time gain & stress reducer to CSRs. Are my one-off automation tasks a long term solution? No. Do I care? Not at the moment, and my ego can take a hit for the time being.
It reminds me of a talk Raymond Hettinger put on a while ago about rearranging the flowers in the garden. There is a tendency from new developers to rearrange for no good reason, AI makes it even easier now. This comes down to a culture problem to me, AI is simply the tool but the driver is the human (at least for now).
The abstract and the conclusion only give a single percentage figure (26.08% increase in productivity, which probably has too many decimals) as the result. If you go a bit further, they give figures of 27 to 39 percent for juniors and 8 to 13 percent for seniors.
But if you go deeper, it looks like there's a lot of variation not only by seniority, but also by the company. Beside pull requests, results on the other outcome measures (commits, builds, build success rate) don't seem to be statistically significant at Microsoft, from what I can tell. And the PR increases only seem to be statistically significant for Microsoft, not for Accenture. And even then possibly only for juniors, but I'm not sure I can quite figure out if I've understood that correctly.
Of course the abstract and the conclusion have to summarize. But it really looks like the outcomes vary so much depending on the variables that I'm not sure it makes sense to give a single overall number even as a summary. Especially since statistical significance seems a bit hit-and-miss.
edit: better readability
Accenture is the kind of company that cooperates and co-markets with large orgs like Microsoft. With ~300 devs in the pool they hardly move the population at all, and they cannot be assumed to be objective since they are building a marketing/consulting division around AI workflows.
The third anonymous company didn’t actually have a randomized controlled trial, so it is difficult to say how one should combine their results with the RCTs. Additionally, I am sure that more than one large technology company went through similar trials and were interested in knowing the efficacy of them. That is to say, we can assume other data exist than just those included in the results.
Why did they select these companies, from a larger sample set? Probably because Microsoft and Accenture are incentivized by adoption, and this third company was picked through p-hacking.
In particular, this statement in the abstract is a very bad sign:
> Though each separate experiment is noisy, combined across all three experiments
It is essentially an admission that individually, the companies don’t have statistically significant results, but when we combine these three (and probably only these three) populations we get significant results. This is not science.
The third company seems a bit weird to include in other ways as well. In raw numbers in table 1, there seem to be exactly zero effects from the use of CoPilot. Through the use of their regression model -- which introduces other predictors such as developer-fixed and week-fixed effects -- they somehow get an estimated effect of +54%(!) from CoPilot in the number of PRs. But the standard deviations are so far through the roof that the +54% is statistically insignificant within the population of 3000 devs.
Also, they explain the introduction of the week fixed effect as a means of controlling for holidays etc., but to me it sounds like it could also introduce a lot of unwarranted flexibility into the model. But this is a part where I don't understand their methodology well enough to tell whether that's a problem or not.
I generally err towards the benefit of the doubt when I don't fully understand or know something, which is why I focused more on the presentation of the results than on criticizing the study and its methodology in general. I'd have been okay with the summary saying "we got an increase of 27.3% for Microsoft and no statistically significant results for other participants".
But perhaps I should have been more critical.
They could have said "26% (rounded to 0 dp)" or something, but that conveys even less information about the amount of uncertainty than just saying what the standard error is.
The second decimal place doesn't add any real information, because the data can't provide information at that precision. It's noise, but including it in the summary result makes it implicitly look like information. Which is exactly why including it seems a bit questionable.
That's not the major issue with the study, though, it's just one of the things that caught my eye originally.
There is skill involved in using generative code models and it’s the same skill you need for delegating work to others and integrating solutions from multiple authors into a cohesive system.
My experience is that the LLM isn't just used for "boilerplate" code, but rather called into action when a junior developer is faced with a fairly common task they've still not (fully) understood. The process of experimenting, learning and understanding is then largely replaced by the LLM, and the real skill becomes applying prompt tweaks until it looks like stuff works.
E.g. last night I set up my first Linux RAID. It's a task that isn't too hard, but following a tutorial or just "reading the docs" isn't particularly helpful given that it takes a few different tools (mount, umount, fstab, blkid, mdadm, fdisk, lsblk, mkfs), and along the way things might not follow the exact steps from a guide. I asked dozens of questions about each tool and step, where previously I would have just "copy-pasted and prayed".
Two nights ago I was similarly able to fully recover all my data from a failed SSD, also using ChatGPT to guide my learning along the way. It was really cool to tackle a completely new skill with a "guide"; even if it's wrong 20% of the time, that's way better than the average on the open Internet.
For someone who loves learning, it feels like thousand league boots compared to just endlessly sifting through internet crap. Of course everything it says is suspect, just like everything else on the Internet, but boy it cuts out a lot of the hassle.
That's how I handled things e.g. when I needed to resize a partition and a filesystem in a LVM setup. Similarly to your RAID example, doing that required using a bunch of tools on multiple levels of storage abstraction: GPT partitions, LUKS tools, LVM physical and logical volumes, file system tools. I was familiar with some of those but didn't remember the incantations by heart, and for others I needed to learn new tools or concepts.
I think I use a similar approach in programming when I'm getting into something I'm not quite familiar with. Stack Overflow answers and tutorials help give the outline of a possible solution. But if I don't understand some of the details, such as what a particular function does, I google them, preferring to get the details either from official documentation or from otherwise credible-sounding accounts.
I share your hunch, though I would go so far as to call it an informed, strong opinion. I think we're going to pay the price in this industry in a few years, where the pipeline of "clueful junior software developers" is gonna dry way up, replaced by a firehose of "AI-reliant junior software developers", and the distance between those two categories is a GULF. (And of course, it has a knock-on effect on the number of clueful intermediate software developers, and clueful senior software developers, etc...)
Well, at least that's how I use them. And to throw a counter to your hypothesis, I find that sometimes the LLM will use functions or library components that I didn't know of, which actually saves me a lot of time when learning a new language or toolkit. So for me, it actually accelerates learning rather than retarding it.
But for folks who are going to be successful with or without it, it's a godsend in terms of being able to essentially ask stack overflow questions and get immediate non judgemental answers.
Maybe not correct all the time, but that was true with stack overflow as well. So as always, it comes back to the individual.
How many contemporary developers have no idea how to write machine code, when 50 years ago it was basically mandatory if you wanted to be able to write anything?
Are LLM's just going to become another abstraction crutch turned abstraction solid pillar?
I'm seeing a lot of confusion and frustration from beginner programmers when it comes to abstraction, because a lot of abstractions in use today just incur other kinds of complexity. At a glance, React for example can seem deceptively easy, but in truth it requires understanding of a lot of advanced concepts. And sure, a little knowledge can go a long way in, e.g., web development, but to really write robust, performant code you have to know a lot about the browser it runs in, not unlike how great programmers of yesteryear had entire 8-bit machines mapped out in their heads.
Considering this, I'm not convinced the LLM crutch will ever solidify into a pillar of understanding and maintainable competence.
When used scientifically, coding copilots boost productivity AND skills.
At least with Copilot you're still more or less driving, whereas with ChatGPT you're more the passenger and not growing intuition.
Thanks for pointing it out with words instead of downvotes.
This tracks with my own experience: Copilot is nice for resolving some tedium and freeing up my brain to focus more on deeper questions, but it's not as world-altering as junior devs describe it as. It's also frequently subtly wrong in ways that a newer dev wouldn't catch, which requires me to stop and tweak most things it generates in a way that a less experienced dev probably wouldn't know to. A few years into it I now have a pretty good sense for when to use Copilot and when not to—so I think it's probably a net positive for me now—but it certainly wasn't always that way.
I also wonder if the possibly-decreased 'productivity' for more senior devs stems in part from the increase in 'productivity' from the juniors in the company. If the junior devs are producing more PRs that have more mistakes and take longer to review, this would potentially slow down seniors, reducing their own productivity gains proportionally.
I'm not great at remembering specific quirks/pitfalls about secondary languages like e.g. what the specific quoting incantations are to write conditionals in Bash, so I rarely wrote bash scripts for automation in the past. Basically only if that was a common enough task to be worth the effort. Same for processing JSON with jq, or parsing with AWK.
Now with LLMs, I'm creating a lot more bash scripts, and it has gotten so easy that I'll do it for process-documentation more often. E.g. what previously was a more static step-by-step README with instructions is now accompanied with an interactive bash script that takes user input.
Bash scripts are essentially automating what you could do at the command line with utility programs, pipes, redirects, filters, and conditionals.
If you're getting very far outside of that scope, bash is probably the wrong tool (though it can be bent to do just about anything if one is determined enough).
As it happens, I think co-pilot is a pretty poor user experience, because it's essentially just an autocomplete, which doesn't really help me all that much, and often gets in my way. I like using Cursor with the autocomplete turned off. It gives you the option to highlight a bit of text and either refactor it with a prompt, or ask a question about it in a side chat window. That puts me in the driver seat, so to speak, so I (the user) can reach out to AI when I want to.
I have seen mostly senior programmers argue about why AI tools don't work. Juniors just use them without prejudice.
I find Claude good at helping me find how to do things that I know are possible but I don’t have the right nomenclature for. This is an area where Google fails you, as you’re hoping someone else on the internet used similar terms as you when describing the problem. Once it spits out some sort of jargon I can latch onto, then I can Google and find docs to help. I prefer to use multiple sources vs just LLMs, partially because of hallucination, but also to keep amassing my own personal context. LLMs are excellent as librarians.
The trouble is that they seem to be getting worse. Some time ago I was able to write an entire small application by simply providing some guidance around function names and data structures, with an LLM filling in all of the rest of the code. It worked fantastically and really showed how these tools can be a boon.
I want to taste that same thrill again, but these days I'm lucky if I can get something out of it that will even compile, never mind the logical correctness. Maybe I'm just getting worse at using the tools.
It certainly has its uses - it's awesome at mocking and filling in the boilerplate unit tests.
Anything difficult or complex, and it's really a coin flip whether it's even an advantage; most of the time it's just distracting, giving irrelevant suggestions or bad textbook-style implementations intended to demonstrate a principle but with god-awful performance. Likely because there's simply not enough training data for these types of tasks.
With this in mind, I don't think it's strange that junior devs would be gushing over this and senior devs would be raising a skeptical eyebrow. Both may be correct, depending on what you work on.
But what I really appreciate is, I don't have to do the plug and chug stuff. Those patterns are well defined, I'm more than happy to let the LLM do that and concentrate on steering whether it's making a wise conceptual or architectural choice. It really seems to act like a higher abstraction layer. But I think how the engineer uses the tool matters too.
If you don't even understand your own PR, I'm not sure why you expect other people can.
I have used LLMs myself, but mostly for boilerplate and one-off stuff. I think it can be quite helpful. But as soon as you stop understanding the code it generates you will create subtle bugs everywhere that will cost you dearly in the long run.
I have the strong feeling that if LLMs really outsmart us to the degree that some AI gung-ho types believe, the old Kernighan quote will get a new meaning:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
We'll be left with code nobody can debug because it was handed to us by our super smart AI that only hallucinates sometimes. We'll take the words of another AI that the code works. And then we'll hope for the best.
Coding is still a skill acquisition that takes years. We need to stamp out the behavior of not understanding what you take from Copilot, but the behavior is not new.
Personally I think that senior devs might fear a conflict within their identity. Hence they draw the 'You and the AI have no clue' card.
Where I do find it useful:
1) questions about frameworks/languages that I don't work in much and for which there is a lot of example content (e.g., Qt, CSS);
2) very specific questions I would have done a Google search (usually landing on StackOverflow) for ("what's the most efficient way to measure CPU and RAM usage on Windows using python") - here the result points me to a library or some example rather than directly generating code that I can copy/paste (see the sketch after this list);
3) boilerplate code that I already know how to write, where it saves me a little time and avoids typing errors. I have the Copilot plugin for PyCharm, so I'll write it as a comment in the file and then it'll complete the next few lines. Again, the best results are for something very short and specific. With anything longer I almost always have to iterate so much with Copilot that it's not worth it anymore;
4) a quick way to search documentation
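For item 2, the answer to that kind of question is usually a library pointer. As an illustrative sketch only (assuming psutil is installed; it's one common answer such a search turns up, not necessarily what the assistant would suggest), the CPU/RAM question might end up as something like:

    # Measure CPU and RAM usage with psutil (third-party, cross-platform,
    # works on Windows). Hypothetical answer to the example question above.
    import psutil

    cpu_percent = psutil.cpu_percent(interval=1)  # % CPU over a 1-second sample
    mem = psutil.virtual_memory()                 # system-wide memory statistics

    print(f"CPU: {cpu_percent:.1f}%")
    print(f"RAM: {mem.percent:.1f}% used of {mem.total / 2**30:.1f} GiB")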
Some people have said it's good at writing unit tests but I have not found that to be the case (at least not the right kind of unit tests).
If I had to quantify it, I'd probably give it a 5-10% increase in productivity. Much less than I get from using a full featured IDE like PyCharm over coding in Notepad, or a really good git client over typing the git commands in the CLI. In other words, it's a productivity tool like many other tools, but I would not say it's "revolutionary".
Books and manuals, they're pretty great for introductory materials. And for advanced stuff, you have to grok these first.
> 2) very specific questions I would have done a Google Search (usually StackOverflow) for ("what's the most efficient way to CPU and RAM usage on Windows using python")
I usually go backwards for such questions, searching not for what I want to do, but for how it would look if it exists. And my search-fu has not failed me that much in that regard, but it requires knowledge of how those things work, which again goes back to books and other such materials.
> 3) boilerplate code that I already know how to write but saves me a little time and avoids typing errors.
Snippets and templates in my editor. And example code in the documentation.
> 4) a quick way to search documentation
I usually have a few browser tabs open for whatever modules I'm using, plus whatever the IDE has, and PDFs and manual pages,...
For me, LLMs feel like building a rocketship to get groceries at the next village, and then hand-waving the risks of explosions and whether it would actually get you there.
So it's not like CoPilot is giving me information that I couldn't get fairly easily before. But it is giving it to me much __faster__ than I could access it before. I liken it to an IDE tool that allows you to look up API methods as you type. Or being able to ask an expert in that particular language/domain, except it's not as good as the expert because if the expert doesn't know something they're not going to make it up, they'll say "don't know".
So how much benefit you get from it is relative to how much you have to look up stuff that you don't know.
I've been using Cursor for around 10 days on a massive Ruby on Rails project (a stack I've been coding in for +13 years).
I didn't enjoy any productivity boost on top of what GitHub Copilot already gave me (which I'd estimate around the 25% mark).
However, for crafting a new project from scratch (empty folder) in, say, Node.js, it's uncanny; I can get an API serving requests from an OpenAPI schema (serving the schema via Swagger) in ~5 minutes just by prompting.
Starting a project from scratch, for me at least, is rare, which probably means going back to Copilot and vanilla VSCode.
@workspace /new “scaffold a ${language} project”
that automagically creates a full project structure and boilerplate. It's been great for one-off things, for me at least. I haven't been able to get any mileage out of chat AI beyond treating it like a search engine and then verifying what it said... which isn't a speedy workflow.
- writing robust bash and using unix/macos tools
- how to do X in github actions
- which API endpoint do I use to do Y
- synthesizing knowledge on some topic that would require dozens of browser tabs
- enumerating things to consider when investigating something. Like "I'm seeing X, what could be the cause, and how do I check if it's that?" For example, I told it last week "git rebase is very slow, what can it be?" and it told me to use GIT_TRACE=1, which made me find a slow post-commit hook, and it suggested how to skip this hook while rebasing.
Said hysteria was built on the same idea. After all, LLMs themselves are just compilers for a programming language that is incredibly similar to spoken language. But as the programming language is incredibly similar to the spoken language that nearly everyone already knows, the idea was that everyone would become the metaphorical elevator operator, "wiping out" programming as a job just as elevator operators were "wiped out" of a job when operating an elevator became accessible to all.
The key difference, and where the hysteria is likely to fall flat, is that when riding in an elevator there isn't much else to do but be the elevator operator. You may as well do it. Your situation would not be meaningfully improved if another person was there to press the button for you. When it comes to programming, though, there is more effort involved. Even when a new programming language makes programming accessible, there remains a significant time commitment to carry out the work. The business people are still best to leave that work to the peons so they can continue to focus on the important things.
But coding is just a fraction of my weekly workload, and AI has been less impactful for other aspects of project management.
So overall it’s 25%-50% increase in productivity.
Does it increase the number of things that pass QA?
Do the things done with AI assistance have fewer bugs caught after QA?
Are they easier to extend or modify later? Or do they have rigid and inflexible designs?
A tool that can help turn developers into unknown quality code monkeys is not something I’m looking for. I’m looking for a tool that helps developers find bugs or design flaws in what they’re doing. Or maybe write well designed tests.
Just counting PRs doesn’t tell me anything useful. But it triggers my gut feeling that more code per unit time = lower average quality.
Copilot - “okay I can do that for you! Here are your new commits!”
Senior Dev - “why? Your change is atomic. I’ll tell management to f-off, in a kind way, if they bring up those silly change-per-month metrics again”
Microsoft: September 2022 to May 3rd, 2023
Accenture: July 2023 to December 2023
Anonymous Company: October 2023 to ?
Copilot _Chat_ update to GPT-4 was Nov 30, 2023: https://github.blog/changelog/label/copilot/
Even so, AI will propose different things at different times and you still need an experienced developer to make the call. In the end it replaces documentation and typing.
For public-facing projects - your documentation just became part of the LLM's training data, so it's now extra important that your documentation is thorough and accurate, because you will have a ton of developers getting answers from that system.
For private projects, your documentation can now be fed into a finetuning dataset or a RAG system, achieving the same effect.
I remember that book explained the command, what the purpose of the command was, and the typical scenario for why you would need such a command.
It explained the options. No silly explanation like "-sort" sorts the output.
It explained the return values also in detail.
Explained the errors also in detail and what might cause the error.
Most functions today are "explained" by reordering the words in the function name: "canUpdate()" returns true if $x can update.
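For contrast, here's a hedged sketch of what that older manual style looks like when applied to a function. The names and behavior are made up for illustration; the point is documenting purpose, typical scenario, return values, and errors rather than restating the name:

    def can_update(record, user):
        """Check whether `user` may modify `record`.

        Typical scenario: called by the edit endpoint before rendering the
        edit form, so the UI can hide controls the user can't use anyway.

        Returns:
            True if the record is unlocked and the user owns it or is listed
            as an admin. False if the record is locked, archived, or owned
            by someone else.

        Raises:
            KeyError: if `record` has no 'owner' field (malformed input).
        """
        if record.get("locked") or record.get("archived"):
            return False
        return record["owner"] == user or user in record.get("admins", ())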
The output of these tools today is unsafe to use unless you possess the ability to assess its correctness. The less able you are to perform that assessment, the more likely you are to use these tools.
Only one of many problems with this direction, but gravity sucks, doesn't it.
As such, when I do have to debug problems myself, or dream up ideas for improvements, I can no longer do it properly due to the lack of internal mental state.
Wonder how people who have used genAI coding successfully get around this?
1) You need to be the boss with the AI being your assistant. You are now a project manager coming up with strict requirements of what you'd like done. Your developer (AI) needs context, constraints and needs to be told exactly what you'd like created without necessarily diving into the technical details.
2) Planning - you need to have a high level plan of roughly how you'd like to structure your code. Think of it like you're drawing the outline and AI is filling in the gaps.
3) Separation of concerns - use software principles to drive your code design. Break problems down into separate components, AI is good at filling in components that are well defined.
Once you change your thinking to a higher level, then you can maintain flow state. Of course the AI isn't perfect and will make mistakes - you do need to question it as you go. The more creative you become with a solution the harder time the AI will have and sometimes you'll have to break out and fix things up yourself.
For me it's a huge productivity boost.
I've already fixed a couple of tests like this, where people clearly used AI and didn't think about it, and in reality the test was testing the wrong thing.
Not to mention the rest of the technical debt added... measuring productivity in software development by the number of tasks completed is just so wrong.
They must have had AI write the implementation as well?
If you're still cognizant of what you're writing on the implementation side, it's pretty hard to see a test go from failing to passing if the test is buggy. It requires you to independently introduce the same bug the LLM did, which, while not completely impossible, is unlikely.
Of course, humans are prone to not understanding the requirements, and introducing what isn't really a bug in the strictest sense but rather a misfeature.
It's pretty easy to add a passing test and call it done without checking whether it actually fails in the right circumstances, and then you will end up with a ton of buggy tests.
Most developers don't do the start-out-failing-then-passing ritual, especially junior ones who copy code from somewhere instead of knowing what they wrote.
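For anyone unfamiliar with the ritual, a minimal sketch with pytest (function and test names are hypothetical): write the test, watch it fail against the old or stubbed implementation, and only then accept the new code and watch it go green.

    # Hypothetical illustration of the failing-then-passing ritual.
    def parse_price(text: str) -> float:
        """Parse a price string like '$1,299.99' into a float."""
        return float(text.replace("$", "").replace(",", ""))

    def test_parse_price_handles_thousands_separator():
        # Ritual: run this test first against the old/stubbed parse_price
        # and watch it fail for the right reason; only then accept the new
        # implementation (human- or AI-written) and watch the test pass.
        assert parse_price("$1,299.99") == 1299.99

A test that is pasted in already green proves very little, because you never saw it catch the bug it claims to guard against.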
Let's not forget that developers sometimes do this, too...
Is that a flag we should be watching out for?
I know preprints don't need polish but this is even below the standard of a preprint, imo.
"However, the table also shows that for all outcomes (with the exception of the Build Success Rate), the standard deviation exceeds the pre-treatment mean, and sometimes by a lot. This high variability will limit our power in our experimental regressions below."
What I find even stranger is that the values in the "control" and "treatment" columns are so similar. That would be highly unlikely given the extreme variability, no?
Dev: Hey einpoklum, how do I do XYZ?
Me: Hmm, I think I remember that... you could try AB and then C.
Dev: Ok, but isn't there a better/easier way? Let me ask ChatGPT.
...
Dev: Hey einpoklum, ChatGPT said I should do AB and then C.
Me: Let me have a look at that for a second.
Me: -Right, so it's just what I read on StackOverflow about this, a couple of years ago.
Sometimes it's even the answer that _I_ wrote on StackOverflow, and then I feel cheated.
What's next? Royalties from companies that used the answer in their solutions? Do you also want copyright of this comment?
I think it's a big productivity boost, but also a chance that the learning rate might actually be significantly slower.
The experiment in question was to split 95 devs into two groups and see how long it took each group to set up a web server in JavaScript. The control group took a little under 3 hours on average; the Copilot group took 1 hour and 11 minutes.
https://github.blog/news-insights/research/research-quantify...
And it is thanks to this weak experiment that Github proudly boasts that Copilot makes devs 55% faster.
By contrast the conclusion that Copilot makes devs ~25% more productive seems reasonable, especially when you read the actual paper and find out that among senior devs the productivity gains are more marginal.
* control the agenda items in a formal meeting
* fill a fixed amount of time in an interview with no rebuttal
* design the benchmark experiments and the presentation of the results
Would love to see it replicated by researchers at a company that does not have a clear financial interest in the outcome (the corresponding author here was working at Microsoft Research during the study period).
> Before moving on, we discuss an additional experiment run at Accenture that was abandoned due to a large layoff affecting 42% of participants
Eek
A minor drawback to that enthusiasm is that a lot of the code I read didn't need to exist in the first place, even before this wave. Lots of it can be attributed to the path dependence of creation, as opposed to what it is trying to do. This should be a rich time to switch to security / exploit work - the random search tools are great and the target just keeps getting easier.
What our industry really desperately needed was to drive the quality of implementation right down. It's going to be an exciting time to be alive.
And that is why demand for senior developers is going to go through the roof. Who is going to unfuck the giant balls of mud those inexperienced devs are slinging together? Who’s going to keep the lights on?
Both AI tools came back with...garbage. Loops within loops within loops as they iterated through each day to check if the day is a weekend or not, is a leap year and to account for the extra day, is it a holiday or not, etc.
However, ChatGPT provided a clever division to cut the dataset down to weeks, then process the result. I ended up using that portion in my final algorithm.
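The original task isn't spelled out here, but assuming it was something like counting business days between two dates, the "cut it down to weeks" idea might look roughly like this (a hypothetical reconstruction; holidays are ignored):

    from datetime import date, timedelta

    def business_days(start: date, end: date) -> int:
        """Count weekdays in [start, end) via whole weeks plus a remainder.

        Every full week contributes exactly 5 weekdays, so only the leftover
        days (< 7) need to be checked one by one.
        """
        total_days = (end - start).days
        full_weeks, remainder = divmod(total_days, 7)
        count = full_weeks * 5
        for i in range(remainder):
            day = start + timedelta(days=full_weeks * 7 + i)
            if day.weekday() < 5:  # Monday=0 .. Friday=4
                count += 1
        return count

    print(business_days(date(2024, 1, 1), date(2024, 2, 1)))  # 23 weekdays in Jan 2024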
So, my take on AI coding tools is: "Buyer beware. Your results may vary".
Because development will become an auction-like activity where the one that accepts more suggestions wins.
I'm actually fairly senior (turning 50 next month) and I notice an effect that AI is having on my own productivity: I now take on mini projects that I used to delegate or avoid because they would take too much time or be too tedious. That's not the case anymore. The number of things I can use ChatGPT for is gradually expanding. I notice that I'm skilling up a lot more rapidly as well.
This is great because if you want to stay relevant, you need to adapt to modern tools and technology. That's nothing new of course. Changes are a constant in our industry. And there always are a lot of people that learn some tricks when they are young and then never learn anything new again. If you are lucky some of that stuff stays relevant for a few decades. But mostly a lot of stuff gets unceremoniously dumped by younger generations.
The ability to use LLMs is becoming an important skill in itself and one that is now part of what I look for in candidates (including older ones). I don't have a lot of patience for people refusing to use tools that are available to them. Tell me how you use tools to your advantage; not how you have a tool aversion or are unwilling to learn new things.
It will be a machine game, just like assembly is mostly compiler generated today.
The AI will produce faster, smaller, more power efficient and more secure binaries than a human ever can.
The AI will learn all the compilation steps, fuse everything into a simplified pipeline and what we call compilers today will be erased from reality.
Similar to gaming the stock price for a couple quarters as C-level, but now with more incentive for this sort of behavior at IC level
They assimilate companies and leave a bloated hollow mess behind.
No two tasks are the same level of complexity and one task may take 5x longer than another to complete.
When I was using ChatGPT for the qualifiers of a CTF called Hack-A-Sat at DEF CON 31, I could not get anything to work, such as GNU Radio programs.
If you have the ability to debug, then in my experience it is productive, but when you don't understand, you run into problems.
However, there's a big question as to whether these are short-term productivity gains or longer-lasting gains. There's a hypothesis that AI-generated code will slowly spaghetti-fy a codebase.
Is 1-2 years sufficiently long enough to take this into consideration? Or disprove the spaghettification?
For those who are beginners, it can bring their skills up and make them look like better developers than they are.
More insidiously, expert programmers who overuse of AI might also regress to the mean as their skills deteriorate.
This is what I’ve seen too, I don't think less experienced developers have gotten better in their understanding of anything just more exposed and quicker, while I do think more experience developers have stagnated
Is this because they are not using coding assistants? Are they resistant to using them? I have to say that the coding assistant is helpful; it is an ever-present rubber duck that can talk back with useful information.
This is compounded by adherence to misguided corporate policies that broadly prohibit use of LLMs but are meant to only be about putting trade secrets into the cloud, not distinguishing between cloud vs locally run language models. Comfortable people would never challenge this policy with critical thinking, and it requires special interest to look at locally run language models, even just to choose which one to run.
Many developers have not advocated for more RAM on their company issued laptop to run better LLMs.
And I haven't seen any internal language model that the company is running on its intranet. But it would be cool if there were a huggingface-style catalogue and server farm that companies could run, letting their employees choose which models to prompt, always having the latest models available to load.
I think this post from the other day adds some important context[0]. In that study, kids with access to GPT did way more practice problems but did worse on the test. But the most important part was the finding that while GPT usually got the final answer right, the logic behind it was wrong - which means the answer as a whole is wrong. This is true for math and code.
There's the joke: there's two types of 10x devs, those that do 10x work and those who finish 10x jira tickets. The problem with this study is the assumptions that it makes, which is quite common and naive in our industry. They assume that PRs and commits are measures of productivity and they assume passing review is a good quality metric. These are so variable between teams. Plenty are just "lgtm" reviews.
The issue here is that there's no real solid metric for things like good code. Meeting the goals of a ticket doesn't mean you haven't solved the problem so poorly that you are the reason 10 new tickets will be created. This is the real issue here, and the only real way to measure it is Justice Potter Stewart's test (I know it when I see it), which requires an expert evaluator. In other words, tech debt. Which is something we're seeing a growing rise in - all the fucking enshittification.
So I don't think the study here contradicts [0]; in fact, I think they're aligned. But I suspect people who are poor programmers (or non-programmers) will use this as evidence for what they want to see, believing naive things like lines of code or number of commits/PRs are measures of productivity rather than hints of a measure. I'm all for "move fast and break things" as long as there's time set aside to clean up the fucking mess you left behind. But there never is. It's like we have business ADHD. There's so much lost productivity because so much focus is placed on short-term measurements and thinking. I know medium- and long-term thinking are hard, but humans do hard shit every day. We can do a lot better than a shoddy study like this.
Copilot often saves me a lot of typing on a 1-3 line scope, occasionally surprising me with exactly what I was about to write on a 5-10 line scope. It’s really good during rearrangement and early refactoring (as you are building a new thing and changing your mind as you go about code organization).
ChatGPT, or “Jimmy” - as I like to call him - has been great for answering syntax questions, idiom questions, etc. when applying my general skills based on other languages to ones I’m less familiar with.
It has also been good for “discussing” architecture approaches to a problem with respect to a particular toolset.
With proper guidance and very clear prompting, I usually get highly value responses.
I would rough guess that these two tools have saved me 2-3 months of solo time this year - nay, since April.
Once I get down in the deep details, I use Jimmy much less often. But when I hit something new, or something I long since forgot, he's ready to be a relative expert / knowledge base.
Unless one is an absolutely terrible developer working on incredibly simple problems, this is simply impossible.
Being able to ask for an example of something in that domain, and get a useful answer, is much, much faster than hunting down the current documentation (which may be thin or non-existent).
Also being able to say, "I do X in this language. What is the idiomatic way of doing it in Y language?"
My pretty broad knowledge can be directed, with careful wording, at ChatGPT, and ChatGPT is the relative domain expert who can get me quite close to a correct solution very quickly.
If you start low it's easier to get greater growth rates.
The biggest is the first step, 0% to 1% is infinite growth.
If an AI tool makes me more productive, I would probably either spend the time won browsing the internet, or use it to attempt different approaches to solve the problem at hand. In the latter case, I would perhaps make more reliable or more flexible software. Which would also be almost impossible to measure in a scientific investigation.
In my experience, the differences in developer productivity are so enormous (depending on existing domain knowledge, motivation, or management approach), that it seems pretty hard to make any scientific claim based on looking at large groups of developers. For now, I prefer the individual success story.
BUT, as I think a lot of people have mentioned, you get code that the person who "wrote" it does not understand. So the next time you get a bug there, good luck fixing it.
My take so far: AI is great, but only for non-critical, non-core code. Everything done for plotting and scripting is awesome (things that can take days to implement are done in minutes with AI) - but core lib functions? I wouldn't outsource those to the AI right now.
I, for one, only decide whether CoPilot's productivity increase is worth the $10 it costs per month.
It doesn't really matter whether you're an employer getting a 3–30% increase in productivity or whether you pay for it personally, finish 2 hours faster every week, and log off illegally. It's easily worth its money. What more is there to consider?
Potentially sharing company IP with a 3rd party?