That collapsed during the COVID lockdowns. My financial services client cut loose all consultants and killed all 'non-essential' projects. Even though mine (which they had already approved) would have saved them $400K a year, they did not care! The word came down from the top to cut everyone -- so they did.
This trend is very much a top-down push. Inorganic. People with skills and experience are viewed by HR and their AI software as risky to leave in place and unlikely to respond to whatever pressures they like to apply.
Since then it's been more of the same as far as consulting goes.
I've come to the conclusion I'm better served by working on smaller projects I want to build and not chasing big consulting dollars. I'm happier (now) but it took a while.
An unexpected benefit of all the pain is that I like making things again... but I am using Claude Code and Gemini. Amazing tools if you already have experience and know what you want out of them -- otherwise they mainly produce crap in the hands of the masses.
You learn lessons over the years, and this is one I learned at some point: you want to work in revenue centers, not cost centers. Aside from the fixed math (i.e. a hard limit on savings vs. unlimited revenue growth), there's the psychological component of teams and management. I saw this in the energy sector, where our company had two products: selling to the drilling side was focused on helping get more oil & gas; selling to the remediation side was about fulfilling their obligations as cheaply as possible. IT/dev at a non-software company is almost always a cost center.
The problem is that many places don't see the cost portions of revenue centers as investments, but still as costs. The world is littered with stories of businesses messing about with their core competencies. An infamous example was Hertz(1) outsourcing their website reservation system to Accenture, with comically bad results. The website/app is how people reserve cars -- the most important part of the revenue-generating system.
The logic is simple, if unenlightened: "What if we had cheaper/fewer nerds, but we made them nerd harder?"
So while working in a revenue center is advantageous, you still have to be in one that doesn't view your kind as too fungible.
In many cases I've seen projects increase their revenue substantially by making simple messaging pivots. E.g. instead of having your website say "save X dollars on Y", try "earn X more dollars using Y". It's incredible how much impact simple messaging can have on your conversion rates.
This extends beyond just revenue. Focusing on revenue centers instead of cost centers is great career advice as well.
Totally agree. This is a big reason I went into solutions consulting.
In the particular case I mentioned, it was a massive risk-management compliance solution they had to have in place, but they were getting bled dry by the existing vendor due to several architectural and implementation mistakes, made way back before I ever got involved, that they were sort of stuck with.
I had a plan to get them unstuck at 1/5 the annual operating cost and with better performance. Presented it to executives, even Amazon, who would have been the infrastructure vendor, to rave reviews.
We had a verbal contract and I was waiting for the paperwork to sign... and then Feb 2020... and then crickets.
A little earlier, very few suspected that our mobile phones were not only listening to our conversations and training some AI model, but also using all their gyroscopes to profile our daily routines (keeping the phone charging near our pillow, looking at it first thing in the morning).
Now we are asked to use AI to write our code. I am quite anxious about what part of our lives we are selling now... perhaps I am no longer their prime focus (50+), but who knows.
Going with the flow seems like bad advice. Going analog, as in iRobot, seems like the sanest thing.
I've been doing a lot of photography in the last few years with my smartphone and because of the many things you mentioned, I've forgone using it now. I'm back to a mirrorless camera that's 14 years old and still takes amazing pictures. I recently ran into a guy shutting down his motion picture business and now own three different Canon HDV cameras that I've been doing some interesting video work with.
It's not easy transferring miniDV tape to my computer, but the standard resolution has a very cool retro vibe that I've found a LOT of people have been missing and are coming back around to.
I'm in the same age range and couldn't have fathomed going from becoming a developer in the early aughts, in the midst of a gold rush for developer talent, to suddenly seeing the entire tech world contract almost overnight.
Strange tides we're living in right now.
Instead I found Linux/BSD and it changed my life and I ended up with security clearances writing code at defense contractors, dot com startups, airports, banks, biotech/hpc, on and on...
Exactly right about GitHub. Facebook is the same for training on photos and social relationships, etc.
They needed to generate a large body of data to train our future robot overlords to enslave us.
We the 'experienced' are definitely not their target -- too much independence of thought.
To your point, I use an old flip phone and VoIP even though I have written iOS and Android apps. My home has no WiFi. I do not use Bluetooth. There are no cameras enabled on any device (except an actual camera).
LLMs strike me as mainly useful in the same way. I can get most of the boilerplate and tedium done with LLM tools. Then for core logic, especially learning or meta-programming patterns, I need to jump in.
Breaking tasks down to bite-sized pieces, and writing detailed architecture and planning docs for the LLM to work from, is critical to managing increasing complexity and staying within context windows. Also critical is ruthlessly throwing away things that do not fit the vision and not being afraid to throw whole days away (not too often, though!).
For reference, I have built stuff that goes way beyond a CRUD app with these tools in 1/10th of the time it previously took me, or less -- the key, though, is that I already knew how to do it and how to validate LLM outputs. I knew exactly what I wanted a priori.
Code generation has technically always 'replaced' junior devs and has been around for ages; the results of the generation are just a lot better now. Whereas in the past regular code generation was a mixed bag of benefits and hassles, now it works much better and the cost is much less.
I started my career as a developer, and the main reasons I became a solutions/systems guy were money and that I hated the tedious boilerplate phase of all software development projects over a certain scale. I never stopped coding, because I love it -- just not for large, soul-destroying enterprise software projects.
Two engineers use LLM-based coding tools; one comes away with nothing but frustration, the other one gets useful results. They trade anecdotes and wonder what the other is doing that is so different.
Maybe the other person is incompetent? Maybe they chose a different tool? Maybe their codebase is very different?
"Put this data on a web page" is easy. Complex application-like interactions seem to be more challenging. It's faster/easier to do the work by hand than it is to wait for the LLM, then correct it.
But if you aren't already an expert, you probably aren't looking for complex interaction models. "Put this data on a web page" is often just fine.
Sometimes I don't care for things to be done in a very specific way. For those cases, LLMs are acceptable-to-good. Example: I had a networked device that exposes a proprietary protocol on a specific port. I needed a simple UI tool to control it; think toggles/labels/timed switches. With a couple of iterations, the LLM produced something good enough for my purposes, even if it wasn't exactly endowed with the best UX practices.
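Roughly the shape of what it ended up producing (a minimal sketch; the device address, port, and command strings here are made up):

    import socket
    import tkinter as tk

    DEVICE = ("192.168.1.50", 9100)  # hypothetical device address and port

    def send(cmd: bytes) -> None:
        # The device speaks a simple line-based command protocol (invented for this sketch)
        with socket.create_connection(DEVICE, timeout=2) as s:
            s.sendall(cmd + b"\n")

    root = tk.Tk()
    root.title("Device control")
    power = tk.BooleanVar()
    tk.Checkbutton(root, text="Power", variable=power,
                   command=lambda: send(b"PWR 1" if power.get() else b"PWR 0")).pack(padx=10, pady=10)
    root.mainloop()

The real tool had a few more toggles and a timed switch, but nothing conceptually beyond this.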
Other times, I very much care for things to be done in a very specific way: sometimes due to regulatory constraints, other times because of visual/code consistency, or other reasons. In those cases, getting the AI to produce what I need specifically feels like an exercise in herding incredibly stubborn cats. It will get done faster (and better) if I do it myself.
Protestant Reformation? Done, 7 years ago, different professor. Your brothers are pleased to liberate you for Saturday's house party.
Barter Economy in Soviet Breakaway Republics? Sorry, bro. But we have a Red Square McDonald's feasibility study; you can change the names?
I will say that being social and being in a scene at the right time helps a lot -- timing is indeed almost everything.
I concur with that, and it's what I tell every single junior/young dev that asks for advice: get out there and get noticed!
People who prefer to lead more private lives, or are more reserved in general, have far fewer opportunities coming their way; they're forced to take the hard path.
This is wildly condescending. Holy.
Also, wtf did I just read? The OP said he uses his network to find work, and you go on a rant about how you're rising and grinding to get that bread, and everything you have ever earned comes completely from you, no help from others? Jesus Christ dude, chill out.
>I'm not for/or against a particular style
... so I'm not sure why some of you took offense at my comment, but I can definitely imagine why :)
>Ex-colleagues reach out to me and ask me to work with them
Never happened to me, that's the point I'm making.
1. I wish work just landed at my feet.
2. As that never happened and most likely was never going to happen, I had to learn another set of skills to overcome that.
3. That made me a much more resilient individual.
(4. This is not meant as criticism of @arthurfirst's style. I wish clients just called me and I didn't have to spend all that money/time taking care of that.)
> ... so I'm not sure why some of you took offense at my comment, but I can definitely imagine why :)
Because surrounding your extremely condescending take with "just my opinion"-style hedging still results in an extremely condescending take.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
The market is speaking. Long-term you’ll find out who’s wrong, but the market can usually stay irrational for much longer than you can stay in business.
I think everyone in the programming education business is feeling the struggle right now. In my opinion this business died 2 years ago – https://swizec.com/blog/the-programming-tutorial-seo-industr...
You might as well work on product marketing for AI, because that is where the client dollars are allocated.
If it's hype, at least you stayed afloat. If it's not, maybe you find a new angle, if you can survive long enough? Just survive and wait for things to shake out.
I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.
What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations. That's not to say two things can't both be bad, but it looks to me like people are using the moral argument as a means to avoid learning something new while virtue-signaling how ethical they are about it, while at the same time refusing to sacrifice, for ethical reasons, things they are already accustomed to when they learn more about them. It all just seems rather convenient.
The main issue I see talked about is unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isn't unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable and all benefits clawed back, so as to kill any perceived incentive to perform those actions, but once the damage is done, why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?
Maybe we should be targeting these AI companies if they are unethical: stop them from training any new models using the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public. But models that are already trained can actually be used for good, and I personally see it as unethical not to.
Sorry for the ramble, but it's a very interesting topic that should probably get as much discussion as we can give it.
yes [0]
I believe that they are bringing up a moral argument, which I'm sympathetic to, having quit a job before because I found that my personal morals didn't align with the company's, and the cognitive dissonance of continuing to work there was weighing heavily on me. The money wasn't worth the mental fight every day.
So, yes, in some cases it is better to be "right" and be forced out of business than "wrong" and remain in business. But you have to look beyond just revenue numbers. And different people will have different ideas of "right" and "wrong", obviously.
Agreed that you cannot be in a toxic situation and not have it affect you -- so if THAT is the case, by all means exit ASAP.
If it's a perceived ethical conflict, the only one you need to worry about is the golden rule -- and I do not mean 'he who has the gold makes the rules', I mean the real one. If that conflicts with what you are doing, then also probably make an exit. But many do not care, trust me... They would take everything from you and feel justified, as long as they are told (just told) it's the right thing. They never ask themselves. They do not really think for themselves. This is most people. Sadly.
Have they done more harm than, say, Meta?
Yeah, that's why I took a guess at what they were trying to say.
>Is that supposed to intrinsically represent "immorality"?
What? The fact that they linked to Wikipedia, or specifically Raytheon?
Wikipedia does not intrinsically represent immorality, no. But missile manufacturing is a pretty typical example, if not the typical example, of a job that conflicts with morals.
>Have they done more harm than, say, Meta?
Who? Raytheon? The point I'm making has nothing to do with who sucks more between Meta and Raytheon.
Everyone and everything has a website and an app already. Is the market becoming saturated?
It was an offshoot bubble of the bootcamp bubble which was inflated by ZIRP.
Running a services business has always been about being able to identify trends and adapt to market demand. Every small business I know has been adapting to trends or trying to stay ahead of them from the start, from retail to product to service businesses.
The fact is, a lot of new business is getting done in this field, with or without them. If they want to take the "high road", so be it, but they should be prepared to accept the consequences of worse revenues.
Without knowing the future I cannot answer.
Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.
There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.
Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.
So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are simply suggesting that the moral qualms about using AI are simply not that great -- neither to the vast majority of consumers nor to the government. There are a few people who might exaggerate these moral issues out of self-interest, but they won't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.
You can start by explaining what your specific moral value is that goes against AI use. It might clarify whether these values are that important to begin with.
Is that the promise of the faustian bargain we're signing?
Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?
Edit: There is some research covering work time estimates for different ages.
But it requires giving up things a lot of people don't want to, because consuming less once you are used to consuming more sucks. Here is a list of things people can cut from their life that are part of the "consumption has gone up" and "new categories of consumption were opened" that ovi256 was talking about:
- One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.
- One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/month in bills and amortized hardware. These were a luxury 30 years ago.
- One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
- One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
- One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
I could keep going, but by this point I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
It's not clear that it's still possible to function in society today without a cell phone and a cell phone plan. Many things that were possible to do before without one now require it.
> - One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/month in bills and amortized hardware. These were a luxury 30 years ago.
Maybe you can replace these with the cell phone + plan.
> - One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
It's not clear that imported food is cheaper than locally grown food. Also I'm not sure you have the right time frame. I'm pretty sure my parents were buying imported produce in the winter when I was a kid 50 years ago.
> - One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
Agreed.
> - One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
Yes but in urban areas whatever you're saving on cars you are probably spending on higher rent and mortgage costs compared to rural areas where cars are a necessity. And if we're talking USA, many urban areas have terrible public transportation and you probably still need Uber or the equivalent some of the time, depending on just how walkable/bike-able your neighborhood is.
> It's not clear that it's still possible to function in society today with out a cell phone
Like I said... I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
---
As an aside, I live in a rural area. The population of my county is about 17,000 and the population of its county seat is about 3,000. We're a good 40 minutes away from the city that centers the Metropolitan Statistical Area. A 1 bedroom apartment is $400/mo and a 2 bedroom apartment is $600/mo. In one month, minimum wage will be $15/hr.
Some folks here do live without a car. It is possible. They get by in exactly the ways I described (except some of the Amish/Mennonites, who also use horses). It's not preferred (except by some of the Amish/Mennonites), but one can make it work.
But if we take "surprisingly small salary" to literally mean salary, most (... all?) salaried jobs require you to work full time, 40 hours a week. Unless we consider cushy remote tech jobs, but those are an odd case and likely to go away if we assume AI is taking over there.
Part time / hourly work is largely less skilled and much lower paid, and you'll want to take all the hours you can get to be able to afford outright necessities like rent. (Unless you're considering rent as consumption/luxury, which is fair)
It does seem like there's a gap in terms of skilled/highly paid but hourly/part time work.
(Not disagreeing with the rest of your post though)
But that's not what they said, they said they want to work less. As the GP post said, they'd still be working a full week.
I do think this is an interesting point. The trend for most of history seems to have been vastly increasing consumption/luxury while work hours somewhat decrease. But have we reached the point where that's not what people want? I'd wager most people in rich developed countries don't particularly want more clothes, gadgets, cars, or fast food. If they can get the current typical middle class share of those things (which to be fair is a big share, and not environmentally sustainable), along with a modest place to live, they (we) mainly want to work less.
But _somebody_ will be living in a 900,000 sq ft apartment and working an hour a week, and the concept of money will be defunct.
You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.
Where are you going to draw the line? Only if it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?
If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest; even if you have laws to ban it, other countries will allow it.
The goal of AI is NOT to be a tool. It's to replace human labor completely.
This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.
To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people running them care at all.
The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).
I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.
However like social media before it I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not and practically speaking those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.
Yes, absolutely.
Just because it's monopolized by evil people doesn't mean it's inherently bad. In fact, most people here have seen examples of it done in a good way.
Like this very website we're on, proving the parent's point in fact.
AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.
I'm not sold yet on whether LLMs are AI; my gut says no and I haven't been convinced otherwise yet. We can't lose the distinction between ML and AI though; it's extremely important when it comes to risk considerations.
But don’t expect the market to care. Don’t write a blog post whining about your morals, when the market is telling you loud and clear they want X. The market doesn’t give a shit about your idiosyncratic moral stance.
Edit: I’m not arguing that people shouldn’t take a moral stance, even a costly one, but it makes for a really poor sales pitch. In my experience this kind of desperate post will hurt business more than help it. If people don’t want what you’re selling, find something else to sell.
Put differently, is "the market" shaped by the desires of consumers, or by the machinations of producers?
Does it though? Articles like [1] or [2] seem to be at odds with this interpretation. If it were any different, we wouldn't be talking about the "AI bubble" after all.
[1]https://www.pcmag.com/news/microsoft-exec-asks-why-arent-mor...
[2]https://fortune.com/2025/08/18/mit-report-95-percent-generat...
"Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming,”
ChatGPT can listen to you in real time, understands multiple languages very well, and responds in a very natural way. This is breathtaking, and it was not even on the horizon just a few years ago.
AI transcription of videos is now a really cool and helpful feature in MS Teams.
Segment Anything literally leapfrogged progress on image segmentation.
You can generate any image you want in high quality in just a few seconds.
There are already human beings who are shittier at their daily jobs than an LLM is.
2) If you had read the paper, you wouldn't use it as an example here.
A good-faith discussion of what the market feels about LLMs would include Gemini and ChatGPT numbers, and the overall market cap of these companies. Not cherry-picked, misunderstood articles.
I bet a few Pets.com execs were also wondering why people weren't impressed with their website.
So is it rational for a web design company to take a moral stance that they won't use JavaScript?
Is there a market for that, with enough clients who want their JavaScript-free work?
Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and their own users who aren't as laser focused on performatively not using JavaScript and letting everyone know about it as they are?
This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.
This was you interpreting what the parent post was saying. I'm similarly providing a value judgement that you are doing a maximalist AI dismissal. We are not that different.
Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)
In the grand scheme of things, is it even worth mentioning? Probably not! :D :D Why focus on the differences when we can focus on the similarities?
>Ok change my qualifier from interpretation to description if it helps.
I... really don't think AI is what's wrong with you.
And if we look at the players who are the winners in the AI race, do you see anyone particularly good participating?
I could obviously give you examples where LLMs have concrete use cases, but that's beside the larger point.
Can you explain why I should not be equally suspicious of gaming, social media, movies, carnivals, travel?
> My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it.
Enough people doing something doesn't make that something good or desirable from a societal standpoint. You can find examples of things that go in both directions. You mentioned gaming, social media, movies, carnivals, travel, but you can just as easily ask the same question for gambling or heavy drugs use.
Just saying "I defer to their judgment" is a cop-out.
People find a benefit in smoking: a little kick, they feel cool, it’s a break from work, it’s socializing, maybe they feel rebellious.
The point is that people FEEL they benefit. THAT’S the market for many things. Not everything obv, but plenty of things.
I don't disagree, but this also doesn't mean that those things are intrinsically good and that we should all pursue them because that's what the market wants. And that is what I was pushing against: this idea that since 800M people are using GPT, we should all be OK doing AI work because that's what the market is demanding.
When billions of people watch football, my first instinct is not to decry football as a problem in society. I acknowledge with humility that though I don't enjoy it, there is something to the activity that makes people watch it.
Agree. And that something could be a positive or a negative thing. And I'm not suggesting I know better than them. I'm suggesting that humans are not perfect machines and our brains are very easy to manipulate.
Because there are plenty of examples of things enjoyed by a lot of people who are, as a whole, bad. And they might not be bad for the individuals who are doing them, they might enjoy them, and find pleasure in them. But that doesn't make them desirable and also doesn't mean we should see them as market opportunities.
Drugs and alcohol are the easy example:
> A new report from the World Health Organization (WHO) highlights that 2.6 million deaths per year were attributable to alcohol consumption, accounting for 4.7% of all deaths, and 0.6 million deaths to psychoactive drug use. [...] The report shows an estimated 400 million people lived with alcohol use disorders globally. Of this, 209 million people lived with alcohol dependence. (https://www.who.int/news/item/25-06-2024-over-3-million-annu...)
Can we agree that 3 million people dying as a result of something is not a good outcome? If the reports were saying that 3 million people a year are dying as a result of LLM chats we'd all be freaking out.
–––
> my first instinct is not to decry football as a problem in society.
My first instinct is to decry nothing as a problem, and to praise nothing as a positive. My first instinct is to give ourselves time to figure out which of the two it is before jumping in head first. Which is definitely not what's happening with LLMs.
(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)
People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.
Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.
People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.
Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.
All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)
I do agree about ads; it will be extremely worrying if ads bias the LLM. I don't agree about the monopoly part; we already have ways of dealing with monopolies.
In general, I think the "AI is the worst thing ever" concerns are overblown. There are some valid reasons to worry, but overall I think LLMs are a massively beneficial technology.
What became toxic was, arguably, the way in which it was monetized and never really regulated.
- gaming
- Netflix
- television
- social media
- Hacker News
- music in general
- carnivals
A priori, all of these are equally suspicious as to whether they provide value or not.
My point is that unless you have reason to suspect otherwise, people engaging in consumption through their own agency is in general preferable. You can of course bring counterexamples, but they are more caveats to my larger, truer point.
And today's "adapt or die" doesn't sound any less fascist than it did in 1930.
If not, for the purpose of paying his bills, your giving a shit is irrelevant. That’s what I mean.
Yes.
I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with speaking in aphorisms and glib dismissals. You don't know anyone here, you speak in an authoritative tone for others, and you redefine what "matters" and what is worthy of conversation as if this is up to you.
> Don’t write a blog post whining about your morals,
why on earth not?
I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?
If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.
Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.
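A minimal sketch of what "continuing as we do today" looks like, assuming the Hugging Face transformers library and any open-weight checkpoint (the model name below is just one example):

    # Runs entirely on hardware you control; no AI lab needs to exist for this to work.
    from transformers import pipeline

    pipe = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
    out = pipe("Summarize why open weights matter, in one sentence.", max_new_tokens=60)
    print(out[0]["generated_text"])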
However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.
In that sense AI has been the biggest heist that has ever been perpetrated.
Authors still get recognition, if they are decent authors producing original, literary work. But the type of author that fills page five of your local newspaper has not been valued for decades; that was filler content long before AI showed up. Same for the people who do the subtitles on soap operas, or the people who create the commercials that show at 4am on your TV. All fair game for AI.
It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.
People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.
You are no different than that portrait painter in the 1800s that suddenly saw their market for portraits evaporate because they were being replaced by a few seconds exposure in front of a camera. A lot of very decent art work was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.
Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?
See Studio Ghibli's art style being ripped off, Disney suing Midjourney, etc
If they had succeeded in regulating the machines and steering wealth back into the average factory worker's hands, with artisans integrated into the workforce instead of shut out, would so much of the bloodshed and mayhem needed to form unions and regulations have been necessary?
Broadly, it seems to me that most technological change could use some consideration of people.
Robbing implies theft. The word heist was used here to imply that some crime is happening. I don't think there is such a crime and disagree with the framing. Which is what this is, and which is also very deliberate. Luddites used a similar kind of framing to justify their actions back in the day. Which is why I'm using it as an analogy. I believe a lot of the anti AI sentiment is rooted in very similar sentiments.
I'm not missing the point but making one. Clearly it's a sensitive topic to a lot of people here.
This type of business isn’t going to be hit hard by AI; this type of business owner is going to be hit hard by AI.
I'm still wondering why I'm not doing my banking in Bitcoin. My blockchain database was replaced by Postgres.
So some tech can just be hypeware. The OP has a legitimate standpoint given some technologies' track records.
And the doctors are still out on the effects of social media on children; why else are some countries banning social media for children?
Not everything that comes out of Silicon Valley is automatically good.
Recently I had to learn SPARQL. What I did was create an MCP server connecting the AI to a graph database with SPARQL support, and then I asked it: "Can you teach me how to do this? How would I do this in SQL? How would I do it with SPARQL?" And then it would show me.
With examples of how to use something, it really helps that you can ask questions about what you want to know at that moment, instead of just following a static tutorial.
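To make that concrete, this is the kind of side-by-side answer it gives (a sketch using Python's rdflib; the data file and the ex: predicates are made up):

    from rdflib import Graph

    g = Graph()
    g.parse("people.ttl")  # hypothetical Turtle file of person data

    # The SQL version of the question would be:
    #   SELECT name FROM people WHERE age > 30;
    # The SPARQL version of the same question:
    results = g.query("""
        SELECT ?name WHERE {
            ?person ex:name ?name ;
                    ex:age  ?age .
            FILTER (?age > 30)
        }
    """, initNs={"ex": "http://example.org/"})
    for row in results:
        print(row.name)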
>the type of business that's going to be hit hard by AI [...] will be the ones that integrate AI into their business the most
There. Fixed!
I am an AI skeptic and until the hype is supplanted by actual tangible value I will prefer products that don't cram AI everywhere it doesn't belong.
Prompting isn't a skill, and praying that the next prompt finally spits out something decent is not a business strategy.
Well, you just described what ChatGPT is: one of the fastest-growing user bases in history.
As much as I agree with your statement, the real world doesn't respect that.
By selling a dollar of compute for 90 cents.
We've been here before, it doesn't end like you think it does.
No thanks, I'm good.
Same with Stack Overflow being down today: it seems like not everyone cares anymore. Back then it would have caused a total breakdown, because SO was vital.
Then there's an oversupply of programmers, salaries will crash, and lots of people will have to switch careers. It's happened before.
"Always", in the same way that five years ago we'd "never" have an AI that can do a code review.
Don't get me wrong: I've watched a decade of promises that "self-driving cars are coming real soon now, honest"; the latest news about Tesla's system is that it can't cope with leaves. I certainly *hope* that a decade from now we will still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.
And the LLMs can use the static analysis tools.
Like everything else they do, it's amazing how far you can get even if you're incredibly lazy and let them do everything themselves, though of course that's a bad idea, because the result has all the skill and quality you'd expect from "an endless horde of fresh grads unwilling to say 'no' except on ethical grounds".
Regulation is still very much a thing.
A tech employee posted that he looked for a job for 6 months, found none, and joined a fast food shop flipping burgers.
That turned tech workers switching to "flipping burgers" into a meme.
I feel like there are a lot of people in school or recently graduated, though, who had FAANG dreams and never considered an alternative. This is going to be very difficult for them. I now feel, especially as tech has gone truly borderless with remote work, that this downturn is way worse than the .com bust. It has just dragged on for years now, with no real end in sight.
COVID overhiring + AI usage = the most massive layoffs we've seen in decades.
There are still plenty of tech jobs these days, just fewer than there were during COVID, but tech itself is still in a massive expansionary cycle. We'll see how long the AI bubble lasts, and what the fallout of it bursting will be.
The key point is that the going is still exceptionally good. The posts talking about experienced programmers having to flip burgers in the early 2000s are not an exaggeration.
History always repeats itself in the tech industry. The hype cycle for LLMs will probably peak within the next few years. (LLMs are legitimately useful for many things but some of the company valuations and employee compensation packages are totally irrational.)
It's happened before and there's no way we could have learned from that and improved things. It has to be just life changing, life ruining, career crippling. Absolutely no other way for a society to function than this.
When you search programming-related questions, what sites do you normally read? For me, it is hard to avoid SO because it appears in so many top results from Google. And I swear that Google AI just regurgitates most of SO these days for simple questions.
But the killer feature of an LLM is that it can synthesize something based on my exact ask; it does a great job of creating a PoC to prove something, and it's cheap from a time-investment point of view.
And it doesn't downvote something as off-topic, or try to use my question as a teaching exercise and tell me I'm doing it wrong, even if I am ;)
Instead of running a Google query or searching Stack Overflow, you just need ChatGPT, Claude, or your AI of choice open in a browser. Copy and paste.
SO was never that bad; even with all their moderation policies, they had no paywalls.
There were more problems. And that's from the point of view of somebody coming from Google to find questions that already existed. Interacting there was another entire can of worms.
Stack Overflow's moderation is overbearing and all, but that's nowhere near the same level as Experts Exchange's bait and switch.
I don't want to openly write about the financial side of things here, but let's just say I don't have enough money to comfortably retire or stop working, and course sales over the last 2-3 years have dropped to not even 5% of what they were in 2015-2021.
It went from "I'm super happy, this is my job with contracting on the side as a perfect technical circle of life" to "time to get a full time job".
Nothing changed on my end. I have kept putting out free blog posts and videos for the last 10 years. It's just that traffic has dropped to 1/20th of what it used to be. Traffic dictates sales, and that's how I think I arrived in this situation.
It does suck to wake up most days knowing you have at least 5 courses worth of content in your head that you could make but can't spend the time to make them because your time is allocated elsewhere. It takes usually 2-3 full time months to create a decent sized course, from planning to done. Then ongoing maintenance. None of this is a problem if it generates income (it's a fun process), but it's a problem given the scope of time it takes.
You only have to follow the market if you want to continue to stay relevant.
Taking a stand and refusing to follow the market is always an option, but it might mean going out of business for ideological reasons.
So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.
The ideal version of my job would be partnering with all the local businesses around me that I know and love, elevating their online facilities to let all of us thrive. But the money simply isn’t there. Instead their profits and my happiness are funnelled through corporate behemoths. I’ll applaud anyone who is willing to step outside of that.
Of course. If you want the world to go back to how it was before, you’re going to be very depressed in any business.
That’s why I said your only real options are going with the market or finding a different line of work. Technically there’s a third option where you stay put and watch bank accounts decline until you’re forced to choose one of the first two options, but it’s never as satisfying in retrospect as you imagined that small act of protest would have been.
Even in the linked post the author isn't complaining that it's not fair or whatever, they're simply stating that they are losing money as a result of their moral choice. I don't think they're deluded about the cause and effect.
Isn't that what money is though, a way to get people to stop what they're doing and do what you want them to instead? It's how Rome bent its conquests to its will and we've been doing it ever since.
It's a deeply broken system but I think that acknowledging it as such is the first step towards replacing it with something less broken.
It doesn't have to be. Plenty of people are fulfilled by their jobs and make good money doing them.
We've always tolerated a certain portion of society who finds the situation unacceptable, but don't you suspect that things will change if that portion is most of us?
Maybe we're not there yet, idk, but the article is about the unease vs the data, and I think the unease comes from the awareness that that's where we're headed.
> So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.
You're correct in this, but I think it's worth making the explicit statement that that's also true because we live in a system of amoral resource allocation.
Yes, this is a forum centered on startups, so there's a certain economic bias at play, but on the subject of morality I think there's a fair case to be made that it's reasonable to want to oppose an inherently unjust system and to be frustrated that doing so makes survival difficult.
We shouldn't have to choose between principles and food on the table.
It's not "swim with the tide or die", it's "float like a corpse down the river, or swim". Which direction you swim in will certainly be a different level of effort, and you can end up as a corpse no matter what, but that doesn't mean the only option you have is to give up.
Taking a moral stance isn't inherently ideological.
If you found it unacceptable to work with companies that used any kind of digital database (because you found centralization of information and the amount of processing and analytics this enables unbecoming) then you should probably look for another venture instead of finding companies that commit to pen and paper.
Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best not to train it. I left GitHub & Reddit because of this, and I'm not uploading new photos to Instagram. The jury is still out on how I'm gonna share my photography, and not sharing it is on the table as well.
I may even move to a cathedral model or just stop sharing the software I write with the general world, too.
Nobody has to bend and act against their values and conscience just because others are doing it and the system demands that we betray ourselves for its own benefit.
Life is more nuanced than that.
(But only all of us simultaneously, otherwise won't count! ;))))
The number of triggered Stockholm Syndrome patients in this comment section is terminally nauseating.
Now, I'll probably install a gallery webapp to my webserver and put it behind authentication. I'm not rushing because I don't crave any interaction from my photography. The images will most probably be optimized and resized to save some storage space, as well.
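The optimize/resize step is simple enough to script (a sketch with Pillow; the target size and quality are just starting points):

    from PIL import Image

    img = Image.open("photo.jpg")
    img.thumbnail((2000, 2000))  # shrink in place, preserving aspect ratio
    img.save("photo_web.jpg", quality=82, optimize=True)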
Following the market is also not cravenly amoral, AI or not.
A studio taking on temporary projects isn't investing in AI; they're not getting paid in stock. This is effectively no different from a construction company building an office building, or a bakery baking a cake.
As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.
The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law— in this case copyright law for example. AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.
IMO over-focusing on small moral crusades against specific players like this, and not the game as a whole, is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage, constantly second-guessing themselves.
A construction company would still be justified in saying no based on moral standards. A clearer example would be refusing to build a bridge if you know the blueprints/materials are bad, but you could also make a case for agreeing or not to build a detention center for immigrants. The bakery example feels even more relevant, seeing as a bakery refusing to bake a cake based on the owner's religious beliefs ended up in the US Supreme Court [1].
I don't fault those who, when forced to choose between their morals and food, choose food. But I generally applaud those that stick to their beliefs at their own expense. Yes, the game is rigged and yes, the system is the problem. But sometimes all one can do is refuse to play.
[1] https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...
I totally agree. I still think opposing AI makes sense in the moment we're in, because it's the biggest, baddest example of the system you're describing. But the AI situation is a symptom of that system in that it's arisen because we already had overconsolidation and undue concentration of wealth. If our economy had been more egalitarian before AI, then even the same scientific/technological developments wouldn't be hitting us the same way now.
That said, I do get the sense from the article that the author is trying to do the right thing overall in this sense too, because they talk about being a small company and are marketing themselves based on good old-fashioned values like "we do a good job".
Fucking this. What I tend to see is a petty 'my guy good, not my guy bad' approach. All I want is even enforcement of existing rules on everyone. As it stands, to your point, only the least moral ship, because they don't even consider hesitating.
No. It is the AI companies that are externalizing their costs onto everyone else by stealing the work of others, flooding the zone with garbage, and then weeping about how they'll never survive if there's any regulation or enforcement of copyright law.
AI is another set of tooling. It can be used well or not, but arguing the morality of a tooling type (e.g drills) vs maybe a specific company (e.g Ryobi) seems an odd take to me.
What stance against AI? Image generation is not the same as code generation.
There are so many open source projects out there; it's a huge difference from taking all the images.
AI is also just ML, so should I not use an image bounding-box algorithm? Am I not allowed to take training data online, or are only big companies not allowed to?
What am I missing?
I think this is the crux of the entire problem for the author. The author is certain, not just hesitant, that any contribution they would make to a project involving AI equals a contribution to some imagined evil (oddly, without explicitly naming what they envision, so it is harder to respond to). I have my personal qualms, but I run those through my internal ethics to see if there is conflict. Unless the author predicts a 'prime intellect' type of catastrophe, I think the note is shifting blame and justifying bad outcomes with a moralistic 'I did the right thing' while not explaining the assumptions in place.
Do you "run them through" actual ethics, too?
We should have more posts like this. It should be okay to be worried, to admit that we are having difficulties. It might reach someone else who otherwise feels alone in a sea of successful hustlers. It might also just get someone the help they need or form a community around solving the problem.
I also appreciate their resolve. We rarely hear from people being uncompromising on principles that have a clear price. Some people would rather ride their business into the ground than sell out. I say I would, but I don’t know if I would really have the guts.
You can either hope that this shift is not happening or that you are one of these people surviving in your niche.
But the industry / world is shifting, and you should start shifting with it.
I would call that being innovative, ahead etc.
Google has a ton of code internally.
And millions of people happily give a thumbs down or up for their RL / feedback.
The industry is still shifting. I use LLMs instead of StackOverflow.
You can be as dismissive as you want, but that doesn't change the fact that millions of people use AI tools every single day, and more people start using AI-based tools all the time.
The industry overall is therefore shifting money and goals etc. into direction of AI.
And the author has an issue because of that.
"To run your business with your personal romance of how things should be versus how they are is literally the great vulnerability of business."
What about a functioning market is immoral?
If all of "AI stuff" is a "no" for you, then I think you have just opted out of working in most industries to some important degree going forward.
This is also not to say that service providers should not have any moral standards. I just don't understand the expectation in this particular case. You ignore what the market wants and where a lot/most of new capital turns up. What's the idea? You are a service provider, you are not a market maker. If you refuse to serve the market that exists, you don't have a market.
Regardless, I really like their aesthetics (which we need more of in the world) and do hope that they find a way to make it work for themselves.
I'm not sure the penetration of AI, especially to a degree where participants must use it, is all that permanent in many of these industries. The industry where it is arguably the most "present" (forced in) is SWE, and already it's proving quite disappointing... Where I work, the more senior you are, the less AI you use.
Pretty sure the market doesn't want more AI slop.
There are also absolutely very tasteful products that add value using LLMs and other recent advancements.
Both can exist at the same time.
Demand for AI anything is incredibly high right now. AI providers are constantly bouncing off of capacity limits. AI apps in app stores are pulling incredible download numbers.
The only thing that's going to change is the quality of the slop will get better by the year.
"High quality AI slop" is a contradiction in terms. The relevant definitions[1] are "food waste (such as garbage) fed to animals", "a product of little or no value."
By definition, the best slop is only a little terrible.
Ok. They are not talking about AI broadly, but about LLMs, which have insane energy requirements and benefit from the unpaid labor of others.
I started my career in AI, and it certainly didn't mean LLMs then. Some people were doing AI decades ago.
I would like to understand where this moral line gets drawn: neural networks that output text? Ones that specifically use the transformer architecture? Over some size?
The industry joke is: What do you call AI that works? Machine Learning.
Deciding not to enable a technology that is proving to be destructive, except for the very few who benefit from it, is a fine stance to take.
I won't shop at Walmart for similar reasons. Will I save money shopping at Walmart? Yes. Will my not shopping at Walmart bring about Walmart's downfall? No. But I refuse to personally be an enabler.
I wish I had Walmart in my area, the grocery stores here suck.
I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.
LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100% accurate tool exists, and maybe could not possibly exist. So take it or leave it.
It kind of is that clear. It's IP laundering and oligarchic leveraging of communal resources.
I think there are ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the wealth-concentration machine, they could be really cool. Locally hosted LLMs are kinda awesome. As it is, they are basically just theft from the public and IP laundering.
Market has changed -> we disagree -> we still disagree -> business is bad.
It is indeed hard to swim against the current. People have different principles and I respect that; I just rarely have so much difficulty understanding them, or see such a clear impact on the bottom line.
I started TextQuery[1] with the same moralistic stance. Not with respect to using AI or not, but in believing that most of the software industry suffers from a rot that places more importance on making money and forcing subscriptions than on making something beautiful and detail-focused. I poured time into optimizing selections, perfecting autocomplete, and wrestling with Monaco's thin documentation. However, I failed to make it a sustainable business. My motivation ran out, and what I thought would be a fun multi-year journey collapsed into burnout and a dead-end project.
I have to say my time would have been better spent building something sustainable, making more money, and optimizing the details once I had that. It was naïve to obsess over subtleties that only a handful of users would ever notice.
There’s nothing wrong with taking pride in your work, but you can’t ignore what the market actually values, because that's what will make you money, and that's what will keep your business and motivation alive.
Since then I pivoted to AI and Gen AI startups; money is tight and I don't have health insurance, but at least I have a job…
> Since then I pivoted to AI and Gen AI startups; money is tight and I don't have health insurance, but at least I have a job…
I hope this doesn't come across as rude, but why? My understanding is American tech pays very well, especially on the executive level. I understand for some odd reason your country is against public healthcare, but surely a year of big tech money is enough to pay for decades of private health insurance?
Generally you’re right, though. Working in tech, especially AI companies, would be expected to provide ample money for buying health insurance on your own. I know some people who choose not to buy their own and prefer to self-pay and hope they never need anything serious, which is obviously a risk.
A side note: The US actually does have public health care but eligibility is limited. Over one quarter of US people are on Medicaid and another 20% are on Medicare (program for older people). Private self-pay insurance is also subsidized on a sliding scale based on your income, with subsidies phasing out around $120K annual income for a family of four.
It’s not equivalent to universal public health care but it’s also different than what a lot of people (Americans included) have come to think.
They are outsourcing just as much as US Big Tech. And never mind the slow-mo economic collapse of UK, France, and Germany.
As Americans, getting a long-term visa or residency card is not too hard, provided you have a good job. It’s getting the job that’s become more difficult. For other nationalities, it can range from very easy to very hard.
https://finnish.andrew-quinn.me/
... But, no, it's still a very forbidding language.
However, salaries are atrocious and local jobs aren't really available to non-Mandarin speakers. But if you're looking to kick off your remote consulting career or bootstrap some product you wanna build, there's not really anywhere on earth that combines quality of life with cost of living like Taiwan does.
Applied to quite a few EU jobs via LinkedIn but nothing came of it; I suspected they wanted people already in EU countries.
Both of us are US citizens, but we don't want to retire in the US. It seems to be becoming a s*hole, especially around healthcare.
I'm not sure the claim "we can use good devs" is true from the perspective of European corporations. But would love to learn otherwise?
And of course: where in Europe?
By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.
But no one believes this, because sometimes you just have to do what you have to do, and one of those things is paying your taxes.
Models that are trained only on public domain material[0]. For value-add usage, not simply marketing or gamification gimmicks...
[0] I think the data can be licensed, and not just public domain; e.g. if the creators are suitably compensated for their data to be ingested
None, since 'legal' for AI training is not yet defined, but OLMo is trained on the Dolma 3 dataset, which includes:
1. Common crawl
2. Github
3. Wikipedia, Wikibooks
4. Reddit (pre-2023)
5. Semantic Scholar
6. Project Gutenberg
"AI products" that are being built today are amoral, even by capitalism's standards, let alone by good business or environmental standards. Accepting a job to build another LLM-selling product would be soul-crushing to me, and I would consider it as participating in propping up a bubble economy.
Taking a stance against it is a perfectly valid thing to do, and by disclosing it plainly the author is not claiming to be a victim through no doing of their own. By not seeing past that caveat and missing the whole point of the article, you've successfully averted your eyes from another thing that is unfolding right in front of us: the majority of American GDP growth is AI this or that, and the majority of it has no real substance behind it.
But I also understand this is a design and web development company. They're not refusing contracts to build AI that will take people's jobs, or violate copyright, or be used in weapons. They're refusing product marketing contracts; advertising websites, essentially.
This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them. I'll respect the decision, sure, but it very much is an inconsequential self-inflicted wound. It's more amoral to fully pay your federal taxes if you live in the USA for example, considering a good chunk are ultimately used for war, the CIA, NSA, etc, but nobody judges an average US-resident for paying them.
HTML + CSS is also one area where LLMs do surprisingly well. Maybe there’s a market for artisanal, hand-crafted, LLM-free CSS and HTML out there only from the finest experts in all the land, but it has to be small.
I suspect young people are going to flee the industry in droves. Everyone knows corporations are doing everything in their power to replace entry level programmers with AI.
Nobody doubts the former is better, and some people make money doing it, but that market is a niche because most people prioritize price and 80/20 tradeoffs.
Average mass produced clothes are better than average hand made clothing. When we think of hand made clothing now, we think of the boutique hand made clothing of only the finest clothing makers who have survived in the new market by selling to the few who can afford their niche high-end products.
This one. Inferred from context about this individual’s high quality above LLMs.
The only perk artisans enjoy then is uniqueness of the product as opposed to one-size fits all of mass manufacturing. But the end result is that while we still have tailors for when we want to get fancy, our clothes are nearly entirely machine made.
Having the best-tested backend and a beautiful frontend that works across all browsers and devices, not just the main three browsers your customers use, isn't paying the bills.
When you think 99.99% of company websites are garbage, it might be your rating scale that is broken.
This reminds me of all the people who rage at Amazon’s web design without realizing that it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
In this case, running a studio without using or promoting AI becomes a kind of sub-game that can be “won” on principle, even if it means losing the actual game that determines whether the business survives. The studio is turning down all AI-related work, and it’s not surprising that the business is now struggling.
I’m not saying the underlying principle is right or wrong, nor do I know the internal dynamics and opinions of their team. But in this case the cost of holding that stance doesn’t fall just on the owner, it also falls on the people who work there.
They have a right to do business with whomever they wish. I'm not suggesting that they change this. However they need to face current reality. What value-add can they provide in areas not impacted by AI?
I'd much rather see these kinds of posts on the front page. They're well thought-out and I appreciate the honesty.
I think that, when you're busy following the market, you lose what works for you. For example, most business communication happens through push-based traffic: you get assigned work and you have X time to solve it all. If you don't, we'll have some extremely tedious reflection meeting that leads nowhere. Why not do pull-based work, where you get done what you get done?
Is the issue here that customers aren't informed about when a feature is implemented? Because the alternative is promising date X and delaying it three times because customer B is more important.
Any white-collar field—high-skill or not—that can be solved logically will eventually face the same pressure. The deeper issue is that society still has no coherent response to a structural problem: skills that take 10+ years to master can now be copied by an AI almost overnight.
People talk about “reskilling” and “personal responsibility,” but those terms hide the fact that surviving the AI era doesn’t just mean learning to use AI tools in your current job. It’s not that simple.
I don’t have a definitive answer either. I’m just trying, every day, to use AI in my work well enough to stay ahead of the wave.
I hope things with AI will settle soon, that there will be applications that actually make sense, and that some sort of new balance will be established. Right now it's a nightmare. Everyone wants everything with AI.
All the _investors_ want everything with AI. Lots of people - non-tech workers even - just want a product that works and often doesn't work differently than it did last year. That goal is often at odds with the ai-everywhere approach du jour.
I hope things turn around for them; it seems like they do good work.
Careful now, if they get their way, they’ll be both the market and the government.
All of this money is being funneled and burned away on AI shit that isn't even profitable nor has it found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.
I wonder what their plan was before LLMs seemed promising?
These techbros got rich off the dotcom boom hype and lax regulation, and have spent 20 years since attempting to force themselves onto the throne, and own everything.
We Brits simply don't have the same American attitude towards business. A lot of Americans simply can't understand that chasing riches at any cost is not a particularly European trait. (We understand how things are in the US. It's not a matter of just needing to "get it" and seeing the light)
I intentionally ignored the biggest invention of the 21st century out of strange personal beliefs and now my business is going bankrupt
The equivalent of that comic where the cyclist intentionally spoke-jams themselves and then acts surprised when they hit the dirt.
But since the author puts moral high horse jockeying above money, they've gotten what they paid for - an opportunity to pretend they're a victim and morally righteous.
Par for the course
We said that WordPress would kill front-end work, but years later people still employ developers to fix WordPress messes.
The same thing will happen with AI-generated websites.
Probably even moreso. I've seen the shit these things put out, it's unsustainable garbage. At least Wordpress sites have a similar starting point. I think the main issue is that the "fixing AI slop" industry will take a few years to blossom.
I have a family member that produces training courses for salespeople; she's doing fantastic.
This reminds me of some similar startup advice of: don't sell to musicians. They don't have any money, and they're well-versed in scrappy research to fill their needs.
Finally, if you're against AI, you might have missed how good a learning tool LLMs can be. The ability to ask _any_ question, rather than being stuck on video rails, is a huge time-saver.
"Moral" is mentioned 91 times at last count.
Where is that coming from? I understand AI is a large part of the discussion. But then where is /that/ coming from? And what do people mean by "moral"?
EDIT: Well, he mentions "moral" in the first paragraph. The rest is pity posting, so to answer my own question: morality is one of the few generally interesting things in the post. But in the last year I've noticed a lot more talk about "morals" on HN. "Our morals", "he's not moral", etc. Anyone else?
I don't use AI tools in my own work (programming and system admin). I won't work for Meta, Palantir, Microsoft, and some others because I have to take a moral stand somewhere.
If a customer wants to use AI or sell AI (whatever that means), I will work with them. But I won't use AI to get the work done, not out of any moral qualm but because I think of AI-generated code as junk and a waste of my time.
At this point I can make more money fixing AI-generated vibe coded crap than I could coaxing Claude to write it. End-user programming creates more opportunity for senior programmers, but will deprive the industry of talented juniors. Short-term thinking will hurt businesses in a few years, but no one counting their stock options today cares about a talent shortage a decade away.
I looked at the sites linked from the article. Nice work. Even so, I think hand-crafted front-end work turned into a commodity some time ago, and now the onslaught of AI slop will kill it off. Those of us in the business of web sites and apps can appreciate mastery of HTML, CSS, and Javascript, beautiful designs, and user-oriented interfaces. Sadly, most business owners don't care that much and lack the perspective to tell good work from bad. Most users don't care either. My evidence: 90% of public web sites. No one thinks WordPress got the market share it has because of technical excellence or how it enables beautiful designs and UI. Before LLMs could crank out web sites, we had an army of amateur designers and business owners doing it with WordPress, paying $10/hr or less on Upwork and Fiverr.
The market is literally telling them what it wants and potential customers are asking them for work but they are declining it from "a moral standpoint"
and instead blaming "a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis"
This is a failure of leadership at the company. Adapt or die, your bank account doesn't care about your moral redlines.
Can someone explain this?
* The environmental cost of inference in aggregate, and of training specifically, is non-negligible
* Training is performed (it is assumed) with material that was not consented to be trained upon. Some consider this to be akin to plagiarism or even theft.
* AI displaces labor, weakening the workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.
* The primary companies who are selling AI products have, at times, controversial pasts or leaders.
* Many products are adding AI where it makes little sense, and those systems are performing poorly. Nevertheless, some companies shove AI everywhere, cheapening products across a range of industries.
* The social impacts of AI, particularly generative media and AI-mediated shopping in places like YouTube, Amazon, Twitter, Facebook, etc., are not well understood and could contribute to increased radicalization and balkanization.
* AI is enabling an attention Gish gallop in places like search engines, where good results are being crowded out by slop.
Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)
Please note that some accounts downvote, on principle, any comment that talks about downvoting.
Should people not look for reasons to be concerned?
See the diversity of views.
My experience with large companies (especially American Tech) is that they always try to deliver the product as cheaply as possible, are usually evil, and never cared about social impacts. And HN has been steadily complaining about the lowering quality of search results for at least a decade.
I think your points are probably a fair snapshot of people's moral issues, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable-sounding morality point so they don't have to admit that.
And while some might be doing what you say, others might genuinely have a moral threshold they are unwilling to cross. Who am I to tell someone they don't actually have a genuinely held belief?
Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.
IMO, LLMs are (currently) good at 3 things:
1. Education
2. Structuring unstructured data
3. Turning natural language into code
From this viewpoint, it seems there is a lot of opportunity to both help new clients and create more compelling courses for your students (a rough sketch of point 2 is below).
No need to buy the hype, but no reason to die from it either.
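To make point 2 concrete, here's a minimal sketch of structuring unstructured data with an LLM. It assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt, and JSON shape are illustrative assumptions, not a vendor recommendation:

    # Minimal sketch: turn messy free text into a fixed JSON shape.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
    # the model name and schema are illustrative only.
    import json
    from openai import OpenAI

    client = OpenAI()

    def extract_contact(free_text: str) -> dict:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any capable chat model should work
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content":
                    "Extract name, email, and company from the user's text. "
                    "Reply with a JSON object using exactly those keys; "
                    "use null for anything missing."},
                {"role": "user", "content": free_text},
            ],
        )
        return json.loads(response.choices[0].message.content)

    print(extract_contact("Met Jane Doe (jane@acme.io) from Acme at the meetup."))

The same pattern covers a lot of unglamorous client work: invoices, support emails, and CRM notes are all unstructured text waiting for a fixed schema.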
Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.
I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
But that is exactly what the "is-ought problem" manifests, no? If morals are "oughts", then oughts are goal-dependent, i.e. they depend on personally defined goals. To you it's scary; to others it is the way it should be.
And it continued growing nonstop all the way through ~early Sep 2024, and it has been slowing down ever since, by now coming to an almost complete stop, to the point that I even fired all my sales staff: they had been treading water, with not even calls let alone deals, for half a year before being dismissed in mid-July this year.
I think it won't return: custom dev is done. The myth of "hiring coders to get rich" is over. No surprise, because it never worked; sooner or later people had to realize it. I may check again in 2-3 years to see how the market is doing, but I'm not at all hopeful.
Switched into miltech where demand is real.
We are still nowhere near to get climate change under control. AI is adding fuel to the fire.
You will continue to lose business, if you ignore all the 'AI stuff'. AI is here to stay, and putting your head in the sand will only leave you further behind.
I've known people over the years that took stands on various things like JavaScript frameworks becoming popular (and they refused to use them) and the end result was less work and eventually being pushed out of the industry.
Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.
User education, for example, can be done in ways that don't even feel like gen AI and can drastically improve activation, e.g. recommending feature X based on activity Y, tailored to the user's use case (a hypothetical sketch below).
If you won't even lean into things like this, you're just leaving yourself behind.
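For instance, a hypothetical sketch of that idea, where an LLM sits invisibly behind a plain-looking product tip (the feature catalog and the OpenAI-style API usage are assumptions for illustration, not anyone's actual product):

    # Hypothetical sketch: pick ONE unused feature to suggest, based on
    # recent user activity. The user only ever sees a normal product tip;
    # nothing here "feels" like gen AI. Assumes the OpenAI Python SDK.
    from openai import OpenAI

    client = OpenAI()

    FEATURES = {  # made-up feature catalog for illustration
        "saved_filters": "Save a search as a reusable filter",
        "csv_export": "Export any table view to CSV",
        "scheduled_reports": "Email yourself a report on a schedule",
    }

    def recommend_feature(recent_events: list[str]) -> str:
        prompt = (
            "A user recently did: " + "; ".join(recent_events) + ". "
            "Which ONE of these unused features would help them most? "
            "Answer with just the key.\n"
            + "\n".join(f"- {k}: {v}" for k, v in FEATURES.items())
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content.strip()
        # Fall back to a safe default if the model answers off-script.
        return reply if reply in FEATURES else "saved_filters"

    print(recommend_feature(["ran the same search 5 times",
                             "copied results out by hand"]))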
That's horrifying.
Sounds like a self inflicted wound. No kids I assume?
Two fundamental laws of nature: the strong prey on the weak, and survival of the fittest.
Therefore, why is it that those who survive are not the strong preying on the weak, but rather the "fittest"?
Next year's development of AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive in this fierce competition.