I did not look for a consulting contract for 18 years. Through my old network more quality opportunities found me than I could take on.

That collapsed during the COVID lockdowns. My financial services client cut loose all consultants and killed all 'non-essential' projects. Even though mine (which they had already approved) would have saved them 400K a year, they did not care! Top down, the word came to cut everyone -- so they did.

This trend is very much a top-down push. Inorganic. People with skills and experience are viewed by HR and their AI software as risky to leave in place and unlikely to respond to whatever pressures they like to apply.

Since then it's been more of the same as far as consulting.

I've come to the conclusion I'm better served by working on smaller projects I want to build and not chasing big consulting dollars. I'm happier (now) but it took a while.

An unexpected benefit of all the pain is that I like making things again... but I am using Claude Code and Gemini. Amazing tools if you already have experience and know what you want out of them -- otherwise they mainly produce crap in the hands of the masses.

>> even when mine (that they had already approved) would save them 400K a year

You learn lessons over the years, and this is one I learned at some point: you want to work in revenue centers, not cost centers. Aside from the fixed math (i.e. a hard limit on savings vs. unlimited revenue growth), there's the psychological component of teams and management. I saw this in the energy sector, where our company had two products: selling to the drilling side was about helping get more oil & gas; selling to the remediation side was about fulfilling their obligations as cheaply as possible. IT / dev at a non-software company is almost always a cost center.

> You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers, not cost centers.

The problem is that many places don't see the cost portions of revenue centers as investment, but still as costs. The world is littered with stories of businesses messing about with their core competencies. An infamous example was Hertz(1) outsourcing their website reservation system to Accenture, with comically bad results. The website/app is how people reserve cars - the most important part of the revenue-generating system.

1. https://news.ycombinator.com/item?id=32184183

I would go further and say that even at software companies, even for dev that goes directly into the product, engineering is often seen as a cost center.

The logic is simple, if unenlightened: "What if we had cheaper/fewer nerds, but we made them nerd harder?"

So while working in a revenue center is advantageous, you still have to be in one that doesn't view your kind as too fungible.

mosura · 52 minutes ago
Yeah, these days if it isn't ops bringing in revenue, it's seen as a cost.
I work as a consultant and tend to focus on helping startups grow their revenue. And what you're saying here is almost word for word what I often recommend as the *first thing* they should do.

In many cases I've seen projects increase their revenue substantially by making simple messaging pivots. E.g., instead of having your website say "save X dollars on Y," try "earn X more dollars using Y." It's incredible how much impact simple messaging can have on your conversion rates.
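To make the arithmetic behind that concrete, here is a minimal sketch with entirely hypothetical numbers (the visitor count, conversion rates, and order value are all assumptions, not figures from any real client):

```python
# Hypothetical illustration of how a small conversion-rate lift from a
# messaging pivot translates into revenue. All numbers are made up.
visitors = 10_000        # monthly visitors (assumed)
baseline_rate = 0.020    # 2.0% conversion with "save X dollars on Y" copy
improved_rate = 0.026    # 2.6% conversion after the "earn X more" pivot (assumed)
order_value = 50         # average order value in USD (assumed)

baseline_revenue = visitors * baseline_rate * order_value  # 10,000 USD
improved_revenue = visitors * improved_rate * order_value  # 13,000 USD
lift = improved_revenue / baseline_revenue - 1
print(f"Revenue lift: {lift:.0%}")  # a 0.6-point rate change -> 30% more revenue
```

The point of the sketch is just that revenue scales linearly with conversion rate, so what looks like a tiny wording change can move the top line disproportionately.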

This extends beyond just revenue. Focusing on revenue centers instead of cost centers is great career advice as well.

>> even when mine (that they had already approved) would save them 400K a year

> You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers

Totally agree. This is a big reason I went into solutions consulting.

In that particular case, it was a massive risk-management compliance solution which they had to have in place, but they were getting bled dry by the existing vendor due to several architectural and implementation mistakes made way back before I ever got involved -- mistakes they were sort of stuck with.

I had a plan to unstick them at 1/5 the annual operating cost with better performance. I presented it to executives, even to Amazon, who would have been the infrastructure vendor, to rave reviews.

We had a verbal contract and I was waiting for paperwork to sign... and then Feb 2020... and then crickets.

This is golden career advice. Heed it well.
It really is.
Very few people suspected that GitHub was being used to train AI back when we were all being pushed toward the best practice of frequent commits.

A little earlier, very few suspected that our mobile phones were not only listening to our conversations and training some AI model, but that their gyroscopes were also being used to profile our daily routines (keeping the phone charging near the pillow, looking at it first thing in the morning).

Now we are being asked to use AI to write our code. I am quite anxious about what part of our lives we are selling now... Perhaps I am no longer their prime focus (50+), but who knows.

Going with the flow seems like bad advice. Going analog, as in iRobot, seems like the most sane thing.

>> Going with the flow seems like bad advice. Going analog, as in iRobot, seems like the most sane thing.

I've been doing a lot of photography in the last few years with my smartphone and because of the many things you mentioned, I've forgone using it now. I'm back to a mirrorless camera that's 14 years old and still takes amazing pictures. I recently ran into a guy shutting down his motion picture business and now own three different Canon HDV cameras that I've been doing some interesting video work with.

It's not easy transferring miniDV tape to my computer, but the standard-definition resolution has a very cool retro vibe that I've found a LOT of people have been missing and are coming back around to.

I'm in the same age range, and I couldn't have fathomed becoming a developer in the early aughts, in the midst of a gold rush for developer talent, then suddenly seeing the entire tech world contract almost overnight.

Strange tides we're living in right now.

If I had gone with the flow in 1995 I would have got my MCSE and worked for a big government bureaucracy.

Instead I found Linux/BSD and it changed my life and I ended up with security clearances writing code at defense contractors, dot com startups, airports, banks, biotech/hpc, on and on...

Exactly right about GitHub. Facebook is the same for training on photos and social relationships, etc.

They needed to generate a large body of data to train our future robot overlords to enslave us.

We the 'experienced' are definitely not their target -- too much independence of thought.

To your point, I use an old flip phone and VoIP even though I have written iOS and Android apps. My home has no WiFi. I do not use Bluetooth. There are no cameras enabled on any device (except a camera).

zwnow · 6 hours ago
They also produce crap once you leave the realm of basic CRUD web apps... Try using them with Microsoft's Business Central bullshit; it does not work well.
I have worked with a lot of code generation systems.

LLMs strike me as mainly useful in the same way. I can get most of the boilerplate and tedium done with LLM tools. Then for core logic, especially learning or metaprogramming patterns, I need to jump in.

Breaking tasks down to bite-sized pieces, and writing detailed architecture and planning docs for the LLM to work from, is critical to managing increasing complexity and staying within context windows. Also critical is ruthlessly throwing away things that do not fit the vision, and not being afraid to throw whole days away (not too often, though!).

For reference, I have built stuff that goes way beyond a CRUD app with these tools in 1/10th of the time it previously took me, or less -- the key, though, is that I already knew how to do it and how to validate LLM outputs. I knew exactly what I wanted a priori.

Code generation has technically always 'replaced' junior devs and has been around for ages; the results of the generation are just a lot better now. In the past, doing code generation regularly was a mixed bag of benefits and hassles; now it works much better and the cost is much less.

I started my career as a developer, and the main reasons I became a solutions/systems guy were money and that I hated the tedious boilerplate phase of all software development projects over a certain scale. I never stopped coding, because I love it -- just not for large, soul-destroying enterprise software projects.

Quick note that this has not been my experience. LLMs have been very useful with codebases as far from crud web apps as you can get.
This is a consistent pattern.

Two engineers use LLM-based coding tools; one comes away with nothing but frustration, the other one gets useful results. They trade anecdotes and wonder what the other is doing that is so different.

Maybe the other person is incompetent? Maybe they chose a different tool? Maybe their codebase is very different?

I would imagine it has a lot to do with the programming language and other technologies in the project. The LLMs have tons of training data on JS and React. They probably have relatively little on Erlang.
Mass of learning material doesn't equal quality, though. The amount of poor React code out there is not to be underestimated. I feel like LLM-generated Gleam code was way cleaner (after some agentic loops due to syntactic misunderstandings) than TS/React, where the model is so biased toward producing overly verbose slop.
Even if you're using JS/React, the level of sophistication of the UI seems to matter a lot.

"Put this data on a web page" is easy. Complex application-like interactions seem to be more challenging. It's faster/easier to do the work by hand than it is to wait for the LLM, then correct it.

But if you aren't already an expert, you probably aren't looking for complex interaction models. "Put this data on a web page" is often just fine.

This has been my experience, effectively.

Sometimes I don't care for things to be done in a very specific way. For those cases, LLMs are acceptable-to-good. Example: I had a networked device that exposes a proprietary protocol on a specific port. I needed a simple UI tool to control it; think toggles/labels/timed switches. With a couple of iterations, the LLM produced something good enough for my purposes, even if it wasn't exactly a showcase of best UX practices.

Other times, I very much care for things to be done in a very specific way. Sometimes due to regulatory constraints, others because of visual/code consistency, or some other reasons. In those cases, getting the AI to produce what I need specifically feels like an exercise in herding incredibly stubborn cats. It will get done faster (and better) if I do it myself.

neilv · 4 hours ago
It's like when your frat house has a filing cabinet full of past years' essays.

Protestant Reformation? Done, 7 years ago, different professor. Your brothers are pleased to liberate you for Saturday's house party.

Barter Economy in Soviet Breakaway Republics? Sorry, bro. But we have a Red Square McDonald's feasibility study; you can change the names?

There was actually a good article about this the other day which makes sense to me; it comes down to function vs. form, kinda: https://www.seangoedecke.com/pure-and-impure-engineering/
· 3 hours ago
[flagged]
I earned their respect over many years of hard work -- hardly a freebie!

I will say that being social and being in a scene at the right time helps a lot -- timing is indeed almost everything.

>I will say that being social and being in a scene at the right time helps a lot

I concur with that, and that's what I tell every single junior/young dev that asks for advice: get out there and get noticed!

People who prefer to lead more private lives, or are more reserved in general, have far fewer opportunities coming their way; they're forced to take the hard path.

>I'm not for/or against a particular style, it must be real nice if life just solves everything for you while you just chill or whatever. But, a nice upside of being made of talent instead of luck is that when luck starts to run out, well, ... you'll be fine anyway :).

This is wildly condescending. Holy.

rozap · 4 hours ago
Talent makes luck. Ex-colleagues reach out to me and ask me to work with them because they know the type of work I do, not because it's lucky.

Also, wtf did I just read. OP said he uses his network to find work, and you go on a rant about how you're rising and grinding to get that bread, and everything you have ever earned comes completely from you, no help from others? Jesus Christ dude, chill out.

My perspective is just as valid, and I also wrote,

>I'm not for/or against a particular style

... so I'm not sure why some of you took offense in my comment, but I can definitely imagine why :)

>Ex-colleagues reach out to me and ask me to work with them

Never happened to me, that's the point I'm making.

1. I wish work just landed at my feet.

2. As that never happened and most likely was never going to happen, I had to learn another set of skills to overcome that.

3. That made me a much more resilient individual.

(4. This is not meant as criticism to @arthurfirst's style. I wish clients just called me and I didn't have to save all that money/time I spend taking care of that)

>>I'm not for/or against a particular style

... so I'm not sure why some of you took offense in my comment, but I can definitely imagine why :)

Because surrounding your extremely condescending take with "just my opinion"-style hedging still results in an extremely condescending take.

I’ve seen Picallilli’s stuff around and it looks extremely solid. But you can’t beat the market. You either have what they want to buy, or you don’t.

> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

The market is speaking. Long-term you’ll find out who’s wrong, but the market can usually stay irrational for much longer than you can stay in business.

I think everyone in the programming education business is feeling the struggle right now. In my opinion this business died 2 years ago – https://swizec.com/blog/the-programming-tutorial-seo-industr...

I get the moral argument and even agree with it, but we are a minority, and of course we expect to be able to sell our professional skills -- but if you are 'right' and out of business, nobody will know. Is that any better than being 'wrong' and still in business?

You might as well work on product marketing for AI, because that is where the client dollars are allocated.

If it's hype, at least you stayed afloat. If it's not, maybe you find a new angle if you can survive long enough? Just survive and wait for things to shake out.

Yes, actually -- being right and out of business is much better than being wrong and in business when it comes to ethics and morals. I am sure you could find a lot of moral values you would simply refuse to compromise on for the sake of business. The line between moral value and heavy preference, however, is blurry -- and that is probably where most people have AI placed on the moral spectrum right now. Being out of business shouldn't be a death sentence, and if it is, then maybe we are overlooking something more significant.

I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.

What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations. That's not to say two things can't both be bad, but it looks to me like people are using the moral argument as a way to avoid learning something new while virtue signaling how ethical they are, while at the same time refusing to sacrifice, for ethical reasons, things they are already accustomed to once they learn more about them. It all seems rather convenient.

The main issue I see discussed is unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isn't unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable and all benefits taken back, so as to kill any perceived incentive to repeat the actions, but once the damage is done, why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?

Maybe we should be targeting these AI companies if they are unethical and stop them from training any new models using the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public, but models that are already trained can actually be used for good and I personally see it as unethical not to.

Sorry for the ramble, but it is a very interesting topic that should probably have as much discussion around it as we can get.

> but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?

yes [0]

[0]: https://en.wikipedia.org/wiki/Raytheon

Can you... elaborate?
Not the parent.

I believe that they are making a moral argument, which I'm sympathetic to, having quit a job before because I found that my personal morals didn't align with the company's, and the cognitive dissonance of continuing to work there was weighing heavily on me. The money wasn't worth the mental fight every day.

So, yes, in some cases it is better to be "right" and be forced out of business than "wrong" and remain in business. But you have to look beyond just revenue numbers. And different people will have different ideas of "right" and "wrong", obviously.

Moral arguments are a luxury of thinkers, and only a small percentage of people can be reasoned with that way anyway. You can manipulate on morals, but not reason, in most cases.

Agreed that you cannot be in a toxic situation and not have it affect you -- so if THAT is the case -- by all means exit asap.

If it's a perceived ethical conflict, the only one you need to worry about is the golden rule -- and I do not mean 'he who has the gold makes the rules,' I mean the real one. If that conflicts with what you are doing, then also probably make an exit -- but many do not care, trust me... They would take everything from you and feel justified as long as they are told (just told) it's the right thing. They never ask themselves. They do not really think for themselves. This is most people. Sadly.

But the parent didn't really argue anything, they just linked to a Wikipedia article about Raytheon. Is that supposed to intrinsically represent "immorality"?

Have they done more harm than, say, Meta?

>they just linked to a Wikipedia article about Raytheon

Yeah, that's why I took a guess at what they were trying to say.

>Is that supposed to intrinsically represent "immorality"?

What? The fact that they linked to Wikipedia, or specifically Raytheon?

Wikipedia does not intrinsically represent immorality, no. But missile manufacturing is a pretty typical example, if not the typical example, of a job that conflicts with morals.

>Have they done more harm than, say, Meta?

Who? Raytheon? The point I'm making has nothing to do with who sucks more between Meta and Raytheon.

Has anyone considered that the demand for web sites and software in general is collapsing?

Everyone and everything has a website and an app already. Is the market becoming saturated?

I know a guy who has this theory, in essence at least. Businesses use software and other high-tech to make efficiency gains (fewer people getting more done). The opportunities for developing and selling software were historically in digitizing industries that were totally analog. Those opportunities are all but dried up and we're now several generations into giving all those industries new, improved, but ultimately incremental efficiency gains with improved technology. What makes AI and robotics interesting, from this perspective, is the renewed potential for large-scale workforce reduction.
The demand is massively increasing, but it's being filled by fewer people and more GPUs.
xnx · 1 hour ago
> In my opinion this business died 2 years ago

It was an offshoot bubble of the bootcamp bubble which was inflated by ZIRP.

I think your post pretty well illustrates how LLMs can and can't work. Favoriting this so I can point people to it in the future. I see so many extreme opinions on it like from how LLM is basically AGI to how it's "total garbage" but this is a good, balanced - and concise! - overview.
Markets are not binary, though, and this is also what it looks like when you're early (unfortunately, similar to when you're late too). So they may well be able to carve out a valid and sustainable market precisely because they're not doing what everyone else is doing right now. I'm currently taking online Spanish lessons with a company that uses people as teachers, even though this area is under intense attack from AI. There is no comparison, and what's really great is using many tools (including AI) to enhance a human product. So far we're a long way from the AI tutor that my boss keeps envisioning. I actually doubt he's tried to learn anything deep lately, let alone validated his "vision".
Not wanting to help the rich get richer means you'll be fighting an uphill battle. The rich typically have more money to spend. And as others have commented, not doing anything AI related in 2025-2026 is going to further limit the business. Good luck though.
Rejecting clients based on how you wish the world would be is a strategy that only works when you don’t care about the money or you have so many clients that you can pick and choose.

Running a services business has always been about being able to identify trends and adapt to market demand. Every small business I know has been adapting to trends or trying to stay ahead of them from the start, from retail to product to service businesses.

Rejecting clients when you have enough is a sound business decision. Some clients are too annoying to serve. Some clients don't want to pay. Sometimes you have more work than you can do... It is easy to think when things are bad that you must take any and all clients (and when things are bad enough you might be forced to), but that is not a good plan and to be avoided. You should be choosing your clients. It is very powerful when you can afford to tell someone I don't need your business.
Sure, but it seems here that they are rejecting everything related to AI, which is probably not a smart business move, as they also remark, since this year was much harder for them.

The fact is, a lot of new business is getting done in this field, with or without them. If they want to take the "high road," so be it, but they should be prepared to accept the consequences of worse revenues.

Is it though? We don't know the future. Is this just a dip in a growing business, or sign of things to come? Even if AI does better than the most optimistic projections it could still be great for a few people to be anti-ai if they are in the right place selling to the right people.

Without knowing the future I cannot answer.

People who make products with AI are not necessarily rich, often it's solo "vibe coders."
This is the type of business that's going to be hit hard by AI. And the businesses that survive will be the ones that integrate AI most successfully. It's an enabler, a multiplier. It's just another tool, and those who wield their tools best tend to do well.

Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.

There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.

Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.

> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

So let's all just give zero fucks about our moral values and just multiply monetary ones.

>So let's all just give zero fucks about our moral values and just multiply monetary ones.

You are misconstruing the original point. They are simply suggesting that the moral qualms of using AI are not that weighty -- not to the vast majority of consumers, nor to the government. There are a few people who might exaggerate these moral issues out of self-interest, but they won't matter in the long term.

That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.

If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.

You can start by explaining what your specific moral value is that goes against AI use? It might bring to clarity whether these values are that important at all to begin with.

vkou · 10 hours ago
> If AI can make things 1000x more efficient,

Is that the promise of the faustian bargain we're signing?

Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?

While humans have historically only mildly reduced their working time, down to today's 40-hour workweek, their consumption has gone up enormously, and whole new categories of consumption have opened up. So my prediction is that while you'll never live in a 900,000 sq ft apartment (unless we get O'Neill cylinders from our budding space industry), you'll probably consume a lot more, while still working a full week.
40 hours is probably up from pre-industrial times.

Edit: There is some research covering work time estimates for different ages.

We could probably argue until the end of time about the quality of life then versus now. In general, the metric of consumption versus time spent obtaining that consumption has gotten better over time.
What was all this free time spent doing in the pre-industrial era?
Alternating between grinding your knife and making wood sculptures.
I don't want to "consume a lot more". I want to work less, and for the work I do to be valuable, and to be able to spend my remaining time on other valuable things.
You can consume a lot less on a surprisingly small salary, at least in the U.S.

But it requires giving up things a lot of people don't want to, because consuming less once you are used to consuming more sucks. Here is a list of things people can cut from their life that are part of the "consumption has gone up" and "new categories of consumption were opened" that ovi256 was talking about:

- One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.

- One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/month in bills and amortized hardware. These were a luxury 30 years ago.

- One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.

- One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.

- One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.

I could keep going, but by this point I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
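The car figure is easy to sanity-check. A minimal back-of-the-envelope sketch, assuming an all-in cost of roughly $0.70 per mile (fuel, insurance, maintenance, and depreciation combined -- the per-mile rate is an assumption, in the ballpark of published cost-of-driving estimates, not a figure from this thread):

```python
# Back-of-the-envelope check of the "over a thousand dollars a month
# per 15,000 miles" claim. The per-mile rate is an assumed round number.
cost_per_mile = 0.70     # assumed all-in USD cost per mile driven
miles_per_year = 15_000

annual_cost = cost_per_mile * miles_per_year   # 10,500 USD/year
monthly_cost = annual_cost / 12                # 875 USD/month
print(f"~${monthly_cost:,.0f}/month")
```

At that assumed rate the figure lands a bit under $1,000/month; with a higher assumed rate (say $0.85/mile, closer to newer-vehicle estimates), it crosses it. Either way, the order of magnitude of the claim holds up.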

> - One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.

It's not clear that it's still possible to function in society today without a cell phone and a cell phone plan. Many things that were once possible to do without one now require it.

> - One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/months in bills and amortized hardware. These were a luxury 30 years ago.

Maybe you can replace these with the cell phone + plan.

> - One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.

It's not clear that imported food is cheaper than locally grown food. Also I'm not sure you have the right time frame. I'm pretty sure my parents were buying imported produce in the winter when I was a kid 50 years ago.

> - One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.

Agreed.

> - One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.

Yes but in urban areas whatever you're saving on cars you are probably spending on higher rent and mortgage costs compared to rural areas where cars are a necessity. And if we're talking USA, many urban areas have terrible public transportation and you probably still need Uber or the equivalent some of the time, depending on just how walkable/bike-able your neighborhood is.

> rural areas where cars are a necessity

> It's not clear that it's still possible to function in society today with out a cell phone

Like I said... I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.

---

As an aside, I live in a rural area. The population of my county is about 17,000 and the population of its county seat is about 3,000. We're a good 40 minutes away from the city that centers the Metropolitan Statistical Area. A 1 bedroom apartment is $400/mo and a 2 bedroom apartment is $600/mo. In one month, minimum wage will be $15/hr.

Some folks here do live without a car. It is possible. They get by in exactly the ways I described (except some of the Amish/Mennonites, who also use horses). It's not preferred (except by some of the Amish/Mennonites), but one can make it work.

> on a surprisingly small salary

But if we take "surprisingly small salary" to literally mean salary, most (... all?) salaried jobs require you to work full time, 40 hours a week. Unless we consider cushy remote tech jobs, but those are an odd case and likely to go away if we assume AI is taking over there.

Part time / hourly work is largely less skilled and much lower paid, and you'll want to take all the hours you can get to be able to afford outright necessities like rent. (Unless you're considering rent as consumption/luxury, which is fair)

It does seem like there's a gap in terms of skilled/highly paid but hourly/part time work.

(Not disagreeing with the rest of your post though)

This didn't say they wanted to consume less; presumably their consumption is the right level for them.

> you can consume as much as an average person from the 1950s by working just a few days a week.

It's not always possible to live like a person from the 1950s due to societal changes. And many jobs that pay well do not allow you to work part time.

Not cigarettes I can't!

Save up and then FIRE; retire early by moving to a lower cost-of-living area.

So you are agreeing with the parent? If consumption has gone up a lot and input hours have gone down or stayed flat, that means you are able to work less.
> or stayed flat

But that's not what they said; they said they want to work less. As the GP post said, they'd still be working a full week.

I do think this is an interesting point. The trend for most of history seems to have been vastly increasing consumption/luxury while work hours somewhat decrease. But have we reached the point where that's not what people want? I'd wager most people in rich developed countries don't particularly want more clothes, gadgets, cars, or fast food. If they can get the current typical middle class share of those things (which to be fair is a big share, and not environmentally sustainable), along with a modest place to live, they (we) mainly want to work less.

Not unless rent is cheap, it doesn't. It might mean my boss is able to work less.
Rent can be pretty cheap depending upon where you live. If you want to live in a high cost of living area, that's a form of consumption.
If I live somewhere, and maintain the building myself, what's being consumed?

The spot of land is being consumed, no? If it's HCoL, clearly that's land that a lot of people wish they could live on but can't.

That sounds like a nightmare. Let's sell out a generation so that we can consume more. Wow.

They signed it for you, as there will be 1000x fewer workers needed, so they didn't need to ask anymore.
You will probably be dead.

But _somebody_ will be living in a 900,000 sq ft apartment and working an hour a week, and the concept of money will be defunct.

What the parent is saying is that what works is what will matter in the end. That which works better than something else will become the method that survives in competition.

You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.

Oxycontin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit since we're filling a "need"
Using LLMs doesn't kill people. I'm sure there are some exceptions, like the OpenAI suicide case that was in the news, but not to the degree of Oxycontin.

Not yet, maybe... Once we factor in the environmental damage that generative AI, and all the data centers being built to power it, will inevitably cause, I think it will become increasingly difficult to make the assertion you just did.
AI is just a tool, like most other technologies, it can be used for good and bad.

Where are you going to draw the line? Only if it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?

If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest, and even if you have laws to ban it, other countries will allow it.

I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)
You are thinking too small.

The goal of AI is NOT to be a tool. It's to replace human labor completely.

This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.

To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people running them care at all.

The Amish don't ban all tech that can threaten community. They will typically have a phone or computer in a public communications house. It's being a slave to the tech that they oppose (such as carrying that tech with you all the time because you "need" it).

> AI is just a tool, like most other technologies, it can be used for good and bad.

The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).

I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.

However like social media before it I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not and practically speaking those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.

Regardless of whether you use AI or social media, your happiness (or lack thereof) is largely under your own control.

> The same could be said of social media

Yes, absolutely.

Just because it's monopolized by evil people doesn't mean it's inherently bad. In fact, most people here have seen examples of it done in a good way.

> In fact, most people here have seen examples of it done in a good way.

Like this very website we're on, proving the parent's point in fact.

If it is just a tool, it isn't AI. ML algorithms are tools that are ultimately as good or bad as the person using them and how they are used.

AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.

I'm not sold yet on whether LLMs are AI; my gut says no and I haven't been convinced yet. We can't lose the distinction between ML and AI though; it's extremely important when it comes to risk considerations.

It’s completely reasonable to take a moral stance that you’d rather see your business fail and shut down than do X, even if X is lucrative.

But don’t expect the market to care. Don’t write a blog post whining about your morals, when the market is telling you loud and clear they want X. The market doesn’t give a shit about your idiosyncratic moral stance.

Edit: I’m not arguing that people shouldn’t take a moral stance, even a costly one, but it makes for a really poor sales pitch. In my experience this kind of desperate post will hurt business more than help it. If people don’t want what you’re selling, find something else to sell.

The age old question: do people get what they want, or do they want what they (can) get?

Put differently, is "the market" shaped by the desires of consumers, or by the machinations of producers?

> when the market is telling you loud and clear they want X

Does it tho? Articles like [1] or [2] seem to be at odds with this interpretation. If it were any different we wouldn't be talking about the "AI bubble" after all.

[1]https://www.pcmag.com/news/microsoft-exec-asks-why-arent-mor...

[2]https://fortune.com/2025/08/18/mit-report-95-percent-generat...

He is right though:

"Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming,”

ChatGPT can listen to you in real time, understands multiple languages very well, and responds in a very natural way. This is breathtaking, and it was not on the horizon just a few years ago.

AI Transcription of Videos is now a really cool and helpful feature in MS Teams.

Segment Anything literally leapfrogged progress on image segmentation.

You can generate any image you want in high quality in just a few seconds.

There are already human beings who are shittier at their daily job than an LLM is.

1) It was a failure of a specific implementation.

2) If you had read the paper, you wouldn't use it as an example here.

A good-faith discussion of what the market feels about LLMs would include Gemini and ChatGPT numbers and the overall market cap of these companies, not cherry-picked, misunderstood articles.

No, I picked those specifically. When Pets.com [1] went down in early 2000, it was neither the idea nor the tech stack that brought the company down; it was the speculative business dynamics that caused its collapse. The fact that we've swapped the technology underneath doesn't mean we're not basically falling into ".com Bubble - Remastered HD Edition".

I bet a few Pets.com execs were also wondering why people weren't impressed with their website.

[1]https://en.wikipedia.org/wiki/Pets.com

Do you actually want to get into the details of how frequently markets get things right vs. get things wrong? It would make the priors a bit more lucid so we can be on the same page.

Exactly. Microsoft, for instance, got a noticeable backlash for cramming AI everywhere, and for their future plans in that direction.
Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.

So is it rational for a web design company to take a moral stance that they won't use JavaScript?

Is there a market for that, with enough clients who want their JavaScript-free work?

Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and their own users who aren't as laser focused on performatively not using JavaScript and letting everyone know about it as they are?

This is a YC forum. That guy is giving pretty honest feedback about a business decision in the context of what the market is looking for. The most unkind thing you can do to a founder is tell them they’re right when you see something they might be wrong about.
Which founder is wrong? Not only the brainwashed here are entrepreneurs.
What you (and others in this thread) are also doing is a sort of maximalist dismissal of AI itself as if it is everything that is evil and to be on the right side of things, one must fight against AI.

This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.

That's definitely not what I am doing, nor implying, and while you're free to think it, please don't put words in my mouth.

> The only thing people don't give a shit about is your callous and nihilistic dismissal.

This was you interpreting what the parent post was saying. I'm similarly providing a value judgement that you are doing a maximalist AI dismissal. We are not that different.

We are basically 100-ϵ% the same. I have no doubt.

Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)

In the grand scheme of things, is it even worth mentioning? Probably not! :D :D Why focus on the differences when we can focus on the similarities?

Ok, change my qualifier from interpretation to description if it helps. I describe you as someone who dismisses AI in a maximalist way.

> Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)

> Ok change my qualifier from interpretation to description if it helps.

I... really don't think AI is what's wrong with you.

Yet to see anything good come from it, and I’m not talking about machine learning for specific use cases.

And if we look at the players who are the winners in the AI race, do you see anyone particularly good participating?

800 million weekly active users for ChatGPT. My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it. To do the contrary would be highly egoistic and suggest that I am somehow more intelligent than all those people and I know more about what they want for themselves.

I could obviously give you examples where LLMs have concrete use cases, but that's beside the larger point.

> 1B people in the world smoke. The fact something is wildly popular doesn't make it good or valuable. Human brains are very easily manipulated; that should be obvious at this point.

Almost all smokers agree that it is harmful for them.

Can you explain why I should not be equally suspicious of gaming, social media, movies, carnivals, travel?

You should be. You should be equally suspicious of everything. That's the whole point. You wrote:

> My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it.

Enough people doing something doesn't make that something good or desirable from a societal standpoint. You can find examples of things that go in both directions. You mentioned gaming, social media, movies, carnivals, and travel, but you can just as easily ask the same question about gambling or heavy drug use.

Just saying "I defer to their judgment" is a cop-out.

But “good or desirable from a societal standpoint” isn’t what they said, correct me if I’m wrong. They said that people find a benefit.

People find a benefit in smoking: a little kick, they feel cool, it’s a break from work, it’s socializing, maybe they feel rebellious.

The point is that people FEEL they benefit. THAT’S the market for many things. Not everything obv, but plenty of things.

> The point is that people FEEL they benefit. THAT’S the market for many things.

I don't disagree, but this also doesn't mean that those things are intrinsically good and that we should all pursue them because that's what the market wants. And that is what I was pushing against: this idea that since 800M people are using GPT, we should all be ok doing AI work because that's what the market is demanding.

It's not that it is intrinsically good, but that a lot of people consuming things from their own agency has to mean something. You coming in the middle and suggesting you know better than them is strange.

When billions of people watch football, my first instinct is not to decry football as a problem in society. I acknowledge with humility that though I don't enjoy it, there is something to the activity that makes people watch it.

> a lot of people consuming things from their own agency has to mean something.

Agree. And that something could be a positive or a negative thing. And I'm not suggesting I know better than them. I'm suggesting that humans are not perfect machines and our brains are very easy to manipulate.

Because there are plenty of examples of things enjoyed by a lot of people that are, as a whole, bad. They might not be bad for the individuals who are doing them; they might enjoy them and find pleasure in them. But that doesn't make them desirable, and it also doesn't mean we should see them as market opportunities.

Drugs and alcohol are the easy example:

> A new report from the World Health Organization (WHO) highlights that 2.6 million deaths per year were attributable to alcohol consumption, accounting for 4.7% of all deaths, and 0.6 million deaths to psychoactive drug use. [...] The report shows an estimated 400 million people lived with alcohol use disorders globally. Of this, 209 million people lived with alcohol dependence. (https://www.who.int/news/item/25-06-2024-over-3-million-annu...)

Can we agree that 3 million people dying as a result of something is not a good outcome? If the reports were saying that 3 million people a year are dying as a result of LLM chats we'd all be freaking out.

---

> my first instinct is not to decry football as a problem in society.

My first instinct is not to decry anything as a problem, nor to celebrate it as a positive. My first instinct is to give ourselves time to figure out which of the two it is before jumping in head first. Which is definitely not what's happening with LLMs.

Ok, I'll bite: What's the harm of LLMs?
As someone else said, we don't know for sure. But it's not like there aren't some at-least-kinda-plausible candidate harms. Here are a few off the top of my head.

(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)

People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.

Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.

People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.

Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.

All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)

I don't agree with most of these points. I think the points about atrophy, trust, etc. will have a brief period of adjustment, and then we'll manage. For atrophy specifically, the world didn't end when our math skills atrophied with calculators; it won't end with LLMs, and maybe we'll learn things much more easily now.

I do agree about ads, it will be extremely worrying if ads bias the LLM. I don't agree about the monopoly part, we already have ways of dealing with monopolies.

In general, I think the "AI is the worst thing ever" concerns are overblown. There are some valid reasons to worry, but overall I think LLMs are a massively beneficial technology.

We don't know yet? And that's how things usually go. It's rare to have an immediate sense of how something might be harmful 5, 10, or 50 years in the future. Social media was likely considered all fun and good in 2005 and I doubt people were envisioning all the harmful consequences.
Yet social media started as individualized “web pages” and journals on myspace. It was a natural outgrowth of the internet at the time, a way for your average person to put a little content on the interwebules.

What became toxic was, arguably, the way in which it was monetized and never really regulated.

I don't disagree with your point and the thing you're saying doesn't contradict the point I was making. The reason why it became toxic is not relevant. The fact that wasn't predicted 20 years ago is what matters in this context.
I don’t do zero sum games, you can normalize every bad thing that ever happened with that rhetoric. Also, someone benefiting from something doesn’t make it good. Weapons smuggling is also extremely beneficial to the people involved.
Yes but if I go with your priors then all of these are similarly to be suspect

- gaming

- netflix

- television

- social media

- hacker news

- music in general

- carnivals

A priori, all of these are equally suspicious as to whether they provide value or not.

My point is that unless you have reason to suspect otherwise, people engaging in consumption through their own agency is in general preferable. You can of course bring counterexamples, but they are more caveats to my larger, truer point.

Social media for sure, and television and Netflix in general, absolutely. But again, providing value is not the same as something being good. A lot of people consider inaccuracies by LLMs to be of high value because they're provided with nice wrappings and the idea that you're always right.
This line of thinking led many Germans, who believed they were on the right side of history simply by virtue of joining the crowd, to learn the hard way in 1945.

And today's "adapt or die" doesn't sound any less fascist than it did in 1930.

Are you going to hire him?

If not, for the purpose of paying his bills, your giving a shit is irrelevant. That’s what I mean.

You mean, when evaluating suppliers, do I push for those who don't use AI?

Yes.

I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with this speaking in aphorisms and glib dismissals. You don't know anyone here, you speak in authoritative tone for others, and redefine what "matters" and what is worthy of conversation as if this is up to you.

> Don’t write a blog post whining about your morals,

why on earth not?

I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?

That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.
This seems rather black and white. Defining the morals probably makes sense, then evaluating whether they can be lived by, or whether we can compromise in the face of other priorities?
I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.
The latest DeepSeek and Kimi open weight models are competitive with GPT-5.

If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.

Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.

I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.

However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.

> we don't see the authors getting the recognition.

In that sense AI has been the biggest heist that has ever been perpetrated.

Only in exactly the same sense that portrait painters were robbed of their income by the invention of photography. In the end people adapted and some people still paint. Just not a whole lot of portraits. Because people now take selfies.

Authors still get recognition, if they are decent authors producing original, literary work. But the type of author that fills page five of your local newspaper has not been valued for decades; that was filler content long before AI showed up. Same for the people that do the subtitles on soap operas, and the people that create the commercials that show at 4am on your TV. All fair game for AI.

It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.

People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.

You are no different than that portrait painter in the 1800s that suddenly saw their market for portraits evaporate because they were being replaced by a few seconds exposure in front of a camera. A lot of very decent art work was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.

Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?

The claim of theft is simple: the AI companies stole intellectual property without attribution. Knowing how AIs are trained and seeing the content they produce, I'm not sure how you can dispute that.
In the exact same way that it's not theft if an artist-in-training goes to a museum to look at how other painters created their works.

False equivalence: a random person can't go to a museum and then immediately paint exactly like another artist, but that's what the current LLM offerings allow.

See Studio Ghibli's art style being ripped off, Disney suing Midjourney, etc

It's not "exactly the same sense". If an AI-generated website is based on a real website, it's not like photography and painting; it is the same craft being compared.
But DID the Luddites overreact? They sought to have machines serve people instead of the other way around.

If they had succeeded in regulating the machines, putting wealth back into the average factory worker's hands, and integrating artisans into the workforce instead of shutting them out, would so much of the bloodshed and mayhem required to form unions and regulations have been needed?

Broadly, it seems to me that most technological change could use some consideration of people.

It's also important that most AI-generated content is slop. On this website most people stand against AI-generated writing slop. Also, trust me, you don't want a world where most music is AI-generated; it's going to drive you crazy. So it's not like photography vs. painting, it is like comparing good-quality and shitty-quality content.
Photography takes pictures of objects, not of paintings. By shifting the frame to "robbed of their income", you completely miss the point of the criticism you're responding to… but I suspect that's deliberate.
I don't think it's a meaningful distinction.

Robbing implies theft. The word heist was used here to imply that some crime is happening. I don't think there is such a crime and disagree with the framing. Which is what this is, and which is also very deliberate. Luddites used a similar kind of framing to justify their actions back in the day. Which is why I'm using it as an analogy. I believe a lot of the anti AI sentiment is rooted in very similar sentiments.

I'm not missing the point but making one. Clearly it's a sensitive topic to a lot of people here.

Reasonable people disagree about whether copying is theft, but everyone agrees that plagiarism is theft.
It is totally valid to NOT play the game. Joshua taught us this way back in the '80s.
Totally agree, but I’d state it slightly differently.

This type of business isn’t going to be hit hard by AI; this type of business owner is going to be hit hard by AI.

> Arguing against progress as it is happening is as old as the tech industry. It never works.

I'm still wondering why I'm not doing my banking in Bitcoin. My blockchain database was replaced by Postgres.

So some tech can just be hypeware. The OP has a legitimate standpoint given some technologies' track records.

And the jury is still out on the effects of social media on children; why else are some countries banning social media for children?

Not everything that comes out of Silicon Valley is automatically good.

I don't know about you, but I would rather pay some money for a course written thoughtfully by an actual human than waste my time trying to process AI-generated slop, even if it's free. Of course, programming language courses might seem outdated if you can just "fake it til you make it" by asking an LLM every time you face a problem, but doing that won't actually lead to "making it", i.e. developing a deeper understanding of the programming environment you're working with.
ako · 8 hours ago
But what if the AI-generated course was actually good, maybe even better than the human-generated one? Which would you pick then?
The answer is "the highest-ranked free one in a Google Search".
When a single such "actually good" AI-generated course actually exists, this question might be worth engaging with.
ako · 3 hours ago
Actually, I already prefer AI to static training materials these days. But instead of looking for static training material, I treat it like a coach.

Recently I had to learn SPARQL. I created an MCP server to connect the AI to a graph database with SPARQL support, and then asked it: "Can you teach me how to do this? How would I do it in SQL? How would I do it in SPARQL?" And it would show me.

It really helps that you can ask questions, with examples, about exactly what you want to know at that moment, instead of just following a static tutorial.
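The kind of side-by-side answer that coaching style produces can be sketched as follows. This is a minimal illustration, not the commenter's actual setup: the table and column names are hypothetical, the SQL half runs against an in-memory SQLite database, and the equivalent SPARQL (using the common FOAF vocabulary) is shown as a comment.

```python
import sqlite3

# Toy "who knows whom" data: the relational analogue of a set of graph edges.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE knows  (a INTEGER, b INTEGER);
    INSERT INTO person VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO knows  VALUES (1, 2);
""")

# SQL: the relationship traversal is spelled out as explicit joins.
rows = conn.execute("""
    SELECT p2.name
    FROM person p1
    JOIN knows  k  ON k.a = p1.id
    JOIN person p2 ON p2.id = k.b
    WHERE p1.name = 'Alice'
""").fetchall()
print(rows)  # [('Bob',)]

# The equivalent SPARQL states the same thing directly as a triple
# pattern, with no joins (prefix declarations omitted for brevity):
#
#   SELECT ?name WHERE {
#     ?p      foaf:name  "Alice" .
#     ?p      foaf:knows ?friend .
#     ?friend foaf:name  ?name .
#   }
```

Seeing both forms at once is exactly what makes the "coach" approach useful: the SQL version makes the traversal explicit as joins over an edge table, while the SPARQL version expresses it as a graph pattern.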

Sure, and it takes five whole paragraphs to have a nuanced opinion on what is very obvious to everyone :-)

>the type of business that's going to be hit hard by AI [...] will be the ones that integrate AI into their business the most

There. Fixed!

> And the type of businesses that survive will be the ones that integrate AI into their business the most successfully.

I am an AI skeptic and until the hype is supplanted by actual tangible value I will prefer products that don't cram AI everywhere it doesn't belong.

AI is not a tool, it is an oracle.

Prompting isn't a skill, and praying that the next prompt finally spits out something decent is not a business strategy.

rob74 · 11 hours ago
Do you remember the times when "cargo cult programming" was something negative? Now we're all writing incantations to the great AI, hoping that it will drop a useful nugget of knowledge in our lap...
Seeing how many successful businesses are a product of pure luck, using an oracle to roll the dice is not significantly different.
"praying that the next prompt finally spits out something decent is not a business strategy."

Well, you've just described what ChatGPT is: one of the fastest-growing user bases in history.

As much as I agree with your statement, the real world doesn't respect it.

> one of the fastest-growing user bases in history

By selling a dollar of compute for 90 cents.

We've been here before, it doesn't end like you think it does.

Hot takes from 2023, great. Working with AI has changed since then; maybe catch up? Look up how agentic systems work, how to keep them on task, how they can validate their work, etc. Or don't.
> if you combine the Stone Soup strategy with Clever Hans syndrome you can sell the illusion of not working for 8 billable hours a day

No thanks, I'm good.

What happens if the market is right and this is the "new normal"?

It's like Stack Overflow being down today: it seems like hardly anyone cares anymore, whereas back then it would have totally caused a breakdown because SO is vital.

lmm · 14 hours ago
> What happens if the market is right and this is the "new normal"?

Then there's an oversupply of programmers, salaries will crash, and lots of people will have to switch careers. It's happened before.

It's not as simple as putting all programmers into one category. There can be an oversupply of web developers and, at the same time, an undersupply of COBOL developers. If you are a very good developer, you will always be in demand.
ben_w · 12 hours ago
> If you are a very good developer, you will always be in demand.

"Always", in the same way that five years ago we'd "never" have an AI that can do a code review.

Don't get me wrong: I've watched a decade of promises that "self-driving cars are coming real soon now, honest"; the latest news about Tesla's is that it can't cope with leaves. I certainly *hope* that a decade from now we'll still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.

Five years ago we had pretty good static analysis tools for popular languages which could automate certain aspects of code reviews and catch many common defects. Those tools didn't even use AI, just deterministic pattern matching. And yet due to laziness and incompetence many developers didn't even bother taking full advantage of those tools to maximize their own productivity.
ben_w · 1 hour ago
The devs themselves can still be lazy; Claude and Copilot code review can be automated on all pull requests at the PM's demand, and the PM can be lazy and ask the LLMs to integrate themselves.

And the LLMs can use the static analysis tools.

AI can do code review? Do people actually believe this? We have an MR LLM bot, and it is wrong 95% of the time.
ben_w · 1 hour ago
I have used it for code review.

Like everything else they do, it's amazing how far you can get even if you're incredibly lazy and let it do everything itself. Of course that's a bad idea, because it has all the skill and quality of result you'd expect from an endless horde of fresh grads unwilling to say 'no' except on ethical grounds.

pb7 · 6 hours ago
I've been taking self-driving cars to get around regularly for a year or more.
Waymo and Tesla already operate in certain areas, but even where the tech is ready,

regulation is still very much a thing.

“certain areas” is a very important qualifier, though. Typically areas with very predictable weather. Not discounting the achievement just noting that we’re still far away from ubiquity.
I'm young; when was that, and in what industry?
After the year 2000: the dot-com bust.

A tech employee posted that he had looked for a job for 6 months, found none, and had joined a fast food shop flipping burgers.

That turned tech workers switching to "flipping burgers" into a meme.

What was a little different then was that tech jobs paid about 30% more than other jobs; it wasn't anything like the highs we have seen in the last few years. I used to describe it as: you used to have the nicer house on the block, but then in the 2010s+ FAANG salaries had people living in whole other neighborhoods. So switching out of the industry, while painful, was not as traumatic. Obviously, though, having to actually flip burgers was a move of desperation, and traumatic. The .com bust was also largely centered on SV; in NYC (where I live) there was some fallout, but there was still a tailwind of businesses of all sorts expanding their tech footprint. So while you may not have been able to land at a hot startup and dream of getting rich in an IPO, by the end of 2003 things had mostly stabilized and you could likely have landed a somewhat boring corporate job, even if it was just building internal apps.

I feel like there are a lot of people in school or recently graduated, though, who had FAANG dreams and never considered an alternative. This is going to be very difficult for them. Especially now that tech has gone truly borderless with remote work, I feel this downturn is way worse than the .com bust. It has just dragged on for years now, with no real end in sight.

I used to watch all of the "Odd Todd" episodes religiously. Does anyone else remember that Adobe Flash-based "TV show" (before YouTube!)?
The defense industry in southern California used to be huge until the 1980s. Lots and lots of ex-defense industry people moved to other industries. Oil and gas has gone through huge economic cycles of massive investment and massive cut-backs.
During the .com implosion, tech jobs of all kinds went from "we'll hire anyone who knows how to use a mouse" to the tech jobs section of the classifieds being omitted entirely for 20 months. There have been other bumps in the road since then, but that was a real eye-opener.
Well, same as covid, right? Digital/tech companies overhired because everyone was at home, and at the same time the rise of AI reduced headcount.

Covid overhiring + AI usage = the most massive layoffs we've seen in decades.

It was nothing like covid. The dot com crash lasted years where tech was a dead sector. Equity valuations kept declining year after year. People couldn't find jobs in tech at all.

There are still plenty of tech jobs these days, just fewer than there were during covid, but tech itself is still in a massive expansionary cycle. We'll see how long the AI bubble lasts, and what the fallout of its bursting will be.

The key point is that the going is still exceptionally good. The posts talking about experienced programmers having to flip burgers in the early 2000s are not an exaggeration.

After the first Internet bubble popped, service levels in Silicon Valley restaurants suddenly got a lot better. Restaurants that had struggled to hire competent, reliable employees suddenly had their pick of applicants.

History always repeats itself in the tech industry. The hype cycle for LLMs will probably peak within the next few years. (LLMs are legitimately useful for many things but some of the company valuations and employee compensation packages are totally irrational.)

Some people will lose their homes. Some marriages will fail from the stress. Some people will choose to exit life because of it all.

It's happened before and there's no way we could have learned from that and improved things. It has to be just life changing, life ruining, career crippling. Absolutely no other way for a society to function than this.

That's where the post-scarcity society AI will enable comes in! Surely the profits from this technology will allow these displaced programmers to still live comfortable lives, not just be hoarded by a tiny number of already rich and powerful people. /s
I haven’t visited StackOverflow for years.
I don't get these comments. I'm not here to shill for SO, but it is a damn good website, if only for the archive. Can't remember how to iterate over the entries in a JavaScript dictionary (object)? SO can tell you, usually much better than W3Schools can, which attracts so much scorn. (I love that site: so simple for the simple stuff!)

When you search programming-related questions, what sites do you normally read? For me, it is hard to avoid SO because it appears in so many top results on Google. And I swear that Google AI just regurgitates most of SO these days for simple questions.

It's not a pejorative statement, I used to live in Stack Overflow.

But the killer feature of an LLM is that it can synthesize something based on my exact ask, does a great job of creating a PoC to prove something, and is cheap from a time-investment point of view.

And it doesn't downvote something as off-topic, or try to use my question as a teaching exercise and tell me I'm doing it wrong, even if I am ;)

I think that's OP's point, though: AI can do it better now. No searching, no looking. Just drop your question into the AI with your exact data or function, and 10 seconds later you have a working solution. Stack Overflow is great, but AI is just better for most people.

Instead of running a Google query or searching Stack Overflow, you just need ChatGPT, Claude, or your AI of choice open in a browser. Copy and paste.

I stopped using it much even before the AI wave.
ido · 13 hours ago
I've honestly never intentionally visited it (as in, went to the root page and started following links); it was just where Google sent me when searching for answers to specific technical questions.
It became as annoying as Experts Exchange, the very thing it railed against!
Nope. The main problem with Experts Exchange was their SEO + paywall: they'd sneak into top Google hits by showing the crawler the full data, then present a paywall when an actual human visited. (I have no idea why Google tolerated them, btw...)

SO was never that bad, even with all their moderation policies, they had no paywalls.

What was annoying about it?
Often the answer to the question was simply wrong, as it answered a different question that nobody asked. A lot of the time you had to follow a maze of links to related questions that might have an answer, or might lead to yet another one. The languages for which it was most useful (due to bad ecosystem documentation) evolved at a rate far faster than SO could update its answers, so most of the answers for those were outdated...

There were more problems. And that's from the point of view of somebody coming from Google to find questions that already existed. Interacting there was another entire can of worms.

They SEO'd their way into being a top search result by showing crawlers both questions and answers, but when you visited, the answer would be paywalled.

Stack Overflow’s moderation is overbearing and all, but that’s nowhere near the same level as Experts Exchange’s bait-and-switch.

That despite their URL's claim, they didn't actually have any sex-change experts.
· 6 hours ago
The gatekeeping, gaming of the system, capricious moderation (e.g. questions flagged as duplicates), and general attitude led it to be quite an insufferable part of the internet. There was a meme that the best way to get a response was to answer your own question in an obviously incorrect fashion, because people would rather tell you why you're wrong than actively help.
nxor · 5 hours ago
Why do you think those people behave that way?
Unpaid labor finds a variety of impulses to satisfy
· 8 hours ago
you mixed up "is dead" with "is vital" :-)
m463 · 14 hours ago
Buggy whips are having a temporary setback.
I had a "milk-up-the-nose" laughter moment when I read this comment.
leaded gasoline is making a killing, though
As someone who has sold video tech courses since 2015, I don't know about the future.

I don't want to openly write about the financial side of things here but let's just say I don't have enough money to comfortably retire or stop working but course sales over the last 2-3 years have gotten to not even 5% of what it was in 2015-2021.

It went from "I'm super happy, this is my job with contracting on the side as a perfect technical circle of life" to "time to get a full time job".

Nothing changed on my end. I have kept putting out free blog posts and videos for the last 10 years. It's just that traffic has dropped to a twentieth of what it used to be. Traffic dictates sales, and that's how I think I arrived in this situation.

It does suck to wake up most days knowing you have at least 5 courses worth of content in your head that you could make but can't spend the time to make them because your time is allocated elsewhere. It takes usually 2-3 full time months to create a decent sized course, from planning to done. Then ongoing maintenance. None of this is a problem if it generates income (it's a fun process), but it's a problem given the scope of time it takes.

In contrast to others, I just want to say that I applaud the decision to take a moral stance against AI, and I wish more people would do that. Saying "well you have to follow the market" is such a cravenly amoral perspective.
> Saying "well you have to follow the market" is such a cravenly amoral perspective.

You only have to follow the market if you want to continue to stay relevant.

Taking a stand and refusing to follow the market is always an option, but it might mean going out of business for ideological reasons.

So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.

I still don’t blame anyone for trying to chart a different course though. It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.

The ideal version of my job would be partnering with all the local businesses around me that I know and love, elevating their online facilities to let all of us thrive. But the money simply isn’t there. Instead their profits and my happiness are funnelled through corporate behemoths. I’ll applaud anyone who is willing to step outside of that.

> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.

Of course. If you want the world to go back to how it was before, you’re going to be very depressed in any business.

That’s why I said your only real options are going with the market or finding a different line of work. Technically there’s a third option where you stay put and watch bank accounts decline until you’re forced to choose one of the first two options, but it’s never as satisfying in retrospect as you imagined that small act of protest would have been.

I don't think we're really disagreeing here. You're saying "this is the way things are", I'm saying "I salute anyone who tries to change the way things are".

Even in the linked post the author isn't complaining that it's not fair or whatever, they're simply stating that they are losing money as a result of their moral choice. I don't think they're deluded about the cause and effect.

> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.

Isn't that what money is though, a way to get people to stop what they're doing and do what you want them to instead? It's how Rome bent its conquests to its will and we've been doing it ever since.

It's a deeply broken system but I think that acknowledging it as such is the first step towards replacing it with something less broken.

> Isn't that what money is though, a way to get people to stop what they're doing and do what you want them to instead?

It doesn't have to be. Plenty of people are fulfilled by their jobs and make good money doing them.

Some users might not mind the lack of control, but beyond a certain point it stops making sense to strive to be in that diminishing set and starts making sense to fix the bug.

We've always tolerated a certain portion of society who finds the situation unacceptable, but don't you suspect that things will change if that portion is most of us?

Maybe we're not there yet, idk, but the article is about the unease vs the data, and I think the unease comes from the awareness that that's where we're headed.

I don't think that's necessarily what money is, but it is kind of what sufficiently unregulated capitalism is, which is what we've had for a while now.
I was talking to a friend of mine about a related topic when he quipped that he'd started disliking therapy once he realized it was effectively just teaching him coping strategies for an economic system that is inherently amoral.

> So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.

You're correct in this, but I think it's worth making the explicit statement that that's also true because we live in a system of amoral resource allocation.

Yes, this is a forum centered on startups, so there's a certain economic bias at play, but on the subject of morality I think there's a fair case to be made that it's reasonable to want to oppose an inherently unjust system and to be frustrated that doing so makes survival difficult.

We shouldn't have to choose between principles and food on the table.

Sometimes companies become irrelevant while following the market, while other companies revolutionize the market by NOT following it.

It's not "swim with the tide or die", it's "float like a corpse down the river, or swim". Which direction you swim in will certainly be a different level of effort, and you can end up as a corpse no matter what, but that doesn't mean the only option you have is to give up.

> it might mean going out of business for ideological reasons

taking a moral stance isn't inherently ideological

No, of course you don't have to – but don't torture yourself. If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.

If you found it unacceptable to work with companies that used any kind of digital database (because you found centralization of information and the amount of processing and analytics this enables unbecoming) then you should probably look for another venture instead of finding companies that commit to pen and paper.

> If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.

Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best not to train it. I left GitHub and Reddit because of this, and I'm not uploading new photos to Instagram. The jury is still out on how I'm going to share my photography, and not sharing it at all is on the table, as well.

I may even move to a cathedral model or just stop sharing the software I write with the general world, too.

Nobody has to bend and act against their values and conscience just because others are doing it, and the system is demanding to betray ourselves for its own benefit.

Life is more nuanced than that.

Good on you. Maybe some future innovation will afford everyone the same opportunity.
Maybe one day we will all become people again!

(But only all of us simultaneously, otherwise it won't count! ;))))

The number of triggered Stockholm Syndrome patients in this comment section is terminally nauseating.

How large an audience do you want to share it with? Self-host photo album software, on hardware you own, behind a password, for people you trust.
Before that AI craze, I liked the idea of having a CC BY-NC-ND[0] public gallery to show what I took. I was not after any likes or anything. If I got professional feedback, that'd be a bonus. I even allowed EXIF-intact high resolution versions to be downloaded.

Now, I'll probably install a gallery webapp to my webserver and put it behind authentication. I'm not rushing because I don't crave any interaction from my photography. The images will most probably be optimized and resized to save some storage space, as well.

[0]: https://creativecommons.org/licenses/by-nc-nd/4.0/

Yeah, but the business seems to be education for web front-end. If you are going to shun new tech, you should really return to the printing press or, better, copying scribes. If you are going to do modern tech, you kind of need to stick with the most modern tech.
The printing press and copying scribes are a sarcastic comparison, but these web designers are still actively working, and their industry is hundreds of years removed from the state of those old technologies. The joke isn't funny enough, nor the analogy apt enough, to make sense.
No, it is a pretty good comparison. There is absolutely AI slop, but you have to be sticking your head in the sand if you think AI will not continue to shape this industry. If you are selling learning courses and are sticking your head in the sand, well, that's pretty questionable.
"AI is amoral" is an opinion.

Following the market is also not cravenly amoral, AI or not.

· 1 hour ago
Well if they're going to go out of business otherwise...
I understand this stance, but I'd personally differentiate between taking the moral stand as a consumer, where you actively become part of the growth in demand that fuels further investment, and as a contractor, where you're a temporary cost, especially if you, and the people who depend on you, need the work to survive.

A studio taking on temporary projects isn't investing in AI; they're not getting paid in stock. This is effectively no different from a construction company building an office building, or a bakery baking a cake.

As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.

The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law (in this case, copyright law). AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.

IMO, over-focusing on small moral crusades against specific players like this, and not the game as a whole, is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage, constantly second-guessing themselves.

> This is effectively no different from a construction company building an office building, or a bakery baking a cake.

A construction company would still be justified in saying no based on moral standards. A clearer example would be refusing to build a bridge if you know the blueprints/materials are bad, but you could also make a case for agreeing or not to build a detention center for immigrants. The bakery example feels even more relevant, seeing as a bakery refusing to bake a cake based on the owner's religious beliefs ended up in the US Supreme Court [1].

I don't fault those who, when forced to choose between their morals and food, choose food. But I generally applaud those that stick to their beliefs at their own expense. Yes, the game is rigged and yes, the system is the problem. But sometimes all one can do is refuse to play.

[1] https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...

> As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.

I totally agree. I still think opposing AI makes sense in the moment we're in, because it's the biggest, baddest example of the system you're describing. But the AI situation is a symptom of that system in that it's arisen because we already had overconsolidation and undue concentration of wealth. If our economy had been more egalitarian before AI, then even the same scientific/technological developments wouldn't be hitting us the same way now.

That said, I do get the sense from the article that the author is trying to do the right thing overall in this sense too, because they talk about being a small company and are marketing themselves based on good old-fashioned values like "we do a good job".

<< over-focusing on small moral crusades against specific players like this and not the game as a whole

Fucking this. What I tend to see is a petty "my guy good, not-my-guy bad" approach. All I want is even enforcement of existing rules on everyone. As it stands, to your point, only the least moral ship, because they don't even consider hesitating.

It's cravenly amoral until your children are hungry. The market doesn't care about your morals. You either have a product people are willing to pay money for or you don't. If you are financially independent to the point that it doesn't matter to you, then by all means do what you want. The vast majority of people are not.
_ttg · 12 hours ago
Nobody is against his moral stance. The problem is that he's playing the "principled stand" game on a budget that cannot sustain it, then externalizing the cost like a victim. If you're a millionaire and can hold whatever moral line you want without ever worrying about rent, food, healthcare, kids, etc., then "selling out" is optional and bad. If you're Joe Schmoe with a mortgage and 5 months of emergency savings, and you refuse the main kind of work people want to pay you for (which is not even that controversial), you're not some noble hero; you're just blowing up your life.
> he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim

No. It is the AI companies that are externalizing their costs onto everyone else by stealing the work of others, flooding the zone with garbage, and then weeping about how they'll never survive if there's any regulation or enforcement of copyright law.

The CEO of every one of those AI companies drives an expensive car home to a mansion at the end of the workday. They are set. The average person does not, and they cannot afford to play the principled-stand game. It's not a question of right or wrong for most; it's a question of putting food on the table.
I'm not sure I understand this view. Did seamstresses see sewing machines as amoral? Or carpenters with electric and air drills and saws?

AI is another set of tooling. It can be used well or not, but arguing the morality of a tooling type (e.g drills) vs maybe a specific company (e.g Ryobi) seems an odd take to me.

Plagiarism is also "another set of tooling." Likewise slavery, and organized crime. Tools can be immoral.
Man, y'all gotta stop copying each other's homework.
ikamm · 6 hours ago
It's said often because it's very true. It's telling that you can't even argue against it and just have to attack the people instead.
I find what you are saying, and what they are saying, very generic.

What stance against AI? Image generation is not the same as code generation.

There are so many open source projects out there; it's a huge difference from taking all the images.

AI is also just ML, so should I not use an image bounding-box algorithm? Am I not allowed to take training data from online, or are only big companies not allowed to?

I'm just some random moron, but I just clicked on TFA, and it looks like a very pretty ad.

What am I missing?

'I wouldn’t personally be able to sleep knowing I’ve contributed to all of that, too.'

I think this is the crux of the entire problem for the author. The author is certain, not just hesitant, that any contribution they would make to a project involving AI equals a contribution to some imagined evil (oddly, without explicitly naming what they envision, so it is harder to respond to). I have my personal qualms, but I run them through my internal ethics to see if there is conflict. Unless the author predicts a 'prime intellect' type of catastrophe, I think the note is either shifting blame or just justifying bad outcomes with a moralistic "I did the right thing" while not explaining the assumptions in place.

>I have my personal qualms, but run those through my internal ethics to see if there is conflict

Do you "run them through" actual ethics, too?

I feel like this person might be just a few bad months ahead of me. I am doing great, but the writing is on the wall for my industry.

We should have more posts like this. It should be okay to be worried, to admit that we are having difficulties. It might reach someone else who otherwise feels alone in a sea of successful hustlers. It might also just get someone the help they need or form a community around solving the problem.

I also appreciate their resolve. We rarely hear from people being uncompromising on principles that have a clear price. Some people would rather ride their business into the ground than sell out. I say I would, but I don’t know if I would really have the guts.

It's a global industry shift.

You can either hope that this shift is not happening, or hope that you are one of the people who survives in a niche.

But the industry / world is shifting, and you should start shifting with it.

I would call that being innovative, staying ahead, etc.

The industry is not really shifting. It's not shifting to anything. It's just that the value is being captured by parasitic companies. They still need people like me to feed them training data while they destroy the economics of producing that data.
They pay people in Malaysia to solve issues.

Google has a ton of internal code.

And millions of people happily thumb up or down for their RL feedback.

The industry is still shifting. I use LLMs instead of StackOverflow.

You can be as dismissive as you want, but that doesn't change the fact that millions of people use AI-based tools every single day.

The industry overall is therefore shifting money, goals, etc. in the direction of AI.

And the author has an issue because of that.

Do you know what my industry is? It might be worth showing curiosity before expressing judgement.
Not a big fan of his these days but Gary Vaynerchuk has my favorite take on this:

"To run your business with your personal romance of how things should be versus how they are is literally the great vulnerability of business."

It's very likely the main reason that small businesses like local restaurants, bakeries, etc. fail. People start them based on a fantasy and don't know how to face the hard realities of expenses and income. But like gravity, there's no escaping those unless you are already wealthy enough for it all to just be a hobby.
Maybe you're not the biggest fan precisely because the endgame of that statement is to develop a business without any moral grounding.
That's a choice. I can fish where the fish are without having to bait the hook with my soul.
Gary's point is: sell what people are buying. But you think: that's immoral.

What about a functioning market is immoral?

> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff

If all of "AI stuff" is a "no" for you, then I think you've just opted out of working in most industries to some important degree going forward.

This is also not to say that service providers should not have any moral standards. I just don't understand the expectation in this particular case. You ignore what the market wants and where a lot/most of new capital turns up. What's the idea? You are a service provider, you are not a market maker. If you refuse service with the market that exists, you don't have a market.

Regardless, I really like their aesthetics (which we need more of in the world) and do hope that they find a way to make it work for themselves.

> If all of "AI stuff" is a "no" for you, then I think you've just opted out of working in most industries to some important degree going forward.

I'm not sure the penetration of AI, especially to a degree where participants must use it, is all that permanent in many of these industries. Already the industry where it is arguably the most "present" (forced in) is SWE, and it's proving to be quite disappointing... Where I work, the more senior you are, the less AI you use.

Just the opposite where I work. Seniors are best positioned to effectively use AI and are using it enthusiastically.
> what the market wants

Pretty sure the market doesn't want more AI slop.

Nobody that actually understands the market right now would say that
There is absolutely AI slop out there. Many companies rushed to bolt a glorified chatbot onto their existing product and marketed it as AI.

There are also absolutely very tasteful products that add value using LLMs and other recent advancements.

Both can exist at the same time.

I understand it as the market wanting more content about competing in an AI world
Pretty sure HN has become completely detached from the market at this point.

Demand for AI anything is incredibly high right now. AI providers are constantly bouncing off capacity limits. AI apps in app stores are pulling incredible download numbers.

Sora's app has a 4.8 rating on the App Store with 142K ratings. It seems to me that the market does not care whether it's slop or not, whether I like it or not.
I don't understand why you're being downvoted; you're not wrong. Suno being successful bums me out, I really hate it, but people who are not me love it. I can't do anything about that.
The market wants a lot more high quality AI slop and that's going to be the case perpetually for the rest of the time that humanity exists. We are not going back.

The only thing that's going to change is the quality of the slop will get better by the year.

> The market wants a lot more high quality AI slop

"High quality AI slop" is a contradiction in terms. The relevant definitions[1] are "food waste (such as garbage) fed to animals", "a product of little or no value."

By definition, the best slop is only a little terrible.

[1] https://www.merriam-webster.com/dictionary/slop

Being broadly against AI is a strange stance. Should we all turn off swipe to type on our phones? Are we supposed to boycott cancer testing? Are we to forbid people with disabilities reading voicemail transcriptions or using text to speech? Make it make sense.
> Make it make sense.

Ok. They are not talking about AI broadly, but about LLMs, which have insane energy requirements and benefit from the unpaid labor of others.

These arguments are becoming tropes with little influence. Find better arguments.
Does the truth of the arguments have no bearing?
haha this sounds like a slave master saying “again, free the slaves? really? i’ve heard that 100s of times, be more original”
Definitely a head scratcher.
i think when ppl say AI they mean “LLMs in every consumer-facing product”
You might be right, and I think tech professionals should be expected to use industry terminology correctly.
There is not a single person in this thread that thinks of swiping on phones when the term "AI" is mentioned, apart from people playing the contrarian.
counter example: me! autocorrect, spam filters, search engines, blurred backgrounds, medical image processing, even revenue forecasting with logistic regression are “AI” to me and others in the industry

I started my career in AI, and it certainly didn’t mean LLMs then. some people were doing AI decades ago

I would like to understand where this moral line gets drawn — neural networks that output text? that specifically use the transformer architecture? over some size?

You take a pile of input data, use a bunch of code on it to create a model, which is generally a black box, and then run queries against that black box. No human really wrote the model. ML has been in use for decades, in various places. Google Translate was an "early" convert. Credit card fraud models as well.

The industry joke is: What do you call AI that works? Machine Learning.

What do LLMs have to do with typing on phones, cancer research, or TTS?

Deciding not to enable a technology that is proving to be destructive except for the very few who benefit from it, is a fine stance to take.

I won't shop at Walmart for similar reasons. Will I save money shopping at Walmart? Yes. Will my not shopping at Walmart bring about Walmart's downfall? No. But I refuse to personally be an enabler.

I don't agree that Walmart is a similar example. They benefit a great many people - their customers - through their large selection and low prices. Their profit margins are considerably lower than the small businesses they displaced, thanks to economies of scale.

I wish I had Walmart in my area, the grocery stores here suck.

Intentionally or not, you are presenting a false equivalency.

I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.

One person's unethical AI product is another's accessibility tool. Where the line is drawn isn't as obvious as you're implying.
It is unethical to me to provide an accessibility tool that lies.
LLMs do not lie. That implies agency and intentionality that they do not have.

LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100% accurate tool exists, and maybe could not possibly exist. So take it or leave it.

So provide one that "makes a mistake" instead.
If it was actually being given away as an accessibility tool, then I would agree with you.

It kind of is that clear. It's IP laundering and oligarchic leveraging of communal resources.

How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around for quite some time, will continue to be, and much of that will even be with LLMs.
You have reasonably available context here. "This year" seems more than enough on its own.

I think there are ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the wealth-concentration machine, they could be really cool. Locally hosted LLMs are kinda awesome. As it is, they are basically just theft from the public and IP laundering.

There's a moral line that every person has to draw about what work they're willing to do. Things aren't always so black and white; we straddle that line. The impression I got reading the article is that they didn't want to work for bubble AI companies generating for the sake of generating, not that they hated anything with a vector db.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that. Our reputation is everything, so being associated with that technology as it increasingly shows us what it really is, would be a terrible move for the long term.

It is such an “interesting” statement on many levels.

Market has changed -> we disagree -> we still disagree -> business is bad.

It is indeed hard to swim against the current. People have different principles and I respect that; I just rarely have so much difficulty understanding them, or see such a clear impact on the bottom line.

> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

I started TextQuery[1] with the same moralistic stance. Not with respect to using AI or not, but that most of the software industry suffers from a rot that places more importance on making money and forcing subscriptions than on making something beautiful and detail-focused. I poured time into optimizing selections, perfecting autocomplete, and wrestling with Monaco’s thin documentation. However, I failed to make it a sustainable business. My motivation ran out, and what I thought would be a fun multi-year journey collapsed into burnout and a dead-end project.

I have to say my time would have been better spent building something sustainable, making more money, and optimizing the details once I had that. It was naïve to obsess over subtleties that only a handful of users would ever notice.

There’s nothing wrong with taking pride in your work, but you can’t ignore what the market actually values, because that's what will make you money, and that's what will keep your business and motivation alive.

[1]: https://textquery.app/

Software is a means to an end. It always has been. There are a privileged few who have the luxury of being able to thoughtfully craft software. The attention to detail needs to go into what people see, not in the code underneath.
That is a beautiful product. How unfortunate!
Sorry for them. After I got laid off in 2023 I had a devil of a time finding work, to the point my unemployment ran out. 20 years as a dev and tech lead and full stack, including stints as an EM and CTO.

Since then I pivoted to AI and Gen AI startups. Money is tight and I don't have health insurance, but at least I have a job…

> 20 years as a dev and tech lead and full stack, including stints as an EM and CTO

> Since then I pivoted to AI and Gen AI startups. Money is tight and I don't have health insurance, but at least I have a job…

I hope this doesn't come across as rude, but why? My understanding is American tech pays very well, especially on the executive level. I understand for some odd reason your country is against public healthcare, but surely a year of big tech money is enough to pay for decades of private health insurance?

Not parent commenter, but in the US when someone’s employment doesn’t include health insurance it’s commonly because they’re operating as a contractor for that company.

Generally you’re right, though. Working in tech, especially AI companies, would be expected to provide ample money for buying health insurance on your own. I know some people who choose not to buy their own and prefer to self-pay and hope they never need anything serious, which is obviously a risk.

A side note: The US actually does have public health care, but eligibility is limited. Over a quarter of US residents are on Medicaid and another 20% are on Medicare (the program for older people). Private self-pay insurance is also subsidized on a sliding scale based on your income, with subsidies phasing out around $120K annual income for a family of four.

It’s not equivalent to universal public health care but it’s also different than what a lot of people (Americans included) have come to think.

As CTO I wasn't at a big tech company; it was a 50-person digital studio in the South. My salary was $275K at the highest point in my career, so I never made FAANG money.
That’s 1% money. At that point the issue isn’t how much money you made but what you did with it.
Come to Europe. Salaries are (much) lower, but we can use good devs and you'll have vacation days and health care.
The tech sector in UK/EU is bad, too. And the cost of living in big cities is terrible for the salaries.

They are outsourcing just as much as US Big Tech. And never mind the slow-mo economic collapse of UK, France, and Germany.

Moving to Europe is anything but trivial. Have you looked at y'all's immigration processes recently? It can be a real bear.
Yeah. It is much harder now than it used to be. I know a couple of people who came from the US ~15 to 10 years ago and they had it easy. It was still a nightmare with banks that don’t want to deal with US citizens, though.

As Americans, getting a long-term visa or residency card is not too hard, provided you have a good job. It’s getting the job that’s become more difficult. For other nationalities, it can range from very easy to very hard.

Yeah it depends on which countries you're interested in. Netherlands, Ireland, and the Scandinavian ones are on the easier side as they don't require language fluency to get (dev) jobs, and their languages aren't too hard to learn either.
Do you count Finland? I heard that Finnish is very hard to learn.
Finnish people are probably nice when people try to learn their language. Hahaha. Can't say that about the other places.
Most Scandinavians would rather speak English than listen to a foreigner try to speak their language.
Luckily a certain American to Finland HN:er has been making it slightly easier ... :^)

https://finnish.andrew-quinn.me/

... But, no, it's still a very forbidding language.

Counter: come to Taiwan! Anyone with a semi-active GitHub can get a Gold Card visa. Six months in you're eligible for national health insurance (about $30 USD/month). Cost of living is extremely low here.

However, salaries are atrocious and local jobs aren't really available to non-Mandarin speakers. But if you're looking to kick off your remote consulting career or bootstrap some product you want to build, there's not really anywhere on earth that combines the quality of life with the cost of living like Taiwan does.

+1, Taiwan is a great place
If you have a US or Japanese passport and want to try NL: https://expatlaw.nl/dutch-american-friendship-treaty aka https://en.wikipedia.org/wiki/DAFT . It applies to freelancers.
Interesting thanks!
Yeah, I'm in NL, so this is my frame of reference. Also, in many companies English is the main language, so that helps.
I made a career out of understanding this. In Germany it’s quite feasible. The only challenge is finding affordable housing, just like elsewhere. The other challenge is the speed of the process, but some cities are getting better, including Berlin. Language is a bigger issue in the current job market though.
Thanks - my wife and I actually have a long term plan to shift to the EU

Applied to quite a few EU jobs via LinkedIn but nothing came of it- I suspected they wanted people already in EU countries

Both of us are US citizens, but we don't want to retire in the US; it seems to be becoming a s*hole, especially around healthcare

What's the unemployment rate like?

I'm not sure the claim "we can use good devs" is true from the perspective of European corporations. But would love to learn otherwise?

And of course: where in Europe?

It would be worth it mathematically to be unemployed in the US for up to 3-5 years in hopes of landing another US job.
Taking a 75% pay cut for free healthcare that costs 1k a month anyway doesn't add up. Not to mention the higher taxes for this privilege. European senior developers routinely get paid less than US junior developers.
I want to sympathize but enforcing a moral blockade on the "vast majority" of inbound inquiries is a self-inflicted wound, not a business failure. This guy is hardly a victim when the bottleneck is explicitly his own refusal to adapt.
Survival is easy if you just sell out.
It's unfair to place all the blame on the individual.

By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.

But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.

Honestly yeah. You are complicit and it is your fault. Either donate significant amounts, protest, or move.
If the alternative to 'selling out' is making your business unviable and having to beg the internet for handouts (essentially), then yes, you should "sell out" every time.
The guy won’t work with AI, but works with Google…
Thank you. I would imagine the entire Fortune 500 list crosses the line of "evil"; drawing that line at AI is weird. I assume it's a mask for people's fear of their industry becoming redundant, rather than a real morality argument.
Selling out is easy when your children have no food.
Bingo. Moral grandstanding only works during the boom, not the come down. And despite being as big an idealist as they come, sometimes you just gotta do what you gotta do. You can crusade, but you're just making your future self more miserable trying to pretend that you are more important than you think. Not surprising in an era of unbridled narcissism, but hey, that's where we are. People who have nothing to lose fail to understand this, whereas if you have a family, you don't have time for drum circles and bullshit: you've got mouths to feed.
Surely there's AI usage that's not morally reprehensible.

Models that are trained only on public domain material. For value add usage, not simply marketing or gamification gimmicks...

How many models are only trained on legal[0] data? Adobe's Firefly model is one commercial model I can think of.

[0] I think the data can be licensed, and not just public domain; e.g. if the creators are suitably compensated for their data to be ingested

> How many models are only trained on legal[0] data?

None, since 'legal' for AI training is not yet defined, but OLMo is trained on the Dolma dataset, which is

1. Common crawl

2. Github

3. Wikipedia, Wikibooks

4. Reddit (pre-2023)

5. Semantic Scholar

6. Project Gutenberg

* https://arxiv.org/pdf/2402.00159

Nice, I hadn't heard of this. For convenience, here are HuggingFace models trained on Dolma:

https://huggingface.co/datasets/allenai/dolma

https://huggingface.co/models?dataset=dataset:allenai/dolma

I wonder if there is a pivot where they get to keep going but still avoid AI. There must be for a small consultancy.
> "a self-inflicted wound"

"AI products" that are being built today are amoral, even by capitalism's standards, let alone by good business or environmental standards. Accepting a job to build another LLM-selling product would be soul-crushing to me, and I would consider it as participating in propping up a bubble economy.

Taking a stance against it is a perfectly valid thing to do, and by disclosing it plainly, the author is not claiming to be a victim through no doing of their own. By not seeing past that caveat and missing the whole point of the article, you've successfully averted your eyes from another thing that is unfolding right in front of us: the majority of American GDP growth is AI this or that, and the majority of it has no real substance behind it.

I too think AI is a bubble, and besides the way this recklessness could crash the US economy, there's many other points of criticism to what and how AI is being developed.

But I also understand this is a design and web development company. They're not refusing contracts to build AI that will take people's jobs, or violate copyright, or be used in weapons. They're refusing product marketing contracts; advertising websites, essentially.

This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them. I'll respect the decision, sure, but it very much is an inconsequential self-inflicted wound. It's more amoral to fully pay your federal taxes if you live in the USA for example, considering a good chunk are ultimately used for war, the CIA, NSA, etc, but nobody judges an average US-resident for paying them.

I'm sure the author's company does good work, but the marketplace doesn't respond well to "we're really, _really_ good," "trust me," "you won't be disappointed." It not only feels desperate, but is proof-free. Show me your last three great projects and have your customers tell me what they loved about working with you. Anybody can say, "seriously, we're really good."
They have a website. With a portfolio. That does that.
Andy Bell is absolute top tier when it comes to CSS + HTML, so when even the best are struggling you know it's starting to get hard out there.
I don’t doubt it at all, but CSS and HTML are also about as commodity as it gets when it comes to development. I’ve never encountered a situation where a company is stuck for months on a difficult CSS problem and felt like we needed to call in a CSS expert, unlike most other specialty niches where top tier consulting services can provide a huge helpful push.

HTML + CSS is also one area where LLMs do surprisingly well. Maybe there’s a market for artisanal, hand-crafted, LLM-free CSS and HTML out there only from the finest experts in all the land, but it has to be small.

I think it's more likely that software training as an industry is dead.

I suspect young people are going to flee the industry in droves. Everyone knows corporations are doing everything in their power to replace entry level programmers with AI.

How do you measure “absolute top tier” in CSS and HTML? Honest question. Can he create code for difficult-to-code designs? Can he solve technical problems few can solve in, say, CSS build pipelines, or rendering performance issues in complex animations? I never had an HTML/CSS issue that couldn’t be addressed by just reading the MDN docs or Can I Use, so maybe I’ve missed some complexity along the way.
Look at his work? I had a look at the studio portfolio and it's damn solid.
If one asks you "Why do you consider Pablo Picasso's work to be outstanding", then "Look at his work?" is not a helpful answer. I've been asking about the parent's way of judging the outstandingness of HTML/CSS work. Just saying the websites are "damn solid" isn't distinguishing.
Being absolute top tier at what has become a commodity skillset that can be done “good enough” by AI for pennies for 99.9999% of customers is not a good place to be…
Which describes a gigantic swath of the labor market.
When 99.99% of the customers have garbage as a website, 0.01% will grow much faster and topple the incumbents, nothing changed.
Hmm. This is hand-made clothes and furniture vs. factory mass production.

Nobody doubts the former is better, and some people make money doing it, but that market is a niche because most people prioritize price and 80/20 tradeoffs.

> Nobody doubts the former is better

Average mass produced clothes are better than average hand made clothing. When we think of hand made clothing now, we think of the boutique hand made clothing of only the finest clothing makers who have survived in the new market by selling to the few who can afford their niche high-end products.

> we think of the boutique hand made clothing of only the finest clothing makers

This one, inferred from context, since this individual's quality is described as being above what LLMs produce.

Quality also varied over time, if I recall correctly. Machine made generally starts worse, but with refinement ends up better from superhuman specialization of machines to provide fine detail with tighter tolerances than even artisans can manage.

The only perk artisans enjoy, then, is uniqueness of the product as opposed to the one-size-fits-all of mass manufacturing. But the end result is that while we still have tailors for when we want to get fancy, our clothes are nearly entirely machine-made.

A lesson many developers have to learn is that code quality / purity of engineering is not a thing that really moves the needle for 90% of companies.

Having the most well-tested backend, and a beautiful frontend that works across all browsers and devices and not just the main 3 browsers your customers use, isn't paying the bills.

Amazon has "garbage as a website" and they seem to be doing just fine.
> When 99.99% of the customers have garbage as a website

When you think 99.99% of company websites are garbage, it might be your rating scale that is broken.

This reminds me of all the people who rage at Amazon’s web design without realizing that it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.

Lots of successful companies have garbage as a website (successful in whatever sense, from Fortune 500 to neighbourhood stores).
His business seems to be centered around UI design and front-end development and unfortunately this is one of the things that AI can do decently well. The end result is worse than a proper design but from my experience people don't really care about small details in most cases.
I can definitely tell. Some sites just seem to give zero fucks about usability, just that it looks pretty. It's a shame
After reading the post I kept thinking about two other pieces, and only later realized it was Taylor who had submitted it. His most recent essay [0] actually led me to the Commoncog piece “Are You Playing to Play, or Playing to Win?” [1], and the idea of sub-games felt directly relevant here.

In this case, running a studio without using or promoting AI becomes a kind of sub-game that can be “won” on principle, even if it means losing the actual game that determines whether the business survives. The studio is turning down all AI-related work, and it’s not surprising that the business is now struggling.

I’m not saying the underlying principle is right or wrong, nor do I know the internal dynamics and opinions of their team. But in this case the cost of holding that stance doesn’t fall just on the owner, it also falls on the people who work there.

Links:

[0] https://taylor.town/iq-not-enough

[1] https://commoncog.com/playing-to-play-playing-to-win/

The author has painted themselves into a corner. They refuse to do business with companies that use AI, and they try to support their business with teaching courses, which is also being impacted by AI.

They have a right to do business with whomever they wish. I'm not suggesting that they change this. However they need to face current reality. What value-add can they provide in areas not impacted by AI?

Everyone gets to make their own choices and take principled stances of their choosing. I don't find that persuasive as a buy-my-course pitch, though.
My post had the privilege of being on the front page for a few minutes. I got some very fair criticism because it wasn't really a solid article and was written while traveling on a train, when I was already tired and hungry. I don't think I was thinking rationally.

I'd much rather see these kinds of posts on the front page. They're well thought-out and I appreciate the honesty.

I think that, when you're busy following the market, you lose what works for you. For example, most business communication happens through push-based traffic: you get assigned work and you have X time to solve it all. If you don't, there's an extremely tedious reflection meeting that leads nowhere. Why not do pull-based work, where you get done what you get done?

Is the issue here that customers aren't informed about when a feature is implemented? Because the alternative is promising date X and delaying it 3 times because customer B is more important

I don’t think they’re unique. They’re simply among the first to run into the problems AI creates.

Any white-collar field—high-skill or not—that can be solved logically will eventually face the same pressure. The deeper issue is that society still has no coherent response to a structural problem: skills that take 10+ years to master can now be copied by an AI almost overnight.

People talk about “reskilling” and “personal responsibility,” but those terms hide the fact that surviving the AI era doesn’t just mean learning to use AI tools in your current job. It’s not that simple.

I don’t have a definitive answer either. I’m just trying, every day, to use AI in my work well enough to stay ahead of the wave.

Wishing these guys all the best. It's not just about following the market; it's about the ability to just be yourself, when everyone around you is telling you that you just have to start doing something. It's not even about the moral side of that thing; you simply just don't want to do it. Yeah, yeah, it's a cruel world. But this doesn't mean that we all need to victim-blame everyone who doesn't feel comfortable in this trendy stream.

I hope things with the AI will settle soon and there will be applications that actually make sense and some sort of new balance will be established. Right now it's a nightmare. Everyone wants everything with the AI.

> Everyone wants everything with the AI.

All the _investors_ want everything with AI. Lots of people - non-tech workers even - just want a product that works, and often one that doesn't work differently than it did last year. That goal is often at odds with the AI-everywhere approach du jour.

I had a discussion yesterday with someone who owns a company creating PowerPoints for customers. As you might understand, that business will also be hit hard by AI. What he does is offer an AI entry-level option, where the questions he asks the customer (via a form) lead to a script for running AI. With that he is able to combine his expertise with the AI demand from the market, and turn a profit from it.
Ceaseless AI drama aside, this blog and the set-studio website look and feel great.

I hope things turn around for them; it seems like they do good work.

On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
> On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.

Careful now, if they get their way, they’ll be both the market and the government.

  • 0x3f · 5 hours ago
That might well be the current 'market' for SWE labor though. I totally agree it's a silly bubble but I'm not looking forward to the state of things when it pops.
It's very funny reading this thread and seeing the exact same arguments I saw five years ago for the NFT market and the metaverse.

All of this money is being funneled and burned away on AI shit that isn't even profitable nor has it found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.

It's also the exact same human beings who were doing the NFT and metaverse bullshit and insisting they were the next big thing, who had to jump ship to the next "totally going to change everything" grift because the first two reached the end of their runs.

I wonder what their plan was before LLMs seemed promising?

These techbros got rich off the dotcom boom hype and lax regulation, and have spent 20 years since attempting to force themselves onto the throne, and own everything.

Isn't this a bit of an ad?
  • xdc0 · 6 hours ago
This article was posted a few days ago, it was flagged and removed within an hour or two. I don't know what is different this time.
A “bit”? This is self-immolation as an ad, posing as moral superiority.
I'm glad I wasn't the only one that thought that!
Completely agree
  • 14 hours ago
Tough crowd here. Though to be expected - I'm sure a lot of people have a fair bit of cash directly or indirectly invested in AI. Or their employer does ;)

We Brits simply don't have the same American attitude towards business. A lot of Americans simply can't understand that chasing riches at any cost is not a particularly European trait. (We understand how things are in the US. It's not a matter of just needing to "get it" and seeing the light)

some would say historically that isn’t quite the case lol
LOL. Some would say it's been beaten out of us too...which makes Americans telling us to be enterprising even funnier.
> especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that.

I intentionally ignored the biggest invention of the 21st century out of strange personal beliefs and now my business is going bankrupt

I don't think it's fair to call them "strange" personal beliefs
It probably depends on your circle. I find those beliefs strange, seems like moral relativism.
I personally would call them ignorant beliefs.
Yes I find this a bit odd. AI is a tool, what specific part of it do you find so objectionable OP? For me, I know they are never going to put the genie back in the bottle, we will never get back the electricity spent on it, I might as well use it. We finally got a pretty good Multivac we can talk to and for me it usually gives the right answers back. It is a once in a lifetime type invention we get to enjoy and use. I was king of the AI haters but around Gemini 2.5 it just became so good that if you are hating it or criticizing it you aren’t looking at it objectively anymore.
Corrected title: "we have inflicted a very hard year on ourselves with malice aforethought".

The equivalent of that comic where the cyclist intentionally spoke-jams themselves and then acts surprised when they hit the dirt.

But since the author puts moral high horse jockeying above money, they've gotten what they paid for - an opportunity to pretend they're a victim and morally righteous.

Par for the course

Man, I definitely feel this, being in the international trade business operating an export contract manufacturing company from China, with USA based customers. I can’t think of many shittier businesses to be in this year, lol. Actually it’s been pretty difficult for about 8 years now, given trade war stuff actually started in 2017, then we had to survive covid, now trade war two. It’s a tough time for a lot of SMEs. AI has to be a handful for classic web/design shops to handle, on top of the SMEs that usually make up their customer base, suffering with trade wars and tariff pains. Cash is just hard to come by this year. We’ve pivoted to focus more on design engineering services these past eight years, and that’s been enough to keep the lights on, but it’s hard to scale, it is just a bandwidth constrained business, can only take a few projects at a time. Good luck to OP navigating it.
Maybe they don't need to "create" websites anymore; fixing the websites that LLMs generated is the future now.

We said that WordPress would kill front-end work, but years later people still employ developers to fix WordPress messes.

The same thing will happen with AI-generated websites.

> The same thing will happen with AI-generated websites.

Probably even more so. I've seen the shit these things put out; it's unsustainable garbage. At least WordPress sites have a similar starting point. I think the main issue is that the "fixing AI slop" industry will take a few years to blossom.

Software people are such a "DIY" crowd that I think selling courses to us (or selling courses to our employers) is a crappy prospect. The hacker ethos is to build it yourself, so paying for courses seems like a poor fit.

I have a family member who produces training courses for salespeople; she's doing fantastic.

This reminds me of some similar startup advice: don't sell to musicians. They don't have any money, and they're well-versed in scrappy research to fill their needs.

Finally, if you're against AI, you might have missed how good a learning tool LLMs can be. The ability to ask _any_ question, rather than being stuck on video rails, is a huge time-saver.

I noticed a phenomenon on this post - many people are tying this person's business decisions to some sort of moral framework, or debating the morality of their plight.

"Moral" is mentioned 91 times at last count.

Where is that coming from? I understand AI is a large part of the discussion. But then where is /that/ coming from? And what do people mean by "moral"?

EDIT: Well, he mentions "moral" in the first paragraph. The rest is pity posting, so to answer my question - morals is one of the few generally interesting things in the post. But in the last year I've noticed a lot more talking about "morals" on HN. "Our morals", "he's not moral", etc. Anyone else?

> ... we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

I don't use AI tools in my own work (programming and system admin). I won't work for Meta, Palantir, Microsoft, and some others because I have to take a moral stand somewhere.

If a customer wants to use AI or sell AI (whatever that means), I will work with them. But I won't use AI to get the work done, not out of any moral qualm but because I think of AI-generated code as junk and a waste of my time.

At this point I can make more money fixing AI-generated vibe coded crap than I could coaxing Claude to write it. End-user programming creates more opportunity for senior programmers, but will deprive the industry of talented juniors. Short-term thinking will hurt businesses in a few years, but no one counting their stock options today cares about a talent shortage a decade away.

I looked at the sites linked from the article. Nice work. Even so, I think hand-crafted front-end work turned into a commodity some time ago, and now the onslaught of AI slop will kill it off. Those of us in the business of web sites and apps can appreciate mastery of HTML and CSS and Javascript, beautiful designs and user-oriented interfaces. Sadly, most business owners don't care that much and lack the perspective to tell good work from bad. Most users don't care either. My evidence: 90% of public web sites. No one thinks WordPress got the market share it has because of technical excellence or how it enables beautiful designs and UI. Before LLMs could crank out web sites, we had an army of amateur designers and business owners doing it with WordPress, paying $10/hr or less on Upwork and Fiverr.

"Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that"

The market is literally telling them what it wants and potential customers are asking them for work but they are declining it from "a moral standpoint"

and instead blaming "a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis"

This is a failure of leadership at the company. Adapt or die, your bank account doesn't care about your moral redlines.

> we won’t work on product marketing for AI stuff, from a moral standpoint

Can someone explain this?

Some folks have moral concerns about AI. They include:

* The environmental cost of inference in aggregate and training in specific is non-negligible

* Training is performed (it is assumed) with material that was not consented to be trained upon. Some consider this to be akin to plagiarism or even theft.

* AI displaces labor, weakening the workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.

* The primary companies who are selling AI products have, at times, controversial pasts or leaders.

* Many products are adding AI where it makes little sense, and those systems are performing poorly. Nevertheless, some companies shoehorn AI in everywhere, cheapening products across a range of industries.

* The social impacts of AI, particularly generative media and shopping in places like YouTube, Amazon, Twitter, Facebook, etc are not well understood and could contribute to increased radicalization and Balkanization.

* AI is enabling an attention Gish-gallop in places like search engines, where good results are being shoved out by slop.

Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)

  • lukan · 13 hours ago
"Please don't downvote me for trying to provide a neutral answer to this person's question"

Please note that there are some accounts that downvote, on principle, any comment that talks about downvoting.

These points are so broad and multidimensional that one must really wonder whether they were looking for reasons for concern.
Let's put aside the fact that the person you replied to was trying to represent a diversity of views and not attribute them all to one individual, including the author of the article.

Should people not look for reasons to be concerned?

I can show you many instances of people or organisations representing diversity of views. Example: https://wiki.gentoo.org/wiki/Project:Council/AI_policy
I'm not sure it's helpful to accuse "them" of bad faith, when "them" hasn't been defined and the post in question is a summary of reasons many individual people have expressed over time.
I have noticed this pattern too frequently: https://wiki.gentoo.org/wiki/Project:Council/AI_policy

See the diversity of views.

I'm fairly sure all the first three points are true for each new human produced. The environmental cost vs output is probably significantly higher per human, and the population continues to grow.

My experience with large companies (especially American Tech) is that they always try and deliver the product as cheap as possible, are usually evil and never cared about social impacts. And HN has been steadily complaining about the lowering of quality of search results for at least a decade.

I think your points are probably a fair snapshot of people's moral issues, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable-sounding morality point so they don't have to admit that.

Plenty of people have moral concerns with having children too.

And while some might be doing what you say, others might genuinely have a moral threshold they are unwilling to cross. Who am I to tell someone they don't actually have a genuinely held belief?

Explanation: this article is a marketing piece trying to appeal to anti-AI group.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.

imo LLMs are (currently) good at 3 things:

1. Education

2. Structuring unstructured data

3. Turning natural language into code

From this viewpoint, it seems there is a lot of opportunity to both help new clients as well as create more compelling courses for your students.

No need to buy the hype, but no reason to die from it either.
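A tiny sketch of point 2 (structuring unstructured data), for the curious. Note that `call_llm` here is a hypothetical stand-in for whatever model API you actually use; it just returns a canned response so the example is self-contained:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stub standing in for a real model API call.
    # A real implementation would send `prompt` to an LLM and
    # return its text completion; here we return a canned reply.
    return ('{"name": "Ada Lovelace", "email": "ada@example.com", '
            '"company": "Analytical Engines Ltd"}')

def extract_contact(unstructured_text: str, llm=call_llm) -> dict:
    """Ask the model to turn free-form text into a fixed JSON shape."""
    prompt = (
        "Extract the contact as JSON with exactly these keys: "
        "name, email, company. Return only the JSON.\n\n"
        + unstructured_text
    )
    # Parse the model's reply into a normal Python dict.
    return json.loads(llm(prompt))

note = ("Met Ada Lovelace (ada@example.com) from Analytical Engines Ltd "
        "at the conference.")
contact = extract_contact(note)
print(contact["name"])
```

The value of the pattern is that the messy free-form input can vary wildly while the output shape stays fixed, which is what makes it usable downstream.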

> imo LLMs are (currently) good at 3 things

Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.

Really depends what the moral objection is. If it's "no machine may speak my glorious tongue", then there's little to be said; if it's "AI is theft", then you can maybe make an argument about hypothetical models trained on public domain text using solar power and reinforced by willing volunteers; if it's "AI is a bubble and I don't want to defraud investors", then you can indeed argue the object-level facts.
Indeed, facts are part of the moral discussion in ways you outlined. My objection was that just listing some facts/opinions about what AI can do right now is not enough for that discussion.

I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.

> because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.

But isn't that exactly what the "is-ought problem" manifests? If morals are "oughts", then oughts are goal-dependent, i.e. they depend on personally defined goals. To you it's scary; to others it is the way it should be.

Get with the program dude. Where we're going, we don't need morals.
I think some people prefer living in reality
Interesting. I agree that this has been a hard year, the hardest in a decade. But the comparison with 2020 is just surprising. I mean, in 2020 crazy amounts of money were just thrown around left and right, no? For me, it was the easiest year of my career, when I basically did nothing and picked up money thrown at me.
Why would your company or business suddenly require no effort due to covid?
Too much demand, all of a sudden. Money got printed and I went from near bankruptcy in mid-Feb 2020 to being awash with money by mid-June.

And it continued growing nonstop all the way through ~early Sep 2024, and it has been slowing down ever since, by now coming to an almost complete stop, to the point that I eventually fired all my sales staff: they were treading water, with not even calls let alone deals, for half a year before being dismissed in mid-July this year.

I think it won't return. Custom dev is done. The myth of "hiring coders to get rich" is over. No surprise it ended, because it never worked; sooner or later people had to realise it. I may check again in 2-3 years to see how the market is doing, but I'm not at all hopeful.

Switched into miltech, where demand is real.

I simply have a hard time understanding the blanket refusal to work on anything AI-related. There is AI slop, but there are also a lot of interesting value-add products and features for existing products. I think it makes sense to be thoughtful about what to work on, but I struggle with the blanket no to AI.
I'm critical of AI because of climate change. Training and casual usage of AI take a lot of resources; the electricity demand is way too high. We have made great progress in bringing a lot of renewable energy onto the grid, but AI eats up a huge part of it, so other sectors can't decarbonize as much.

We are still nowhere near getting climate change under control. AI is adding fuel to the fire.

  • 6 hours ago
Interesting how someone can clearly be brilliant in one area and totally have their head buried under the sand in another, and not even realize it.
  • 12 hours ago
This thread has some serious dickheads
  • 7 hours ago
"especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that."

You will continue to lose business if you ignore all the 'AI stuff'. AI is here to stay, and putting your head in the sand will only leave you further behind.

I've known people over the years that took stands on various things like JavaScript frameworks becoming popular (and they refused to use them) and the end result was less work and eventually being pushed out of the industry.

It’s ironic that Andy calls himself “ruthlessly pragmatic”, but his business is failing because of a principled stand in turning down a high volume of inbound requests. After reading a few of his views on AI, it seems pretty clear to me that his objections are not based in a pragmatic view that AI is ineffective (though he claims this), but rather an ideological view that they should not be used.

Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.

Yeah. For all the excesses of the current AI craze there's a lot of real meat to it that will obviously survive the hype cycle.

User education, for example, can be done in ways that don't even feel like gen AI and that can drastically improve activation, e.g. recommending feature X based on activity Y, tailored to the user's use case.

If you won't even lean into things like this you're just leaving yourself behind.

All the AI-brained people are acting like the very AIs they celebrate.

That's horrifying.

  • lijok · 11 hours ago
> especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

Sounds like a self-inflicted wound. No kids, I assume?

I agree that this year has been extremely difficult, but as far as I know, a large number of companies and individuals still made a fortune.

Two fundamental laws of nature: the strong prey on the weak, and survival of the fittest.

So why is it that those who survive are not the strong preying on the weak, but rather the "fittest"?

Next year's development of AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive in this fierce competition.