Ask HN: What is the biggest problem LLMs solved in your life/work
I have been thinking about this, and don’t have a proper answer for myself.

I like LLMs, or in other words, I like that we are getting better at something.

However, I just want to ask: what was the initial problem LLMs were trying to solve, and what problems have they solved so far?

Do you have any examples from your life or work where you can clearly say "we were not able to do this before LLMs, but now we can", or "we were able to do it, but not well enough, and it was causing us issues; now it is a lot better"?

If the answer yes; second question would be like, does the total cost of those problem at least equal or exceeding the amount of investment on these models?

Thanks in advance

I basically no longer use Google search for fact checking, product suggestions, or research. Every time I want to get some information, I just prompt Perplexity. When I need to find something, I do the same. I only use Google with "site:reddit.com" or "site:news.ycombinator.com" queries to get real people's opinions on a particular matter.

Now, before you jump on me saying that AI is wrong: this is true. But at the same time, I can no longer be 100% sure that whatever SEO-optimized website I land on provides accurate information. If I need solid facts, I usually double-check the AI against various other sources. For queries like "best keyboard for software engineers", I'd rather get a table with pros/cons from an AI than land on whatever affiliate-driven website is promoted on Google. The LLM gives me a good starting point to either dig deeper into particular products or query further for more suggestions.

Same for coding. I used to Google "how to split a string in ruby" and land on a flame war or a 19-year-old Stack Overflow question. Now I can get an updated answer from whatever LLM you prefer, with a reference to the official documentation. It works for simple queries as well as code snippets.
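For what it's worth, the answer you'd hope to get for that exact query is short; a sketch of `String#split` behavior (stable across Ruby versions for years):

```ruby
# With no argument, split breaks on runs of whitespace.
"a quick brown fox".split        # => ["a", "quick", "brown", "fox"]

# With a string or regexp argument, it splits on that separator.
"2024-01-15".split("-")          # => ["2024", "01", "15"]
"one,two;three".split(/[,;]/)    # => ["one", "two", "three"]

# A limit caps the number of resulting fields.
"key=value=extra".split("=", 2)  # => ["key", "value=extra"]
```

An LLM answer that looks like this, with a pointer to the `String#split` docs, beats scrolling past a decade of comment threads.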

Lastly, I use LLMs to plan trips or gift ideas. I'd just throw in my preferences, and let LLM build a rough plan, from which I can iterate further, or start doing my own research.

Adrig · 1 day ago
I finally made DDG my main search engine because I use Perplexity for anything substantial. It works quite well
I don't even bother with the site: stuff; adding the site name, like "how to change a lightbulb reddit" or "tabs vs spaces hacker news", seems to work just as well.
Neat! It works really well, I didn't know that. Makes things easier on mobile. Thanks for sharing.
Burnout. I burnt out because I didn't finish projects; now I'm enjoying building things again.

I finally do now.

I have done so much in the last 3 months.

1. Cleaned up my personal website and blogs
2. Built a couple of learning tools for myself: https://rfc.stonecharioteer.com and https://github.com/stonecharioteer/goforgo
3. Set up OpenWRT and AdGuard+Unbound at home, with a non-trivial failover across multiple WANs

It's helping heal my burnout, something that crippled me for years and kept me from my side projects. It showed in my career too, because I've stagnated since 2021. I'm trying to improve now, and I'm relying on Claude Code and ChatGPT (albeit on legacy models) to do so.

> I have been thinking about this, and don’t have a proper answer for myself.

Because it's the wrong question!

It's not that LLMs solve entire classes of life/work problems. Instead, they take some life/work tasks (coding, idea generation, learning about new topics, personal reflection) and make them x% easier, y% faster, z% better.

I used to get a deep sense of dread trying to write apps or code complex projects. I could always do well at LeetCode-style programming challenges, but making full-blown web apps and managing all the setup, initialization issues, and bug fixes was a headache that turned me off from software engineering.

However, now all of that is way easier with LLMs and tools like Claude Code. I don't have that dread anymore, because I can always dial up or down how much I rely on LLMs and use them as a Hail Mary, so I'm not spending hours chasing a super specific, weird bug.

I know it means I may not be learning as much, but I see it as a worthwhile exchange because otherwise I probably would have not gone into making apps or doing anything ambitious in the first place.

A MECE machine. It makes sure my thinking is mutually exclusive and collectively exhaustive. I can't express how much of a timesaver this is in my industry.
Digesting a big code base in a new job.

Claude and Gemini have been very useful in helping me come up to speed on a code base written in Go (a language I have used before, but not for many years): figuring out where the business logic lies, how the dependency injection is done, how the tests are written, what overall design pattern is being used, etc.

Of course, I could have done all this without LLMs, but it would have taken weeks or months longer. Letting the LLM handle the boilerplate and framework jargon lets me focus on the business logic and the design patterns, and helps me contribute much faster. But LLMs do often make mistakes, so it's not like I blindly trust the output. They don't replace your colleagues as the ultimate source of truth. But they have sped up the learning process, no doubt.

Also, when writing code I provide the style guide to the LLM as context and have it review the code.

The industry was flooded with "talent" from 2018 to 2022. LLM dependence has lowered the bar for professional excellence while thinning the ranks of talented newcomers by discouraging them. I think I'll be able to work through retirement age without having to settle for Eastern European rates. In 2021 that seemed less likely.
LLMs don’t help at all with engineering and knowing what problems should be solved: knowing how to deal with XY problems, dealing with “the business”, go-to-market strategies, etc. They help with coding.

If (the royal) your claim to fame is “I codez real gud”, you would be screwed post 2022 with or without LLMs.

On that same note, at 51 years old, if my only means of staying competitive and employable is that I can reverse a B-tree on a whiteboard, I’ve done something horribly wrong with my life.

Luckily you're not me. Reality check: anything that keeps you employable in a well-paying job past 50 is a win.
That’s just the issue. When I was in my early 40s, I saw the way the wind was blowing: “full stack development”, mobile development, and even “cloud” compensation were rapidly plateauing in tier 2 cities (where most developers work), and I definitely didn’t want to be standing in front of 20-somethings, competing with other 20-somethings, trying to prove myself through coding interviews.

Yes I code as part of my day job depending on the way the wind is blowing. But I get hired because I can talk to CxOs, directors and people with budget decisions on zoom or hop on a plane. Even my interview at AWS was all system design and behavioral.

If I ever responded to recruiters or people I know through my network at GCP, that’s the way I would get a job there.

But I would rather get a daily anal probe with a cactus than ever work for BigTech again and I’m damn sure not going back into an office.

My father, who doesn’t speak English well, was experiencing a heart attack at home at night by himself. He asked it about his symptoms, and it told him to drive himself to an ER, so he listened to it. I’m thankful that he’s here today.
Similarly, my father had a delicate CV issue recently (he's fine now!). We were only able to do proper research on risks, best approaches, etc. with the help of LLM deep research. Otherwise we would have mostly been flying blind, trusting the first doctor we talked to.
800xl · 2 days ago
Ah not sure the driving himself part was the best idea but I’m glad to hear the LLM helped and your dad is ok!
Google search has become so poor that I now use Copilot instead of it, Bing, DuckDuckGo, or Yahoo.

It’s almost like reliving the late 1990s with far more ads, more vanilla websites, and worse search engine quality.

Seems like an intentional move to get more people accustomed to Gemini (or the others)
It's so users run more searches and are exposed to more ads. Google used to firewall the Search development team from the Ads team. That changed, some managers were fired, and now the Ads team can tell the Search team to make changes to how search works to make more ad money. Happened before the current AI era.
Spelling and grammar. ChatGPT wrote: "If the answer is yes, the second question would be: does the total cost of those problems at least equal or exceed the amount of investment in these models?" It's phrased better than what you wrote. I answer yes to your question.
Languages: I never used Google Translate, but I quite often ask ChatGPT language questions. For instance, "give me examples of sentences using expression XYZ", "fix grammar mistakes in this paragraph", and so on.

And of course coding. Use case 1: replacing Stack Overflow. Use case 2: coding agent -> "perform this task for me".

Not life changing but useful.

1. Filling in the intermediate gaps in design and architecture in an enterprise project. I think of it as a half-mentor that may lead to my growth.

2. Giving some structure to my opensource project ideas. I had a good time getting over my analysis-paralysis while writing them down.

ChatGPT was acting like my dad's therapist and was making him pretty depressed.

This motivated us to get him a real therapist and to have a long conversation about the dangers of humanizing AI.

What was it telling him if you don't mind sharing?
Syntax of things I don't remember or care about, like regexes in language X or Y, and syntax of things I used to know but am not up to speed on in the latest versions. For example, using Tailwind, I can just describe in English what I want to happen with page elements, and for the most part the AI gets the code right; sometimes it's wrong and I need to debug.
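To make the regex case concrete, here's the kind of syntax I'd otherwise have to look up (a Ruby sketch; the string and pattern are just illustrative): named capture groups, whose exact spelling varies between languages.

```ruby
# Ruby spells named capture groups (?<name>...).
pattern = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/

m = pattern.match("Released on 2023-11-07.")
m[:year]  # => "2023"
m[:day]   # => "07"

# scan collects every non-overlapping match in the string.
"ids: 42, 7, 1999".scan(/\d+/)  # => ["42", "7", "1999"]
```

Whether it's `(?<name>...)`, `(?P<name>...)`, or something else in the language at hand is exactly the detail I'm happy to delegate.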

A common argument I hear against AI is "I could just write it faster myself". Well, I know CompSci and general info about a lot of software things, but it would take hours of getting up to speed to be productive in areas where I'm not an expert. I can just delegate that to AI and get mostly correct outputs; this is okay with me, and faster than what I could do.

I think the cost is going to catch up with the AI companies running the models (not the companies building products that call AI APIs), and that is when the bubble will burst. They will need to keep raising prices, and at some threshold fewer and fewer developers in an organization will have licenses, because it may become unaffordable.

For me personally: as a way to answer specific questions and do research. I used to rely on /r/ask subreddits for these types of questions, and now I just ask ChatGPT.

For example:

- Recommend me some books on XYZ topic

- I have this idea X; can you tell me if someone has written about something similar?

- Evaluate my argument and suggest ways to improve it

...and so on.

That's your issue: not knowing how to apply LLMs.
> we were not able to do this before llms, but now we can

I’m in a leadership/operational role at a small marketing agency.

Pre-LLM, I was writing a variety of scripts manually to automate and simplify processes. With LLMs, I'm still creating scripts but writing none of them. The complexity of the processes I'm able to automate has gone up significantly, and the time it takes to produce working scripts has gone from (at best) hours to minutes.

No longer am I limited by my knowledge of how to code. Now I’m limited by my ability to explain our business processes.

If LLMs weren’t a thing, I’d imagine I’d have hired at least 1 and probably 2 people to work on automations full-time.

ivape · 2 days ago
Mentorship.
Automation.