Neither X nor AI is particularly relevant to the average person, British or not, unless they're terminally online and/or the kind of person to write unhinged nonsense on LinkedIn.
Sure, quite a few people might use ChatGPT or whatever sometimes, but they use Excel too, and they're probably not especially "interested" in that either.
According to the article, X reached 26 million UK adults in May 2022. That's half of the UK adult population.
It might just mean 26 million UK adults have heard of a tweet. Think covfefe: barely anyone read that tweet, and yet it reached everyone.
"Forty-three percent had used one for work" … "The numbers are slightly different for the under-16s. Fifty-four percent said they had used a GenAI tool, with more than half (53 percent) of those saying they had used it for schoolwork."
https://www.ofcom.org.uk/siteassets/resources/documents/rese...
More interesting would be the percentage of people that have used GenAI in the past year with a purpose.
I suspect a significant number have just played with it. Likely many didn't make it a habit, let alone ever do anything useful with it. It would be a bit of a stretch to say anyone in this group has 'adopted' AI.
Also, how many would continue to use GenAI if it weren't artificially cheap/free? Remember that nearly all AI companies are losing money hand over fist; there's bound to be user attrition when the walls close in and they have to start shoving in ads or aggressively upselling paid subscriptions.
If we keep testing young people on AI-friendly tasks, they will keep using AI at a higher rate than the rest of us.
As a lifelong computer nerd I've always dreamed of having a computer friend who knows all about my life and who I can bounce ideas off; it's a core sci-fi trope. But I tried these new LLMs everyone was so hyped about and... nah, they all feel like I'm wasting my time. Anything I want to actually know I can look up myself faster than negotiating with these things to try to guess what they want me to tell them so they'll actually do what I want. And as far as just being a nice personality to spend my time with goes, they all come across to me as awkward or pretentious or fussy, and not what I want my computer friend to sound like. If I need to spend 20 minutes trying to "train" my computer friend to sound vaguely like a real person, it's a fail, sorry. It feels like trying to socialize on Reddit or Discord: you waste a bunch of time going back and forth with strangers who are trying to be funny or clever but never actually say anything meaningful. Which is why I don't use those sites either.
But then I have real life friends who have mentioned they talk to these chatbots all the time, almost like a therapist, and they ask them questions about their lives. Obviously they don't take the advice as seriously as they might coming from a professional, but they still consider it a valuable interaction. To them, having this kind of stilted interaction with a hokey-sounding AI is worth something. And I don't get what they are seeing that I am missing. Perhaps my standards are too high? Or perhaps they just have a greater tolerance for wasting time?
Anyway, when surveys ask me if I've used AI, I always say yes. The worst part is that the surveys never ask the follow-up question, "did you think it was actually useful/compelling/fun?" But of course they don't, because the whole point of them is to build hype to sell the gimmick to another company who will shoehorn it into whatever.
But anyhow, the lame personalities (they're prompted as branding now) were the main focus of the current AI renaissance, which everyone likes to tell themselves started with the attention paper, when in reality it started with a new wave of companion bots, which ironically were praised for adapting their personalities really well.
I will confess to enjoying bad AI _image_ generation, mind you. Look at the images on this website! https://theonlineaccountants.uk/file-company-accounts.htm
AI-generated text generally just makes me cringe, though.
"Beyond all recognition" seems like quite an extraordinary claim.
Your word choice there is excellent, as "transforming" can be extremely positive, extremely negative, or just different. Given how many students now, judging by various outcries from teaching staff on social media, are borderline illiterate in middle school, it does make one wonder what the long-term effects of this will be.
That's honestly pretty low considering the sheer number of products that now have AI integration. I think a lot of the hype around AI taking over jobs comes from people working in some pretty dull coding environments where Copilot can shine: you just plain don't write a ton of interesting or otherwise off-the-beaten-path code, and that's what Copilot excels at. And then those people have spare time at work to evangelize about this great technology.
And before anybody starts with me, yes, I have tried it. It's fine. A lot of what I work on isn't standardized or boilerplate enough for Copilot to really help me, and in the situations where it can, I found the time I spent describing what I wanted, getting the answer back, copying it into my IDE, and then customizing it as required was frankly just better spent writing it myself. Maybe it made me SLIGHTLY faster? But nothing approaching what the term "coding AI" implies, and frankly, I enjoy writing the easier code on occasion because it's a nice break from the harder stuff that Copilot can't touch.
Like, if you're a freelancer who jumps between gigs refactoring ancient websites into something vaguely Web 2.0, you would probably get a lot of mileage out of Copilot, where you're just describing something as written (or hell, giving it the existing code) and asking for it to be rewritten. But if you're doing something novel, something that hasn't been posted on StackOverflow a thousand times, you will run into its limitations quite quickly and, if you're like me anyway, consign it to the bin, because fundamentally asking it to make something, finding out it can't, and then making it myself is FAR more annoying than just assuming it can't and moving on.
I have a friend who works in logistics for GE and they’re getting training on the basics of GenAI and then they have to go out and find ways to integrate it into their workflow. The problem is that management isn’t doing the legwork to understand how to integrate the tooling, they’re just handing that responsibility off to the actual users. Those people wind up complaining that taking time to integrate these tools winds up slowing them down and they struggle to find meaningful applications for the LLMs.
It’s like management is saying “here’s a new hammer, we don’t know how to use it but the guy who sold it to us convinced us you can figure out how to use it. So go out and do it and be better and faster at your job, good luck”.
I'm not anti-AI. I don't mind asking an LLM to write me some boilerplate. But I don't want to change my tooling, nor have an integrated assistant. My output is fine.
I'll hold out as long as I can, but it feels like this may one day just become the reality.
There are several free options, so what's stopping you?
Here are a few things keeping me from looking into it.
I might feel so entrenched in my process that I can’t figure out where to start.
The current crop of offerings only interests me from a self-hosted point of view. I'm interested in an AI that is aligned closely with my own worldview. If there is a bias built in, I would rather it be generated by me.
When this AI wave first started, the story was that it was so complex the origin of the answers was considered unauditable. Which I took to mean the pushers were either being deceptive or didn't understand what they were doing.
As for my interest, it goes back to the '80s, and I was playing with OpenCYC about 15 years ago. I keep saying I will take a look, but other projects keep grabbing my interest.
Good news on a few of those points, but not all of them.
> I might feel so entrenched in my process that I can’t figure out where to start.
This is the easy one. You can just tell them that, in those words, along with what you're up to, and they'll suggest stuff back in English. (And if you're not a native English speaker, use your native language; many will respond in that language instead of English.)
> The current crop of offerings only interests me from a self-hosted point of view.
My experience with current self-hosted ones is… they're not worth bothering with. (But also, my machine is limited in the model size it can use locally, so this may not impact you so much).
> I'm interested in an AI that is aligned closely with my own worldview. If there is a bias built in, I would rather it be generated by me.
Good news here, but for what I consider to be bad reasons. They're often wildly sycophantic, so they'll display whatever bias they "think" you have.
> When this AI wave first started, the story was that it was so complex the origin of the answers was considered unauditable. Which I took to mean the pushers were either being deceptive or didn't understand what they were doing.
It's getting slightly better, but unfortunately the problem is mostly the "don't understand"; there are too many different teams for me to feel it's deceptive. If you really need auditable results, I'd stay away from them too. They're definitely not in the vein of AI that CYC ever was.
This summer I was identifying every weed in my yard, because every photo now comes with that info. I also try to keep spell check turned off.
Your point about sycophantic AI is what I'm looking for: I see it as an opportunity to remove the white noise of the internet. I also understand your point regarding the dangers.
My response is that AI is no different from any other technology.
Know thyself.
This includes teens doing homework and everyone else filling out job applications. Same mindset.
AI? Sure. Google search and adapting whatever template is on the very first page? Sure!
My homeowners association used GenAI to go through years of transactions to create a budget, etc…
… Yikes. Hope they checked it afterwards. These things are quite bad at financial stuff; they’re good at producing something which looks superficially reasonable, but when you look closer, well, it’s all nonsense.
I wonder if the same trend is happening for younger age groups as well. I was surprised at my two nephews scrolling X yesterday reading memes after dinner. They're 14 and 16 and I guess deeply in the "gamers" culture? They shared some of them with me and I wasn't into the edge-lord stuff, but they insist that it's just ironic usage.
Much like every generation, I'm likely just not hip enough to understand the youth.
https://medium.com/@srhbuttstruth/5-reasons-you-shouldn-t-st...
It would benefit you to educate yourself on how to evaluate the reliability of claims you read online. Here's a tip to help you get started: anonymously written online posts that rely on other anonymous posts should be considered with a very high degree of scrutiny.
Did you bother to check the links in the article and tumblr post? You would have found archives of the web sites in question, with Nyberg's horrifying words dating back to 2006 preserved. The chat logs have been preserved and distributed all over the place as well; they're not hard to find.
Nothing in your comment changes the fact that you linked to an anonymously written article full of baseless speculation that relies on an anonymous Tumblr post that is also full of baseless speculation.
Not a single credible source in the entire mess of nonsense that you linked to. You should be embarrassed.
I signed up and it's a breath of fresh air without the artificially promoted toxic posts showing up in my feed.
Not unlike X, you only hear what you want to hear.
With this strategy I entirely missed blockchain, crypto and NFTs and am in the process of missing AI.
AI is a completely different story: you likely don't even realize you are already a heavy user just as a side effect of everything technological you use, from voice dictation, to medical applications, to all the images you see around. Soon also: the movies you watch. Moreover, LLMs are already transforming the way people work.
These two entities, blockchain and AI, have very little in common apart from the hype.
What I am saying is that the applications of AI cannot live up to the level of the promises made. The promises were made to stoke hype and generate cash, not because the idea was demonstrably viable or achievable. When we reach maturity, we'll see what is left, and I'll wait for that. That's fine. In the meantime I'll have to put up with cats appearing every time I search for dogs in Apple Photos, and arguing with ChatGPT about its understanding of the relative magnitude of 9.9 and 9.11, while everyone tells me repeatedly, with sweat on their brow, that WhateverMODEL+1 will make that problem go away, which it didn't on WhateverMODEL-3, -2, -1, or 0. Only another $2 billion of losses and we'll nail it then!!!
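For anyone who hasn't seen that one: the trap is that 9.9 vs 9.11 has both a decimal reading and a version-number reading, and the models seem to flip-flop between them. In Python terms:

  # Decimal reading: 9.9 is the bigger number
  print(9.9 > 9.11)        # True
  # Version-number reading: 9.11 comes after 9.9
  print((9, 11) > (9, 9))  # True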
The end game for all technology changes is not what we think it will be. Been in this game a long time and that is the only certainty.
For general Q&A they can hallucinate, but so long as you are using them to augment your productivity and not as a driver, this isn't any different from using Stack Overflow or asking any other kind of question on the internet. It's basically a non-issue if you upload a document into the context window and stick to asking questions about that document.
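Concretely, "upload a document" just means stuffing the text into the prompt and pinning the instructions to it. A rough sketch with the OpenAI Python client (the file name, question, and model are placeholders; any chat API works the same way):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  document = open("report.txt", encoding="utf-8").read()

  reply = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder; use whatever you have access to
      messages=[
          # Grounding instruction: answer only from the supplied text.
          {"role": "system", "content":
              "Answer only from the document below. If the answer "
              "is not in it, say you don't know.\n\n" + document},
          {"role": "user", "content": "What were the Q3 revenues?"},
      ],
  )
  print(reply.choices[0].message.content)

Keeping the questions pointed at the supplied text is what squeezes the hallucination risk down.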
AI wiping out programming as a career. AI wiping out writing stories. AI replacing the need for doctors to diagnose illness. AI generally replacing all white collar jobs.
LLMs are useful assistants, but they are nowhere near the hype that flooded everywhere a year or so ago.
Did everyone think it would take two months and all the doctors in the world would lose their jobs to ChatGPT?
AI is a societal shift that will take place over the next 20, 30, and 40 years, much like what happened with personal computing. This is a time horizon that impacts investments right now. Professions that existed for thousands of years will cease to exist. That is an unbelievably big change.
> AI wiping out writing stories
FWIW, I think LLMs make better stories than quite a lot of the human writers on Reddit.
Not that many of the Redditors were ever going to go on to be successful novelists of course (and I say that as someone who is struggling to finish writing this darn book for the best part of a decade now…)
As for the other points, I rather like to spend some time thinking on them personally. If you're not connected to the decision yourself, what are you?
Just so we are clear, for Japanese to English translation, DeepL is hot garbage compared to a top class LLM with the right prompt. DeepL translations are basically unreadable, and regularly just cut sections out entirely! So I wouldn't call DeepL "best of breed" by any means, it's not even at the starting line. Can't comment on English <-> French/Spanish/German/etc though, never tried it with those.
In my case the epub was technically a replacement for a fan translation I was reading, which was decent enough, but with a simple script and instructions to keep the vibes of a light novel, it got very good; I remain impressed. Next I plan to convert it all to markdown to see if I can encourage it to structure paragraphs properly, since the html tags have so far limited it to line-by-line translation.
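The gist of such a script, for anyone curious, is just chunking plus a system prompt; a rough sketch (the model name and file handling are placeholders, and the prompt wording is where all the tuning lives):

  from openai import OpenAI

  client = OpenAI()
  SYSTEM = ("Translate the Japanese text to natural English. "
            "Keep the casual vibes of a light novel. "
            "Return only the translation.")

  def translate(text: str) -> str:
      reply = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any strong model works
          messages=[{"role": "system", "content": SYSTEM},
                    {"role": "user", "content": text}],
      )
      return reply.choices[0].message.content

  # Line by line for now, thanks to the html tags; a markdown pass
  # should let it see whole paragraphs instead.
  for line in open("chapter01.txt", encoding="utf-8"):
      if line.strip():
          print(translate(line.strip()))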
When I've experimented with officially translated works, meaning cases where I've translated the raw and compared it to the official translation, it's still not up to par, but good enough in my opinion. I'm not aware of any paid service that streamlines this yet, though; not sure why. It's nothing like traditional MTL.
> As for the other points, I rather like to spend some time thinking on them personally. If you're not connected to the decision yourself, what are you?
What? It's a dialogue, a conversation: I bounce ideas off it and ask for advice to help guide the direction of my thinking. Have you ever even used an LLM? I do this with my friends and co-workers too; do you not?
This comes off as a bit presumptuous. An LLM lacks executive thinking; if I'm not directing the conversation, then the LLM has nothing to give.
There's a lot of stuff that LLMs can do for me that Google never could, like synthesise a new python script to solve some idea I want to iterate on.
But also, Google results nose-dived and only recently seemed to get less bad… though now it seems to be the turn of the YouTube home page to be 60 items of which 45 are bad, 14 are in my watch later list already, and only 1 is both new to me and interesting.
Well, so long as the monarchy and the castles remain.
The House of Lords though, now that is weird.
I see people doing white-collar jobs, where most of the work is done on computers, who are absolutely not interested in any of it.
I generally work with people from all over Europe, and it's mostly the same everywhere, so I would not say Brits in particular are like that; people in general are biased towards doing things the way they've always done them.
Last month our company released a new interface, because the old one was built on unsupported tech and, with all the regulations, we had to change it anyway. The outrage lasted 2 weeks; people are getting used to the new way, and in 2 months no one will remember the old one.
Much better than having a herd mentality and chasing whatever the rest of the crowd is chasing; it shows some character, thinking for oneself, and not being an easy target for manipulation. X was almost pure toxicity even before Musk's ego trip, and I never understood why I should care about the random brainfarts of people 'I should be following'. Don't people have their own opinions, formed by their own experience? That's a rather poor way of spending the limited time we have here, on top of training oneself in quick, cheap dopamine kicks, which messes people up for the rest of their lives.
ChatGPT at least tries to add value, but beyond a bit better search, hallucinating some questionable code, and some random cute pics (whose novelty wears off extremely fast), I don't see it. I mean, I see the potential, just not the reality right now. Plus, on the code part: I want to keep training myself and my analytical mind; I don't want to be handed easy answers and become more passive and lazy. That's why I use git via the command line and not just by clicking around. That's why I don't mind at all writing some of the easier algorithms myself instead of having them served. My employer only wants a good result; I am not working in a sweatshop being paid by the number of lines of code per day.
A quality life is about completely different things anyway. IMHO the UK is fine in this regard.
The set of people interested in AI seems to be quite specific: techno-optimists, fad-seeking "entrepreneurs", and people who can get by with low-quality outputs.
Looks like various breakdowns of cohorts by age; in one section there are 7 groups.
A thousand people per group is respectable, but OTOH each respondent's answers are being extrapolated to stand in for the behaviour of ~10,000 people.
Someone please correct me if I'm wrong, but I think a sample size of 7300 should be enough for the UK's population.
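You're right, and for a simple random sample the margin of error depends on the sample size rather than the population size, so extrapolating to the whole UK is fine. Back-of-envelope (assuming simple random sampling, 95% confidence, worst case p = 0.5):

  import math

  def moe(n: int) -> float:
      # 1.96 standard errors at the worst-case proportion p = 0.5
      return 1.96 * math.sqrt(0.25 / n)

  print(f"{moe(7300):.1%}")  # whole sample: ~1.1%
  print(f"{moe(1000):.1%}")  # one age cohort: ~3.1%

So the headline numbers are tight; the per-cohort ones are looser but still respectable.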
Some of the OSS + sci-fi author crowd has been making it work on Mastodon but may come to bsky (think Charlie Stross).
Celeb and sports accounts are joining bsky now, post-election, and in theory a bunch of users will jump with them; Insta/Threads may still win this slice though.
Right now on bsky there's an early-Twitter dynamic of, like, Mark Hamill following tech journalists.
I think Twitter aged artificially fast after it stopped investing in moderation and also shut off external researchers' access to the firehose (they were helping discover bad-actor networks).
Bsky seems more open to integrations at this point; as with subreddits, ownership of communities by those communities makes maintenance cheaper.
Lmao
I'm following only the SF/HN-flavoured tech crowd on there, the thinking being that it'll be about tech. An intentional attempt to buy into a bubble, if you will. That used to work well.
Since the Musk takeover and the current election cycle, even that is intolerable. The tech gang no longer tweet about Kubernetes etc; instead it's about immigrants, Joe Rogan, and whatever has Marc Andreessen's knickers in a twist today.
Now I’ve switched to Bluesky and my feed is immensely better.
Realistically though, as an outsider, I think X.com has just become their echo chamber and they've all lost the plot.
- Use a system that allows you to choose your sources and choose to follow a diverse set of content
- Use a system that does not allow such choice and trust that its algorithm supplies a diverse set of content
- Don't use social media
I happen to choose the last one, but between the other two I believe user choice is more important. Musk has put his right-wing thumb firmly on X's scale, so that rules it out imo. Anywhere that respects your choices is only an echo chamber if you make it one.
I get that it's important to you, but US independence barely features in our history lessons.