Eventually some AI models may be able to do this, but they won't just be an LLM. Some kind of intelligence needs to exist that attempts to understand underlying fundamentals rather than just finding the most likely next word.
The hallmarks of this style are easy to spot: lots of vacuous sentences which (usually) make grammatical sense but no logical sense. It's very first-draft-esque. Lots of sentences which, read a second time, signify nothing, or otherwise just don't hold up to scrutiny.
Before RLHF, they don't sound like that at all.
Obviously this is all subjective, but OP implied it would be easier to make an LLM that doesn't sound like a stochastic parrot.
I wish LLMs were better. But currently, when I read that drivel, I feel repulsed.
It sounds good on a surface level, but it's empty linguistic calories. It's Instagram writing. Lord Dorwin would write like that.
It's also tricky to prompt an LLM as you would an (engaged) human copywriter, because the human has extra cues like "why am I being asked to do this anyway", "who asked me" and "what about this coffeeshop is special but isn't in, or isn't emphasised by, the prompt/original copy". Plus extra research they may do. By the time you've prompted it carefully enough, and verified the emphasis is where you want it, you're basically writing it yourself.
Of course what the LLM will do here is the same crap job you can get from a guy on Fiverr who bangs it out in a few minutes. Because that's where most copy comes from in the first place, and that's the training data. And the LLM and the Fiverr guy have the same lack of investment in the job. And, as the internet's endless breathless writing about passionate, committed experience packages proves, that's good enough for most people.
It's a bit like smartphone cameras wrecking the low-end camera market. If you want to sell standalone cameras now, you have to sell very good cameras.
However, when I read the overly verbose and weirdly flowery text of, for example, the late 19th century, it feels different from the weirdly flowery press-release style LLMs favor.
Perhaps it's because writing and reading were both marks of education, and complex writing was valued over the quick parse that readers prize so highly today. Or maybe the use of hand- or type-writing led to train-of-thought, subclause-rich writing, because it's hard to recast a sentence without a cursor, and one round of fair copying was probably enough for most purposes. Furthermore, much less writing was done at all: far fewer people did it, and it was expensive and laborious to produce, transmit and reproduce. And what was written was often not written by, or intended for reading by, people without expensive educations.
Most people probably read a lot more text now than most people did back then, and they do it faster and more disposably. Things these days are often written to be quickly scanned, perhaps only by automatic search crawlers, which recognise and pull out keywords and discard the chaff. That works better with short, simple sentences. And prominent keywords are important, leading to self-reinforcement of both a set of shibboleth words that "all good copy" should have and how and when those special words are to be used.
If you have one normal sentence and one overly verbose one, the latter will contain more tokens and therefore (so the theory goes) carry more weight.
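As a rough illustration of that token-count difference (using naive whitespace splitting; real models use subword tokenizers like BPE, and the example sentences here are invented, not from the thread):

```python
# Naive whitespace "tokenization" just to show the count gap between a
# plain sentence and a flowery one. Real LLMs use subword tokenizers
# (BPE etc.), so actual counts differ, but the verbose text still
# contributes far more tokens.
plain = "The cafe opens at eight."
verbose = ("Nestled in the heart of the neighbourhood, our artisanal "
           "cafe warmly welcomes guests from eight o'clock onwards.")

def token_count(text: str) -> int:
    # Split on whitespace as a crude stand-in for a real tokenizer.
    return len(text.split())

print(token_count(plain), token_count(verbose))
```

The verbose version says the same thing in roughly three times as many tokens.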
Also, it seems like the reason it does this is that you're trying to convert a couple of sentences into a full blog post/press release/article. All it has to go on is the input sentence, unlike an actual writer, who (probably) has all the details about the subject. If it doesn't have the details and the result must be at least X length, you're going to get a ton of useless padding around the only details you've given it, because the only other thing it could do is make things up.
It crosses my mind every time I realise I am in the midst of some verbose, contentless LLM drivel that I would rather read the prompt than the garbage that gets spat out.
There are a few tools floating around that do prompt extraction. I must admit I haven't gotten around to trying them out, but I have been toying with the idea of making a local browser plugin to strip a page down to its prompts, and maybe find out why I was there in the first place.
And he replied, in earnest, that some of the papers are needlessly and deliberately complicated and obfuscated for a whole host of reasons (political reasons, career reasons, etc.).
This jargon-heavy, circumlocutious academic register you often find in papers is a gate-barrier... to keep out non-native English speakers? Up-and-coming scholars? I don't know, but whatever the case, let's hope the outflow of papers written with LLM assistance takes a blow at the problem: once the corpus of literature is so diluted with all this, corporate-speak and jargon-heavy speak as an in-group differentiator will have less incentive to exist.
It is my opinion that most companies and government organisations put very little thought into writing and communication. Organisations will have large communications departments and yet repeatedly fail to communicate in clear, easy-to-read language. Often there will be multiple spelling errors, despite spellcheckers being almost as old as computers; people either can't use them or trust them blindly. Sentences make no sense, or, as in the article, they are empty and filled with clichés, and this is from professional communications departments.
Frequently my municipality will send out information that is unnecessary, full of factual errors and poor wording, and that completely fails to provide enough detail for you to actually act.
Communication is difficult, and yet it is given very little attention in modern organisations. ChatGPT won't work for people who already take writing seriously; for everyone else, it's probably not far off, or at least not significantly worse.
This, translated, is “Please can you either leave a set of keys with the front desk on Monday, or let me know when you will be home so that we can knock on the door.”
I think this is the sort of thing that AI actually CAN help with: give it the details of what needs to happen and it can write a (hilariously) more "human-sounding" email vs this sort of dreadful "property agent with a stick up his ass" email.
I write at a much lower level than I used to, because a) my audience can't seem to comprehend even words like "comprehend", and b) it's not worth getting in trouble at work, where apparently I work with "the finest".
I wish the result were that anything you'd ask an AI to write like this would just be replaced with nothing; it clearly doesn't matter whether it's good or interesting.
We must do whatever we can to avoid a future where we read AI crap like this forever. It’s already seeping in everywhere. It is like reciting the phone book.
ChatGPT is a bad writer, and is the wrong tool for copywriting. That is a consequence of its post-training, though. I don't expect this to be true forever.
Using Claude or Writer would produce better outputs even in unskilled hands. And even with ChatGPT it's possible to get good outputs if you know what you're doing.
This implies that perhaps there will still be a role for copywriters as skilled users of LLMs. Though I think it's more likely that the service of copywriting will transform into copywriting products that centralize LLM know-how and expose simple self-service tools.
I have a feeling that the early style guides were weighted heavily in favor of poetic forms because it was extremely impressive to see AI create poetry.
But now we are used to it and we just want effective, boring writing!
The article doesn't seem to mention how they prompted the LLM. Without knowing that, it's impossible to say how well GPT actually did.
You don’t just give some instructions to an LLM and pick up the first version it generates. You work with the LLM to edit and refine until you get good copy.
Now if you have no idea to begin with, obviously it's GIGO: garbage in, garbage out.
I've experimented with getting ChatGPT to produce the titles and teasers that I write for each video and link.
No matter how I prompt it, and no matter how many examples of the newsletter it is given, it refuses to write in my style. It always falls back into the ChatGPT style; there are only so many times you want to "dive" into something or "delve" into a topic.
It is quite useful for generating an initial summary, but needs a good editor to turn it into useful copy.
I will occasionally, when really pressed for time, use it for the opening intro, but even then it just sounds like drivel.
“Journalist who doesn’t understand technology tells chatgpt ‘write me a 200 word article from this press release’ and rails against bland outputs” is a better headline for her piece.
I partly work in product management at a news company.
I have a “reduce press release to article” prompt that I’ve developed over the past few months. It’s got 5 sections and 63 instructions to handle everything from formatting to tense to attribution to catching hallucinations.
The output of that prompt goes into another that focuses specifically on catching hallucinations, faulty paraphrasing and mis-attribution.
That might sound like overkill, but it works brilliantly and turns out copy that is often better than that of the human writers doing the same thing. By taking this off their plate, they can focus on high-value and interesting writing, not simply regurgitating a press release.
Though as someone else has pointed out, many writers are bland and good for little else than regurgitating press releases. They should find alternative employment fast, because they will be the first writers to feel the true impacts of LLMs.
I didn’t really believe in “prompt engineering” before I worked on this, and regarded it as techbro snake oil akin to “SEO hacking”. But going through the process of building out this mini-product (which also scores incoming press releases for relevancy against topics, companies mentioned and people mentioned) has shown me how incredibly important a well-thought-out, well-structured prompt is. If you’re just asking your LLM to rubber duck with you it’s not important, but if you want high-quality, replicable outputs it’s fundamental.
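The shape of a pipeline like the one described above can be sketched in a few lines. Everything here is hypothetical: the prompts, function names, and the `call_llm` stub are invented for illustration, not the commenter's actual product, and the stub would be swapped for a real model API call.

```python
# Sketch of a two-stage press-release pipeline. call_llm is a stub;
# in practice it would wrap a real model API (OpenAI, Anthropic, etc.).
# All prompts and names here are illustrative, not a real product.

DRAFT_PROMPT = (
    "Rewrite the press release below as a short news article. Handle "
    "formatting, tense and attribution; add no fact that is not in the "
    "release.\n\n{release}"
)

CHECK_PROMPT = (
    "Compare the draft article against the original release. Flag any "
    "hallucinated facts, faulty paraphrasing or mis-attribution, and "
    "return a corrected draft.\n\nRELEASE:\n{release}\n\nDRAFT:\n{draft}"
)

def call_llm(prompt: str) -> str:
    # Placeholder so the sketch runs; replace with a real model call.
    return "ARTICLE BASED ON: " + prompt

def press_release_to_article(release: str) -> str:
    # Stage 1: turn the release into a draft article.
    draft = call_llm(DRAFT_PROMPT.format(release=release))
    # Stage 2: a separate pass focused solely on catching errors
    # (hallucinations, mis-attribution) in the first pass's output.
    return call_llm(CHECK_PROMPT.format(release=release, draft=draft))

def relevancy_score(release: str, topics: list[str]) -> float:
    # Crude keyword-overlap score; a real version would score topics,
    # companies mentioned and people mentioned separately.
    words = release.lower().split()
    hits = sum(1 for t in topics if t.lower() in words)
    return hits / len(topics) if topics else 0.0
```

The point of the second stage is that checking a draft against its source is a narrower, more reliable task than generating and checking in one pass.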
Yes, if you put a press release into ChatGPT and don’t think about the outputs, you’re going to get slop out the other side.
All the more depressing because she’s promoting her book about syntax - and syntax is a core competency for prompt engineering.
Online news is about to be left behind. Yet again.
It's a bit like image generation. It looks impressive at first glance, but generic, and when you focus you realize the image doesn't make sense, the details are wrong, the layout is confused.
You can't generate working schematics or infographics with diffusion models - all you can make is filler content. Fine-tuning can give you different art styles, but it can't give you coherence. Fine-tuning can change an LLM's prose style, but it will still generate superficial, vacuous filler prose.
Why would I pay the LA Times for drivel?