If you want to experiment with reported news using untested tools that have known quality problems, do it in a strictly controlled environment where the output can be carefully vetted. Senior editor(s) need to be in the loop. Start with something easier, not a controversial or high-profile article.
One other thing. If the author was too sick to write but cut corners and published anyway because he thought his job would be in jeopardy if he didn't, maybe it's time for some self-reflection at Ars regarding the work culture and sick leave/time-off policies.
It sounds like you're implying that's what happened here, but I don't see any of that in the article. Was additional info shared elsewhere?
Edit: oh, I see links to the article author's social media saying this. Nevermind my question, and I agree.
Quality took a nosedive, which may or may not have quickened the death spiral.
All that to say, there may not even be senior editors around to put in the loop.
I don't know journalism from the inside, though of course it's one of those professions that everyone thinks they understand and has an opinion about. Realistically, does verifying the quotes and checking the factual statements even count as especially careful vetting? The quotes seem like especially obvious risks - no matter how sick, who would let an LLM write anything without verifying quotes?
That seems like not verifying currency figures in an estimate or quote, and especially in one written by an LLM - I just can't imagine it. I'd be better off estimating the figures myself or removing them.
Possibly the author doesn't understand LLMs well.
[0] https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
[1] your mileage may vary on how much you believe it and how much slack you want to cut him if you do
Makes me wonder about Ars Technica's company culture.
You are assuming that...
He says he currently has a fever.
But was he sick when he wrote the article? That is not so clear.
> I have been sick with COVID all week /../, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh.
Ok, what sort of facts would you accept here?
That’s not sleight-of-hand, I think we all immediately recognize it for what it is. Whether it is good form to lead with an excuse is a matter of opinion, but it’s not deceptive.
We don't know yet how widespread these practices are at Ars Technica, or whether this is a one-off. But if it went down like he says it did here, then the coincidental nature of this mistake -- i.e., that it's an AI user error in reporting an AI novel behavior story at an AI-skeptical outlet -- merely makes it ironic, not more egregious than it already is.
[1] Edit: I read and agreed with ilamont's new comment elsewhere in this thread, right after posting this. It's a very reasonable caveat! https://news.ycombinator.com/item?id=47029193
People are making a bigger deal about it than this one article or site warrants because of ongoing discourse about whether LLM tech will regularly and inevitably lead to these mistakes. We're all starting to get sick of hearing about it, but this keeps happening.
An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (602 comments)
AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (28 comments)
The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)
An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (945 comments)
AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (746 comments)
This is pretty much what I expect when an organization makes a mistake. Many organizations don't do as well.
I thought of Ars Technica as a pretty decent publication; now I am wondering whether they actually check what they publish.
Both from the Mastodon post of the journalist (which admits to casual use of more than one LLM), and from a cursory review of this author's past articles, I'm willing to bet that this rule wasn't followed more than once.
> Following additional review, Ars has determined that the story “After a routine code rejection, an AI agent published a hit piece on someone by name,” did not meet our standards. Ars Technica has retracted this article. Originally published on Feb 13, 2026 at 2:40PM EST and removed on Feb 13, 2026 at 4:22PM EST.
Rather than say “did not meet our standards,” I’d much prefer if they stated what was false - that they published false, AI-generated quotes. Anyone who previously read the article (realistically, the only people who would return to it) and might want to go back to it as a reference isn’t going to have the falsehoods they read corrected.
He admits to using an AI tool, says he was sick and did dumb things. He does clear Kyle (the other author).
He makes the claim that he was just using AI to help him put together an outline for his article, when the evidence clearly shows that he used the AI's verbatim output.
1.) He tried to use Claude to generate a list of citations. Claude refused because the article talked about harassment and this broke its content policy.
2.) He wanted to understand why so he pasted the text into ChatGPT.
3.) ChatGPT generated quotes; he did not verify they were actual quotes.
I don’t see any sign that he actually read the source article. He had an excellent lead-in to that - he had Covid and mentioned a lack of sleep, so brain fog would have been a valid excuse. He could have said something as simple as ‘I was sick, extremely tired and the brain fog was so deep that I couldn’t remember what I read or even details of the original author’s voice.’ And that would have been more than enough. But there’s nothing.
That’s an odd thing for a journalist to leave out. They’re skilled at crafting narratives that will both explain and persuade and yet the most important part of this whole thing didn’t even warrant a mention.
As a basic rule, if a journalist is covering something that happened via blog posts, you should be able to expect the journalist to read the posts. I’d like to give this writer the benefit of the doubt but it’s hard.
So since he says he was sick and his recollection cannot be trusted (I don't blame him; the second-to-last time I had COVID-19, I can barely remember anything about the worst day - which was Christmas Day), something seems to be missing. He may not have pasted in the blog post like he remembers. Or perhaps he got routed to a cheap model; it wouldn't surprise me if he was using a free tier, which accounts for a lot of these stories where GPT-5 underperforms and would explain a lot of stupidity by the GPT. Or he didn't use GPT at all, who knows.
So companies often have a strange concept of "sick days": a specific number of days a year you're allowed to be sick. If you're sick more than that, you have to use your vacation days or take unpaid leave.
(And of course American companies often have weirdness around vacations too. More so in companies where there is allegedly "unlimited time off". But that's kinda off-topic now.)
As far as I can tell, the pulled article had no obvious tells and was caught only because the quotes were entirely made up. Surely it's not the only one, though?
Ars were caught with their pants down. We have no reason to believe otherwise. It isn't possible to prove otherwise. We as readers are lucky Ars quoted someone who disabled LLM access to their website, causing the hallucination and giving us a smoking gun.
Clawing back credibility will be hard.
Thread on Arstechnica forum: https://arstechnica.com/civis/threads/editor%E2%80%99s-note-...
The retracted article: https://web.archive.org/web/20260213194851/https://arstechni...
Well, it's retracted. That means that it shouldn't exist any more, so while they could link to the archive, it defeats the point of retracting it if they do so, right?
Not at all. Whitewash.
The problem is people on the Internet, HN included, always howl for maximalist repercussions every time, i.e., someone should be fired. I don't see that as a healthy or proportionate response; I hope they just reinforce that policy and everyone keeps their jobs and learns a little.
This was not a mistake.
Correct, I only mentioned the blame-free post-mortem thing to head off the usual excuses, as a shorthand for the general approach. It has merits in many/most circumstances.
> I don't see that as a healthy or proportionate response,
Again, correct. It's only appropriate in cases of malice.
My wife, a former journalist, said that you don’t direct-quote anyone without talking to them first and verifying what you’re quoting is for sure from them. Then she said “I guess they have no editors?” because in her experience editors aren’t like fact checkers, but they’re supposed to have the experience and wisdom to ask questions about the content to make sure everything is kosher before going to print. Seems like multiple errors in judgement from multiple parts of the organization.
(My wife left journalism about 15 years ago so maybe things are different but that was her initial reaction)
Ya, they are quite different!
I think that a journalist using an AI tool to write an article treads perilously close to that kind of recklessness. It is like a carpenter building a staircase using some kind of weak glue.
I may not intend to burn someone's house down by doing horribly reckless things with fireworks... but after it happens, surely I would still bear both some fault and some responsibility.
My assumption is that one of the authors used something like Perplexity to gather information about what happened. Since Shambaugh blocks AI company bots from accessing his blog, it did not get actual quotes from him, and instead hallucinated them.
They absolutely should have validated the quotes, but this isn't the same thing as just having an LLM write the whole article.
I also think this "apology" article sucks; I want to know specifically what happened and what they are doing to fix it.
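For anyone unfamiliar, "blocks AI company bots" usually means something like the following in robots.txt. This is just a sketch using the crawler user agents the big vendors publish; I don't know which mechanism Shambaugh's blog actually uses, and robots.txt is only advisory anyway (hard blocks are usually done at the server or CDN level):

    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    User-agent: PerplexityBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

A tool that respects that gets nothing back from the page, and an LLM asked for quotes from a page it couldn't read will often invent plausible-sounding ones rather than say so.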
"Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here."
They aren't allowed to use the tool, so there was clearly intention.
Seems like ordinary, everyday corner cutting to me. I don't think that rises to the level of malice. Maybe if we go through their past articles and establish it as a pattern of behavior.
That's not a defence to be clear. Journalists should be held to a higher standard than that. I wouldn't be surprised if someone with "senior" in their title was fired for something like this. But I think this malice framing is unhelpful to understanding what happened.
By submitting this work they warranted that it was their own. Requiring an explicit false statement to qualify as a lie excludes many of the most harmful cases of deception.
You can absolutely lie through omission, I just don't see evidence that that is a better hypothesis than corner cutting in this particular case. I am open to more evidence coming out. I wouldn't be shocked to hear in a few days that there was other bad behavior from this author. I just don't see those facts in evidence, at this moment. And I think calling it malice departs from the facts in evidence.
Presumably keeping to the facts in evidence is important to us all, right? That's why we all acknowledge this as a significant problem?
I certainly stand by my broader claim that lying is fireable.
For what it's worth, the post below talks about experimenting with Claude Code but also having COVID in December. I don't know what to think of that, I did work with a guy who just kept catching COVID (or at least he said that and I believed him, I didn't swab him personally or anything), but it is weird for him to have COVID in December and February.
https://arstechnica.com/information-technology/2026/01/10-th...
Assuming malice without investigating is itself careless.
we're really at the point where people are just writing off a journalist passing off their job to a chatgpt prompt as though that's a normal and defensible thing to be doing
Honestly I'm just not astounded by that level of incompetence. I'm not saying I'm impressed or that it's okay. But I've heard much worse stories of journalistic malpractice. It's a topical, disposable article. Again, that doesn't justify anything, but it doesn't surprise me that a short summary of a series of forum exchanges and blog posts was low effort.
I also do not believe this was a genuine result of incompetence. I entertained that it is possible, but that would be the most charitable view possible, and I don't think the benefit of doubt is earned in this case. They routinely cover LLM stories, the retracted article being about that very subject matter, so I have very little reason to believe they are ignorant about LLM hallucinations. If it were a political journalist or something, I would be more inclined to give the ignorance defense credit, but as it is we have every reason to believe they know what LLMs are and still acted with intention, completely disregarding the duty they owe to their readers to report facts.
That's more or less what I mean. It was only a few notches above listicle to begin with. I don't think they intended to fabricate quotes. I think they didn't take the necessary time because it's a low-stakes, low-quality article to begin with. With a short shelf life, so it's only valuable if published quickly.
> I also do not believe this was a genuine result of incompetence.
So your hypothesis is that they intentionally made up quotes that were pretty obviously going to be immediately spotted and damage their career? I don't think you think that, but I don't understand what the alternative you're proposing is.
I also feel compelled to point out you've abandoned your claim that the article was generated. I get that you feel passionately about this, and you're right to be passionate about accuracy, but I think that may be leading you into ad-hoc argumentation rather than more rational appraisal of the facts. I think there's a stronger and more coherent argument for your position that you've not taken the time to flesh out. That isn't really a criticism and it isn't my business, but I do think you ought to be aware of it.
I really want to stress that I don't think you're wrong to feel as you do and the author really did fuck up. I just feel we, as a community in this thread, are imputing things beyond what is in evidence and I'm trying to push back on that.
> I also feel compelled to point out you've abandoned your claim that the article was generated.
As you've pointed out, neither of us has a crystal ball, and I can't definitively prove the extent of their usage. However, why would I have any reason to believe their LLM usage stops merely at fabricating quotes? I think you are again engaging in the most charitable position possible, for things that I think are probably 98 or 99% likely to be the result of malicious intent. It seems overwhelmingly likely to me that someone who prompts an LLM to source their "facts" would also prompt an LLM to write for them - it doesn't really make sense to be opposed to using an LLM to write on your behalf but not be opposed to it sourcing stories on your behalf. All the more so if your rationale as the author is that the story is unimportant, beneath you, and not worth the time to research.
Yeah, that's accurate. I will turn a dime the moment I receive evidence that this was routine for this author or systemic for Ars. But yes, I'm assuming good faith (especially on Ars' part), and that's generally how I operate. I guess I'm an optimist, and I guess I can't ask you to be one.
But the last section of the article includes apparent quotes from this blog post by Shambaugh:
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
and all the quotes are fake. The section:
> On Wednesday, Shambaugh published a longer account of the incident, shifting the focus from the pull request to the broader philosophical question of what it means when an AI coding agent publishes personal attacks on human coders without apparent human direction or transparency about who might have directed the actions.
> “Open source maintainers function as supply chain gatekeepers for widely used software,” Shambaugh wrote. “If autonomous agents respond to routine moderation decisions with public reputational attacks, this creates a new form of pressure on volunteer maintainers.”
> Shambaugh noted that the agent’s blog post had drawn on his public contributions to construct its case, characterizing his decision as exclusionary and speculating about his internal motivations. His concern was less about the effect on his public reputation than about the precedent this kind of agentic AI writing was setting. “AI agents can research individuals, generate personalized narratives, and publish them online at scale,” Shambaugh wrote. “Even if the content is inaccurate or exaggerated, it can become part of a persistent public record.”
> ...
> “As autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace,” Shambaugh wrote. “Communities built on trust and volunteer effort will need tools and norms to address that reality.”
Source: the original Ars Technica article:
They put quote-looking not-quotes in the headlines and articles routinely that essentially amount to "putting words in someone's mouth". A very large portion of the population seems to take this at face value as direct quotes, or accurate paraphrasing, when they absolutely are not.
I unsubscribed (just the free rss) regardless of their retraction.
Ref: https://web.archive.org/web/20260214134656/https://news.ycom...
In the comments I found a link to the retracted article: https://arstechnica.com/ai/2026/02/after-a-routine-code-reje.... Now that I know which article, I know it's one I read. I remember the basic facts of what was reported but I don't recall the specifics of any quotes. Usually quotes in a news article support or contextualize the related facts being reported. This non-standard retraction leaves me uncertain if all the facts reported were accurate.
It's also common to provide at least a brief description of how the error happened and the steps the publication will take to prevent future occurrences. I assume any info on how it happened is missing because none of it looks good for Ars, but why no details on policy changes?
Edit to add more info: I hadn't yet read the now-retracted original article on archive.org. Now that I have, I think this may be much more interesting than just another case of "lazy reporter uses LLM to write article". Scott, the person originally misquoted, also suspects something stranger is going on.
> "This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed." https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
My theory is a bit different than Scott's: Ars appears to use an automated tool which adds text links to articles to increase traffic to any related articles already on Ars. If that tool is now LLM-based to allow auto-generating links based on concepts instead of just keywords, perhaps it mistakenly has unconstrained access to changing other article text! If so, it's possible the author and even the editors may not be at fault. The blame could be on the Ars publishers using LLMs to automate monetization processes downstream of editorial. Which might explain the non-standard vague retraction. If so, that would make for an even more newsworthy article that's directly within Ars' editorial focus.
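To make the failure mode I'm imagining concrete, here is a purely hypothetical sketch (Python, with names I made up; I have no knowledge of Ars' actual tooling). The difference is whether the tool asks the model to return the whole rewritten article, or only a constrained list of edits that is applied mechanically:

    # Hypothetical sketch only; not Ars' actual pipeline.

    def add_related_links_unconstrained(article_text, related, llm):
        # Dangerous pattern: the model returns the full article, so it is
        # free to "improve" any sentence or quote on the way through.
        prompt = ("Insert links to these related articles where relevant "
                  "and return the full updated article:\n"
                  f"{related}\n---\n{article_text}")
        return llm(prompt)

    def add_related_links_constrained(article_text, related, llm):
        # Safer pattern: ask only for "phrase -> url" pairs, then apply
        # them mechanically, linking only text that already exists.
        prompt = ("For each related article, pick one exact phrase from the "
                  "article to link. Respond with 'phrase -> url' lines only:\n"
                  f"{related}\n---\n{article_text}")
        for line in llm(prompt).splitlines():
            if " -> " not in line:
                continue
            phrase, url = (s.strip() for s in line.split(" -> ", 1))
            if phrase and phrase in article_text:
                article_text = article_text.replace(
                    phrase, f'<a href="{url}">{phrase}</a>', 1)
        return article_text

If anything like the first pattern sits anywhere in the publishing chain, fabricated text can end up in an article nobody consciously wrote, which is the scenario I'm speculating about above.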
It's good to issue a correction, and in this case to retract the article. But it doesn't really give me confidence going forward, especially where this was flagged because the misquoted person raised the issue. It's not like Ars' own processes somehow unearthed this error.
It makes me think I should get in the habit of reading week-old Ars articles, whose errors would likely have been caught by early readers.
It might be even worse (and more interesting) than that. I just posted a sister response outlining why I now suspect the fabrication may have actually been caused by Ars' own process. https://news.ycombinator.com/item?id=47027370. Hence, the odd non-standard retraction.
When I wrote my post above, I hadn't yet read the original article on archive.org. Now that I know the article actually links to the claimed original sources on Scott's blog and Github for all the fabricated quotes, how this could have happened is even more puzzling. Now I think this may be much more interesting than just another case of "lazy reporter uses LLM to write article".
Ars appears to use an automated tool which adds text links to articles to increase traffic to any related articles already on Ars. If that tool is now LLM-based to allow auto-generating links based on concepts instead of just keywords, perhaps it mistakenly has unconstrained access to changing other article text! If so, it's possible the author and even the editors may not be at fault. The blame could be on the Ars publishers using LLMs to automate monetization processes downstream of editorial. Which might explain the non-standard vague retraction. If so, that would make for an even more newsworthy article that's directly within Ars' editorial focus.
They need to enumerate the specific details they fudged.
They need to correct any inaccuracies.
Otherwise, there is little reason to trust Arse Technica in the future.
Ars is owned by Conde Nast, which had to let go of its HQ in 2024. I suspect they don't have a plan to replace a journalist like Benj if they axe him. And it's not like readers are going to hold them accountable.
A lot of the results would be predictable partisan takes and add no value. But in a case like this where the whole conversation is public, the inclusion of fabricated quotes would become evident. Certain classes of errors would become easy to spot.
Ars Technica blames an over reliance on AI tools and that is obviously true. But there is a potential for this epistemic regression to be an early stage of spiral development, before we learn to leverage AI tools routinely to inspect every published assertion. And then use those results to surface false and controversial ones for human attention.
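Even the crudest version of that tooling would have caught this particular error. A sketch of what I mean (names are my own invention; a real version would strip HTML and normalize curly quotes before matching):

    import re
    import urllib.request

    def extract_quotes(draft, min_words=5):
        # Pull out quoted passages long enough to be worth checking.
        quotes = re.findall(r'[“"]([^”"]+)[”"]', draft)
        return [q for q in quotes if len(q.split()) >= min_words]

    def fetch(url):
        req = urllib.request.Request(url, headers={"User-Agent": "quote-check"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def unverified_quotes(draft, source_urls):
        # Flag any quote that does not appear verbatim in the cited sources.
        corpus = " ".join(" ".join(fetch(u).split()) for u in source_urls)
        return [q for q in extract_quotes(draft)
                if " ".join(q.split()) not in corpus]

Anything this returns goes to a human before the publish button. Since none of the quotes attributed to Shambaugh exist in his post, every one of them would have been flagged (assuming the checker itself can fetch the page, which is its own irony here).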
https://www.nytimes.com/section/corrections
https://www.wsj.com/news/types/corrections
etc etc. Many of them include the retraction or correction in the following print edition, if they have one, as well.
Ars Technica makes up quotes from Matplotlib maintainer; pulls story
With regard to editorial review, an editor didn't catch the error. The target of the false quotes had to register on Ars and post a comment about it. To top it off, more than one Ars commenter was openly suspicious that he was a fake account. Only when some of the readers checked for themselves to see that the quotes were indeed falsified did it gain attention from Ars staff.
Most people would have had no hope and nobody would ever know.
If the coverage of those risks brought us here, of what use was the coverage?
Another day, another instance of this. Everyone who warned that AI would be used lazily without the necessary fact-checking of the output is being proven right.
Sadly, five years from now this may not even result in an apology. People might roll their eyes at you for correcting a hallucination the way they do today if you point out a typo.
I think this track is unavoidable. I hate it.
If they had named the people involved, the criticism would be, "they aren't taking responsibility, they're passing the buck to these employees."
It is definitely not a good look for a "Senior AI Reporter."
They admit wrongdoing here and point to multiple policy violations.
It’s not optional, but wasn’t followed, with zero repercussions.
Sounds optional.
If they had waited until Monday the thread would be filled with comments criticizing them for waiting that long.
> we probably won't have something to report back until next week.
The forum thread is locked.
If they felt the need to post something in a hurry on the weekend, then the message should acknowledge that, and say that the investigation continues or something like that.
What would you have liked to see them announce?
And yes, it looks like Ars is still investigating (bluesky post by one of the authors of the retracted article) https://bsky.app/profile/kyleor.land/post/3mewdlloe7s2j
That's not how it works. It's standard op nowadays to lock out terminated employees before they even walk in the door.
Sometimes they just snail mail the employee's personal possessions from their desk.
Moreover, Ars Technica publishes articles every day. Aside from this editor's note, they published one article today and three articles yesterday. So "holiday weekend" is practically irrelevant in this case.
Some places.
> It's standard op nowadays to lock out terminated employees before they even walk in the door.
Some places.
You're speaking very authoritatively about what's "standard", in a way that strongly implies you think this is either the way absolutely everyone does it, or the way it should be done.
It's standard op nowadays to acknowledge that your experiences are not universal, and that different organizations operate differently.
Neither. I just meant it's common.
The comment I replied to said, "they may need to wait for office staff to return to begin the process."
I think the commonality of the practice shows that Ars Technica doesn't need to wait for office staff to return to begin the process, if office staff is even gone in the first place (again, Ars Technica appears to be open for business today). There's certainly no legal reason why they'd need to wait to fire people.
Does Ars Technica have a "policy" to only fire people on weekdays? I doubt it. Imagine reading that in the employee handbook.
Besides, President's Day is not a holiday that businesses necessarily close for. Indeed, many retailers are open and have specific President's Day sales.
They normally aren't, they probably write the stories on the weekdays and prepare them to automatically publish over the weekend, with only a skeletal staff to moderate and repair the website. Legal, HR, and other office staff probably only work weekdays, or are contracted out to external firms.
Their CEO posted a quick note on their forums the other day about this which implied they don't normally work on holidays and it would take until Tuesday for a response.
Judging from today's editor's note, if things need to happen more quickly, then they do.
Can we not just have a little patience anymore?
throw3e98 is the one who suggested that Ars Technica was going to fire people, but not for a few days. I merely suggested that if anyone was getting fired, they would likely already be fired.
At this point, however, I don't think anyone is getting fired, not this weekend and not Tuesday either: https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
I don't condemn Ars Technica for not firing the guy, but I do condemn Ars Technica for the terse hand-wave of an editor's note with no explanation, when on the same day we get a fuller story only from someone's personal social media account.
The only incident we know was isolated was getting caught.
It's such a cliche that they should have apologized in a human enough way that it didn't sound like the apology was AI generated as well. It's one way they could have earned back a small bit of credibility.
> Kudos to ARS for catching this and very publicly stating it.
> Thank you for upholding your journalistic standards. And a note to our current administration in DC - this is what transparency looks like.
> Thank you for upholding the standards of journalism we appreciate at ars!
> Thank you for your clarity and integrity on your correction. I am a long time reader and ardent supporter of Ars for exactly these reasons. Trust is so rare but also the bedrock of civilization. Thank you for taking it seriously in the age of mass produced lies.
> I like the decisive editorial action. No BS, just high human standards of integrity. That's another reason to stick with ARS over news feeds.
There is some criticism, but there is also quite a lot of incredible glazing.
> If there is a thread for redundant comments, I think this is the one. I, too, will want to see substantially more followup here, ideally this week. My subscription is at stake.
> I know Aurich said that a statement would be coming next week, due to the weekend and a public holiday, so I appreciate that a first statement came earlier. [...] Personally, I would expect Ars to not work with the authors in the future
> (from Jim Salter, a former writer at Ars) That's good to hear. But frankly, this is still the kind of "isolated incident" that should be considered an immediate firing offense.
> Echoing others that I’m waiting to see if Ars properly and publicly reckons with what happened here before I hit the “cancel subscription” button