A few days ago, I asked Gemini some questions about Russia's industrial base and military hardware manufacturing capability, and it wrote a very convincing response, except the video embedded at the end of the response was AI generated. It might have contained actual facts, but overall, my trust in Gemini's response to my query went DOWN after I noticed the AI generated video attached as the source.
Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.
YouTube channels with AI generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI; "Dead internet theory," et al.
This itself seems pretty damning of these AI systems from a narrative point of view, if we take it at face value.
You can't trust AI to generate things sufficiently grounded in facts to even use as a reference point. Why, by extension, should end users believe the narrative that these systems are as capable as they're being told they are?
It's not like ChatGPT isn't going to cite AI videos/articles either.
Almost every time for me... an AI generated video, with AI voiceover, AI generated images, always with < 300 views
Very few people manage high quality verbal information delivery, because it requires a lot of prep work and performance skills. Many of my university lectures were worse than simply reading the notes.
Furthermore, video is persuasive through the power of the voice. This is not good if you're trying to check it for accuracy.
Most of the "educational" and documentary-style content there is usually "just" gathered together from other sources, occasionally with links back to the originals in the descriptions.
I'm not trying to be dismissive of the platform; it's just inherently geared towards summarizing results for entertainment, not for clarity or correctness.
Still doesn’t make them a primary source. A good research agent should be able to jump off the video to a good source.
Despite this, there are also a huge number of YouTube videos that simply waste far more time than, say, a plain HTML web page would, without adding anything useful.
It matters in the context of health related queries.
> Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said.
> “This matters because YouTube is not a medical publisher,” the researchers wrote. “It is a general-purpose video platform. Anyone can upload content there (eg board-certified physicians, hospital channels, but also wellness influencers, life coaches, and creators with no medical training at all).”
> However, the researchers cautioned that these videos represented fewer than 1% of all the YouTube links cited by AI Overviews on health.
> “Most of them (24 out of 25) come from medical-related channels like hospitals, clinics and health organisations,” the researchers wrote. “On top of that, 21 of the 25 videos clearly note that the content was created by a licensed or trusted source.
> “So at first glance it looks pretty reassuring. But it’s important to remember that these 25 videos are just a tiny slice (less than 1% of all YouTube links AI Overviews actually cite). With the rest of the videos, the situation could be very different.”
Oh, you mean like removing scores of COVID videos from real doctors and scientists that were deemed to be misinformation?
I'm glad that we've decided YouTube is the oracle for everything.
The credentials don't matter, the actual content does. And if it's misinformation, then yes, you can be a quadruple doctor, it's still misinformation.
In France, there was a real doctor, an epidemiologist, who became famous because he was pushing a cure for Covid. He ran some underground, barely legal medical trials on his own, proclaimed victory, and claimed the "big bad government doesn't want you to know!". Well, the actual proper study finished, found basically no difference, and his treatment wasn't adopted. He didn't get fully deplatformed, but he was definitely marginalised and fell into the "disinformation" category. Nonetheless, he continued spouting his version that had been proven wrong. And years later, he's still wrong.
Fun fact about him: he's in the top 10 of scientists with the most retracted papers, for inaccuracies.
https://www.politifact.com/factchecks/2023/jun/07/ron-johnso...
This is called disinformation that will get you killed, so yeah, probably not good to have on youtube.
> After saying he was attacked for claiming that natural immunity from infection would be "stronger" than the vaccine, Johnson threw in a new argument. The vaccine "has been proven to have negative efficacy," he said.
Extraordinary claims require extraordinary evidence, not just posting BS on Rumble.
I'd assumed they simply didn't feed it properly into Google Search... but they did for Gemini? Maybe the transcripts are just heavily downranked in Search or something.
"AI responses may include mistakes. Learn more"
These aren't "mistakes"; half the time it's completely wrong, total bullshit information. Even compared to other AI: put the same question into GPT 5.2 or Gemini and you get much more accurate answers.
...and then there's WebMD, "oh you've had a cough since yesterday? It's probably terminal lung cancer."
Google AI Overviews put people at risk of harm with misleading health advice
Before we get too worked up about the results, just look at the source. It's a SERP ranking aggregator (not linking to them, to avoid giving them free marketing) that analyzes only the domains, not the credibility of the content itself.
This report is a nothingburger.
A professor in the field can probably go "ok this video is bullshit" a couple minutes in if it's wrong. They can identify a bad surgeon, a dangerous technique, or an edge case that may not be covered.
You and I cannot. Basically, the same problem the general public has with phishing, but even more devastating potential consequences.
That said, if (hypothetically) Gemini were citing only videos posted by professional physicians, or perhaps videos uploaded to the channel of a medical school, that would be fine. The present situation is similar to an LLM generating lots of citations to viXra.
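A research agent could (hypothetically) enforce that with something as crude as an allowlist filter over its citation candidates before any YouTube link ever reaches the answer. A minimal Python sketch; the Citation shape and channel IDs are invented for illustration, and a real registry would need actual vetting of hospitals, medical schools, and licensed physicians:

    # Hypothetical sketch only: channel IDs and data shapes are placeholders, not a real API.
    from dataclasses import dataclass

    # Vetted registry of channel IDs (placeholders; a real list would be curated
    # from hospitals, medical schools, licensed physicians, etc.)
    VERIFIED_MEDICAL_CHANNELS = {"UC_example_hospital", "UC_example_med_school"}

    @dataclass
    class Citation:
        url: str
        youtube_channel_id: str | None = None  # None for non-video sources

    def filter_citations(citations: list[Citation]) -> list[Citation]:
        """Keep non-video sources; keep videos only from allowlisted channels."""
        return [
            c for c in citations
            if c.youtube_channel_id is None
            or c.youtube_channel_id in VERIFIED_MEDICAL_CHANNELS
        ]

Of course, deciding which side of that allowlist the long tail of wellness-influencer channels lands on is exactly the problem the SE Ranking numbers above are pointing at.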
Example: it is the official position of the Turkish government that the Armenian genocide [1] didn't happen. It did. Yet for years they have seemingly spent resources gaming Google's rankings. Here's an article from 2015 [2]. I personally reported such government propaganda results to Google in 2024 and 2025.
Current LLMs really seem to come down to regurgitating Reddit, Wikipedia and, I guess for Gemini, YouTube. How difficult would it be to create enough content to change an LLM's answers? I honestly don't know, but I suspect for certain more niche topics this is going to be easier than people think.
And this is totally separate from the threat of the AI's owners deciding what biases an AI should have. A notable example being Grok's sudden interest in promoting the myth of a "white genocide" in South Africa [3].
Antivaxxer conspiracy theories have done well on YouTube (eg [4]). If Gemini weights YouTube heavily (as claimed), how do you defend against this sort of content producing bogus medical results and advice?
[1]: https://en.wikipedia.org/wiki/Armenian_genocide
[2]: https://www.vice.com/en/article/how-google-searches-are-prom...
[3]: https://www.theguardian.com/technology/2025/may/14/elon-musk...
[4]: https://misinforeview.hks.harvard.edu/article/where-conspira...
Follow them and you should be able to comment without further issue. Hope this helps.
...what?