AI Detector vs AI Fake News: Why AI Fakes Fool People Better

We tend to assume fake news is easy to spot. Sloppy writing, strange formatting, claims that fall apart under the slightest scrutiny. That assumption is outdated.

Linguists at the University of Oslo spent years studying how fake news works at the language level across English, Russian, and Norwegian. Their latest findings upend the familiar picture: people rate AI-generated disinformation as more credible than fakes written by human authors. And more informative, too.

It's not about emotional manipulation. AI texts persuade because they mimic the style of sources we trust by default.

AI-generated content scored higher on trust than human fakes

Two projects at the University of Oslo produced this research. Fakespeak studied the linguistics of fake news across three languages. Its successor, NxtGenFake, focused specifically on AI-generated disinformation and will run until 2029. Both are led by linguist Silje Susanne Alvestad.

The team ran an experiment with American participants. People were shown a set of texts, some written by humans, some generated by AI. Nobody was told which was which. Each text was rated on three parameters: credibility, emotional appeal, and informativeness.

AI-generated disinformation scored higher on credibility and informativeness. On emotional appeal, AI did not outperform humans. When participants were asked which texts they would prefer to keep reading, the majority chose the AI-generated ones.

Comparison of AI-generated fakes vs human-written fakes across experiment parameters. Source: NxtGenFake, University of Oslo, 2026

AI fakes persuade not through emotional pressure but through the way information is packaged. They don't provoke or push. We are used to clickbait and emotional headlines, and over time something like an automatic filter develops: you see provocation, your guard goes up. AI disinformation does not provoke. It looks like a normal article with normal sources. You don't sense it because the deception is structural, not semantic: tone, structure, style — all of it imitates genres that readers have no reason to question.

AI disinformation copies formats we trust by default

NxtGenFake researchers identified two main techniques that AI propaganda deploys more consistently than human-made propaganda.

The first is appeal to authority. AI texts routinely reference experts and studies but never with specifics. "According to researchers." "Experts believe." The phrasing follows the conventions of quality journalism, but verification is impossible. There are no names behind the claims, no publications to check. Language models reproduce this format because they were trained on texts where it appears. They have no connection to actual sources.
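The vague-attribution pattern described above can be sketched as a simple check. This is only a toy illustration: the phrase list below is an assumption for demonstration purposes, not the actual lexicon used by the NxtGenFake researchers.

```python
import re

# Illustrative (assumed) examples of unverifiable authority phrases:
# attributions that name no checkable person, study, or publication.
VAGUE_AUTHORITY = re.compile(
    r"\b(according to (researchers|experts|scientists|analysts)"
    r"|experts (believe|say|agree)"
    r"|studies (show|suggest))\b",
    re.IGNORECASE,
)

def vague_attributions(text: str) -> list[str]:
    """Return every unverifiable authority phrase found in the text."""
    return [m.group(0) for m in VAGUE_AUTHORITY.finditer(text)]

sample = ("According to researchers, the policy failed. "
          "Experts believe the trend will continue.")
print(vague_attributions(sample))
# -> ['According to researchers', 'Experts believe']
```

A real tool would need a far larger, language-specific phrase inventory and would have to distinguish these from legitimate attributions that are specified elsewhere in the article.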

The second technique is visible in endings. Human propaganda typically closes with a call to action: go, do, fight. AI does not work that way. Instead it produces a smooth paragraph about values: fairness, trust, economic growth. It reads like the conclusion of an analyst's report. No one questions that kind of writing. That is exactly why it slips through.

AI propaganda deploys fewer persuasive techniques overall than human propaganda. But the ones it does use, it applies more evenly. Human propagandists can be chaotic and unpredictable. AI maintains the tone of a competent professional text from the first paragraph to the last.

Fake news patterns: what three languages revealed about AI text

Even before the AI angle entered the project, the Fakespeak team spent years analyzing how fake news differs from genuine reporting at the level of grammar and style.

Part of the analysis was built on the case of Jayson Blair, a former New York Times journalist. In 2003, it came out that he had been fabricating stories for years. Researchers at the University of Birmingham compared his truthful and fabricated articles and found measurable differences: when Blair was lying, he switched to the present tense more often; when writing genuine news, he used the past tense. His motivation was money, and his fabricated texts contained almost no metaphors. And instead of the negative emotions typically associated with manipulation, Blair used positive ones, writing fake stories about heroic American soldiers in Iraq.

Fakespeak expanded this analysis across three languages and documented consistent patterns:

  • Present tense instead of past. A real journalist describes what happened. A fabricator creates the sensation of a live broadcast, as if the event is unfolding right now. This held true across all three languages.
  • Emphatic expressions. Words like truly, really, absolutely appear significantly more often in fabricated content. Genuine news is more restrained. When facts support the text, the author does not need to insist that "this is really true."
  • Epistemic certainty. Researchers use this term for texts whose author allows no shadow of doubt in any claim. Fake texts are saturated with phrases like obviously, evidently, as a matter of fact. Doubt would undermine the lie, so it gets stripped out. In Russian-language texts, this effect was markedly more pronounced than in English.

But these markers are not universal. Fakespeak tested across three languages and found a different picture in each one. Russian texts showed more categorical assertions. English texts had more emphatic expressions. Even the author's motivation changes the style: someone lying for money barely uses metaphors. Someone lying for an ideology fills the text with imagery from war and sports. There is no single recipe for "this is what a fake looks like." That might be the most unsettling takeaway of the entire study.
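Two of the markers above can still be sketched as a toy English-only heuristic. The word lists here are illustrative guesses, not the Fakespeak project's actual lexicons, and raw counts like these are far too crude for real detection; detecting the present-tense marker would additionally require part-of-speech tagging, which is omitted here.

```python
# Assumed example word lists for two of the documented markers.
EMPHATICS = {"truly", "really", "absolutely"}
CERTAINTY = {"obviously", "evidently", "undoubtedly", "clearly"}

def marker_counts(text: str) -> dict[str, int]:
    """Count emphatic and epistemic-certainty words in an English text."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return {
        "emphatic": sum(w in EMPHATICS for w in words),
        "certainty": sum(w in CERTAINTY for w in words),
    }

sample = "Obviously the plan is really working, and it truly is."
print(marker_counts(sample))  # -> {'emphatic': 2, 'certainty': 1}
```

As the study itself warns, a marker set tuned to one language will not transfer to another; Russian fakes, for instance, leaned more on categorical assertions than on emphatics.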

AI-generated text as a global threat: beyond one experiment

The Fakespeak findings could be dismissed as a single experiment with a limited sample. But they align with what large-scale international studies are reporting — from the WEF Global Risks Report 2026 to research published in PNAS Nexus.

When people cannot tell AI text from human text, that is one problem. Those who work with AI regularly start noticing patterns over time. But when AI text is perceived as more trustworthy, experience alone won't help. We judge credibility by form — by how closely a text resembles what we're used to considering high quality. AI doesn't mimic individual words. It mimics entire genres. And we fall for it.

AI text detector: when human judgment is no longer enough

In August 2026, Article 50 of the EU AI Act takes effect: mandatory labeling of AI-generated content. Penalties for non-compliance can reach 3% of global annual turnover or €15 million (Article 99(4), EU AI Act). A serious signal. But labeling only works for those willing to play by the rules. Disinformation campaigns do not label themselves.

That's where tools come in. Not labels, but actual text analysis. The It's AI detector does exactly that: it breaks text down at the sentence level and catches the consistent "expert" tone, the stylistic uniformity, the phrasing that reveals AI-generated origins: the very things the Fakespeak study identifies as reasons people trust AI text more.

There's a related finding from Fakespeak that directly affects how detection should work: fake news markers are structured differently depending on the language, with a distinct set documented in each of the three languages studied. Propaganda patterns in a Russian text may go unnoticed by a tool tuned to English. The It's AI detector supports multiple languages, and in light of this data, multilingual support is not an extra feature but a requirement: a detector without it will miss part of AI-generated content.


FAQ