
2024 was a year of elections — and a year of record-volume misinformation. Across four major democratic votes, two ongoing wars, and the first large-scale deployment of AI-generated synthetic media, fact-checkers documented patterns that will shape the information environment for years. This review covers the cases that mattered most.
Why 2024 Was Different
Volume alone did not distinguish 2024. The key shift was that three factors converged at once: election calendars across the US, EU, UK, and India created coordinated targeting opportunities; AI-generation tools reached quality thresholds that made synthetic content plausible to casual viewers; and platform moderation, by the assessment of the Reuters Institute's 2024 Digital News Report, became less consistent, not more.
The Reuters Institute surveyed nearly 100,000 respondents across 47 markets and found that concern about misinformation had risen more sharply than in any year since the COVID-19 pandemic, with AI-generated content cited as a primary driver. Trust in news overall held at 40%, four points below its pandemic-era peak. More than a quarter of TikTok news users (27%) reported struggling to identify trustworthy content, the highest figure across all platforms surveyed.
What follows is not an exhaustive catalog. It is a selection of documented, debunked cases organized by category — chosen because they illustrate the dominant patterns, not because they were the most viral.
Election Misinformation: United States
The US presidential election produced the year’s largest single volume of debunked claims. Three categories dominated: AI-generated synthetic endorsements, voting system fraud allegations, and foreign-state disinformation operations.
The Taylor Swift AI Endorsement
In August 2024, Donald Trump’s campaign shared AI-generated images on Truth Social depicting pop star Taylor Swift alongside text implying she had endorsed Trump’s candidacy. The images were visually crude but spread rapidly. Swift had not endorsed Trump — she publicly endorsed Kamala Harris in September 2024. The episode was significant not because anyone was likely deceived by the specific images, but because it normalized the use of synthetic celebrity likenesses in political messaging without disclosure.
The New Hampshire Biden Robocall
In January 2024, ahead of the New Hampshire Democratic primary, a voice clone of President Biden called registered Democrats urging them to “stay home” and not vote. The audio was generated using commercially available AI voice synthesis. The Federal Communications Commission (FCC) subsequently ruled AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act — the first such regulatory action. The political consultant behind the call was identified and faced criminal charges in New Hampshire.
Russian Doppelganger Operations
The Department of Justice seized 32 internet domains in September 2024 linked to Russia's Doppelganger operation — a network of sites impersonating legitimate news outlets including the Washington Post, Fox News, and Bild to spread pro-Kremlin narratives. In October 2024, the FBI publicly attributed several viral false claims — including a video purportedly showing Trump ballots being destroyed — to Russian state actors. Meta's Q3 2024 Adversarial Threat Report noted that Russia had been the source of 39 coordinated inauthentic behavior (CIB) networks disrupted since 2017, more than any other country.
The “Immigrant Pets” Narrative
Claims that Haitian immigrants in Springfield, Ohio, were eating pets and other animals circulated widely in September 2024, amplified by prominent political figures. Springfield's city government, police department, and local news outlets confirmed no such incidents had been reported or verified. Snopes, FactCheck.org, and PolitiFact all rated the claims false within 48 hours. The episode illustrated how viral misinformation can start from a single unverified post, gain credibility through political amplification, and prove difficult to retract once embedded in partisan media ecosystems.
Election Misinformation: Europe
The June 2024 European Parliament elections generated coordinated disinformation at scale. The most systematic operations came from pro-Russian networks, but domestic actors also ran targeted campaigns.
The Pravda Network
The EU DisinfoLab and the European Digital Media Observatory (EDMO) documented the “Pravda” disinformation network — a cluster of more than 50 copycat news websites that impersonated regional European media outlets. Initially targeting German-language audiences, the network expanded to French, Italian, Polish, and Czech content ahead of the June vote. Articles were largely machine-translated Russian state media content repackaged under local-looking domain names. Meta approved 275 political advertisements linked to network-adjacent accounts; the ads lacked the EU's mandatory transparency disclaimers and reached over three million users across Italy, Germany, France, and Poland before being removed.
UK Election: Deepfake Audio of Sadiq Khan
An AI-generated audio clip falsely attributed to London Mayor Sadiq Khan circulated on social media ahead of the May 2024 London elections. The clip depicted Khan making inflammatory statements about the priority of pro-Palestinian protests over Remembrance Day ceremonies — statements he never made. Khan himself described the clip as nearly causing “serious disorder.” Researchers at the Centre for Emerging Technology and Security (CETAS) at the Alan Turing Institute identified 16 confirmed viral AI disinformation cases across the UK general election — a number they considered significant given how recently the technology had become accessible.
UK Election: Rishi Sunak Deepfake Ads
Separate from the Khan audio, AI-generated video clips depicting then-Prime Minister Rishi Sunak promoting fraudulent investment schemes circulated across Facebook and YouTube. The clips used realistic voice cloning and video manipulation. Neither Meta nor Google removed all instances before they had accumulated substantial view counts. CETAS researchers noted that while none of the 2024 UK deepfakes appeared to have shifted measurable vote outcomes, they established a template for future operations and raised the burden of verification voters now face before trusting audio-visual political content.
Health Misinformation: Post-COVID Narratives
Vaccine misinformation did not end when the acute COVID-19 emergency did. In 2024, a cluster of persistent false narratives continued to circulate — many unchanged from 2021, some updated with new pseudo-scientific framing.
The mRNA Cancer Link
A preprint paper circulated widely in early 2024 claiming that mRNA COVID-19 vaccines caused “turbo cancer” — abnormally aggressive malignancies. FactCheck.org's SciCheck team reviewed the paper and found that its authors included individuals with documented histories of promoting vaccine misinformation, that the paper had not been peer-reviewed, and that its conclusions contradicted multiple large-scale epidemiological studies. The CDC's Vaccine Safety Datalink, which monitors adverse events across millions of vaccinated patients, found no signal connecting mRNA vaccines to increased cancer incidence or more aggressive disease. The narrative was classified as misinformation by FactCheck.org, Science Feedback, and the Mayo Clinic.
Vaccine Infertility Claims Revisited
Claims that COVID-19 vaccines caused infertility — debunked multiple times since 2021 — resurged in 2024, often attached to new studies taken out of context. A systematic review and meta-analysis published in the journal Human Reproduction Update examined 18 studies covering 35,000 participants and found no statistically significant association between COVID-19 vaccination and reduced fertility in either men or women. The WHO and CDC both issued updated guidance reaffirming vaccine safety for people of reproductive age. The persistence of this claim, despite repeated comprehensive debunking, illustrates what researchers call the “zombie narrative” problem: false health claims that cannot be permanently killed because they resonate with pre-existing anxieties.
Long COVID Denial
A counter-narrative emerged in 2024 claiming that Long COVID was primarily a psychosomatic condition or an artifact of pandemic-era anxiety rather than a documented physiological syndrome. This narrative circulated alongside legitimate scientific debates about Long COVID’s mechanisms. The distinction matters: genuine scientific uncertainty about how Long COVID works is not the same as the evidence-free claim that it does not exist. As of 2024, the WHO estimated that 10–20% of COVID-19 survivors experienced Long COVID symptoms beyond 12 weeks, with consistent biological markers including microclots, immune dysregulation, and autonomic nervous system involvement documented across independent research centers.
AI-Generated Content: The First Large Wave
2024 marked the year AI-generated synthetic media moved from an emerging threat to a documented, high-volume one. The cases below represent the most significant debunked instances — a fuller catalog is available in our case database.
The Kamala Harris Michigan Rally: Fake Crowd Claims
In August 2024, conspiratorial accounts on X and Facebook claimed that aerial photographs of a large crowd at a Kamala Harris rally in Ann Arbor, Michigan, were AI-generated or CGI. The claims alleged the Harris campaign had fabricated crowd size for optics. The images were authenticated by Reuters Fact Check, ABC News, and local Michigan outlets that had journalists present at the event. Notably, the false claim spread faster than the debunking: a recurring pattern in which the act of correcting ends up re-amplifying the original claim rather than neutralizing it.
Meta’s AI Disinformation Networks
Meta’s 2024 Adversarial Threat Reports documented a qualitative shift in state-sponsored disinformation: foreign operations began deploying AI-generated video newsreaders and synthetic news portals at scale. An Iranian network identified in Meta’s Q2 2024 report used AI-generated anchors on fake news websites to present pro-Iranian narratives as independent reporting. A Lebanese-origin network posted AI-generated video newsreaders blending geopolitical content with lifestyle topics to appear credible. Meta noted that AI tools offered these networks efficiency gains in content production — but had not yet meaningfully improved their ability to build authentic audiences before detection. All 20 operations disrupted by Meta in 2024 were identified before reaching significant organic reach. For deeper case analysis, see our AI Fake News in 2025 follow-up.
Synthetic Images Flooding Disaster Coverage
Multiple natural disaster events in 2024 — including flooding in Valencia, Spain, and Hurricane Helene in the US Southeast — were accompanied within hours by AI-generated images depicting destruction that had not occurred or was exaggerated. AFP Fact Check, BBC Verify, and Misbar documented over a dozen confirmed AI-generated disaster images that achieved significant circulation before debunking. Visual artifacts (implausible light sources, anatomically distorted hands, impossible architectural geometries) were present in all cases — but were invisible to most casual viewers consuming images at thumbnail size on mobile screens.
Climate Misinformation: Shifting Tactics
Climate misinformation in 2024 largely abandoned outright science denial in favor of solutions denial: false claims targeting climate policy rather than climate science itself. This shift made fact-checking harder, because policy claims mix genuine value disagreements and real economic grievances with fabricated specifics, so verdicts are rarely as clean as confirming or refuting a scientific finding.
The “1986 Temperature Map” Hoax
A hoax comparing two weather maps — one falsely labeled 1986, the other 2022 — spread across Spain, France, Germany, the Netherlands, Poland, Romania, Austria, Belgium, and Hungary in mid-2024, claiming to show that mainstream media were manufacturing climate panic. EDMO's fact-checking network documented the spread across nine countries. The first map was actually from 2016, not 1986; the second was from 2021, not 2022. Both were authentic maps from legitimate meteorological sources — the manipulation was entirely in the labeling. This case is a clean example of how authentic material can be turned into misinformation through context manipulation alone, without any pixel-level editing.
European “Green Policy” Disinformation
Ahead of the EU Parliament elections, coordinated networks spread false claims about the economic effects of EU climate policies — including fabricated statistics attributing farm bankruptcies, job losses, and energy price spikes specifically and exclusively to Green Deal legislation. The EU DisinfoLab tracked multiple instances of these narratives appearing first in pro-Russian outlets before migrating to domestic populist channels. The narratives were not entirely invented: real economic pressures on European farmers existed. The misinformation was in presenting those pressures as solely caused by climate policy, and in attaching fabricated figures to real complaints.
Post-Disaster Attribution Denial
Following major weather events in 2024 — the Valencia floods (October), Hurricane Milton (October), and record summer temperatures across Southern Europe — a consistent counter-narrative emerged claiming that attributing these events to climate change was unscientific speculation. This framing selectively misrepresented attribution science: while individual weather events cannot be attributed to climate change with certainty, the field of climate attribution science has established probabilistic links between warming and increased frequency and severity of specific event types. The IPCC’s methodology distinguishes between event-level attribution (uncertain) and pattern-level attribution (well-established) — a distinction the denial narratives consistently collapsed.
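For readers who want the quantitative form of that distinction, attribution studies typically report a probability ratio or a fraction of attributable risk. The formulation below is the standard textbook one, not drawn from any specific 2024 study:

PR = P1 / P0        FAR = 1 − P0 / P1

Here P1 is the estimated probability of an event of a given magnitude in today's climate and P0 the probability in a modeled counterfactual climate without human influence. A FAR of 0.75, for example, means an estimated three quarters of the event's current likelihood is attributable to warming. That is a probabilistic statement about likelihood, not a claim that warming "caused" that specific event, which is exactly the nuance the denial narratives erase.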
Patterns and Trends: What 2024 Established
Across all four categories, several structural patterns recurred consistently enough to be considered defining features of the 2024 misinformation landscape.
Speed asymmetry widened. Misinformation spread faster in 2024 than in any previous election year, driven by algorithmic amplification and the removal of friction from sharing (one-tap reposts, automatic previews). Corrections consistently lagged by hours or days. The Reuters Institute found that roughly a quarter of X users (24%) reported difficulty distinguishing trustworthy from untrustworthy content — a figure that reflects not individual failure but systemic design choices by platforms.
AI lowered the production floor without raising the ceiling. Meta’s own analysis found that AI tools gave influence operations efficiency gains in content production but did not improve their ability to evade detection. The Zelensky surrender deepfake of 2022 — one of the first high-profile AI fakes in a conflict context, documented in our case database — was crude and quickly debunked. 2024 AI fakes were more numerous, not categorically more convincing.
Platform enforcement remained inconsistent. EU DisinfoLab data showed that approximately 45% of flagged election-period content was not acted on by platforms, with X and YouTube showing the highest non-action rates (around 75%). This is not a 2024-specific problem — but it became a documented, quantified one.
Zombie narratives don’t die. The vaccine infertility claims, the election fraud allegations, and several climate hoaxes debunked in 2021–2022 resurged in 2024 with minimal modification. Debunking does not erase a narrative from the information ecosystem — it competes with it.
What came next: the AI-generated content wave that began in 2024 escalated significantly in 2025. For the documented cases and detection methods, see our analysis of AI-generated fake news in 2025. For hands-on verification skills, the SIFT method workshop covers the practical tools used in most of the cases above.