The SIFT Method: A Step-by-Step Guide to Fact-Checking Anything Online

SIFT gives you four concrete moves to apply before sharing or believing any claim online: Stop, Investigate the source, Find better coverage, and Trace claims to their origin. Developed by digital literacy researcher Mike Caulfield, the method takes under two minutes per claim and catches the majority of viral misinformation before it spreads further.

What SIFT Is — and Why It Works

SIFT is a practical framework, not a checklist to complete in order. The four moves are tools you deploy situationally — sometimes one move is enough, sometimes you need all four. What makes SIFT effective is that it mirrors how professional fact-checkers actually work: they spend minimal time inside the source they’re evaluating and maximum time looking at what independent sources say about it.

Mike Caulfield, a digital literacy researcher then at Washington State University, published the framework in 2019 in a blog post titled “SIFT (The Four Moves)” on his Hapgood blog. It built on his earlier “Web Literacy for Student Fact Checkers” and incorporated findings from Stanford University’s Civic Online Reasoning research, which documented that students evaluate sources more accurately by reading laterally across multiple sources than by reading deeply into a single site. That counterintuitive finding — less time on the source, more time checking what others say about it — is the methodological core of SIFT.

You will use SIFT for anything you read, watch, or are about to share: news articles, social media posts, statistics, quotes attributed to public figures, and images or videos claiming to document an event. The Workshop overview → shows where SIFT fits within the broader media literacy toolkit.

Move 1: Stop

Stop before you engage. The single most important move in SIFT is a pause — because the emotional response a piece of content triggers in you is often precisely what the content was designed to trigger.

When you hit a post or article and feel a strong emotional pull — outrage, fear, excitement, vindication — that emotional response is a signal to pause, not a signal that the content is true. Misinformation specifically exploits the fact that emotionally activated readers share before they think. A 2020 study in Cognitive Research: Principles and Implications found that heightened emotional state was predictive of greater belief in false news posts.

Stopping does not mean being skeptical of everything. It means being skeptical of your own first reaction. Ask: Do I know this source? Do I know this claim is accurate? If the answer to either question is no, continue to the next move.

In the Fake News Database →, virtually every documented case features content that was emotionally designed to provoke immediate sharing. The Stop move would have caught most of them before they went viral.

Move 2: Investigate the Source

Leave the page and find out what independent sources say about the publisher or author — before you decide whether to trust the content. This is lateral reading, and it is the technique professional fact-checkers use.

Most people evaluate a source by reading deeper into it: looking for an “About” page, checking the design, scanning the references. Fact-checkers do the opposite. They open new browser tabs and search: [site name] bias, [site name] credibility, or check the source directly in resources like Media Bias/Fact Check or the Pew Research Center’s journalism database. The goal is not to find a definitive verdict but to get a rough sense within 60 seconds of whether the source has a documented track record of accuracy or manipulation.

Specific signals to look for when investigating a source:

  • Is the outlet indexed in press freedom databases (RSF, CPJ, IFCN)? Legitimate news organizations are documented; fake ones rarely are.
  • Does the site name closely resemble a real outlet? Domain impersonation — abcnews.com.co instead of abcnews.com — is a documented pattern for misinformation sites.
  • Is the author a named individual with a verifiable professional record? An anonymous byline or a generic name with no searchable history is a yellow flag.
  • Has the outlet been cited or flagged by independent fact-checkers? Search the outlet name on Poynter’s IFCN page or Full Fact.
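The domain-impersonation pattern in the list above can even be checked mechanically. A minimal sketch, assuming an illustrative hand-picked list of trusted domains and an arbitrary similarity threshold (a real check would draw on a maintained database such as a press-freedom index export):

```python
from difflib import SequenceMatcher

# Illustrative sample of legitimate outlet domains -- an assumption for
# this sketch, not a vetted list.
TRUSTED = ["abcnews.com", "bbc.com", "reuters.com", "apnews.com"]

def lookalike_of(domain: str, threshold: float = 0.8):
    """Return a trusted domain this one closely resembles, or None.

    An exact match is the genuine outlet and is not flagged; near matches
    (extra TLD suffixes, one-character swaps) are the classic
    impersonation pattern.
    """
    domain = domain.lower().strip(".")
    for real in TRUSTED:
        if domain == real:
            return None  # the genuine outlet itself
        ratio = SequenceMatcher(None, domain, real).ratio()
        # startswith catches suffix tricks like abcnews.com.co
        if ratio >= threshold or domain.startswith(real + "."):
            return real
    return None

print(lookalike_of("abcnews.com.co"))  # flagged: resembles abcnews.com
print(lookalike_of("reuters.com"))     # genuine outlet: None
```

This only automates one signal; it complements, rather than replaces, the lateral-reading searches described above.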

Move 3: Find Better Coverage

Search for what other sources report about the same claim, not to confirm it but to understand whether credible outlets have independently verified it or contradicted it.

If a story is accurate and significant, multiple independent newsrooms will have covered it. If only one outlet has the story — especially if that outlet has a weak track record — the absence of corroboration is informative. You are not looking for consensus; you are looking for independent verification from sources with different ownership, different country of origin, and different institutional interests.

For statistical claims — health data, economic figures, casualty numbers — always find the primary source. Most viral statistics are real numbers stripped of their original context, limitations, or measurement caveats. The Database case on casualty statistics → shows exactly how a real number becomes misleading without its methodology. Go back to the institution that published the data: national statistics offices, peer-reviewed journals, UN databases, government records.

Three searches that usually work:

  • Search the headline text in quotes to see which outlets are covering the story and how they frame it.
  • Search the core claim (not the headline) plus the word “fact check” to surface any existing verification work.
  • Search the name of the person quoted plus the claimed statement — false quotes attributed to public figures are common, and the person’s own social media or official website will often show no record of the statement.
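The three searches above are mechanical enough to script. A sketch that just assembles the query URLs — the search engine and its URL format are generic assumptions, and any engine works the same way:

```python
from urllib.parse import quote_plus

def coverage_queries(headline: str, core_claim: str,
                     person: str = "", statement: str = ""):
    """Build the three 'find better coverage' searches as URLs."""
    queries = [
        f'"{headline}"',               # exact headline: who else covers it?
        f'{core_claim} "fact check"',  # surface existing verification work
    ]
    if person and statement:
        # misattributed-quote check: person's name + the claimed words
        queries.append(f'{person} "{statement}"')
    base = "https://duckduckgo.com/?q="  # any search engine works here
    return [base + quote_plus(q) for q in queries]

for url in coverage_queries(
    headline="Government declares media blackout",
    core_claim="government media blackout journalists arrested",
):
    print(url)
```

The point is not automation for its own sake: writing the queries down forces you to separate the headline, the core claim, and any attributed quote, which is exactly the decomposition the move requires.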

Move 4: Trace Claims, Quotes, and Media to the Original Context

Trace the claim back to its original source to verify it was not taken out of context, selectively edited, or misattributed entirely.

Most misinformation does not involve fabricated content. It involves real content — real quotes, real images, real statistics — presented in a false context. A quote is real but taken from a 2003 interview, not this week. An image is real but from a different country or a different decade. A statistic is real but applies to a different population than the one implied. Tracing means finding the original context, not just the original source.

For images, tracing means reverse image search (covered in detail in the Reverse Image Search Guide →). For quotes, trace means finding the original interview, speech, or document — not a secondary article that paraphrases it. For videos, frame-level tools like Google Video Search and InVID/WeVerify allow you to extract keyframes and run reverse searches on them.
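For an image that is already hosted at a public URL, the reverse search can be pre-filled rather than done by manual upload. The endpoint patterns below are assumptions current at the time of writing and may change; if they break, uploading the image manually on each service accomplishes the same thing:

```python
from urllib.parse import quote

def reverse_search_links(image_url: str):
    """Links that pre-fill a reverse image search for one image URL.

    The endpoint patterns are assumptions, not stable APIs.
    """
    encoded = quote(image_url, safe="")
    return {
        "tineye": f"https://tineye.com/search?url={encoded}",
        "google_lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
    }

for name, url in reverse_search_links("https://example.org/photo.jpg").items():
    print(name, url)
```

Running the same image through more than one service matters: each has a different index, and an image absent from one may be well documented in another.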

The practical question to ask: Who made this claim first, and in what context did they make it? If you cannot trace it to a named original source — not a social media post, not a screenshot, but a primary document or named speaker — the claim has not been verified.

For a real-world example of out-of-context manipulation, see the Emotional Language in Headlines guide → for how emotional framing creates the illusion that no tracing is needed.

SIFT in Practice: A Step-by-Step Example

Here is how SIFT works on a realistic scenario drawn from documented patterns in the database.

You see a post: “BREAKING: [Major European Government] declares media blackout on [sensitive political event]. Journalists arrested.” The post has 4,000 shares and feels urgent.

  1. Stop: The word “BREAKING” and the urgency framing are emotional triggers. Pause. Do you recognize this outlet? No. Continue.
  2. Investigate the source: Open a new tab. Search the outlet name plus “credibility” and “bias.” The outlet has no listing on Media Bias/Fact Check, no Wikipedia article, and domain registration shows it was created 11 days ago. Yellow flag elevated to red flag.
  3. Find better coverage: Search the core claim: [government] media blackout journalists arrested. Reuters, AFP, and the Committee to Protect Journalists (CPJ) show no coverage. If this were real, CPJ would have it within hours. Absence of corroboration from institutional press freedom monitors is significant.
  4. Trace the claim: The post links to a secondary article that itself cites “sources close to the government.” There is no primary document, no named journalist, no official statement — from either the government or a press organization. No traceable origin. The claim fails the trace test.
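The domain-age check in step 2 can be scripted once you have the registration date: WHOIS records include a creation date, and a very young domain pushing breaking news is a strong warning sign. A sketch of just the date arithmetic — the WHOIS lookup itself needs a library or a `whois` command-line call and is omitted, and the 90-day threshold is an illustrative assumption, not an industry standard:

```python
from datetime import date

def domain_age_flag(created: date, today: date = None,
                    min_days: int = 90) -> str:
    """Classify a domain by registration age.

    min_days = 90 is an illustrative cutoff for this sketch.
    """
    today = today or date.today()
    age = (today - created).days
    if age < min_days:
        return f"RED: domain is only {age} days old"
    return f"ok: domain is {age} days old"

# The scenario above: registered 11 days before the post appeared.
print(domain_age_flag(date(2024, 3, 1), today=date(2024, 3, 12)))
# -> RED: domain is only 11 days old
```

A young domain is not proof of fabrication, but combined with no fact-checker listing and no Wikipedia article, it justifies escalating the flag from yellow to red, as in the walkthrough.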

Conclusion: Do not share. The claim may be entirely fabricated or may have a partial factual basis being wildly distorted. Either way, it has not been independently verified and should not be amplified.

When SIFT Is Not Enough

SIFT handles most viral misinformation quickly and reliably. It is less suited to highly technical claims — complex scientific research, medical studies, economic modeling — where domain expertise is required to evaluate methodology. For those cases, look for secondary expert commentary from named researchers, not just the abstract of a paper.

SIFT also does not directly address deepfakes or AI-manipulated media. For synthetic video and audio, provenance analysis (who published this first, and where) remains the most reliable starting point, but technical tools add an additional layer of verification. The Deepfakes Identification Guide → covers that workflow in full.

Finally, SIFT assumes you have time to apply it. For real-time breaking news during high-intensity events — elections, natural disasters, armed conflict — the pressure to share first creates conditions where emotional triggers dominate. That is precisely when applying Stop is most important and most difficult.

SIFT Quick-Reference Checklist

Apply these four questions to any piece of online content before sharing or citing it.

  • Stop: Am I having a strong emotional reaction? Do I actually know whether this source is credible?
  • Investigate: What do independent sources say about this publisher or author? Is this outlet documented in press freedom databases or indexed by fact-checkers?
  • Find better coverage: Have credible, independent outlets reported on the same claim? If it involves data, have I found the primary institutional source?
  • Trace: Can I identify the original context of this claim, quote, or image? Is there a primary document or named source — not just a secondary post — at the origin?

All four moves should take under two minutes for most claims. If a claim requires significantly more time, that itself is a signal: credible, well-sourced information is usually easy to verify independently.