EDCI568 – 10. A Conversation with Mike Caulfield on the SIFT Method for Mis/Dis-Information

In an age where information flows at an unprecedented rate, the ability to evaluate sources and claims is more critical than ever. Having graduated from high school in 1990, I experienced firsthand the shift from traditional research methods to the digital world. In those days, researching meant physically going to a library, flipping through card catalogs, and searching for books and journal articles. If I needed more information, I had to track down an encyclopedia or ask a librarian for guidance.

When the internet arrived, everything changed. I remember the early days of dial-up modems, the screeching sound of a connection being established, and the sheer excitement of accessing information without leaving the house. Before Google became dominant, we relied on search engines like Yahoo!, Fetch, AltaVista, Lycos, and Ask Jeeves, each with its own quirks and limitations. Back then, search results weren't always ranked by relevance, and filtering through pages of unrelated links was a common frustration. Despite these challenges, the internet felt like a revolutionary tool, one that promised quick and easy access to knowledge.

Fast forward to today, and while access to information has never been easier, the challenge has shifted from finding information to verifying it. Unlike a library, where sources had already been vetted for credibility, the internet presents information without any built-in quality control. Anyone can publish anything, and it's up to the reader to determine its validity.

Mike Caulfield, co-author of Verified and creator of the SIFT method, has spent years studying how misinformation spreads and how people can better navigate the digital world. His approach isn't about memorizing lists of reliable sources but rather about developing habits of critical inquiry that can be applied to any information encountered online.

Caulfield's work shows that students often assume Google functions as a highly curated database, leading them to trust the first result without questioning its credibility. To counteract this, he developed the SIFT method, a simple but powerful approach:

  1. Stop – Before engaging, ask if the source is what you think it is.
  2. Investigate the Source – Consider why the source has authority on the topic.
  3. Find Better Coverage – Look beyond the first link to more reputable sources.
  4. Trace Back to the Original – Ensure claims haven't been altered or taken out of context.

Misinformation today isn't just about fabricating false content; it often involves repackaging real information in misleading ways. Whether it's a cropped image, a selectively edited video, or a quote stripped of its original meaning, much of today's misinformation relies on the removal of critical context.

In my own teaching, I emphasize the importance of verifying sources by requiring students to find at least three sources that align before trusting information. Initially, they resist this, wanting to use the first thing they find and assuming that if it looks official, it must be true. To challenge this, I introduce examples like the Pacific Northwest Tree Octopus, a well-crafted hoax claiming that a species of tree-dwelling octopus exists. When students take it at face value, it seems credible, but as they dig deeper, they realize how easily misinformation can spread.

Another example I sometimes use is the House Hippo, a Canadian PSA that shows a tiny hippo living in a home, presented in a style similar to wildlife documentaries. It's a memorable way to highlight how visual storytelling can make misinformation seem more convincing. Once students recognize that their initial instinct to believe was flawed, they become more open to questioning sources and verifying claims.

Artificial intelligence is now complicating this landscape further. While AI struggles with factual accuracy, it excels at mimicking reasoning structures. Caulfield has experimented with AI models that analyze arguments and identify gaps in logic. These tools show promise, but they also highlight the increasing importance of teaching reasoning skills: not just the ability to fact-check, but the ability to analyze how and why an argument is structured in a certain way.

Another major concern is the attention economy. As Caulfield notes, "information consumes attention," meaning that the real challenge isn't a lack of information but an overwhelming flood of it. Social media algorithms reinforce confirmation bias by presenting endless streams of content that align with what users already believe, making it difficult to engage with differing perspectives. Instead of engaging in thoughtful dialogue, people end up in echo chambers, reinforcing their existing beliefs while dismissing opposing viewpoints outright.

One of the most thought-provoking ideas from Caulfield's discussion is critical ignoring: the idea that managing one's information diet is just as important as fact-checking. Instead of collecting endless low-quality evidence to confirm an existing belief, individuals need to practice filtering out unreliable information and knowing when to step back from the constant firehose of online content.

Reflecting on how digital literacy has evolved, I can see how my own experiences have shaped my approach to teaching these skills. I grew up in an era where information was slower to access but inherently more vetted. Now, my students have instant access to endless content, but they must develop the ability to filter, question, and validate it. Whether it's teaching them about misinformation through hoaxes like the tree octopus or helping them adopt strategies like SIFT, my goal is to prepare them to think critically about the information they encounter.

In an era of AI, misinformation, and algorithm-driven news feeds, digital literacy must go beyond simple fact-checking. It requires an intentional approach to evaluating sources, understanding reasoning structures, and managing one's own attention. The skills that will matter most in the future won't be the ability to memorize facts but the ability to think critically, ask the right questions, and determine when information is worth engaging with at all.