How to Spot AI Hallucinations Like a Reference Librarian
The verification tricks that would make fact-checkers weep with joy.
Last Tuesday, a client sent me their “thoroughly researched” white paper on workplace automation. It had 47 citations. Looked bulletproof. Every claim backed by a study, every statistic sourced to a journal. I was impressed for exactly three minutes.
Then I tried to find “Peterson et al. (2024): Longitudinal Analysis of Remote Work Productivity in Tech Sectors” in the Journal of Organizational Psychology.
The journal exists. The year 2024 existed. Dr. Peterson probably exists somewhere. But this study? Complete fabrication. ChatGPT had invented a plausible-sounding citation out of thin air. And it wasn’t alone. Of those 47 citations, 31 were what we call “hallucinations.”
The client was mortified. I wasn’t surprised. Because here’s what nobody tells you about AI-generated citations: they’re often more believable than real ones.
Why AI Makes Up Sources (And Why They’re So Convincing)
ChatGPT doesn’t lie, exactly. It pattern-matches. When you ask for a “cited article about remote work productivity,” it knows what citations look like. Author name, year, compelling title, respectable journal. It assembles these patterns into something that feels right. Like a dream where everything makes sense until you wake up.
The tell isn’t that fake citations look wrong. It’s that they look too right. Too convenient. Too perfectly aligned with whatever point the AI is making.
Real academic citations are messy. The study about productivity actually measured something adjacent and you’re extrapolating. The date is from 2019, not last month. The journal name is awkwardly long with a colon and subtitle nobody remembers. The author’s name is hard to spell. Real research is inconvenient.
AI citations are suspiciously convenient. They appear right when you need them, saying exactly what you need them to say.

The Three-Layer Verification Method
In library school, they taught us something called “citation chaining,” but I’ve adapted it for the age of AI hallucinations. Think of it as three increasingly paranoid levels of verification.
📖 Librarian Dictionary
citation chaining (n.)
/saɪˈteɪʃən ˈtʃeɪnɪŋ/ Library Science. The academic equivalent of stalking someone’s friends on social media to understand who they really are. You start with one decent source, raid its bibliography for other sources (backward chaining), then check who’s cited it since publication (forward chaining). Before you know it, you’ve mapped an entire scholarly conversation and it’s 3 AM. Classic use: Student finds one relevant article, follows its citation trail, suddenly understands the entire field’s debate structure. Modern plot twist: Now we use it to verify whether ChatGPT’s citations even exist, because we’re citation chaining through phantoms.
Origin: Scholars have been pillaging each other’s bibliographies since footnotes were invented; library schools formalized it as a research method sometime in the mid-20th century when they needed to teach systematic research beyond “just look for more books.”
See also: academic six degrees of separation, why librarians make excellent detectives, how to lose an entire weekend to Google Scholar.
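If you’d rather chain citations with a script than lose a whole weekend to Google Scholar, the Semantic Scholar Graph API will do the legwork (it’s free for light use). Here’s a minimal sketch; the endpoint paths and field names follow their public docs as I understand them, and the starting DOI is just a known-real placeholder, so swap in a paper you actually trust:

```python
import requests

# Semantic Scholar Graph API: free for light use, so be polite about rate limits.
BASE = "https://api.semanticscholar.org/graph/v1/paper"

def chain(doi: str, limit: int = 20) -> None:
    """Backward chain (what the paper cites), then forward chain (who cites it)."""
    # Backward chaining: raid the bibliography.
    refs = requests.get(
        f"{BASE}/DOI:{doi}/references",
        params={"fields": "title,year", "limit": limit},
        timeout=10,
    ).json()
    print("Backward chain (its bibliography):")
    for item in refs.get("data", []):
        paper = item.get("citedPaper", {})
        print(f"  ({paper.get('year')}) {paper.get('title')}")

    # Forward chaining: who has engaged with it since publication.
    cites = requests.get(
        f"{BASE}/DOI:{doi}/citations",
        params={"fields": "title,year", "limit": limit},
        timeout=10,
    ).json()
    print("Forward chain (who cites it):")
    for item in cites.get("data", []):
        paper = item.get("citingPaper", {})
        print(f"  ({paper.get('year')}) {paper.get('title')}")

# Known-real placeholder DOI (the NumPy paper in Nature); use your own starting point.
chain("10.1038/s41586-020-2649-2")
```

Backward chaining tells you whether the bibliography is real; forward chaining tells you whether anyone serious has built on the work since.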
Layer One: The Existence Check
Does this source exist at all? Not “does it sound real” but does it actually exist. Google the exact title in quotes. Check the journal’s website. Search the author’s name together with the institution they supposedly work at. In my experience, about 40% of AI citations fail this basic test.
I watched a colleague do this with a “Harvard Business Review” article that ChatGPT cited. The title sounded perfect. The year was recent. HBR definitely publishes articles on that topic. But that specific article? Never existed. The AI had created a highly plausible ghost.
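And when you’re facing 47 citations instead of one, script the first pass. Here’s a rough sketch against Crossref’s public REST API; the query.bibliographic parameter and the response shape come from their docs, but the fuzzy-match threshold is my own guess, so calibrate it on citations you already know are real:

```python
import difflib
import requests

def exists_on_crossref(claimed_title: str, threshold: float = 0.85) -> bool:
    """Layer One, scripted: does anything in Crossref closely match this title?

    The 0.85 threshold is a guess, not a standard; tune it against
    known-real citations before trusting the verdict.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": claimed_title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        for title in item.get("title", []):
            score = difflib.SequenceMatcher(
                None, claimed_title.lower(), title.lower()
            ).ratio()
            if score >= threshold:
                print(f"Possible match ({score:.2f}): {title}")
                print(f"  DOI: {item.get('DOI')}")
                return True
    return False  # No close match: time for a human (ideally a librarian)

# The phantom from the white paper should come back False.
exists_on_crossref(
    "Longitudinal Analysis of Remote Work Productivity in Tech Sectors"
)
```

A no-match isn’t proof of fabrication, since Crossref doesn’t index everything, but it tells you exactly where to start digging by hand.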
Layer Two: The Content Check
So the source exists. Great. Does it actually say what the AI claims it says? This is where things get weird. I’ve seen ChatGPT cite real articles but completely fabricate their findings. It’s like it remembers the article exists but not what it actually argued.
Last month, someone cited a real MIT study about algorithmic bias. The study existed. The authors were real. But the AI claimed it showed algorithms reducing bias by 73%. The actual study? It warned about algorithms amplifying bias. Complete opposite conclusion.
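You can’t script judgment, but you can at least put the paper’s real abstract next to the AI’s claim before you read further. A small sketch, again using the Semantic Scholar Graph API (same caveat: field names as I understand their docs, and the DOI below is a known-real placeholder):

```python
import requests

def real_abstract(doi: str) -> str | None:
    """Layer Two, step one: fetch what the paper actually says about itself."""
    resp = requests.get(
        f"https://api.semanticscholar.org/graph/v1/paper/DOI:{doi}",
        params={"fields": "title,abstract"},
        timeout=10,
    )
    if resp.status_code != 200:
        return None  # Not indexed; fall back to the publisher's page
    data = resp.json()
    print(data.get("title"))
    return data.get("abstract")  # Sometimes missing even for real papers

# Read this side by side with whatever the AI claimed the paper found.
print(real_abstract("10.1038/s41586-020-2649-2"))  # known-real placeholder DOI
```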
Layer Three: The Context Check
This is the librarian special. Even if the source exists and says what’s claimed, is it being used appropriately? Is this a preliminary study being treated as definitive? Is this one contrarian researcher being positioned as consensus? Is this correlation being presented as causation?
The AI doesn’t understand academic weight. It treats a conference paper, a journal article, a blog post, and a Nobel laureate’s research with equal authority if they contain the right keywords.

The Tell-Tale Signs of Hallucinated Facts
Beyond citations, there are patterns to how AI makes things up. Once you see them, you can’t unsee them.
The Percentage Tell
AI loves oddly specific percentages. “Studies show 78.3% of remote workers report higher productivity.” Really? Exactly 78.3%? Not 78%? Not “roughly three-quarters”? Real research rarely lands on such specific decimals unless there’s a methodological reason.
The Timeline Tell
Watch for impossibly recent studies about long-term trends. “A 2024 study tracked workplace changes over the past decade.” Think about that. A study published in 2024 covering ten-year trends would have had to start collecting data in 2014, then spend years conducting the research and more years analyzing and publishing it. The timeline doesn’t work.
The Scope Tell
AI tends to create studies with impossibly broad scope. “Researchers surveyed 50,000 professionals across 15 industries in 30 countries.” That’s a multi-million-dollar study. Those are rare. Most real research is narrower: “We surveyed 247 software developers at three tech companies in Seattle.”
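None of these tells is proof on its own, but they stack, and they’re simple enough to script as a first-pass smell test. The regexes and cutoffs below are my own rules of thumb, nothing standard, so expect false positives and treat every flag as a prompt to look closer:

```python
import re

def smell_test(text: str) -> list[str]:
    """First-pass screen for the three tells. Heuristics, not verdicts:
    every flag is an invitation to look closer, not a conviction."""
    flags = []

    # The percentage tell: decimal-point precision with no methodology in sight.
    for pct in re.findall(r"\b\d{1,3}\.\d+%", text):
        flags.append(f"Oddly specific percentage: {pct}")

    # The timeline tell: a brand-new study claiming decade-scale tracking.
    if re.search(r"202[3-9] study[^.]*\b(decade|ten[- ]year|10[- ]year)", text, re.I):
        flags.append("Timeline check: very recent study, decade-long claim")

    # The scope tell: a sample size that implies a mega-budget project.
    for n in re.findall(
        r"\b(\d{1,3}(?:,\d{3})+)\s+(?:professionals|workers|participants)", text
    ):
        flags.append(f"Huge sample, check who paid for it: n={n}")

    return flags

sample = ("A 2024 study tracked workplace changes over the past decade and "
          "surveyed 50,000 professionals; 78.3% reported higher productivity.")
for flag in smell_test(sample):
    print(flag)
```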
The Wikipedia Paradox
Here’s something that breaks people’s brains: Wikipedia, the source every teacher told you never to cite, is now more reliable than AI for one specific reason. You can check Wikipedia’s citations. They’re right there, numbered, linked. Sometimes they’re broken or outdated, but they’re attempting to point to something real.
AI citations often point to nothing. They’re citations to the idea of a citation. References to the concept of research. Footnotes in a dream.
I’ve started telling people: treat AI citations like Wikipedia. Never cite them directly, but use them as a starting point to find real sources. If ChatGPT claims “research shows,” your job is to find the actual research or acknowledge it might not exist.
What This Means for How We Use AI
The client with the 31 fake citations asked me something interesting: “Should I just stop using AI for research?”
No. That’s like saying you should stop using Wikipedia because anyone can edit it. You just need to understand what you’re working with. AI is brilliant at helping you discover what kinds of research might exist, what terminology researchers use, what debates are happening in a field. It’s terrible at accurately citing specific sources.
Use AI to explore. Use libraries to verify. Use your brain to evaluate.
Next week: Why AI’s absolute confidence in false information might actually be useful. I’ll explain how librarians use confident wrongness as a teaching tool, and how you can use AI’s hallucinations to sharpen your own critical thinking.
Subscribe to Card Catalog for weekly lessons in thinking like a librarian in the age of AI. Because the machines aren’t getting less confident anytime soon.