The Hierarchy of Sources: A Cheat Sheet
A guide to evaluating information sources in the AI age.
If you learned about sources in school, you probably encountered the terms primary, secondary, and tertiary. Primary sources are original materials created at the time of an event or discovery, such as a treaty, a dataset, or an interview transcript. Secondary sources analyze or interpret primary materials, including biographies, documentaries, and most journalism. Tertiary sources compile and synthesize secondary ones: encyclopedias, textbooks, and databases.
That framework is useful for understanding how far removed a source is from the original material. But it doesn’t answer the question that matters most when you’re trying to decide whether to believe something: how much should I trust this?
The hierarchy below is organized around a different axis entirely: reliability and accountability. Where did this information come from, and did anyone face consequences for getting it wrong?
The Hierarchy, from Most to Least Reliable
This ranking orders sources by how much structural accountability stands behind them. Sources at the top have already passed through verification processes, editorial oversight, or review by people with professional consequences for getting things wrong; sources at the bottom haven't. You should verify any claim that matters to you regardless of where it comes from, but this ranking tells you how much independent verification has already occurred before the information reached you.
Primary sources
put you closest to the origin of a claim because they are the original document, data, speech, or study, unmediated by anyone else’s interpretation. But proximity isn’t the same as reliability. Companies can manipulate earnings reports. Politicians can deceive in speeches. Researchers can cherry-pick data. When you consult a primary source, you gain direct access to the material, but the work of interpretation and verification still belongs to you.
Peer-reviewed research
has been evaluated by experts before publication. Peer review catches methodological problems more reliably than it catches fraud or motivated reasoning, and the rigor of review varies considerably. For example, a study published in The Lancet or Nature has passed through a more demanding evaluation process than one accepted by a journal that charges authors a fee and conducts minimal review. Still, peer-reviewed work has at least been through a structured evaluation process that most information never faces.
Investigative journalism
is built on multiple sources, editorial oversight, and fact-checking processes. Quality varies by outlet and reporter, but the structural expectation of accountability exists. When The New York Times or ProPublica publishes an investigation, reporters and editors have jobs and reputations on the line if the facts don’t hold up.
Expert commentary
involves people with relevant credentials interpreting information for broader audiences. Their perspective is shaped by training, institutional affiliations, and funding sources. A cardiologist explaining heart disease research on a podcast offers useful context, but that interpretation reflects their particular clinical experience, the studies they’ve read, and potentially the institutions or companies they’re affiliated with. Expert interpretation is a valuable lens, not a final verdict.
News aggregation
means summarizing or repackaging what others have reported, which places you one additional layer away from the original evidence. A newsletter rounding up the week’s tech news, or a website collecting headlines from multiple outlets, serves a real function in helping people stay informed. But aggregation means someone else has already decided what to include, what to emphasize, and what to leave out.
Social media
consists of individuals sharing claims without editorial oversight. Social posts can surface eyewitness accounts or emerging stories before traditional media picks them up. During breaking events, the first photos and videos often appear on X (formerly Twitter), TikTok, or Instagram before journalists arrive. But the absence of verification means most content requires independent confirmation before you rely on it for anything consequential.
AI-generated content without sources
represents the bottom of this hierarchy. Large language models like ChatGPT, Claude, and Gemini generate statistically plausible sentences assembled from training data, with no built-in verification mechanism. When AI tools cite and link to sources, they function more like aggregation and can serve as a useful starting point for research. When they generate from memory alone, the output should be treated as a draft that requires fact-checking against original sources before you trust it.

What This List Doesn’t Cover
This hierarchy isn’t exhaustive. Several categories of information don’t fit neatly into the ranking because their reliability depends on context, institutional backing, and the specific claim being made.
Government data from agencies like the Bureau of Labor Statistics or the Centers for Disease Control and Prevention is collected through standardized methodologies and subject to public scrutiny. But government data can be influenced by political pressures around what gets measured and how, and collection methodologies sometimes change in ways that make historical comparisons difficult. The way the U.S. measures inflation, for instance, has been revised multiple times since the 1980s, so comparing today’s Consumer Price Index to figures from 1985 requires understanding those methodological shifts.
Think tank reports often contain valuable original research and expert synthesis you won’t find elsewhere. But think tanks exist on an ideological spectrum, and their funding sources shape the questions they ask and the conclusions they emphasize. The Brookings Institution, which leans center-left, and the Heritage Foundation, which is conservative, will approach the same policy question from different starting points, highlight different evidence, and reach different conclusions. This doesn’t make think tank research useless, but it means reading it requires awareness of whose perspective you’re getting.
Legal documents like court filings, legislation, and regulatory guidance carry the weight of legal systems behind them. But legal language is designed for precision in ways that can obscure meaning for non-specialists, and the existence of a law doesn’t tell you how it’s being enforced or interpreted in practice. Reading the text of the Affordable Care Act, for example, won’t tell you how insurance companies are actually implementing its provisions or how courts have ruled on disputed sections.
Corporate press releases and earnings reports provide official statements directly from companies about their activities and financial performance. They’re useful for understanding what a company wants you to know, which is valuable information in itself. But press releases are marketing documents, and even audited financial statements can emphasize strengths while downplaying weaknesses within the bounds of accounting rules. A company might highlight revenue growth while burying information about rising debt in the footnotes.
Wikipedia varies dramatically by topic. Articles on well-trafficked subjects like major historical events or scientific concepts tend to be carefully sourced and patrolled by editors who catch errors quickly, because thousands of people are reading and revising them. Obscure entries on niche topics or local figures may sit uncorrected for years because few people with relevant expertise ever see them. Wikipedia’s strength is its citation apparatus, which lets you trace claims back to their sources rather than taking the encyclopedia’s word for it.
Corporate-funded research presents a particular challenge because financial conflicts of interest can shape findings in ways that aren’t always transparent. Studies funded by pharmaceutical companies are more likely to report favorable results for the sponsor’s drug than independently funded studies of the same medications, not necessarily because researchers are dishonest, but because study design, outcome selection, and publication decisions can all be subtly influenced by who’s paying. This doesn’t mean industry-funded research is automatically wrong, but it means scrutinizing funding sources and looking for independent replication matters more than usual.
How to Identify What You’re Looking At
One of the trickiest parts of evaluating information is that it rarely arrives with a label. Someone shares a screenshot of a study’s conclusion on social media. A news article mentions “research shows” without linking to the original paper. A website publishes what looks like scientific research but is actually a white paper commissioned by a company with a financial stake in the findings.
This checklist offers concrete questions to ask and details to look for when you encounter different types of sources. Each category includes the specific locations where key information tends to appear, what that information tells you, and what its absence might mean.
Research studies
Check the journal's reputation. Search “[journal name] predatory” or “[journal name] legitimate.” If a journal has a reputation for publishing anything for a fee, that will usually surface quickly.
Look for the funding disclosure. Scroll to the end of the paper and look for a section labeled “Funding,” “Acknowledgments,” “Declarations,” or “Conflicts of Interest.” If a pharmaceutical company funded a study about its own drug, that’s worth knowing.
Check who the authors work for. Author affiliations appear near the beginning of the paper, usually right below their names. A study about the safety of a product written by researchers employed by the company that makes it warrants more scrutiny than one written by independent university researchers.
News and journalism
Search “[outlet name] corrections” or “[outlet name] retractions.” News organizations that acknowledge and fix their mistakes are more credible than those that don’t. If an outlet has a history of quietly deleting stories or never issuing corrections, that’s worth knowing.
Search the reporter’s name. A quick search will show you their previous work and whether they have a track record covering this topic. If you can’t find any other articles by them, or they seem to write about wildly different subjects, that’s a red flag.
Check whether it’s labeled as news or opinion. These serve different purposes. News reporting is supposed to present verified facts; opinion pieces present arguments. If you can’t tell which you’re reading, that’s a problem.
Click the links. When an article claims “a study shows” or “research suggests,” check whether it actually links to that research. If there’s no link, the claim is harder to verify.
Think tanks and policy research
Assume there’s an angle. Think tanks exist to advance particular perspectives. This doesn’t make their research useless, but it means you should read it as an argument from a point of view rather than neutral analysis.
Search “[think tank name] funding” or “[think tank name] political leaning.” A quick search will usually surface reporting on who funds the organization and what perspective it represents.
Look for methodology. Rigorous policy research explains how the analysis was conducted. Advocacy dressed as research often presents only conclusions.
Social media and user-generated content
Figure out who’s behind the account. An anonymous account, an influencer with a brand to promote, a journalist sharing their reporting, and a scientist posting about their research all warrant different levels of trust. Check their bio, what else they post about, and whether they have credentials or affiliations you can verify outside the platform.
Follower counts don’t equal accuracy. Popular accounts get things wrong constantly. An influencer with millions of followers can spread misinformation just as easily as a bot account, sometimes more effectively.
Expertise in one area doesn’t necessarily transfer. A physician sharing medical information is more credible than one sharing investment advice. A tech entrepreneur’s opinions on technology carry more weight than their opinions on foreign policy. Pay attention to whether someone is speaking inside or outside their actual area of knowledge.
AI-generated content
Look for verifiable specifics. AI-generated text tends to be fluent but vague. Does it cite sources you can check? Does it include details that would require direct access or firsthand knowledge?
Fluent doesn’t mean accurate. AI can produce polished, confident-sounding text that’s entirely fabricated. Smooth writing isn’t evidence of good information.
When it matters, verify the claim, not the source. Whether content was written by a human or generated by AI, the same question applies: can you confirm this information through sources that have accountability structures? If you can’t, treat it as unverified regardless of where it came from.
The Principle That Cuts Across Everything
One principle applies to every level of this hierarchy: who produced the source matters as much as what type it is.
A primary source from a university research hospital carries different weight than one from a company selling the product being evaluated. A peer-reviewed study in JAMA or The New England Journal of Medicine means something different than one published in a predatory journal that accepts submissions for a fee and provides only cursory review. An investigative report from a newsroom with a track record of accurate, verified journalism differs from one produced by an outlet with a history of retractions or partisan framing.
This is where the concept of institutional credibility becomes essential. Institutions develop reputations over time based on their track record of accuracy, their correction practices when they get things wrong, their transparency about methods and funding, and the professional standards they enforce. The Washington Post has published corrections and retractions when errors are discovered, which is actually a sign of a functioning accountability system rather than a mark against the organization. A publication that never issues corrections either never makes mistakes (unlikely) or doesn’t have processes for catching them (concerning).
Funding sources and financial relationships also matter in ways that can be subtle. A nutrition study funded by a food company isn’t automatically invalid, but the financial relationship creates incentives that should make you look more carefully at the methodology and conclusions. A policy analysis from a think tank that receives funding from industries affected by the policy being analyzed warrants extra scrutiny. A glowing product review from someone who received the product for free, or who earns affiliate commissions on sales, exists in a different context than one from a reviewer who purchased the item independently.
Professional credentials and relevant expertise add another layer. A climate scientist publishing in their field of specialization speaks with different authority than that same scientist opining on economic policy. A physician’s medical advice carries more weight than their investment recommendations. Expertise is domain-specific, and people with impressive credentials in one area can be completely wrong about topics outside their training.
Source type tells you something about how the information was produced: whether it went through peer review, editorial oversight, or fact-checking, or whether it flowed directly from someone’s mind to your screen without any intermediate verification. Source credibility tells you something about the incentives behind it: who funded the work, what the producer stood to gain from particular conclusions, and what accountability structures exist if the information turns out to be wrong. Evaluating information well requires attention to both dimensions, and developing a sense for when each one matters more.
Why This Matters Now
The information environment you navigate today bears little resemblance to the one that existed a few decades ago. In the 1990s, getting your ideas in front of a large audience usually required convincing a publisher, a newspaper editor, a television producer, or some other institutional gatekeeper that your content was worth distributing. Those gatekeepers had biases and blind spots, and plenty of valuable perspectives were excluded from mainstream channels. But the friction involved in publishing meant that most widely circulated information had passed through at least one filter, however imperfect.
That friction is gone. Anyone with an internet connection can now publish anything, to a potential audience of billions, at essentially zero cost. A teenager in their bedroom can create a website that looks as professional as a major news organization. A coordinated network can flood social media with fabricated stories faster than fact-checkers can respond. AI tools can generate plausible-sounding text, realistic images, and convincing audio and video at a scale that would have seemed like science fiction a decade ago.
Meanwhile, the platforms that distribute this information have built their business models around engagement, which means their algorithms actively promote content that generates strong emotional reactions regardless of whether that content is accurate. Research has shown that false news stories tend to spread faster and reach more people than accurate ones, in part because fabricated content is often designed to provoke outrage or surprise. The economic incentives of the attention economy are misaligned with the goal of keeping people well-informed.
The volume of information available has also exploded beyond any individual’s capacity to process it. No one can read it all, which means everyone relies on filters and shortcuts to decide what deserves attention. Those filters used to be librarians, editors, and curators with professional training in evaluating information. Now they’re increasingly algorithms optimized for engagement, social networks where popularity substitutes for accuracy, and AI assistants that can present fabrications with the same confidence as facts.
This is the world where information literacy matters more than ever.
But acquiring that skill doesn’t mean memorizing a hierarchy of sources or following a rigid checklist. The deeper capability is developing the habit of asking better questions every time you encounter a claim that might influence what you believe or how you act.
Four questions, in particular, can help you evaluate almost any source.
Who created this?
Knowing whether the source is an individual, an institution, a corporation, or an algorithm changes how you evaluate its reliability. An anonymous social media account making claims about a breaking news event exists in a different credibility context than a reporter with a byline and an editor.
What was their process?
Did someone fact-check this? Did experts review it? Did an editor push back on unsupported claims? Or did it go straight from someone’s head to your screen with no intermediate verification? The presence or absence of accountability structures matters enormously.
What did they stand to gain or lose?
Financial incentives, political motivations, professional reputation, and social approval all shape what people publish and how they frame it. A product review from someone with no financial stake differs from one written by someone earning affiliate commissions. A policy analysis from a researcher whose career depends on a particular conclusion differs from one produced by someone with no skin in the game.
Where can I find corroboration?
A claim that appears in multiple independent sources with different institutional backing is more likely to be accurate than one that appears in a single place or only in sources that cite each other. If five different news organizations are all citing the same original report, that’s one source, not five, no matter how many times the story gets repeated.
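The "five outlets citing one report is still one source" idea can be sketched as a toy script. Everything here is hypothetical: the outlet names, the `cites` field, and the data are invented purely to illustrate grouping articles by the original report they trace back to.

```python
# Toy illustration (hypothetical data): counting *independent* sources.
# Several articles that all cite the same original report collapse to
# one independent source; corroboration requires distinct origins.
from collections import defaultdict

def independent_sources(articles):
    """Group articles by the original report they trace back to
    and return the number of distinct origins."""
    by_origin = defaultdict(list)
    for article in articles:
        by_origin[article["cites"]].append(article["outlet"])
    return len(by_origin)

articles = [
    {"outlet": "Outlet A", "cites": "wire-report-1"},
    {"outlet": "Outlet B", "cites": "wire-report-1"},
    {"outlet": "Outlet C", "cites": "wire-report-1"},
    {"outlet": "Outlet D", "cites": "original-investigation-2"},
]

print(independent_sources(articles))  # four outlets, but only 2 origins
```

In practice the hard part is the lookup itself: finding which original report an article ultimately rests on usually means clicking through its links, which no script can do for you.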
These questions won’t guarantee you arrive at the truth. Information literacy isn’t about achieving certainty because certainty is often impossible when claims are contested, evidence is incomplete, or experts disagree. What these questions give you is better odds: a higher probability of recognizing when something deserves your trust and when it requires further scrutiny before you let it shape your beliefs or your actions.
In a world where information is infinite and your attention is the scarce resource, the ability to evaluate sources quickly and accurately is the skill that determines whether you’re navigating the landscape or being navigated by it. The algorithms, the advertisers, the propagandists, and the AI systems don’t need you to be a passive recipient of whatever content they push in front of you. They just need you to not ask questions.
Your questions are the only thing standing between you and whatever they want you to believe.