The Verification Ladder: A Framework for Evaluating Video When You Can’t Tell What’s Real
"Is this real?" isn't a straightforward question anymore. What to ask instead, and why the source often matters more than the footage itself.
A video lands in a group chat: a celebrity saying something wildly out of character. The first instinct is to react. The second, increasingly, is to wonder: wait, is this even real?
The mouth movements look right. The voice sounds right. The lighting stays consistent across the frame. And still, there’s no way to know for certain just by watching.
This is the new default. And if you’ve felt a growing unease about it, a sense that the ground has shifted beneath your feet in ways you weren’t prepared for, you’re responding appropriately to the situation.
What Changed, and When
For most of the history of recorded media, a photograph or video served as evidence. It wasn’t perfect evidence. Photos could be staged, footage could be edited, context could be stripped away. But the underlying assumption held: if you could see it, someone had pointed a camera at something that existed in the world.
That assumption began to erode in late 2017, when a Reddit user posted the first widely circulated “deepfakes,” videos using AI to swap celebrities’ faces onto other bodies. The term combines “deep learning” (a type of artificial intelligence) with “fake.” The underlying technology, generative adversarial networks (GANs), had been introduced by researcher Ian Goodfellow and his colleagues in 2014, but 2017 was the moment it became accessible to the public.
Early deepfakes were crude, obviously artificial, relatively easy to identify. They also required significant time and technical knowledge to produce. Each of those barriers has fallen. The technology has improved substantially. The cost has dropped to near-zero for basic applications. The time required has shrunk from days to minutes. And the scope has expanded beyond face-swapping to include voice cloning, full-body generation, and the creation of entirely synthetic scenes.
The result is a world where “seeing is believing” no longer functions as a reliable heuristic. Video evidence is no longer self-verifying.
The Emotional Reality
Before getting to practical frameworks, it’s worth pausing on what this actually feels like.
If you’ve watched something and genuinely couldn’t tell whether it was real, you’ve experienced a particular kind of disorientation. It’s different from being lied to in text, where you’re evaluating claims. It’s different from seeing an obvious special effect in a movie, where your brain categorizes it as fiction. It’s a specific unsettling sensation: your eyes are telling you one thing, your reasoning mind is uncertain, and there’s no immediate way to resolve the conflict.
This disorientation is unpleasant enough in isolated incidents. When it becomes chronic, when every striking piece of video triggers it, the psychological toll accumulates. Some people respond by becoming hyper-skeptical, doubting everything, which has its own costs. Others respond by giving up on verification entirely and just going with their gut, which makes them easier to manipulate. Both responses are understandable. Both are also what bad actors are counting on.
There’s also the social dimension. You may have shared something that turned out to be fake. You may have had an argument with someone who believed something you were certain was fabricated. You may have watched an older relative or a young person in your life get taken in by synthetic media and felt helpless to explain why their confidence was misplaced.
These experiences are genuinely difficult. They strain relationships. They make people feel foolish. They erode the shared foundation of agreed-upon reality that makes conversation possible. The framework that follows won’t eliminate that difficulty, but it offers a structured way to think through the problem rather than being overwhelmed by it.

The Verification Ladder
“Is this real?” used to be a yes-or-no question. Either someone filmed it or they didn’t. Either it happened or it was staged. The binary was simple, even when the answer was complicated.
That binary has shattered into something more like a spectrum, which I’m calling the verification ladder. Each rung represents a different relationship between what appears on screen and what actually happened in the world. Each rung leaves different traces. Each rung requires different verification questions.
Knowing which rung you’re likely standing on determines which questions are worth asking. This matters because asking the wrong questions wastes time and energy, while asking the right questions can sometimes resolve uncertainty quickly.
Rung 1: Authentic Footage
What it is: Real video of a real event, unaltered. Someone pressed record, something happened, and the resulting file hasn’t been modified beyond basic compression for upload.
This is what video was, until recently. And a large portion of video still is authentic footage. The existence of fake media doesn’t mean all media is synthetic. But authentic footage now exists on a continuum with manipulated footage in ways that require active verification rather than passive assumption.
What to ask:
About the source:
Who filmed this? Is the original videographer identifiable?
Where did this footage first appear? Can you trace it back to its earliest known upload?
Does the original source have a track record? Is it a verified account, a known journalist, an established news organization, or an anonymous channel with no history?
Has the footage been stripped of metadata and reuploaded multiple times, obscuring its origin? (A quick way to see what metadata a file still carries is sketched just after these questions.)
About the event:
If this depicts a public event, are there other angles, other witnesses, other documentation?
Does the timing claimed for the footage match other records of what was happening at that time and place?
Are the physical details (weather, lighting, location landmarks) consistent with when and where this supposedly occurred?
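If you’re comfortable running a small script, the metadata question above can be probed directly. This is a minimal sketch, not a definitive tool: it assumes FFmpeg’s ffprobe utility is installed and that you’ve saved a local copy of the video (the filename clip.mp4 is a placeholder). Copies that have been downloaded and reuploaded across several platforms often carry almost no tags, while an original export may retain a creation time or device name. Missing metadata proves nothing by itself, but it’s one more data point about provenance.

```python
# Minimal sketch: dump whatever container metadata a video file still carries.
# Assumes ffprobe (part of FFmpeg) is installed and on the PATH; "clip.mp4" is
# a placeholder filename. Stripped reuploads often show almost no tags at all.
import json
import subprocess

def container_metadata(path: str) -> dict:
    """Return format-level and stream-level metadata tags reported by ffprobe."""
    result = subprocess.run(
        [
            "ffprobe", "-v", "quiet",
            "-print_format", "json",
            "-show_format", "-show_streams",
            path,
        ],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    return {
        "format_tags": info.get("format", {}).get("tags", {}),
        "stream_tags": [s.get("tags", {}) for s in info.get("streams", [])],
    }

if __name__ == "__main__":
    print(json.dumps(container_metadata("clip.mp4"), indent=2))
```

Dedicated tools such as exiftool dig deeper, and none of this replaces tracing the earliest upload, but it takes a minute and occasionally settles the question.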
How to recognize it:
Authentic footage of real events generates corroborating evidence. Other people were there. Other cameras were rolling. News organizations reported on it. The event exists in multiple independent records. When a video claims to show something significant and yet exists in complete isolation, that absence of corroboration is worth noting.
Common manipulation at this rung:
Even authentic footage can be weaponized through false labeling. A real video from one country gets described as being from another. A real protest from 2019 gets presented as happening today. The footage is genuine; the context supplied by the person sharing it is not. This is why source-tracing matters even when the footage itself is unaltered.
Rung 2: Real But Edited
What it is: Genuine footage that has been cut, recontextualized, slowed down, sped up, reversed, or spliced with other real footage to change its apparent meaning. The raw material is authentic, but the assembly is deceptive.
This is the oldest form of video manipulation, far predating AI. It remains among the most common forms of video misinformation, and in some ways the most effective, because it doesn’t require any sophisticated technology. Anyone with basic editing software can do it. The footage passes technical authenticity checks because it is technically authentic. The deception happens at the level of curation and framing.
Examples of how this works:
Selective editing: A thirty-minute speech is cut down to a fifteen-second clip that reverses the speaker’s meaning. The word “don’t” gets edited out of “I don’t support this policy.” A politician’s sarcastic statement gets presented as sincere.
False juxtaposition: Two real clips are edited together to suggest they’re related when they’re not. A crowd shot from one event gets paired with audio from another. A reaction shot of someone laughing is placed after something they weren’t actually responding to.
Manipulated timing: Footage is slowed down to make someone appear impaired. Footage is sped up to make a crowd appear more frenzied. A pause is edited out to make someone seem to say two things consecutively that they actually said hours apart.
Stripped context: A video shows someone throwing a punch without showing the preceding minutes of them being attacked. A clip shows someone shouting without showing what they’re shouting at. The beginning and end are cut precisely to create a misleading impression.
What to ask:
What happened immediately before and after this clip? Is there a longer version available?
Who originally published this, and who edited it down to this form?
What emotional reaction does this clip seem designed to provoke? Does that reaction benefit someone?
If this involves a public figure, have they or their representatives provided the full context?
Does the edit feel “too perfect”? Does it land exactly on the inflammatory moment in a way that seems engineered?
How to recognize it:
Contextual gaps. Jump cuts that feel abrupt. Audio that doesn’t quite match the ambient sound of the environment. A sense that something is missing, because something usually is. Also: a suspicious precision in what the clip includes and excludes, as if someone went through a longer piece of footage and extracted exactly the portion that supports a particular narrative.
Why this rung matters:
Edited footage can be more effective than synthetic footage because it carries the weight of authenticity. When someone points out that a deepfake is fake, the response is usually acceptance. When someone points out that a selectively edited clip is misleading, the response is often “but it’s real footage.” The footage being real obscures the fact that the meaning has been manufactured through curation.
Rung 3: Real Person, Fake Elements
What it is: Actual footage of a real, identifiable person, but with AI-generated voice, altered words, or manipulated facial expressions layered on top. The person exists. Some underlying footage of them may have been used as raw material. But what they appear to say or do in this specific video has been fabricated.
This is where deepfakes live, along with their increasingly sophisticated descendants. The technology works by training AI models on existing footage and audio of a person, then using those models to generate new content that mimics their appearance and voice. The more footage of someone that exists publicly (celebrities, politicians, executives, anyone who has given speeches or interviews on camera), the easier they are to fake convincingly.
The main variants:
Face-swapping: One person’s face is mapped onto another person’s body. Early deepfakes primarily used this technique. The body language, movements, and setting might be from a real video of someone else, but the face has been replaced.
Lip-sync manipulation: Real footage of a person is altered so their mouth movements match different audio. The person may have actually been in that room, wearing those clothes, at that time, but the words coming out of their mouth have been changed.
Voice cloning: AI generates speech that mimics a specific person’s vocal patterns, cadence, accent, and emotional range. This synthetic audio can be paired with authentic video, manipulated video, or entirely generated video.
Expression manipulation: Subtle changes to facial expressions shift the emotional tone of real footage. A neutral expression becomes a smirk. A serious statement is delivered with an eye-roll that wasn’t there originally.
What to ask:
About verification:
Does a verified version of this moment exist elsewhere? If a public figure supposedly said something, is it documented in official transcripts, multiple news sources, or their own verified channels?
Has the person or their representatives addressed this specific video? (High-profile individuals are increasingly issuing explicit denials of synthetic content.)
When and where did this supposedly occur? Can the event itself be verified independent of this footage?
About technical indicators (though these are becoming less reliable):
When you slow the video down, do the lip movements match the audio precisely, especially on consonants like “b,” “p,” and “m” that require specific mouth shapes?
Do facial micro-movements (small expressions between words, subtle shifts in eye focus) look fluid or slightly mechanical?
Is there an unusual “smoothness” to the skin that looks more like a video game character than a real person?
Around the edges of the face (hairline, ears, jaw), do things blur or distort inconsistently, especially when the head moves?
About context:
Is this person saying something dramatically out of character? (This doesn’t prove it’s fake, but it raises the prior probability.)
Who benefits from this video existing and spreading?
Is it being shared by sources with a track record of reliability, or is it circulating primarily through channels with no accountability?
How to recognize it:
Audio-visual desynchronization becomes apparent when slowed down. Lighting on the face sometimes fails to match lighting in the environment. Earrings, glasses, or hair near the face may blur or distort in ways that real objects don’t. Reflections in eyes may not match what should be reflected given the scene. And there’s often an uncanny quality that’s easier to sense than to specify: something feels slightly wrong, even if you can’t immediately articulate what.
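Slowing a clip down for these checks doesn’t require specialist software. Here’s a rough sketch, assuming FFmpeg is installed and the clip has been saved locally (clip.mp4 is a placeholder name): the first command writes a half-speed copy with correspondingly slowed audio so you can study lip sync on those consonants, and the second dumps a couple of still frames per second so you can examine hairlines, ears, and reflections without motion hiding them.

```python
# Rough sketch: make a half-speed copy of a clip and dump still frames for
# close inspection. Assumes FFmpeg is installed; "clip.mp4" is a placeholder.
import pathlib
import subprocess

SOURCE = "clip.mp4"

# Half-speed video (setpts doubles each frame's timestamp) with matching
# half-speed audio (atempo=0.5), so lip movements and audio stay aligned.
subprocess.run(
    [
        "ffmpeg", "-i", SOURCE,
        "-filter_complex", "[0:v]setpts=2.0*PTS[v];[0:a]atempo=0.5[a]",
        "-map", "[v]", "-map", "[a]",
        "slow.mp4",
    ],
    check=True,
)

# Dump two frames per second as PNGs for frame-by-frame examination of
# hairlines, ears, jewelry, and reflections.
pathlib.Path("frames").mkdir(exist_ok=True)
subprocess.run(
    ["ffmpeg", "-i", SOURCE, "-vf", "fps=2", "frames/frame_%04d.png"],
    check=True,
)
```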
The limitation of technical detection:
According to a 2024 meta-analysis of 56 studies involving over 86,000 participants, human accuracy at detecting high-quality deepfakes hovers around 55%, which is not significantly better than chance. Detection tools perform better in controlled settings (the best commercial detectors achieve around 78% accuracy), but research shows that both commercial and open-source detectors can lose up to half their accuracy when tested on “in the wild” deepfakes that weren’t in their training data.
Rung 4: Generated From Scratch
What it is: Entirely AI-created content. The person depicted may not exist at all. The voice may be synthesized without reference to any specific real person. The setting, the event, the entire scene: none of it happened. There is no underlying footage being distorted. The whole thing is synthetic from the ground up.
This category is evolving faster than any other. In 2022, AI-generated video was obviously artificial, useful mainly for curiosity and experimentation. By 2024, tools like Sora, Runway, and others had produced footage capable of fooling casual viewers in short clips. The trajectory continues.
Current AI video generation can produce footage that looks plausible at first glance: talking heads in generic settings, short clips of people walking or gesturing, stock-footage-style scenes of cities, nature, crowds. The technology still struggles with complex physics, extended sequences, and fine details, but “struggles” is relative on a rapidly changing curve.
What to ask:
About the person:
Does this person verifiably exist? Can you find other footage of them from different sources, different contexts, different time periods?
If they claim to be someone specific (a whistleblower, an eyewitness, an expert), can you verify their identity through channels they don’t control?
Do reverse image searches turn up this face attached to different names, different claims, different contexts? (AI-generated faces get reused.)
About physical details:
Do hands look correct? (AI generation has historically produced hands with the wrong number of fingers, fingers that bend incorrectly, or fingers that fuse together. This is improving but remains worth checking.)
Do teeth remain consistent throughout the video, or do they shift in number or arrangement between frames?
Does text in the background (signs, screens, book spines) resolve into readable words, or does it look like language without actually being language?
Does jewelry (earrings, necklaces, watches) appear and disappear or change between frames?
Are there small objects in the scene (pens, cups, buttons) that warp or morph as the camera moves?
About movement and physics:
Is the camera movement too smooth and perfect, lacking the small imperfections of handheld footage or even stabilized footage?
Do objects in the background stay consistent, or do they shift, multiply, or disappear?
If there’s liquid (water, coffee, wine), does it move like real liquid?
If there are reflections (mirrors, windows, water), do they reflect what they should be reflecting given the scene?
Does fabric (clothing, curtains) move in ways that feel physically plausible?
About context:
Where did this first appear? What’s the original source?
Is this video evidence for an extraordinary claim? (The more extraordinary the claim, the more verification it requires.)
Who benefits from this video existing and being believed?
How to recognize it:
Generated video tends to struggle with consistency across frames. Look at the periphery of the scene, not the center where effort was concentrated. Objects may morph. Backgrounds may shift. Physics may feel subtly wrong in ways that are easier to sense than to articulate. The small details that real videographers wouldn’t think about because they’re just capturing reality become tells when AI has to make choices about them.
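No short script reliably flags generated video, but if you want a concrete way to look at that periphery, the following is an illustrative heuristic only (it assumes the OpenCV and NumPy libraries are installed, and clip.mp4 is again a placeholder): it measures how much the border region of the image changes from one frame to the next. Sudden spikes in an otherwise steady shot are an invitation to look at those frames with your own eyes, not evidence of anything on their own.

```python
# Illustrative heuristic only: measure frame-to-frame change along the border
# of the image, where generated video sometimes drifts or morphs. Assumes
# OpenCV (cv2) and NumPy are installed; "clip.mp4" is a placeholder filename.
import cv2
import numpy as np

def border_change_scores(path: str, border_frac: float = 0.15) -> list[float]:
    """Mean absolute difference between consecutive frames, border region only."""
    cap = cv2.VideoCapture(path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h, w = gray.shape
        bh, bw = int(h * border_frac), int(w * border_frac)
        mask = np.ones_like(gray, dtype=bool)
        mask[bh:h - bh, bw:w - bw] = False  # keep only the periphery
        if prev is not None:
            diff = cv2.absdiff(gray, prev)
            scores.append(float(diff[mask].mean()))
        prev = gray
    cap.release()
    return scores

if __name__ == "__main__":
    scores = border_change_scores("clip.mp4")
    # Print the frames with the largest peripheral jumps for manual review.
    for idx in sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:5]:
        print(f"frame {idx + 1}: border change {scores[idx]:.1f}")
```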
The trajectory problem:
The specific artifacts mentioned here may be outdated within months. AI generation is actively trained to eliminate the artifacts that humans learn to spot. The progression from “obviously fake” to “mostly convincing” to “difficult to distinguish” has happened faster than anticipated. Verification strategies that rely primarily on spotting visual artifacts are fighting a losing battle. This doesn’t mean it’s not worth noting them when they occur, but it does mean building a verification practice primarily around artifact-spotting is building on an unstable foundation.
Why a Ladder
The metaphor is deliberate.
A ladder implies climbing: effort, deliberation, one rung at a time. The instinct when encountering striking video is to react at ground level. See, feel, share. The ladder serves as a reminder that verification is an ascent. It takes work to move from gut reaction to informed assessment. That work is often worth doing, especially when the emotional stakes are high.
A ladder also implies that you can stop climbing. You can pause on a rung. You can decide that “I don’t know whether this is real, and I’m choosing not to share it until I do” is the most honest position available. That’s a legitimate place to stand. It’s often the right place to stand.
And a ladder implies that different content sits at different heights. A clip that’s been selectively edited requires different questions than a fully generated scene. Conflating all types of video manipulation into one category of “fake” makes the problem feel more impossible than it is. Some fakes are easier to verify than others. Some questions resolve quickly. Knowing which rung you’re on helps you know what’s worth trying.
The Source Question: What Often Matters More Than the Footage Itself
There’s a persistent hope that careful enough viewing will reveal whether something is real. That visual evidence is self-verifying. That the truth is visible if you just look hard enough.
This hope is increasingly misplaced.
The more sophisticated synthetic media becomes, the less reliable direct visual examination is for people without specialized forensic tools. And even specialized tools are struggling to keep pace. According to research published in 2024, even the best detection tools can lose up to half their accuracy when confronted with new deepfakes they weren’t trained on.
But there’s a different set of questions that remains powerful: questions about the source and context of the video rather than its pixels.
Provenance: Where did this video first appear?
This is often the most useful question you can ask. A video that first appeared on a verified account of a major news organization with editorial standards and legal accountability carries different weight than a video that first appeared on an anonymous account on a platform with no content moderation. This isn’t because news organizations are always right. It’s because they face consequences for being wrong, which creates incentives for verification that anonymous accounts don’t have.
Tracing a video back to its origin can be difficult when it’s been downloaded, re-uploaded, and passed through multiple platforms. But the difficulty of tracing it is itself information. Content that has been deliberately stripped of context and laundered through multiple anonymous channels is more likely to be deceptive than content with a clear chain of custody.
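One small, concrete check along the way: are two circulating copies literally the same file? A minimal sketch, assuming you’ve downloaded both copies locally (the filenames are placeholders): matching SHA-256 hashes mean the copies are bit-for-bit identical, while differing hashes tell you little, because every re-encode or platform recompression changes the hash even when the content is unchanged.

```python
# Minimal sketch: check whether two downloaded copies of a video are the exact
# same file. Filenames are placeholders. Matching hashes = bit-identical copies;
# differing hashes are inconclusive, because re-encoding always changes the hash.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read 1 MiB at a time
            digest.update(chunk)
    return digest.hexdigest()

copy_a = sha256_of("copy_from_platform_a.mp4")
copy_b = sha256_of("copy_from_platform_b.mp4")
print("identical files" if copy_a == copy_b else "different files (or re-encoded)")
```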
Motivation: Who benefits if this spreads?
This question sounds cynical, but it’s practical. Synthetic and manipulated media doesn’t create itself and distribute itself. Someone made it. Someone put it into circulation. Someone is amplifying it. Why?
Sometimes the answer is financial (engagement, ad revenue, clicks). Sometimes it’s political (discrediting an opponent, shifting public opinion). Sometimes it’s personal (revenge, harassment). Sometimes it’s chaotic (some people want to see what happens). The motivation doesn’t tell you whether the content is real, but it tells you something about what kind of scrutiny it deserves.
Pay particular attention to content that seems engineered to provoke a specific emotional reaction: outrage, fear, disgust, vindication. Content that produces strong, immediate emotional responses spreads faster than content that produces contemplation. A 2018 study published in Science analyzed 126,000 news stories shared on Twitter and found that false news reached 1,500 people about six times faster than accurate news, and was 70% more likely to be retweeted. The researchers attributed this partly to novelty (false claims are often more surprising) and partly to emotional charge (false news inspired more fear and disgust). People who want content to spread before verification know this.
Corroboration: Is anyone else reporting this?
Authentic events, especially significant public events, generate multiple records. Different witnesses. Different cameras. Different angles. News organizations that send their own reporters rather than just aggregating social media. Official statements and responses. The event sends ripples outward that can be detected independently.
Fabricated events tend to have a single point of origin. One video, shared over and over, with no independent confirmation despite the significance of what it supposedly shows. The more important an event would be if true, the more suspicious it should be if only one source is documenting it.
This doesn’t apply to everything. Some real events genuinely are captured by only one camera. Some situations don’t allow for multiple angles or independent verification. But the question is still worth asking: if this happened, who else would have seen it? Who else would be reporting it? Where is the corroboration that would naturally exist if this were real?
Accountability: What consequences would the source face for being wrong?
This is the underlying logic that ties the other questions together. People and institutions that face significant consequences for publishing false information have strong incentives to verify before publishing. People and institutions that face no consequences have no such incentive.
Anonymous accounts face no consequences. Partisan outlets that exist to confirm their audience’s beliefs face few consequences (their audience may reward them for being wrong in the “right” direction). Major news organizations with reputations to protect, legal exposure to defamation claims, and professional standards to uphold face significant consequences. Individual public figures who stake their credibility on specific claims face consequences.
This doesn’t create a perfect hierarchy. Accountable sources make mistakes. Unaccountable sources sometimes tell the truth. But when you’re trying to assess video you can’t verify directly, the accountability of the source is one of the best proxies available.
The Social Dimension: What to Do When Someone You Know Shares Something Fake
Verification doesn’t happen in isolation. It happens in the context of relationships, group chats, family dinners, social media feeds. And one of the hardest moments isn’t verifying something for yourself, but navigating the social situation when someone you know shares something that’s fake.
This is difficult terrain. Correcting people can feel condescending. It can damage relationships. It can make you look like the person who kills the vibe. And sometimes it doesn’t work: people double down when corrected, especially if the correction feels like an attack on their judgment.
Some principles that tend to help:
Lead with questions rather than corrections.
“Where did you see this originally?” and “Have you seen anyone else reporting this?” open conversation better than “This is fake.” Questions invite the other person to participate in verification rather than defend against accusation.
Focus on the content, not the person.
“This video might be manipulated” is easier to hear than “You got fooled.” Separating the assessment of the content from the assessment of the person’s judgment preserves dignity and keeps the conversation about the actual question.
Share your uncertainty rather than performing certainty.
“I’m not sure this is real, and here’s what made me wonder” is often more persuasive than a definitive pronouncement. It invites engagement rather than triggering defensiveness.
Accept that you won’t always succeed.
Some people are emotionally committed to certain content being real because it confirms what they want to believe. Some people will feel embarrassed and respond with hostility. Some relationships can’t absorb this kind of disagreement. You can control how you approach the conversation. You can’t control how it lands.
Consider the stakes.
Not every fake is worth addressing. If someone shares a manipulated video of a celebrity doing something funny and obviously didn’t realize it was synthetic, maybe that’s not the moment for a media literacy intervention. If someone shares a fake video that’s shaping their political views or medical decisions, the stakes are higher. Calibrate your response to what’s actually at stake.
What to Do When You’ve Already Shared Something Fake
This happens. It happens to careful people. It happens to people who know about media manipulation. And it will probably happen to you at some point, if it hasn’t already.
When you realize you’ve shared something fake, the instinct is often to quietly delete it and hope no one noticed. This is understandable but not ideal. A correction that’s visible to the same audience that saw the original does more to stop the spread than a silent deletion.
Something like: “I shared this earlier, and I’ve since learned it appears to be [manipulated/synthetic/taken out of context]. Here’s what I found: [brief explanation]. I’m leaving this correction up so anyone who saw the original sees this too.”
This is uncomfortable; it requires admitting you made a mistake. But it accomplishes several things: it stops the chain of sharing that you started, it models the behavior you want to see from others, and it demonstrates that being wrong is recoverable. The willingness to correct publicly makes you more credible over time, not less.
Building the Habit
The verification ladder isn’t about becoming an expert. It’s about creating a pause between seeing and believing. A moment of friction where assessment becomes habit.
When you encounter video that prompts a strong reaction, especially video that’s being shared with urgency, you can ask:
What rung am I likely on?
What questions does this rung require?
Do I have enough information to answer them?
If the answer is no, what would I need to find out?
Am I willing to wait until I know more before sharing?
The ladder is friction by design. Deliberate, necessary friction that slows the reflex to share.
You don’t have to complete a full investigation every time you see a video, and you don’t have to become a forensic analyst. You just have to build the habit of pausing, of asking which rung, of recognizing when you don’t know and being willing to sit with that uncertainty rather than resolving it prematurely through sharing.
The Deeper Problem: Living With Uncertainty
Beneath all the specific techniques is a harder challenge: learning to tolerate not knowing.
The verification ladder can help you figure out what’s real in many cases. But there will also be cases where you do everything right and still can’t be certain. Cases where the provenance is murky, the stakes are high, and the video could plausibly be authentic or synthetic. Cases where the truth matters and the truth isn’t available.
This is genuinely uncomfortable. Humans tend to prefer resolution over ambiguity. We want to know. We want to be able to say “this is real” or “this is fake” and move on.
The discomfort of uncertainty is exactly what manipulators exploit. They count on people resolving uncertainty in whichever direction is emotionally satisfying. They count on impatience, on the desire for closure, on the way “I don’t know” feels like failure.
Learning to say “I don’t know, and I’m going to wait” is a skill. It doesn’t feel good. It doesn’t provide the satisfaction of certainty. But it’s often the most honest position available, and honesty about your own state of knowledge is the foundation that everything else is built on.
On Verification Fatigue
This is exhausting. The need to question everything is exhausting. The feeling that every piece of video requires forensic analysis is exhausting. Living in a world where “seeing is believing” no longer applies is genuinely, legitimately exhausting.
That exhaustion is worth naming because it’s part of the problem.
When verification feels impossible, people stop trying. They default to believing everything or believing nothing, and both of those responses serve the interests of people creating fakes. Believing everything means you’re easy to manipulate. Believing nothing means you’ve surrendered the possibility of shared truth, which makes collective action and collective sense-making impossible. Both outcomes benefit people who want to operate without accountability.
The exhaustion is real. Acknowledging it is important. But the response to exhaustion matters.
The verification ladder isn’t about being right every time. It’s about staying in the practice of asking. Staying skeptical enough to pause. Staying open enough to update when better information arrives. Staying in the process even when the process is tiring, because the alternative is ceding the territory entirely.
This is information literacy for the conditions that actually exist. The conditions are difficult. They require more of us than earlier conditions required. And they’re not optional.
A Note for Parents, Teachers, and Anyone Guiding Young People
Young people are sometimes assumed to be more media-literate because they’ve grown up with digital technology. Research suggests this assumption is often incorrect. Familiarity with technology is not the same as critical thinking about technology. Facility with platforms is not the same as skepticism about content.
If you’re in a position to guide young people’s media consumption, the verification ladder can be a starting point for conversation. You can watch videos together and talk through which rung they seem to occupy. You can practice asking the questions out loud. You can model the willingness to say “I don’t know” and the patience to wait for more information.
The goal isn’t to make young people cynical. It’s to make them appropriately skeptical: willing to ask questions, aware that what they see isn’t necessarily what happened, capable of withholding judgment when the evidence is insufficient.
This is a life skill now. It will only become more important.
The Questions That Transfer
The specific technologies will continue to evolve. The particular artifacts that betray fake media today will be corrected in the next generation of tools. The arms race between generation and detection shows no signs of ending.
But the underlying questions remain stable:
Where did this come from?
Who benefits from its spread?
What would exist if this were true?
What accountability does the source have?
What am I uncertain about, and am I willing to sit with that uncertainty?
They’re the same questions librarians have asked about every information format that’s ever existed: manuscripts, newspapers, photographs, broadcasts, websites. The format changes; the questions don’t. Information literacy is not a response to AI; it’s the foundation that makes navigating AI possible.
The verification ladder is a framework for applying these questions systematically, with awareness of the different types of manipulation that exist. But the framework matters less than the habit. The habit of pausing. The habit of asking. The habit of distinguishing between “I saw it” and “I know it’s real.”
That habit is what turns you from audience into participant, from passive scroller into someone who decides for themselves. It takes work, and it’s also how we protect the shared reality that lets us understand each other at all.
Coming soon: Your Information Diet Is Making You Sick. You are what you consume, and right now most of us are consuming information the way we used to consume fast food in the 90s: constantly, mindlessly, and with no idea what it’s doing to us.



