The Dark Pattern Directory: 14 Manipulation Tactics Built into the Digital World
What they are, how they work, and what to do about them.
Have you ever signed up for something in thirty seconds and spent twenty minutes trying to cancel it? Watched one video and surfaced an hour later with no memory of deciding to keep going? Clicked “Accept All” on a cookie banner because finding the alternative required eleven more steps? These aren’t bugs or oversights. They’re features — built deliberately by product and design teams, tested against millions of users, and kept in place because the data showed they increased revenue.
Being able to name what’s happening is one of the most underrated forms of literacy we can develop. Information literacy — the ability to recognize how content, interfaces, and systems are shaped to influence our behavior — is usually framed around headlines and misinformation. But it applies just as directly to the design of the apps and platforms we use every day. When we can identify the specific technique being used on us, we stop experiencing it as vague, ambient pressure and start seeing it clearly — as a technique with a name, a mechanism, and (most importantly) a countermove.
UX designer Harry Brignull coined the term "dark patterns" in 2010 with the specific goal of naming and shaming deceptive user interfaces — a phrase he has since updated to "deceptive patterns," though both terms appear in legal and regulatory contexts today. Researchers at Princeton and other institutions have since traced their origins to three converging forces: decades of deceptive retail practices, behavioral science research on how people make decisions, and the growth-hacking culture of Silicon Valley — which discovered through relentless A/B testing that even small interface changes produce large behavioral differences.
Companies run these experiments internally and keep the results as trade secrets. The reason we see so many of the same patterns across so many platforms is almost certainly that the proprietary research confirms they work. When a manipulation technique survives testing at scale and increases a metric the company cares about — subscriptions, session length, ad clicks, data collection — it stays, it spreads, and it becomes industry standard.
Regulators have taken notice. The California Consumer Privacy Act, the EU's Digital Services Act, and the UK's Competition and Markets Authority have all moved against specific patterns — fining companies, demanding design changes, and in some cases making certain tactics illegal outright. Knowing what these patterns are called, and being able to recognize them, is increasingly how consumers push back, file complaints, and hold platforms accountable.

The attention machine
Most free platforms run on advertising revenue, which means their actual product is your attention — specifically, the number of seconds per day they can keep you on the platform. The more time you spend, the more ads you see, and the more the platform can charge advertisers for access to you. Every pattern in this category is engineered to extend that time, and none of them require your awareness to function. That’s not incidental; it’s the design goal.
Infinite scroll
What it is: Before infinite scroll, digital feeds were paginated. Reaching the bottom of a page created a natural pause — a moment to decide whether to keep going. Infinite scroll removes that moment entirely. The feed continues because nothing stops it, and the brain doesn’t generate its own stopping points when the interface provides none.
What it looks like: You open Instagram to check one thing. There’s no bottom to the page. Forty minutes later you’re watching a video from someone you’ve never heard of, with no memory of deciding to stay.
Aza Raskin, who invented the feature, expressed regret in 2019 and described it as “one of the first products designed to not simply help a user, but to deliberately keep them online for as long as possible.”
What you can do: On YouTube, a browser extension called DF YouTube removes the recommended feed entirely, leaving only what you searched for. On iPhone, setting app time limits through Settings > Screen Time applies a hard daily cap on specific apps — unlike the in-app limits platforms offer, which can be dismissed with a tap, the OS-level control requires the Screen Time passcode to override.
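For the technically curious, the mechanism fits in a few lines. The sketch below shows one common way the pattern is built (the element IDs and the feed endpoint are hypothetical stand-ins, not any real platform's code): an invisible "sentinel" element sits below the last post, and each time it scrolls into view, another page of content is fetched and appended above it.

```typescript
// One common construction of infinite scroll. The #feed and #sentinel
// elements and the /api/feed endpoint are hypothetical stand-ins.
const feed = document.querySelector("#feed")!;
const sentinel = document.querySelector("#sentinel")!; // invisible marker below the last post
let page = 0;

const observer = new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting) return;        // the "bottom" isn't in view yet
  const response = await fetch(`/api/feed?page=${++page}`);
  const items: string[] = await response.json(); // the next batch of rendered posts
  for (const item of items) {
    feed.insertAdjacentHTML("beforeend", item);  // appending pushes the sentinel back down
  }
});
observer.observe(sentinel); // fires every time the reader nears the end, forever
```

A paginated design would end this snippet with a decision: a "Next page" link the reader has to choose to click.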
Variable reward schedules
What it is: Psychologist B.F. Skinner found that behaviors reinforced on unpredictable schedules are the hardest to stop — more persistent than behaviors that receive a consistent reward every time. Social feeds are built on this finding. The unpredictable rhythm of content — something great, then several mediocre posts, then something that makes you laugh, then nothing — creates a state of continuous anticipation that researchers compare to slot machines. The intermittency is the mechanism. The content is secondary.
What it looks like: Pulling down to refresh a feed is mechanically identical to pulling a slot machine lever. Sometimes something good appears. Often nothing does. The not-knowing is what keeps the behavior going — not genuine interest in what’s there.
What you can do: Scheduling specific times to check social feeds — rather than opening apps whenever the impulse hits — removes the variable element. The unpredictability only has power when the behavior is reflexive. An app called One Sec inserts a brief breathing pause before opening social apps, adding just enough friction to convert a reflex into a conscious choice.
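Skinner's distinction is easy to see in a toy simulation. The sketch below illustrates the two schedules; it is not any platform's actual ranking code. Both pay off at the same average rate, roughly one refresh in three, and only the predictability differs.

```typescript
// A toy illustration of Skinner's two schedules, not any platform's
// actual code. Both reward roughly one refresh in three.
type Schedule = (pull: number) => boolean;

const fixed: Schedule = (pull) => pull % 3 === 0;       // every 3rd refresh pays off
const variable: Schedule = () => Math.random() < 1 / 3; // same rate, but you never know which

function simulate(schedule: Schedule, pulls: number): string {
  let log = "";
  for (let pull = 1; pull <= pulls; pull++) {
    log += schedule(pull) ? "★" : "·"; // ★ means something good appeared in the feed
  }
  return log;
}

console.log("fixed:    " + simulate(fixed, 30));    // ··★··★··★ ... predictable, easy to stop
console.log("variable: " + simulate(variable, 30)); // ·★···★★·· ... the not-knowing fuels the next pull
```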
Notification flooding
What it is: Our nervous systems don't distinguish between real urgency and manufactured urgency. Notification badges and phone vibrations are engineered to feel like something requires immediate attention — the same mechanism that makes us reach for the phone when someone we care about might be trying to reach us. Platforms borrow that reflex and apply it to everything.
What it looks like: Your phone buzzes. Someone you’ve never met commented on a post you liked a month ago. The buzz felt urgent. Checking it gave you nothing. The platform got thirty more seconds of your attention.
What you can do: Turning off all notifications from social apps — while leaving texts and calls active — stops the platform from scheduling your attention on its terms. Most people find, after a few days, they don’t miss the notifications. They check the apps when they choose to rather than when prompted.
Autoplay
What it is: The next video begins before any decision to watch it is made. Opting out requires an active interruption every single time, which means that inattention — a perfectly ordinary human state — automatically produces continued consumption.
What it looks like: You finish a Netflix episode intending to stop. The next one is already counting down. Stopping requires action; continuing requires none.
What you can do: On Netflix, autoplay is disabled under Settings > Playback Settings. On YouTube, it's the toggle at the top of the "Up Next" panel while a video is playing. Once off, the next video no longer starts without a deliberate choice to play it.
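The asymmetry lives in the logic itself. In the sketch below (the names are hypothetical), playing the next episode is something a timer does on its own; the viewer's only move is to cancel a countdown they never started.

```typescript
// A sketch of the autoplay default; the names are hypothetical.
// Continuing happens on a timer with no input at all, while stopping
// requires an action before the timer fires.
function onEpisodeEnd(playNext: () => void, countdownSeconds = 5): () => void {
  const timer = setTimeout(playNext, countdownSeconds * 1000);
  return () => clearTimeout(timer); // the only exit: actively cancel in time
}

const cancelAutoplay = onEpisodeEnd(() => console.log("Now playing: next episode"));
// Doing nothing means the next episode plays. A neutral design would
// invert the default and call playNext only on an explicit click.
```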
Getting agreement we’d never otherwise give
A separate category of dark patterns targets decisions rather than attention — specifically, getting us to agree to things we'd decline if the choice were presented clearly. The mechanisms vary, but the structure is the same: the company's preferred outcome is made to feel like the natural path, and the actual preference requires extra effort, more steps, or emotional resistance to reach.
Confirmshaming
What it is: The decline button is written to make saying no feel like a self-indictment. The choice being offered is neutral — do you want this thing or not? The framing is engineered to attach embarrassment to one side of it, even when no one else is watching.
What it looks like: A popup offers a discount for joining a mailing list. The accept button says "Yes, I want to save money." The decline says "No thanks, I'd rather pay full price." The manipulation isn't in the offer — it's in how refusal is labeled.
What you can do: Focus on the offer itself, not the button text. Whether a discount, a newsletter, or a membership is worth having has nothing to do with what the decline button says. Evaluate the offer on its own terms and the button wording becomes irrelevant.
Forced continuity
What it is: A free trial requires a payment method upfront. The trial ends without a reminder. A charge appears weeks later on a statement most people scan rather than read. The structure bets, accurately, that canceling feels like more effort than continuing to pay — and that we consistently overestimate how likely we are to remember to act.
What it looks like: You sign up for a seven-day trial of a software tool. You forget about it. Six weeks later you find three charges on your credit card. Canceling now requires locating a settings page that wasn’t mentioned during signup.
What you can do: A calendar reminder set at signup catches the trial end date before the charge lands. Services like Privacy.com issue virtual card numbers whose spending limit you can drop to $0 once the trial ends — meaning any charge the platform attempts after that point gets declined automatically. Searching "[service name] + how to cancel" before signing up reveals what leaving will actually involve.
Privacy zuckering
What it is: Named after Facebook’s Mark Zuckerberg, this pattern describes interfaces that make sharing data easy and limiting it nearly impossible. Cookie consent banners are the most common form: one large, colorful “Accept All” button and a much smaller, grayed-out “Manage Preferences” that opens a panel listing dozens of advertising vendors (with none of their toggles set to off by default). Some apps have been documented reverting tracking settings back on after each update, making any manual adjustment a recurring task rather than a one-time decision.
What it looks like: A banner appears on a news site asking about cookies. “Accept All” takes one click. “Manage Preferences” opens a panel with 47 vendors listed alphabetically. Adjusting all of them takes several minutes. The design is built on the accurate assumption that most people won’t.
What you can do: Browser extensions like uBlock Origin and Privacy Badger block these banners and the trackers behind them automatically, with no need to engage with the consent interface at all. On iPhones running iOS 14 or later, Settings > Privacy & Security > Tracking lets you turn off cross-app tracking globally — one decision that covers most apps at once.
Roach motels
What it is: The name comes from an old ad for a cockroach trap — “Roaches check in, but they don’t check out.” Joining a service is frictionless. Leaving can require navigating multiple confirmation screens, waiting out cooling-off periods, and declining retention offers designed to wear down resolve. The harder it is to leave, the longer the company keeps collecting payment from people who no longer want the service.
What it looks like: Signing up for a gym membership takes five minutes online. Canceling requires calling during business hours, speaking to a retention agent who offers discounts until you give up, and in some cases visiting in person or submitting a written request.
What you can do: Before committing to any paid subscription, search “[service name] + cancel subscription” — not to find the cancellation page, but to understand what leaving will actually involve. For subscriptions where cancellation is obstructed, most major credit cards will block or reverse recurring charges on request. It's a legitimate and widely underused option.
Manufactured urgency
Loss aversion is a well-documented feature of human psychology: we feel the pain of losing something more acutely than the pleasure of an equivalent gain. Scarcity and social pressure are real forces — when they’re real. The patterns below generate the feeling without the underlying reality.
Fake scarcity
What it is: The platform artificially limits the perceived availability of something — a room, a seat, a price — to pressure a faster decision. Neither the shortage nor the urgency is real, but the psychological response they trigger is. When we believe something is running out, we stop comparing options, stop reading the fine print, and stop asking whether we actually want the thing. That narrowing of attention is exactly what the tactic is designed to produce — a purchase made before doubt has time to form.
What it looks like: A hotel listing shows "Only 2 rooms left at this price!" alongside a countdown timer. You reload the page. The timer resets to exactly where it started. The room count hasn't changed. The only thing the reload revealed is that neither number was tracking anything real.
What you can do: Reloading the page and watching whether the countdown resets reveals whether the scarcity is real. For travel bookings, checking the hotel's own website directly often shows the same rooms at the same prices, without the urgency indicators. Most non-refundable rates don't disappear overnight — the pressure to decide immediately is usually the fiction, not the availability.
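The reload test works because of where the timer lives. A genuine deadline sits on a server and survives a refresh; the fake kind, sketched below with hypothetical numbers, is computed from the moment your browser loads the page.

```typescript
// A sketch of a fake countdown; the numbers are hypothetical. The
// deadline is computed from the moment the page loads, not from any
// real sale, which is exactly why a reload resets it.
const WINDOW_MS = 15 * 60 * 1000; // "offer ends" 15 minutes from now. Every time.
const deadline = Date.now() + WINDOW_MS;

setInterval(() => {
  const remaining = Math.max(0, deadline - Date.now());
  const m = Math.floor(remaining / 60_000);
  const s = Math.floor((remaining % 60_000) / 1000);
  console.log(`Only 2 rooms left! Offer ends in ${m}:${String(s).padStart(2, "0")}`);
}, 1000);
// A real deadline would live on the server and survive a refresh;
// this one exists only in the visitor's browser tab.
```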
Social proof manipulation
What it is: We look to others when we're uncertain, and platforms exploit that instinct by fabricating the signals we'd normally look for. Shopping sites have been caught using random number generators to display how many of a product were "just sold" in the last hour, and sports merchandise sites have been found running identically worded customer testimonials with different names substituted each time. The numbers and reviews exist to pressure a decision, not to report a fact.
What it looks like: “23 people are viewing this item right now.” “847 people saved this deal today.” These figures imply that others have already validated the purchase — making hesitation feel like falling behind.
What you can do: Treat any real-time activity counter on a commerce site the way you’d treat a “Sale ends tonight!” banner that appears every night. The only relevant question is whether you want the thing at that price — not whether a number on the page suggests other people want it too.
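The documented versions of this trick are strikingly simple. A counter like the sketch below (hypothetical names, modeled on the random-number-generator cases described above) needs no connection to real traffic at all.

```typescript
// A sketch of a fabricated activity counter; the names are
// hypothetical. The figure comes from a random-number generator and
// drifts every few seconds to look live. It measures nothing.
function fakeViewerCount(min = 18, max = 42): number {
  return min + Math.floor(Math.random() * (max - min + 1));
}

setInterval(() => {
  document.querySelector("#social-proof")!.textContent =
    `${fakeViewerCount()} people are viewing this item right now`;
}, 4000);
```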

Interface tricks
These patterns require no dramatic deception. They work through button placement, color, wording, and layout — producing decisions users didn’t intend without the platform saying anything technically false.
Misdirection
What it is: One option is made visually prominent and easy to click. The alternative exists but requires more attention, more effort, or more steps to reach. On cookie consent banners, both “Accept All” and “Reject Non-Essential” are available — the design simply makes one cost almost nothing and the other cost several minutes.
What it looks like: A software installer shows a large, colorful “Install” button. In smaller text below it: “Also install [unrelated browser toolbar].” The toolbar checkbox is pre-checked. Clicking the main button without reading installs both.
What you can do: On any installer, checkout page, or signup form with multiple steps, read every line before clicking the prominent button. The thing the interface most wants you to skip is almost always in the smaller text beneath the main action.
Hidden costs
What it is: The advertised price is the base price. Service fees, facility charges, and booking fees appear only at the final checkout screen — after time and mental commitment have already been invested. The US Department of Transportation estimated that passengers overpay more than $500 million annually in hidden airline fees, and DOT data shows airline baggage fee revenue grew by more than 30 percent between 2018 and 2022.
What it looks like: A flight shows as $89 in search results. At checkout: $89 base fare + $35 seat selection + $30 carry-on + $18 booking fee = $172. The $89 was never technically a lie. It was never the price, either.
What you can do: Google Flights displays total prices including fees for most routes before you reach any airline’s own site. For concert and event tickets, checking the venue’s box office directly often reveals significant fee differences. Ticketmaster’s service fees are not an industry standard — many venues sell the same seats at face value with no added fees.
Trick questions
What it is: A double negative buried in a checkbox — “Uncheck this box if you do not wish to receive promotional emails from our partners” — demands careful parsing that most people don’t give a signup form. The syntax is constructed to produce the wrong answer under normal reading attention, and it’s especially confusing for non-native speakers and people with cognitive disabilities.
What it looks like: During account signup, a checkbox appears with that kind of phrasing. Whether you check it, uncheck it, or skip it, the outcome may not match what you actually wanted — and the company benefits from the confusion either way.
What you can do: Checkboxes phrased as negatives — containing “not,” “uncheck,” or “opt out” — are almost never worded that way accidentally. Reading them twice before responding is the simplest way to avoid consenting to something unintended.
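Reducing the sentence to its logic shows why the default wins. In the sketch below (the form is hypothetical, though the construction is the standard one), the box ships pre-checked, the two negatives cancel, and skipping the question entirely, which is what most of us do on a signup form, resolves to consent.

```typescript
// The example checkbox reduced to its boolean logic; the form is
// hypothetical. Label: "Uncheck this box if you do not wish to
// receive promotional emails from our partners."
const boxIsChecked = true; // ships pre-checked

// unchecked -> "I do not wish to receive" -> no emails
// checked   -> the two negatives cancel   -> emails
const receivesPartnerEmails = boxIsChecked;

console.log(
  receivesPartnerEmails
    ? "Subscribed. Doing nothing was enough."
    : "Unsubscribed. That required parsing the sentence, then acting."
);
```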
Disguised ads
What it is: Paid placements are technically disclosed — the word “Sponsored” is present — but rendered in small, light gray text positioned to blend with the organic results around them. The disclosure satisfies a legal requirement. The visual design works against it being noticed.
What it looks like: You search for a product on Amazon. The first several results carry a tiny “Sponsored” label. The highest-rated product with the most verified reviews is further down the page. The top results could be there because companies paid to place them, not because they’re the best match for your search.
What you can do: On Amazon, sorting by "Average Customer Review" after searching filters out paid placement and ranks by what buyers actually found useful. On Google, paid results are labeled "Sponsored" — as they are on most major search engines and app stores. Scrolling past the first few results, or filtering by ratings where the option exists, is a reliable way to get past what paid to be seen first.
What changes when we can see this clearly
Regulators can fine companies and legislators can ban specific practices — and they have, increasingly. But enforcement moves slowly, and the interfaces we use every day don’t wait for it. The more immediate shift happens at the level of recognition: seeing a countdown timer as a pressure tactic rather than a fact, reading a consent banner as a design choice rather than a neutral prompt, and noticing when urgency is being manufactured rather than deserved.
That recognition compounds. The first time we catch a pattern, it might only click in retrospect. With time it clicks faster — and a decision that might otherwise have slipped by gets made on our own terms instead.
Information literacy is usually framed around text — whether a source is credible, whether a statistic holds up, whether a headline matches the story beneath it. But the same instinct applies to every interface we move through. A feed with no bottom, a fee that appears only at checkout, a button positioned to be clicked before the alternative is noticed — these are choices made by someone with a financial interest in what we do next. Asking who benefits from how an experience is designed is the same question we’d ask of any other information source. It just looks different when the answer is expressed in a button color or a missing cancel link rather than a misleading headline.
The broader stakes are real. Attention is a resource, and how we spend it shapes what we read, what we buy, what we worry about, and how we understand the world. Interfaces that quietly harvest it — through exhaustion, manufactured urgency, or the simple absence of a stopping point — don’t just cost us time. They shape the texture of daily life in ways that accumulate invisibly. Seeing those mechanisms clearly puts us in a fundamentally different relationship with the digital environments we inhabit — oriented, rather than just passing through.


