Our Horizon of Possibilities: How Algorithms Contract Our World
Algorithms curate our feeds based on what we've already chosen. Understanding how that process works changes what we can do about it.
When we spend twenty minutes scrolling through Netflix without choosing anything to watch, we’re experiencing an invisible boundary. The algorithm has created a perimeter around what it believes we want to see. Everything beyond that line doesn’t appear in our feeds.
This boundary has a name: our horizon of possibilities.
Most of us experience algorithmic curation as helpful. The system learns our preferences and filters out what seems irrelevant. But the filter doesn't distinguish between content we don't want and content we've simply never encountered. It removes both, including ideas, opportunities, and perspectives we might value if we knew they existed. Each prediction based on our past behavior assumes our future interests will mirror our history.
With algorithms, our world contracts while appearing abundant. We scroll through hundreds of options, but they share underlying similarities. We access vast amounts of information, but encounter the same perspectives in different formats. The algorithm offers more of what we’ve already chosen and presents it as discovery.
Understanding our horizon of possibilities means recognizing that what we can imagine depends on what we know exists in the first place. We can’t consider a career we’ve never heard of, read a book we don’t know was published, or explore an idea that never reaches us. The boundary of our awareness shapes the boundary of our choices.
Before algorithmic curation became widespread, horizons expanded partly through accident. Overhearing a conversation on the bus about an unfamiliar band. A friend’s bookshelf introducing us to an author outside our usual reading. A magazine in a waiting room exposing us to an unexpected interest. The randomness of physical space and human recommendation created collisions between existing preferences and unfamiliar possibilities.
Algorithmic systems reduce this randomness significantly. They optimize our information environment for relevance and engagement, which means showing us more of what we’ve already engaged with. The system can’t show us what it doesn’t predict we want, and it can’t predict interest in something that falls outside our behavioral patterns.
This means our horizon shapes our agency. Pursuing opportunities requires knowing they exist. When algorithmic systems mediate our access to information about careers, relationships, ideas, and communities, they influence the boundaries of our possible futures.
The AI Behind the Curation
When we talk about algorithms shaping our horizons, we’re talking about AI systems. The recommendation engines on Netflix, Spotify, TikTok, LinkedIn, and dating apps use machine learning to personalize what we see. These aren’t simple sorting mechanisms. They’re AI systems that analyze patterns in our behavior, predict what we’ll engage with, and optimize their predictions based on what keeps us on the platform longest.
Machine learning algorithms improve their predictions over time by studying our clicks, our watch time, our scrolling patterns, and our purchase history. The more data they collect about our behavior, the better they become at predicting what we’ll want next. This is why recommendations feel uncannily accurate after we’ve used a platform for months, or even years. The AI has learned our patterns.
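To make that concrete, here is a minimal sketch of the pattern. This isn't any platform's actual code, and the titles, genres, and watch times are invented for illustration. But the structure is the one described above: engagement reinforces a weight, and future rankings follow the weights.

```python
from collections import defaultdict

# A toy illustration (not any platform's real code) of how engagement
# signals sharpen predictions: each watch event nudges a per-genre
# preference weight, and future rankings follow those weights.
def update_preferences(preferences, item_genre, watch_minutes):
    # Longer watch time = stronger signal that we want more of this genre.
    preferences[item_genre] += watch_minutes
    return preferences

def rank_catalog(preferences, catalog):
    # Items from genres we've watched most float to the top;
    # genres we've never touched score zero and sink out of view.
    return sorted(catalog, key=lambda item: preferences[item["genre"]], reverse=True)

preferences = defaultdict(float)
watch_history = [("Doc A", "true_crime", 90), ("Doc B", "true_crime", 85), ("Film C", "comedy", 5)]
for _, genre, minutes in watch_history:
    update_preferences(preferences, genre, minutes)

catalog = [{"title": "Doc D", "genre": "true_crime"},
           {"title": "Opera E", "genre": "opera"},
           {"title": "Film F", "genre": "comedy"}]
print([item["title"] for item in rank_catalog(preferences, catalog)])
# ['Doc D', 'Film F', 'Opera E'] -- opera sinks because it was never sampled
```

Nothing here requires the system to know what we'd enjoy if we tried it. It only requires knowing what we've already done.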
The personalization, the prediction, the pattern-matching that contracts our horizons are all AI capabilities. When LinkedIn shows us jobs matching our resume keywords, an AI system is doing the matching. When Spotify creates our Discover Weekly playlist, machine learning algorithms are analyzing acoustic similarities and user behavior patterns. When our news feed shows us stories similar to ones we’ve engaged with before, AI is making those predictions about what will keep us reading.
Understanding that these systems are powered by AI matters because AI systems are designed to optimize for specific goals, and those goals shape what we see. They’re optimized for engagement, for time spent on platform, for clicks and conversions. They’re not optimized for expanding our horizons, challenging our assumptions, or exposing us to unfamiliar ideas. The AI is doing exactly what it was designed to do. Our horizons contract as a side effect of that optimization.
The Philosophy Behind the Phrase
Edmund Husserl, a phenomenologist working in the early 20th century, developed the concept of "horizon" to describe how we experience the world from a particular standpoint. The horizon is everything we can perceive and imagine from our current position: the range of possibilities that lie within view. This phenomenological concept helps explain how a person's standpoint shapes the potential experiences available to them at any moment.
Our horizon isn’t infinite. What we know shapes it, along with what we’ve experienced and the linguistic and cultural frameworks through which we interpret the world. Algorithmic systems now shape it by mediating our access to information.
Husserl emphasized that we experience the world from a particular standpoint, and that standpoint determines what we can perceive and imagine. A person standing in a valley can see surrounding mountains but not what lies beyond them. Their horizon includes everything visible from their position and excludes everything their position prevents them from seeing. They know more exists beyond the mountains, but they can’t see it and therefore can’t make specific decisions about it.
Algorithmic curation operates similarly. The system places us at a particular standpoint in the information landscape and shows us the view from there. Unlike a physical landscape where we can move to a different vantage point, we often don’t know an algorithmic boundary exists. The algorithm doesn’t show us the mountains blocking our view. It shows us the valley and suggests this is the whole world.
Library catalogs demonstrate this principle. When a book isn't cataloged in a system, researchers can't find it through that system. The boundaries of the collection become the boundaries of the research. This is why librarians search multiple databases, use different classification systems, and understand that no single catalog represents all available knowledge.

How Algorithms Contract Our View
Search engines don’t show us everything available on the internet. Google’s algorithm predicts what we want based on our search history, location, and click patterns. The system optimizes for engagement and relevance to our past behavior rather than for exposing us to unfamiliar ideas.
The contraction happens through several mechanisms working together. Personalization adjusts results based on what we’ve clicked before and what keeps us on the platform longest. Ranking algorithms prioritize certain types of content based on factors we can’t see. Filter bubbles form when our past choices determine our future options, creating feedback loops that reinforce existing preferences.
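The feedback loop itself is simple enough to simulate. In the toy loop below (all names and numbers invented), the system shows the three highest-weighted genres, the "user" clicks one, and the click feeds back into the weights. Within a few rounds, half the genres can never appear again.

```python
import random
from collections import defaultdict

# A toy feedback-loop simulation, with illustrative assumptions throughout:
# each round, the system recommends the highest-weighted genres, the user
# clicks one of them, and the click feeds back into the weights.
random.seed(1)
genres = ["news", "sports", "music", "science", "cooking", "travel"]
weights = defaultdict(lambda: 1.0)  # start with no learned preference

for round_number in range(1, 6):
    # Recommend the top 3 genres by current weight.
    shown = sorted(genres, key=lambda g: weights[g], reverse=True)[:3]
    clicked = random.choice(shown)  # user engages with one of what's shown
    weights[clicked] += 1.0         # engagement reinforces the weight
    print(round_number, shown)

# After the first round, science, cooking, and travel can never surface:
# past choices determine future options, so they never get chosen.
```

The user in this simulation isn't narrow-minded. They click on whatever they're shown. The narrowing comes entirely from the loop.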
Recommendation systems operate on these same principles. Spotify’s Discover Weekly generates suggestions from music similar to songs we’ve already played. The algorithm rarely suggests genres we haven’t explored. TikTok’s For You page serves videos similar to content that previously kept us watching. LinkedIn’s feed prioritizes posts from our existing network and content resembling what we’ve previously engaged with.
Each system draws a circle around us and fills it with variations of what we already know. The circle feels expansive because it contains substantial content. But volume doesn’t equal diversity. We can scroll through hundreds of recommendations that all fall within a narrow band of possibility.
Where the Contraction Reshapes Our Lives
The shrinking horizon doesn’t stay abstract. It changes the opportunities available to us in domains that shape the course of our lives. When algorithms mediate our access to careers, relationships, self-expression, and civic information, the contracted view translates directly into contracted possibilities. The jobs we can apply for, the people we might meet, the ways we can express ourselves, and the information we have to make democratic decisions all get filtered through systems optimizing for engagement and pattern-matching rather than for breadth or discovery.
In careers: AI-powered recruiting tools match our resumes to job descriptions using keyword matching. Positions aligning with our existing work history appear while adjacent careers and unfamiliar industries remain invisible. Someone who’s spent five years in marketing roles at tech companies sees marketing jobs at tech companies. A museum education position that would combine their communication skills with their art history degree doesn’t appear because “museum” and “education” don’t match their work history keywords. A sustainability consulting firm seeking someone who can translate technical concepts for general audiences doesn’t surface because the resume says “marketing,” not “consulting.” The algorithm can’t suggest what doesn’t fit the pattern, which means careers that would require explaining how skills transfer rather than simply matching job titles become invisible.
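A stripped-down sketch shows why. Real recruiting systems are more elaborate than this, and the resume and job keywords below are hypothetical, but the failure mode survives the simplification: skills that transfer without sharing keywords score zero.

```python
# A simplified sketch of keyword-based resume matching (hypothetical data):
# jobs score by how many of their keywords appear in the resume.
resume_keywords = {"marketing", "campaigns", "analytics", "content", "tech"}

jobs = {
    "Marketing Manager, SaaS": {"marketing", "campaigns", "analytics"},
    "Museum Education Coordinator": {"museum", "education", "curriculum"},
    "Sustainability Consultant": {"consulting", "sustainability", "stakeholders"},
}

for title, keywords in jobs.items():
    overlap = resume_keywords & keywords
    print(title, "->", len(overlap), "matching keywords")

# Marketing Manager, SaaS -> 3 matching keywords
# Museum Education Coordinator -> 0 matching keywords
# Sustainability Consultant -> 0 matching keywords
# Transferable skills score zero because they never appear as shared keywords.
```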
In relationships: Dating applications show us profiles matching our stated preferences and behavioral patterns. When we consistently select certain characteristics, the algorithm amplifies those characteristics in future profiles. People whose differences might challenge our assumptions or whose lives would introduce us to unfamiliar communities appear less frequently. Someone who’s dated people who work in finance and live in their neighborhood continues seeing similar profiles. The artist who lives across town and would introduce gallery openings and creative communities appears less often. The teacher whose perspective on work-life balance differs completely gets filtered toward the margins. The algorithm interprets our past choices as definitive statements about our future desires.
In expression: AI writing assistants suggest phrases based on probability distributions from their training data. More common phrases appear more readily. Over time, this can lead our writing to sound more like aggregated internet text. Unusual metaphors and distinctive voice become less likely when we rely heavily on tools optimizing for what already exists. When describing feeling overwhelmed, the AI might suggest “drowning in work” or “swamped with tasks” because those phrases appear frequently in training data. A comparison to a bird navigating through fog that would capture a specific experience surfaces less readily. We lose the sentences only we would write.
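The underlying mechanism can be sketched in a few lines. The phrase counts below are invented, but the ranking logic illustrates the point: when suggestions are ordered by frequency, the most common phrasings surface first and the vivid outliers almost never do.

```python
from collections import Counter

# A toy model of why writing tools suggest the most common phrasing:
# if suggestions are ranked by frequency in a corpus (hypothetical counts
# below), rare but vivid alternatives are structurally disadvantaged.
corpus_phrase_counts = Counter({
    "drowning in work": 90_000,
    "swamped with tasks": 60_000,
    "buried under deadlines": 20_000,
    "a bird navigating through fog": 12,
})

def suggest(counts, top_n=2):
    # Most frequent phrases surface first; the long tail rarely appears.
    return [phrase for phrase, _ in counts.most_common(top_n)]

print(suggest(corpus_phrase_counts))
# ['drowning in work', 'swamped with tasks']
```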
In democratic life: News algorithms prioritize stories generating engagement, which often means stories confirming existing beliefs rather than challenging them. When different groups see fundamentally different information about the same events, creating shared understanding becomes more difficult. Two people can watch the same political debate and then see substantially different news coverage the next day based on their algorithmic profiles. One sees analysis emphasizing economic policy. The other sees analysis emphasizing cultural issues. They don’t share the same starting information for discussion. The common ground where democratic conversation happens contracts as each person’s algorithmic feed diverges from others’.
When a Narrower Horizon Serves Us
The picture so far makes algorithmic contraction sound entirely negative, but the same mechanisms that limit us in some contexts help us in others. A narrowed view that hides career possibilities we’d want to explore becomes a helpful filter when we’re choosing between thirty similar pasta recipes. The algorithm that keeps us from encountering challenging political perspectives also keeps us from drowning in decision paralysis when we’re picking a movie to watch before bed.
Contraction becomes limiting when it operates invisibly, applying one logic to every aspect of our lives at once. A contracted horizon works for us when we consciously choose it for a specific purpose and have other options available when our needs change. Awareness makes the difference between helpful curation and invisible constraint.
For decision-making: Research shows that excessive options can sometimes lead to difficulty choosing. In a 2000 study by psychologists Sheena Iyengar and Mark Lepper, shoppers at a gourmet food store encountered jam displays with either 6 or 24 varieties. While the larger display attracted more initial attention, 30% of people who saw the smaller display made a purchase compared to 3% of those who saw the larger one. This finding has been debated in subsequent research, with some replication attempts finding no effect and others finding similar patterns. The relationship between choice quantity and decision quality appears to depend on context, expertise, and how choices are presented. When we’re choosing a restaurant for dinner tonight, seeing every restaurant in our city can create decision paralysis. A curated list matching our dietary preferences, price range, and location enables action rather than endless deliberation.
For learning: When we’re learning something new, infinite possibilities can overwhelm rather than enlighten. A beginner guitarist doesn’t benefit from seeing every possible technique simultaneously. Someone using an app to learn Spanish benefits from an algorithm that shows vocabulary appropriate to their current level. The contraction creates a learnable path through otherwise overwhelming complexity.
For mental health: People experiencing anxiety, depression, or information overload can benefit from reduced stimulation and narrower options. When we’re overwhelmed, encountering fewer choices can feel like relief rather than restriction. The narrower view creates psychological space to function.
For deep work: Developing genuine expertise in any field requires sustained attention to a relatively narrow domain. An academic researching 19th-century French literature benefits from algorithms that surface relevant papers in their specific area rather than scattering attention across all of literary studies. The contracted horizon enables the sustained focus that produces original contribution.
For daily efficiency: Limited hours in a day mean not every moment needs to expand our horizons. Sometimes efficiency matters more than discovery. When we’re looking for a quick weeknight dinner recipe, seeing something matching our available ingredients and cooking time serves us better than exploring cuisines that require unfamiliar ingredients and hours of preparation.
The Critical Difference: Awareness and Choice
Contraction becomes limiting when it happens invisibly, continuously, across all domains of our lives without our knowledge or consent. Awareness changes everything. When we know our horizon is being contracted, we can evaluate whether that serves our current context. We can decide whether we want efficiency or discovery, whether we need focus or breadth, whether pattern-matching serves us or limits us. Without that knowledge, we can’t make informed choices about our information environment.
The difference between helpful curation and constraining limitation comes down to whether we’re conscious participants or unconscious subjects. We might want Instagram’s algorithm to show us only posts similar to what we’ve liked before when we’re using it to relax for fifteen minutes. That’s a reasonable choice for that specific context and purpose. We probably wouldn’t want that same algorithmic logic determining which news stories we see about political candidates, which career opportunities surface in our searches, or which perspectives on important issues reach us. But without awareness that the same mechanism operates across all these domains, we don’t get to make that choice. The algorithm contracts all these horizons simultaneously, applying the same optimization logic to casual entertainment and consequential decisions alike.
This matters because different contexts require different approaches to information. Sometimes we benefit from systems that predict what we want and filter accordingly. Sometimes we need systems that show us what exists regardless of what we’ve chosen before. Sometimes we want efficiency. Sometimes we need discovery. Sometimes we’re looking for confirmation. Sometimes we’re looking for challenge.
Whether we understand when contraction is happening, how it shapes what we see, and whether we have access to alternatives when the algorithmic approach doesn’t serve our current needs determines whether we maintain agency. Awareness creates the possibility of deliberate choice. Without awareness, we optimize our lives within boundaries we don’t know exist.
Beyond the Algorithmic Default
The realistic response to algorithmic contraction isn’t to fundamentally change our behavior. We’re not going to start using five different platforms for every search or commit to monthly exploration routines that require sustained effort we don’t have time for.
The realistic response is recognizing that algorithmic systems aren’t the only option and knowing when alternatives might serve us better.
Most of the time, algorithmic recommendations work fine. When we’re scrolling for entertainment, looking for a quick answer, or making a low-stakes choice, the personalized feed gives us what we want efficiently. The algorithmic approach becomes limiting when we notice we’re stuck in a loop, when we suspect we’re missing something, when the same patterns keep appearing, or when we’re making a decision that matters and want to see beyond our usual information bubble.
That feeling of being stuck or sensing something’s missing creates a moment of choice. In that moment, knowing that alternatives exist matters.
When recommendations create loops: After watching several true crime documentaries, Netflix shows nothing but true crime. After listening to indie folk for a week, Spotify’s recommendations stay locked in that genre. But what if we’re bored with the loop but don’t know what we want instead? The platform’s search function works differently than its recommendation engine. Search shows what the platform has rather than what it predicts we want. We don’t need to know exactly what we’re looking for. We can search broad categories like “documentary” or “classical music” and browse what appears. We can type in something we heard about once but never explored. We can search for a genre we used to like years ago. The search function doesn’t require knowing precisely what we want. It requires knowing we want something different from what the algorithm keeps showing us.
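The difference is worth sketching. The catalog and functions below are hypothetical, but the contrast is the real one described above: recommendation filters by our history, while search filters by our query and ignores our history entirely.

```python
# An illustrative contrast (hypothetical catalog and functions) between a
# recommendation engine, which surfaces items matching our history, and
# search, which simply returns whatever matches the query.
catalog = [
    {"title": "Serial Obsessions", "genre": "true_crime"},
    {"title": "Cold Case Files", "genre": "true_crime"},
    {"title": "Planet of Sound", "genre": "music_documentary"},
    {"title": "The Glass Orchestra", "genre": "classical"},
]

def recommend(history_genres):
    # Only items from genres we've already watched are surfaced.
    return [c["title"] for c in catalog if c["genre"] in history_genres]

def search(query):
    # History is ignored: anything matching the query is shown.
    return [c["title"] for c in catalog if query in c["genre"]]

print(recommend({"true_crime"}))  # ['Serial Obsessions', 'Cold Case Files']
print(search("documentary"))      # ['Planet of Sound']
```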
When job searches feel narrow: LinkedIn shows the same types of positions repeatedly because the algorithm matches keywords in our resume to keywords in job descriptions. Public libraries often provide free access to career databases that organize information differently. These databases group jobs by skill set, industry sector, or job function rather than by keyword matching. Browsing a category like “careers for people with marketing skills” reveals museum education, sustainability consulting, nonprofit communications, and roles that don’t have “marketing” in the title. The database doesn’t predict what we want. It shows what exists within a category, which means seeing the full range rather than algorithmic variations on our past.
When news coverage feels incomplete: Reading the same story from multiple outlets that all emphasize the same angle creates a suspicion that we’re seeing one perspective on a multi-faceted situation. But if we don’t know what’s missing, how do we know where to look for it? Many public libraries provide free access to newspaper databases that organize publications by country or region rather than by our reading history. These databases let us browse what’s available from international sources, different regions, or publications we’ve never encountered before. We don’t need to know which specific publication to check. We can see how British papers cover American healthcare policy differently than American papers do, or how regional papers include local details that national coverage omits, or how international sources provide context that domestic coverage assumes we already know. The database shows us what exists rather than what it predicts we want to read.
When book recommendations stagnate: Amazon and Goodreads recommend books similar to what we’ve bought or read. After a while, they feel like variations on familiar themes. Physical bookstores and libraries organize books differently than algorithms do. They group by genre, by subject, by theme, by geographical sections. Walking into the literature section and browsing what’s shelved near a book we loved shows us books the algorithm would never suggest because they’re connected by subject matter or literary tradition rather than by purchasing patterns. Staff recommendation tables feature books employees loved regardless of what’s algorithmically popular. Browsing by section rather than by predicted preference shows us what exists in a category rather than variations on our past choices.
When research needs depth: Googling a medical condition, legal question, or financial decision often returns websites optimized for search rankings. They provide accessible information but they’re ranked by engagement metrics, not expertise. Academic and specialized databases exist specifically to surface scholarly and expert sources rather than popular ones. Google Scholar ranks primarily by how often other scholars cite the work rather than by engagement. PubMed indexes medical and life sciences research. ArXiv hosts physics, mathematics, and computer science papers. Many public libraries provide free access to research databases that include these and other scholarly sources. (For more on accessing scientific research beyond paywalls, see this article on finding and evaluating scientific studies.) The information density differs significantly. A health website might say “diet and exercise can help manage type 2 diabetes.” A research paper details which specific dietary changes produced which measurable outcomes over what time period in what specific population.
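A small sketch makes the contrast concrete. The articles and numbers below are invented, but the two sort orders capture the difference: the same set of sources produces different winners depending on whether clicks or citations drive the ranking.

```python
# An illustrative sketch (invented numbers) of why the same sources rank
# differently: engagement-driven ranking rewards clicks, while a scholarly
# index weights how often other researchers cite the work.
articles = [
    {"title": "10 Easy Diabetes Tips", "clicks": 500_000, "citations": 0},
    {"title": "Dietary intervention RCT, n=412", "clicks": 2_000, "citations": 310},
    {"title": "Meta-analysis of exercise outcomes", "clicks": 900, "citations": 870},
]

by_engagement = sorted(articles, key=lambda a: a["clicks"], reverse=True)
by_citations = sorted(articles, key=lambda a: a["citations"], reverse=True)

print(by_engagement[0]["title"])  # the listicle wins on clicks
print(by_citations[0]["title"])   # the meta-analysis wins on citations
```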
When decisions need complicated reality: When we’re considering a career change, major purchase, or life decision, algorithm-curated feeds often show success stories and marketing content because that’s what generates engagement. Finding critical perspectives, failure stories, and complicated reality requires recognizing that the algorithm won’t show us these things and looking elsewhere. Online communities organized around specific questions rather than algorithmic engagement often contain more complete information. These communities exist for nearly every major decision people face. Forums for graduate students include discussions of career outcomes and debt burden alongside success stories. Product review communities include long-term reliability problems alongside enthusiastic launch coverage. Career change communities include people who regret their transitions alongside people who celebrate them. These communities aren’t optimized for engagement, which means they include the complications and failures that algorithmic feeds filter out.
Information Literacy for the Algorithmic Age
The size of our horizon shapes the size of our possible future. When algorithms show us a narrow slice of available information, they influence what we can become.
Wide horizons mean seeing more options and making more intentional choices. Someone with access to diverse information knows what’s possible beyond their current circumstances. They can imagine different careers, ways of living, communities to join. Narrow horizons create a different reality. When the limits of our information access become invisible, we can mistake them for the limits of what’s possible. Pursuing opportunities we don’t know exist becomes impossible. Questioning assumptions we’ve never seen challenged can’t happen. We optimize our lives within boundaries we don’t recognize as boundaries.
Information literacy used to mean knowing how to find information and evaluate its credibility. That definition assumed we knew what information existed and where to look for it. Information literacy for the algorithmic age requires a different set of questions. Beyond asking “Is this source credible?” we need to ask “What sources am I not seeing?” Every search interface, every recommendation system, every feed shows us some things and hides others. Our information environment is curated; that curation serves specific interests. We can make choices about how much influence we give it.
Recognizing when a contracted horizon serves us and when it limits us becomes the skill that matters. When we’re choosing a dinner recipe with limited time and ingredients, algorithmic filtering helps. When we’re considering career possibilities or trying to understand complex issues, that same filtering might hide what matters most.
A horizon has been drawn for us in every algorithmically mediated space. Feeling stuck, noticing recommendations feel stale, or suspecting something's missing signals when the horizon has contracted too far for our current need. Alternatives do exist, available when algorithmic boundaries stop serving us. Understanding when we need wider horizons, and knowing how to find them, preserves our agency to choose our own boundaries and shape the life that becomes possible.