AI is making us all smarter in exactly the same way, and that might be the problem. What happens when the most accessible knowledge becomes the most uniform? As AI increasingly curates what information we see, are we trading the beautiful messiness of diverse human thought for algorithmic convenience?
Andrew Peterson’s April 2024 research paper “AI and the Problem of Knowledge Collapse” [1] reveals how our AI tools are quietly filtering out the diverse perspectives that drive innovation, creating a future where we all know more but think more alike. Here’s why that matters for all of us.
AI’s Knowledge Collapse Problem Is Already Here
Reading this research was one of those lightbulb moments for me. It puts into words something I’ve sensed but couldn’t quite articulate about how AI is subtly reshaping what information we encounter.
Why This Matters Now
Here’s what struck me most: the paper boils down to a surprisingly simple but powerful idea. When we let AI systems become our primary gateway to information, we accidentally narrow what we collectively know. Peterson’s simulations drive this home: when AI-generated content was just 20% cheaper to access, the community’s understanding drifted more than twice as far from reality as it did when people relied on traditional sources.
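To make that mechanism concrete, here’s a minimal sketch of the kind of simulation the paper describes: the “true” distribution of knowledge is fat-tailed, the AI source silently returns only its central portion, and a discount pulls more people toward it. To be clear, the distribution, the truncation point, and the naive rule that agents use the AI source with probability equal to the discount are my illustrative assumptions, not Peterson’s actual model.

```python
# Toy sketch of knowledge collapse (illustrative, not Peterson's code):
# truth is fat-tailed, the AI source drops the tails, and cheaper AI
# access means fewer people ever see the rare ideas.
import numpy as np

rng = np.random.default_rng(0)

TRUE_DF = 5        # degrees of freedom for a fat-tailed Student's t (assumed)
TRUNCATION = 1.0   # the AI source only returns values within +/-1 of center
N_AGENTS = 10_000

def sample_knowledge(use_ai: bool) -> float:
    """Draw one 'insight'; the AI source silently discards tail values."""
    while True:
        x = rng.standard_t(TRUE_DF)
        if not use_ai or abs(x) <= TRUNCATION:
            return x

def community_tail_mass(discount: float) -> float:
    """Share of collected insights from the tails (|x| > 2) when agents
    reach for the AI source with probability equal to the discount."""
    samples = [sample_knowledge(use_ai=rng.random() < discount)
               for _ in range(N_AGENTS)]
    return float(np.mean(np.abs(samples) > 2.0))

true_tail = float(np.mean(np.abs(rng.standard_t(TRUE_DF, 200_000)) > 2.0))
for discount in (0.0, 0.2, 0.5, 0.9):
    print(f"AI discount {discount:.0%}: tail mass seen = "
          f"{community_tail_mass(discount):.3f} (true ~ {true_tail:.3f})")
```

The tails are where the contrarian, specialized knowledge lives, and in this toy model they are exactly what the community stops seeing as the discount grows.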
As AI gets baked into more of our daily tools and processes, how will we keep our thinking diverse and well-rounded? Take Google’s AI Overviews feature, for instance. It gives you a quick summary at the top of your search results, which is undeniably convenient. But if we all start relying on those summaries instead of exploring the full range of sources below, aren’t we all getting the same homogenized version of knowledge? The easiest path might be making us all think more alike without us even realizing it.
The Great Knowledge Paradox
Information generation has never been cheaper, yet the costs of effectively querying, filtering, and verifying that information continue to rise.
When anyone can generate endless content with a prompt, the truly scarce resource becomes attentional bandwidth and evaluation expertise. This creates an asymmetry that will reshape how we manage knowledge:
Information is abundant; wisdom is scarce.
AI can generate a research report in seconds, but determining whether that report contains novel insights or just repackaged conventional wisdom requires human judgment that remains expensive and time-consuming.
We’ve democratized creation while centralizing what gets seen
AI tools have made it incredibly easy for anyone to create content. A teenager can write a novel, a new business owner can make professional marketing materials, and your uncle can generate impressive-looking research on any topic he wants. This explosion of creation seems democratic and empowering.
But there’s a catch. As the flood of content grows, we rely more heavily on algorithms and AI to filter what’s worth our attention. And that filtering power sits with just a handful of systems: Google’s AI deciding what tops your search results, LinkedIn’s algorithm determining which posts reach your network, or ChatGPT choosing which perspectives to include in its answers.
Peterson’s study reveals something concerning about these filtering systems: they naturally favor whatever was most common in their training data. They amplify mainstream ideas while quietly burying diverse perspectives. So while millions more people can create content, a few AI systems increasingly determine what information actually reaches us—and they’re pushing us all toward the same limited set of perspectives.
The easy answers are often the least useful ones
There’s an ironic problem with how AI serves up information. The stuff it gives you most readily—the quick summaries and common explanations—is usually what everyone else is getting too.
When I ask ChatGPT about market trends in my industry, it gives me the same conventional wisdom that all my competitors are seeing. The perspectives that would actually give me an edge—the contrarian analysis, the overlooked opportunities, the unusual approaches—exist in the margins of knowledge that AI tends to simplify away.
Think about it this way: if everyone in your field can get the same answer with the same simple prompt, that information can’t possibly be your competitive advantage. The truly valuable insights are usually harder to find—they’re in the specialized knowledge, unusual connections, and uncommon perspectives that AI systems struggle to represent because they’re statistically rare.
It’s like we’ve made common knowledge incredibly easy to access while making truly distinctive insights comparatively harder to find. And that’s a problem, because in most fields, the breakthrough ideas rarely come from what “everyone knows.”
Four Horsemen of Knowledge Collapse
We’re facing four forces that will fundamentally reshape how organizations manage information:
1. Convenience at the expense of depth. When AI makes mainstream knowledge significantly cheaper to access, we naturally gravitate toward what’s easily available. Peterson’s models show how this creates a feedback loop that progressively narrows our collective understanding.
2. The algorithmic echo chamber. As AI systems recursively learn from their own outputs and from each other, they accelerate knowledge homogenization. This creates an algorithmic monoculture where systems reinforce mainstream perspectives while marginalizing specialized expertise (see the sketch after this list).
3. The illusion of completeness. When AI presents information with confidence and fluency, we experience a false sense of clarity: the dangerous belief that we’re seeing the complete picture when we’re actually getting a narrow slice.
4. The death of serendipitous discovery. Peterson’s empirical research revealed that even when explicitly prompted for diversity, AI systems over-represent certain perspectives while nearly ignoring others. This algorithmic extinction of intellectual serendipity happens subtly, without us even noticing what we’re missing.
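The second force, the echo chamber, is easy to demonstrate in a few lines. What follows is purely my own toy illustration (not Peterson’s model, and not any real training pipeline): a system is repeatedly refit to a filtered version of its own outputs, and each generation the measured diversity of the surviving content shrinks.

```python
# Toy echo chamber: refit a distribution to a curated sample of its own
# outputs and watch the diversity (standard deviation) collapse.
import numpy as np

rng = np.random.default_rng(1)

data = rng.normal(0.0, 1.0, 1000)  # generation 0: "human" content
for generation in range(1, 6):
    mu, sigma = data.mean(), data.std()
    outputs = rng.normal(mu, sigma, 1000)   # model imitates last generation
    # Curation step: drop the 10% most 'surprising' outputs, as a
    # popularity-driven filter would.
    deviation = np.abs(outputs - mu)
    data = outputs[deviation < np.quantile(deviation, 0.9)]
    print(f"gen {generation}: std of surviving content = {data.std():.3f}")
```

Drop even a modest slice of the surprising content each round and, within a handful of generations, most of the original variety is gone.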
Finding Signal in the AI Noise: The Questions That Matter
I’ll admit to having very few of the answers, but as AI increasingly mediates our access to knowledge, we face profound questions about how to maintain intellectual diversity in this new landscape:
- How do we make sure rare but valuable insights don’t get buried just because they’re uncommon?
- What if everyone is missing the same things because we’re all using similar AI tools? How would we even notice?
- Could new ways of sharing knowledge emerge to balance out the sameness that algorithms push us toward?
- What if we started valuing people who share unusual perspectives instead of just those who confirm what we already believe?
- How do we teach students to find the gold nuggets of insight when they’re drowning in information?
- What kinds of digital libraries or archives might we need to keep the quirky, unusual knowledge from disappearing?
- Could we find ways to reward people who maintain diverse expertise, not just those who master what’s mainstream?
- How do we keep multiple paths to knowledge open when the AI highway seems so much faster and easier?
The Infinite Game of Knowledge
Look, I don’t have this all figured out. None of us do. We’re all trying to make sense of what happens when AI becomes as normal as checking our phones. There’s no playbook for this stuff.
We’ll learn as we go, seeing what works, what breaks, and making real choices about these tools instead of just letting them wash over us like a wave.
We need to start asking these hard questions now. Because if we sleepwalk into this future, we might wake up one day and realize we’ve all become weirdly similar thinkers with the same blind spots, same assumptions, and narrower imaginations. And honestly, a world where we all think alike is terrible at solving tough problems.
What hit me about Peterson’s research wasn’t just the data. It was this simple truth: keeping knowledge diverse isn’t a one-time fix. It’s something we have to work at continuously. There’s no moment where we can dust off our hands and say “fixed it!” It’s more like tending a garden that needs regular attention.
The people and organizations that thrive won’t be the ones with the fanciest AI subscriptions. They’ll be the ones who find that sweet spot between getting things done quickly and making sure we’re not all seeing the world through the same narrow lens.
I’m not anti-AI. I’m a huge proponent of extracting all the value we can from this force multiplier. But I think we need to be clear-eyed about both what we gain and what we might lose in the process. This is about being thoughtful about how these tools become part of our lives. We can embrace what they do well while still setting up guardrails to protect what makes human knowledge so rich and varied. After all, the whole point of these tools is to enhance human potential, not to gradually replace the beautiful messiness of diverse human thought with something more uniform and predictable.
This isn’t complicated, just difficult.