AI's next boogeyman: Narrative Fatigue

AI's journey into our consciousness didn't happen overnight. It just feels that way.

If you rewind a bit - say, back through most of the 2010s - AI was already there, quietly doing its thing. The catch? It showed up as convenience, not transformation. It made life incrementally better, not fundamentally different. And because of that, we barely noticed it.

Think about it. First, there was Siri. You could ask a question and…on a good day…get the right answer about half the time. Not perfect, but still kind of magical. Then came Alexa, letting you juggle tasks in a way that felt oddly futuristic. Cooking dinner while changing the music without lifting a finger? That felt like progress.

And then it got more personal. Tools like Grammarly started smoothing out our writing, quietly erasing the little grammar mistakes that used to trip us up. Email began finishing our sentences for us, as if it could read our minds, or at least our habits. None of this was flashy. But it was consistent. A steady drip of small wins that made the chaos of everyday life feel just a bit more manageable.

These were the breadcrumbs. The subtle signals that AI wasn't just a tool—it was becoming part of how we operate. And then…everything shifted.

Enter ChatGPT and the LLM revolution.

This wasn't just another efficiency play. This was something else entirely. Up until that point, AI helped us move from point A to point B a little faster. Now, suddenly, it could help us create, produce, learn, and even interact. It wasn't a feature - it was a platform. A kind of universal layer you could apply to almost anything.

We each have our own "profile" for how we do those things - how we learn, how we create, how we communicate - and AI became something different: a kind of extension. A muscle you could flex exactly where you were weak.

I think about this a lot in a very practical way. As a college student, if I wanted to study, I had what I had: a textbook, a notebook, maybe some class notes, maybe a friend if I was lucky, and one review packet if the professor was feeling generous. That was it. No endless practice tests. No instant explanations. No adaptive help. Now? You can generate as many practice tests as you want. Ask follow-ups. Go deeper. Pivot instantly.

It's no wonder people lost their minds a bit. Because for the first time, it felt like anyone—anywhere—could ask a question and get not just an answer, but a stream of relevant, synthesized information pulled from…everywhere. And that word—everywhere—is where things get interesting.

Because as quickly as we embraced this new capability, we also started to notice something uncomfortable. Something a little…off. AI, it turns out, is a confident liar.

Not maliciously. Not intentionally. But convincingly enough that it matters. It will give you answers that sound right, feel right, and sometimes are completely wrong. We call these hallucinations.

Over the past couple of years, that realization has taken some of the shine off AI's big debut. The same confidence that made it powerful also made it risky. Trust got a little shaky. The honeymoon phase hit some turbulence. But here's the thing - awareness tends to breed solutions.

And we're seeing that play out in real time. The AI "wild west" is exactly that - messy, fast-moving, full of experimentation. But if there's one thing that's clear, it's this: no one is slowing down. If anything, we're accelerating. Gartner predicts 40% of enterprise applications will embed AI agents by 2026. I'd be shocked if it's not already there. The land grab isn't just happening at the app layer - it's happening at the infrastructure level.

In emerging markets, people place bets. Lots of them. And in the case of AI, we're now watching a wave of new companies and tools emerge, all aiming to solve the very problems AI introduced in the first place.

So AI's next wave is going to solve the 'Confident Liar' problem?

So…is this new wave of AI actually going to solve the problem? Or are we just creating a new one?

Pull yourself into today's reality for a second. As a startup founder, I can't get through a single business conversation without someone asking, "Have you heard of (insert random AI company here)?" It's constant. Almost rhythmic at this point.

And that should tell you something. We're in the middle of a land grab. In fact, you could make the argument that the traditional definition of SaaS is starting to wobble a bit. Everyone—and I mean everyone—is taking their own personal business pain, wrapping it in an AI layer, and calling it a workflow. The pitch is always some version of the same promise: our agent quietly does the work for you and gives you exactly what you need.

It feels like magic. Sometimes it actually is.

And look, I'm not throwing stones here. At GraphIQ, we're right in the middle of it. Our focus is on building the data side of the equation—because if AI is really models + data, then data is the variable that actually drives differentiation. But at the same time, we're also users of these new tools, leaning into them to make ourselves more efficient. We're constantly rethinking how our own processes should work—how to streamline them, how to get smarter about how we staff our organization, and how to build a team that's less about task execution and more about adaptability.

But while everyone is understandably wowed by this explosion of intelligent tools, there's something else creeping in. Something quieter and definitely more insidious. Narrative fatigue.

Narrative Fatigue: The Boorish Blowhard of AI

Let me introduce you to what I like to call the boorish blowhard phase of AI.

At its core, AI doesn't just respond - it performs. You ask a question, and what you get back isn't a data point or a simple answer; it's a fully formed narrative. Structured. Thoughtful. Often persuasive. It reads like something a person sat down and crafted for you. But here's where things have evolved.

With this new wave of agentic products, it's no longer just about answering a single prompt. These systems are pulling from multiple sources, processing inputs automatically, synthesizing across contexts, and delivering what feels like a well-reasoned point of view - on demand. It's not just conversation anymore. It's continuous interpretation.

And like a conversational partner who never quite knows when to stop talking, it just keeps going. Ask again? Another narrative. Slightly tweak the question? Another narrative. Connect a few workflows together? Now you've got narratives being generated whether you asked for them or not. That's the shift.

Narrative fatigue isn't about AI's ability to talk to you - that part is actually valuable. It's about volume. It's about the overproduction of polished, processed content that stacks up faster than you can absorb it. At some point, the issue isn't quality. It's capacity.

AI is a conveyor belt that never stops. And at first, that feels incredible. Endless content. Endless explanations. Endless "help." But there's a breaking point.

Look at what the platforms everyone trusted to solve this actually did. Market leaders scaled outreach until inboxes turned into battlefields - decision-makers drowning in automated sequences nobody asked for.

Martech vendors shipped the same AI-assisted templates to every sales team on the planet, so the output became commodity noise by definition. Insight providers locked insights inside proprietary interfaces that you couldn't share outside the platform without a tech-heavy, friction-heavy process. All of them confuse volume with value. That's the boorish blowhard pattern playing out in the market. And it's not unique to them - it's the default outcome when AI gets plugged into a process without any signal-to-noise discipline built in.

So what is the definition of Narrative Fatigue?

Narrative fatigue is what happens when the volume of AI's output becomes so overwhelming that the value of any single piece drops to zero. Not because it's wrong. Not because it's bad. But because you simply stop reading.

The technical name for what breaks is signal-to-noise ratio. Once it collapses, even correct, well-sourced output gets tuned out alongside the junk. And if you're being honest, you've probably already felt this.

Making it real

Say your company rolls out a new AI system to manage and process all your Zoom conversations. It records meetings, summarizes them, pushes reports to email, Slack, dashboards—wherever you want. It can even synthesize insights across meetings if you ask the right questions.

Sounds amazing, right? It is. The efficiency gains alone are massive. The idea that you could "read" your company's conversations instead of sitting through them? That's powerful. You can skim, annotate, follow up, even generate new insights you might have missed in the moment.

But here's where things start to bend. If five people are feeding into that system, maybe it's manageable. Ten? Still workable. What happens when it's fifty? The Slack channel that seemed like the summarization savior you'd been waiting for your whole life? It's now a firehose. Narrative after narrative after narrative flooding in. At some point - probably sooner than you think - you just stop opening them. Not because they aren't useful, but because there are too many.
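Play the math forward and you can see the firehose coming. Here's a quick back-of-envelope sketch - the per-person meeting count is a made-up assumption for illustration, not data from anywhere:

```python
# Back-of-envelope: how fast a shared summary channel becomes a firehose.
# The per-person meeting count is an illustrative assumption.
MEETINGS_PER_PERSON_PER_DAY = 3
SUMMARIES_PER_MEETING = 1  # one AI summary pushed to the channel per meeting

for team_size in (5, 10, 50):
    # Rough upper bound: treat every meeting as distinct (no shared attendees).
    daily = team_size * MEETINGS_PER_PERSON_PER_DAY * SUMMARIES_PER_MEETING
    print(f"{team_size} people -> ~{daily} summaries/day")

# 5 people: ~15 a day is skimmable.
# 50 people: ~150 a day, and nobody opens the channel anymore.
```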

This isn't a hypothetical problem. Large enterprise teams are already receiving an average of 960+ alerts per day. Organizations with over 20,000 employees hit 3,000+. Only 30% of those are actionable. (Parse Labs, State of Revenue Intelligence 2026.) Agentic AI dropped on top of that without deliberate design doesn't shrink the pile - it industrializes it.

Now layer on another scenario. You've got a meeting with ten people. Everyone shows up "prepared," each having asked the AI for summaries, insights, or recommendations ahead of time. Sounds great in theory. But remember—AI is still a confident liar.

So what do you get? Ten slightly different narratives. Ten versions of "truth," each polished, each plausible, each just different enough to create friction. Instead of alignment, you get subtle chaos. Everyone is armed with a perspective, but no one is actually on the same page. Efficiency…ironically…creates confusion.

Now push it even further. Your function doesn't just buy one of these tools. It buys seven. Each with its own workflow. Each generating its own beautifully written narratives. Each promising clarity. What you end up with isn't clarity. It's noise.

And at some point - and I'm only half joking here - you start to long for the simplicity of a spreadsheet. Maybe even a piece of paper. Because at least it was quiet. So let's call it what it is.

If AI starts as a confident liar and evolves into a boorish blowhard - endlessly talking, rarely consolidating - it risks becoming something far less valuable than we hoped. And the real question becomes…if we can't separate signal from noise, was all this progress actually progress at all?

Planning for the Rise of Narrative Fatigue

If you've made it this far and find yourself nodding along - even a little - then you've already taken the first step. Awareness. And that matters more than you think.

AI already took a hit with hallucinations. We all saw it happen. Trust was built slowly…then chipped away almost overnight. People who blindly handed over their thinking to tools like ChatGPT learned the hard way that outsourcing judgment entirely isn't a strategy - it's a shortcut to bad decisions.

So the question becomes: what do we do now that the next wave—narrative fatigue—is already forming?

  1. You know about it…so look for it

Awareness is half the battle. It always is. Once you can name a problem, you can start to see it. And once you can see it, you can design around it.

Narrative fatigue isn't some abstract concept—it's a predictable outcome. If you take agentic AI, plug it into a process, and scale it across a team without thinking, you will create a flood of output. That's not a maybe. That's a guarantee. So play it forward.

Where will narratives pile up? Where will people stop reading? Where does "helpful" quietly turn into "noise"? If you can anticipate those moments, you can build guardrails before they become problems.
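
One concrete flavor of guardrail is a "digest, don't drip" policy: batch the day's output and cap how many items actually get pushed, with the rest available on demand. Here's a minimal sketch - the names and the cap are hypothetical, a shape to steal rather than a prescription:

```python
from dataclasses import dataclass

@dataclass
class Summary:
    title: str
    priority: int  # higher = more important, however your system scores it
    body: str

def build_daily_digest(summaries: list[Summary], cap: int = 10) -> str:
    """Collapse a day's worth of AI summaries into one capped digest.

    Everything past the cap is counted, not posted - readers can pull
    the overflow on demand instead of having it pushed at them.
    """
    ranked = sorted(summaries, key=lambda s: s.priority, reverse=True)
    top, overflow = ranked[:cap], ranked[cap:]
    lines = [f"Daily digest: {len(summaries)} summaries, showing top {len(top)}"]
    lines += [f"- {s.title}" for s in top]
    if overflow:
        lines.append(f"…plus {len(overflow)} more, available on request.")
    return "\n".join(lines)
```

The exact code doesn't matter. What matters is that the cap is a decision someone makes deliberately, before the flood, instead of a limit readers silently impose by tuning out.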

There's a cost hiding in plain sight here. Data professionals already spend 82% of their time preparing and cleaning data - not analyzing it. (Parse Labs, State of Revenue Intelligence 2026.) Agentic AI sitting on top of a noisy data layer doesn't fix that ratio. It just runs the noise faster.

  2. Don't scale too quickly - and control where you scale

We're already using tools that do exactly what I described earlier. And honestly? We love them. As a small team, the efficiency gains are real. Having processed outputs automatically generated from our work feels like a superpower. But here's the catch—we're small. As we grow, we have to be intentional. Not everything that can scale should scale.

We have to ask: Where does AI actually help? Where does it create clarity? And where does it start to create noise? Because if you're not careful, you end up chasing the magic trick. You get so enamored with what AI can do that you stop asking what it should do. And that's when things flip.

Letting AI run everything isn't efficient - it's laziness dressed up as innovation. And when that happens, the two villains we talked about earlier - the confident liar and the boorish blowhard - start running your process instead of supporting it. That's not a place you want to be.

  3. Be a consumer when designing AI processes

Earlier in my career, I worked on something wildly unglamorous—formulating toilet bowl cleaners. And my job wasn't just chemistry. It was observation. Watching how people actually used the product and understanding what they perceived as value.

Because here's the truth: the product only matters if the user feels the benefit. We forget that when it comes to AI. You are not just implementing tools - you are consuming them.

What I see happening right now is that AI enables individuals to create their own micro-processes. Personal workflows that help them think better, move faster, learn more effectively. That's where the real value is showing up. And that ties back to something we discussed earlier—AI fills gaps. In how we create. How we produce. How we learn. How we interact. So start there.

If you design processes that serve the individual first—helping them close their own gaps—you naturally reduce the risk of narrative fatigue. Why? Because the user is in control. They decide what they need, when they need it, and how much is too much.

The design test I keep coming back to: does this output give the person something they couldn't have predicted? Something that actually changes what they do next? That's information gain, and it's the only real measure of whether an AI-generated output earns its place in someone's day. The data backs this up: GTM teams working from verified, high-signal data book 26% more meetings than those running on standard database exports. (Parse Labs, State of Revenue Intelligence 2026.) The output isn't just cleaner. It's decisively different.
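
You can even make a crude version of that test executable. Here's a minimal sketch that suppresses outputs too similar to what the reader has already seen - simple word-overlap (Jaccard) similarity standing in for whatever novelty measure your stack actually uses, with a threshold I picked arbitrarily for illustration:

```python
import re

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity: 0.0 = nothing shared, 1.0 = identical vocabulary."""
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def is_novel(candidate: str, already_seen: list[str], threshold: float = 0.6) -> bool:
    """Surface an output only if it differs enough from what was already delivered.

    A crude stand-in for information gain: if the reader could have predicted
    this from what they've already read, don't send it.
    """
    return all(jaccard(candidate, prior) < threshold for prior in already_seen)

seen = ["Q3 pipeline review: deals slipping in EMEA, coverage gap in mid-market"]
print(is_novel("Pipeline update: EMEA deals slipping, mid-market coverage gap", seen))  # False: a rephrase
print(is_novel("Hiring plan approved: two new SE roles open in October", seen))         # True: genuinely new
```

A real system would use embeddings instead of word overlap, but the principle holds: the filter has to run before delivery, not after the reader gives up.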

But if you try to force a single, scaled process across a broad group? One that works for some and not for others? That's when the noise creeps in. And once it starts, it compounds quickly.

Putting it all together

AI is an incredible support system. It accelerates how we synthesize information, how we process ideas, how we move from question to insight. But let's not kid ourselves—it's not a replacement for thinking. Narrative fatigue is a signal. It tells us we're still early. That we haven't quite figured out how to scale this properly yet.

Because if the system were truly optimized, it wouldn't overwhelm us with information—it would deliver exactly what we need, in a way we can actually absorb. That's the challenge ahead.

We need scale—but not blunt force scale. We need customized scale. Systems that can stretch across an organization while still flexing to the needs of the individual. The companies that crack this will build what I'd call a semantic fortress—a layer where the signal stays clean no matter how much volume the system is processing underneath it. That's the infrastructure problem nobody is talking about yet.

Because if users don't feel in control—if they can't shape the output into something digestible—the experience breaks down. And when that happens…

…AI doesn't fail because it isn't powerful. It fails because it becomes background noise. A constant, dull hum that eventually gets tuned out by the only customer that actually matters:

The user.



Malcolm De Leo

CBO
