For the last couple of years, you couldn't walk five feet without hearing about how Large Language Models (LLMs) were going to fix everything. They were supposed to be the silver bullet that would answer all our questions, automate the boring stuff, and basically change how we think and work.

But recently? Cracks are starting to show.

It turns out those AI "hallucinations"—where the bot confidently gives you a completely wrong answer—are a real problem. If you’ve ever tried to use an AI assistant for something serious and got a bizarrely off-base response, you’ve learned the old lesson: garbage in, garbage out.

The honeymoon phase is officially over. We’re waking up to the limits of these tools and starting to ask if they can actually handle the day-to-day tasks we were promised they could.

But this skepticism isn't just a blip on the radar. It’s actually the start of a necessary shift—a realization that the future of AI isn't about the models themselves, but the data we feed them.

When Data Becomes the Real MVP

In a world drowning in noise, quality information is everything. Eventually, these fancy LLMs are just going to become standard tools—commodities you can swap in and out. So, what should we actually be looking at?

The answer is a little drier than "sentient robots," but way more important: structured data, often organized into knowledge graphs.

Think of it this way: instead of letting AI dig through a chaotic dumpster of internet information, a knowledge graph connects facts into an organized, reliable network. It turns noisy data into usable insight.

A good knowledge graph changes the whole game. It acts as a trusted "data watering hole." We can curate accurate information there first, and then point the AI model at it. Instead of expecting the AI to magically "know" everything, we give it the right information and let it do what it's actually good at: summarizing and interpreting things quickly.

This approach blends careful human curation with AI speed. The answers you get are tighter, more accurate, and actually useful. It flips the script from "How smart is this AI?" to "How good is the data we gave it?"
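To make the "data watering hole" pattern concrete, here's a minimal sketch in Python. The graph, the entity names, and the prompt format are all illustrative assumptions, not a real product schema—the point is just that humans curate the facts first, and the model only ever sees vetted information.

```python
# A tiny knowledge graph: (subject, relation, object) triples,
# curated by humans before any model sees them.
KNOWLEDGE_GRAPH = [
    ("WidgetCo", "headquartered_in", "Austin"),
    ("WidgetCo", "founded_in", "2012"),
    ("WidgetCo", "flagship_product", "Widget Pro"),
    ("Widget Pro", "launched_in", "2021"),
]

def retrieve_facts(entity: str) -> list[str]:
    """Pull only the vetted triples that mention the entity."""
    return [
        f"{s} {r.replace('_', ' ')} {o}"
        for (s, r, o) in KNOWLEDGE_GRAPH
        if entity in (s, o)
    ]

def build_grounded_prompt(question: str, entity: str) -> str:
    """Hand the model curated facts instead of asking it to 'know' them."""
    facts = "\n".join(f"- {fact}" for fact in retrieve_facts(entity))
    return (
        "Answer using ONLY the facts below.\n"
        f"Facts:\n{facts}\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("When was WidgetCo founded?", "WidgetCo"))
```

Whatever LLM sits downstream now summarizes from a trusted base instead of guessing, which is where hallucinations get squeezed out.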

The Chocolate Chip Cookie Philosophy

If that sounds too technical, let’s try a tastier analogy. Think of a chocolate chip cookie.

In this scenario, the solid, reliable cookie base is your knowledge graph (the structured data). The chocolate chips? Those are your LLMs.

Here’s the thing: you can swap out the chips. Maybe you want dark chocolate today, or milk chocolate tomorrow. You can change the brand or the style of the chip, but as long as that cookie base is solid and delicious, the end result still works.

So, what makes the cookie irresistible in the first place? That brings me to a lesson I learned the hard way in my early career.

I started out in corporate America as a formulator—making things like cleaning products. One time, I worked on a new chemical formula that was technically brilliant. In the lab, it outperformed everything. But when we tested it with real people, nobody cared. Meanwhile, a basic version that just had a new fragrance added was flying off the shelves.

I was totally confused until my boss pulled me aside. He said, “Malcolm, if you can’t create a perceivable consumer benefit, it doesn’t matter how technically good it is.”

That lesson stuck. And AI is exactly the same.

We’ve had passive AI for years—think autocorrect or those smart email replies. They were helpful, but boring. Then ChatGPT landed. Suddenly, you could actively engage with it and get something clear and useful right away. The benefit was immediately obvious. You could taste it.

That’s what really sparked this whole boom. It wasn't just the technology; it was the fact that consumers could finally perceive the benefit. That was the flavor burst that made the cookie worth eating.

As AI models keep evolving, they’re going to become easier to swap out, just like those chocolate chips. What will matter most is the cookie itself—the base of reliable, curated information that gives those models something meaningful to work with.
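The "swap the chips" idea can be sketched in a few lines of Python. The model callables below are stand-ins, not real LLM clients—the sketch only shows the shape of the architecture: the curated data layer stays fixed while the model plugged in behind it is interchangeable.

```python
from typing import Callable

# The "cookie base": a fixed, human-curated data layer.
CURATED_FACTS = [
    "Product A launched in 2021.",
    "Product A is sold in 12 countries.",
]

def answer(question: str, model: Callable[[str], str]) -> str:
    """Feed the same trusted facts to whichever model is plugged in."""
    prompt = "Facts: " + " ".join(CURATED_FACTS) + " Question: " + question
    return model(prompt)

# Two interchangeable "chips": any callable with this signature works.
def model_a(prompt: str) -> str:
    return "[model-a] " + prompt

def model_b(prompt: str) -> str:
    return "[model-b] " + prompt

print(answer("Where is Product A sold?", model_a))
print(answer("Where is Product A sold?", model_b))
```

Swapping `model_a` for `model_b` changes nothing about the data layer, which is exactly why the base, not the chip, is where the durable value lives.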

So yes, the crazy AI hype bubble might be deflating a bit. But don’t mistake that for an ending. It’s just evolving. The winners in this next phase won’t be the ones with the flashiest models; they’ll be the ones baking the best cookies—built on solid structures that deliver benefits you can actually taste.

Malcolm De Leo

CBO
