Mike Cannon-Brookes said it on a16z this week: the models are far ahead of the value they're delivering. He's right. And as someone who builds on top of these models every day, I think the reason is simpler than most people realize.
The models aren't the problem. The products are.
The Blinking Cursor
Here's the dominant AI experience in 2026: you open a tool, you see an empty text box, and a cursor blinks at you. Go ahead. Type something.
That's it. That's the product.
The model behind that cursor can reason through complex problems, synthesize research across domains, and produce work that would take a human team days. But the product wrapping it says: “You figure out what to ask. You provide the context. You verify the output. Good luck.”
This is the equivalent of handing someone the keys to a Ferrari and dropping them in the middle of a field. No road. No map. No destination. The engine is extraordinary. The experience is not.
Aaron Tay calls this the Blank Box Problem: we've traded structured interfaces with clear affordances for infinite potential behind an empty prompt. The result isn't empowerment — it's paralysis. Most people stare at that cursor and don't know where to start.
The Numbers Don't Lie
The data on this gap is brutal and getting worse.
BCG surveyed 1,250 firms last year and found that only 5% are generating AI value at scale. Sixty percent are getting no material value at all — despite substantial investment. And the gap between leaders and laggards is widening, not closing.
McKinsey's 2025 State of AI report tells the same story: more than 80% of organizations report no tangible impact on enterprise-level EBIT from gen AI. Eighty-eight percent of companies are using it. Almost none are getting real returns.
The most telling stat comes from Workday. They surveyed 3,200 employees and found that 85% save one to seven hours a week using AI tools. Sounds great — until you learn that nearly 40% of those savings are lost to rework. Correcting errors. Rewriting content. Verifying outputs. Only 14% of employees consistently get clear, positive outcomes.
The AI makes you faster at producing things that need fixing. That's not productivity. That's a treadmill.
What “Lazy” Actually Means
I don't think most AI products are lazy because their teams aren't talented. I think they're lazy because the industry settled on a pattern and stopped iterating.
The pattern looks like this:
Take a model. Add a chat interface. Ship it.
No memory — every conversation starts at zero. The AI doesn't know your organization, your history, or your constraints. You re-explain context every single session.
No workflow integration — the AI lives in a separate tab from where the work actually happens. You copy-paste between the tool and your real workflow.
No quality infrastructure — outputs come with no citations, no confidence scoring, no way to verify claims without doing the work yourself. Hence the 40% rework rate.
These aren't model limitations. The models can handle memory, context, verification, and structured output. These are product decisions — or more accurately, product non-decisions. It's easier to ship a chat wrapper than to build the infrastructure that makes a model genuinely useful.
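The gap between the chat-wrapper pattern and even a minimally serious product layer shows up in a toy sketch. Everything below is illustrative, not from any real product: `call_model` is a hypothetical stand-in for whatever model API you use, and the two classes just contrast a stateless wrapper with one that carries persistent context.

```python
from dataclasses import dataclass, field


def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call.
    Echoes the prompt so the contrast below is inspectable."""
    return f"MODEL RESPONSE GIVEN:\n{prompt}"


class StatelessChat:
    """The chat-wrapper pattern: every question starts from zero."""

    def ask(self, question: str) -> str:
        return call_model(question)


@dataclass
class ContextualChat:
    """One product-layer fix: facts persist across sessions and are
    injected into every prompt, so the user never re-explains them."""

    memory: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.memory.append(fact)

    def ask(self, question: str) -> str:
        context = "\n".join(f"- {f}" for f in self.memory)
        return call_model(f"Known context:\n{context}\n\nQuestion: {question}")
```

The stateless version forces the user to re-type "our deploy freeze starts in March" every session; the contextual one carries it forward automatically. A real product needs persistence, retrieval, and pruning on top of this, but the asymmetry is the point.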
The Architecture Nobody Wants to Build
Here's what I've learned building on top of these models: the hard part isn't getting the AI to be smart. It's giving it something to be smart about.
When a model has access to your organization's accumulated knowledge — documents, decisions, conversations, institutional context — it doesn't need you to brief it from scratch. It already knows what matters before you type a word. The blinking cursor goes away, replaced by an AI that can actually start from a position of understanding.
This is an architecture problem, not a model problem. It requires building a knowledge layer that compounds over time. It means structured fact extraction, not just dumping documents into a vector database. It means the product does the work of connecting context to capability — so the user doesn't have to.
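To make "structured fact extraction, not just dumping documents into a vector database" concrete, here is a deliberately naive sketch. The `Fact` record and the rule-based extractor are hypothetical; a production system would use the model itself to extract facts and would capture much richer provenance than a document ID.

```python
import re
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    value: str
    source: str  # provenance: which document this fact came from


def extract_facts(doc_id: str, text: str) -> list[Fact]:
    """Deliberately naive rule-based extractor: turns 'X is Y' sentences
    into structured, attributable facts instead of storing raw text.
    A real system would use the model itself for extraction."""
    return [
        Fact(subj.strip(), "is", val.strip(), doc_id)
        for subj, val in re.findall(r"(\w[\w ]*?) is (\w[\w ]*)", text)
    ]
```

The payoff of the structured form: facts can be deduplicated, filtered by subject, and cited back to their source document, none of which is possible with opaque text chunks in a vector store.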
The 5% of companies BCG identified as AI value generators? They didn't get there by using better models. They got there by building the infrastructure around the models — redesigning workflows, investing in data foundations, and treating AI as a system, not a feature.
The Gap Is the Opportunity
Cannon-Brookes is right that the models are ahead. Way ahead. But I'd frame it differently: the models have been ready. The products haven't caught up.
The next generation of AI tools won't win because the models got smarter. They'll win because someone finally built the product layer that respects what the models can do — and stops making the user compensate for what the product won't.
That's what we're building at Hone Labs. Not a better model. A better memory.
If that resonates, we'd love to show you what it looks like.