In 1997, Garry Kasparov lost to IBM's Deep Blue. The chess world declared human relevance in the game finished, but Kasparov did something nobody expected.
Instead of retreating, he invented a new format he called Advanced Chess: human-computer teams competing against each other and against machines playing alone.
The results upended every assumption. A weak human player with a decent machine and a good process beat both the strongest grandmasters and the strongest computers. Kasparov's name for these hybrid players was centaurs.
The strongest chess entities were now mediocre players who'd learned exactly where the machine's judgment ended and theirs began.
The same pattern is showing up in product management. Not at the grandmaster level, but at the twelve-minute level.
Most product leaders are playing the wrong game.
The Twelve-Minute Verdict
A product leader posted on LinkedIn recently. He'd run out of usage on his AI subscription in twelve minutes. Couldn't finish a single task.
The day before, he'd tried connecting an MCP integration and it didn't work. Today he'd gotten Confluence hooked up and eventually made one document. Both sessions maxed out his plan.
He concluded the technology doesn't work.
He's not alone. 42% of companies abandoned most of their AI initiatives in 2025, up from 17% the year before. Confidence in AI is dropping even as usage rises.
The dominant narrative among frustrated users follows a familiar script: "I gave it a fair shot. It failed."
But the data tells a split story. A survey of 1,750 tech workers by Lenny Rachitsky found 55% said AI exceeded their expectations. Over 50% save at least half a day per week. Among product managers specifically, 63% save four or more hours weekly.
Same tools. Same era. Wildly different outcomes.
The question worth asking is what separates the twelve-minute quitters from the four-hour savers.
The Jagged Frontier
In 2023, researchers at Harvard Business School and BCG ran a study with 758 consultants. They gave them real tasks and randomly assigned some to use AI.
The results cut both ways. On one set of tasks, consultants using AI produced 40% higher-quality work and finished 25% faster, with the bottom half of performers improving by 43%. But on a different set of tasks, consultants using AI were 19 percentage points more likely to get the wrong answer than those working without it.
Same tool. Same people. Dramatically different results depending on the task.
Ethan Mollick, one of the researchers behind the study, called it the jagged technological frontier. AI's capability boundary is invisible and unintuitive. Some tasks that look hard are easy for AI. Some that look simple are beyond it.
The boundary is nothing like what you'd guess.
The frustrated product leader's mistake was starting in exactly the wrong place. He opened with MCP integrations and developer tooling. For a non-technical user, that's far outside the frontier. It's like evaluating a chainsaw by trying to do surgery with it.
The people who get value from AI start with tasks clearly inside the frontier. They build calibration for where the boundary sits. Then they push outward.
The study found two patterns among successful users. "Centaurs" divided tasks between themselves and AI, choosing deliberately which parts to hand off. "Cyborgs" integrated AI continuously into their entire workflow. Both patterns worked.
Handing everything to AI and trusting the output didn't work. Neither did refusing to use it at all.
Where the Line Actually Is
For product leaders, the frontier is better mapped than in most disciplines. The data is specific about what works.
PRDs and specs. The number one AI use case for PMs, at 21.5% in the Rachitsky survey. A two-hour PRD becomes a twenty-minute first draft that you refine with your own judgment.
One PM I spoke with walked me through her process. She feeds the AI her customer interview notes, competitive positioning, meeting transcripts, and technical constraints. The AI produces a structured draft. She then spends her time on the parts that require actual decisions: which features to cut, which trade-offs to make, how to sequence the rollout. The document gets better because she's spending her judgment where it matters instead of formatting tables.
Prototyping. Number two at 19.8%. PMs going from idea to clickable prototype without waiting on design sprints.
This is the one that changes meeting dynamics. Instead of describing a feature in a slide deck and hoping stakeholders imagine the same thing, a PM can walk into a review with a working prototype built that morning. The conversation shifts from "what would this look like?" to "does this solve the problem?"
Speed of validation is the actual constraint on most product teams. This removes it.
Communication. Number three at 18.5%. Exec summaries, stakeholder updates, meeting prep. The kind of work that eats hours but doesn't require original thinking.
Beyond those top three, the frontier includes competitive research, customer feedback synthesis, and acceptance criteria. Synthesizing earnings calls into structured analysis, extracting patterns from hundreds of support tickets, translating product intent into engineering specs. Work that used to take an analyst a week now takes an afternoon.
Every one of these tasks shares a quality. They reward the judgment you already have. You know what a good PRD looks like. You know what questions matter in a competitive analysis. You know which user feedback signals to take seriously.
AI produces the first draft. Your expertise makes it right.
Now here's where the frontier gets jagged.
AI is terrible at the strategic judgment that defines your job: the decisions about what to build, what to kill, how to sequence, and the invisible dynamics of stakeholder politics that no model can read.
It doesn't understand unspoken user needs. It can't make trade-offs that require knowing your company's history, your team's strengths, or your market's timing.
Those skills, the ones that took you years to build, are exactly the ones AI can't replicate. HBR published a study in February 2026 that made the point directly. The skills that make AI work are problem definition, evaluation, experimentation, and integration. All four are core product management competencies.
The irony should sting a little. Product leaders already have the skills that make AI work. Most just aren't applying them.
To me, the biggest uncracked opportunity sits in user research synthesis. Current usage is just 4.7%, but desired usage is nearly 32%. A gap of over 27 percentage points. The PMs who figure out how to work the frontier for user research will have a serious edge.
The Training Gap Is the Real Gap
Here's a number that should make you angry. According to Productboard's CPO Survey, 85% of companies are investing in AI tools. Only 2% are prioritizing talent development.
Read that again. 85 to 2.
Globally, 56% of workers have received no AI training at all. Companies are buying subscriptions, dropping tools into people's laps, and then concluding the tools don't work when nobody uses them well.
McKinsey's research confirmed it: 46% of leaders cite talent skill gaps as a reason for slow AI deployment. The gap is not about access. Half of frontline employees have AI tools available and still don't use them regularly.
BCG's data makes the fix somewhat obvious. Companies that invest 70% of their AI resources in people and processes, not just the technology itself, see two to three times better outcomes. And Microsoft's Work Trend Index added a detail that should get product leaders' attention: 71% of business leaders now prefer a less-experienced candidate with AI skills over a more-experienced one without them.
The twelve-minute verdict is a symptom, not a diagnosis. The product leader who gave up after twelve minutes was failed by an industry that sold him a subscription and forgot to show anyone what to do with it.
The Worst It Will Ever Be
AI today is the worst it will ever be.
If all progress stopped tomorrow, it would still be saving power users hours per week and producing measurably better outcomes on the tasks inside the frontier. The tools are already good enough to matter. They're just not good enough to work without you.
The real cost of waiting is missing the learning curve.
Kasparov's centaurs won Advanced Chess tournaments not because they had the best computers. They won because they'd spent the most time learning where the frontier was and how to work the boundary. They'd built intuition about when to trust the machine and when to override it. That intuition was the product of reps, not specs.
BCG's research on the widening gap makes this concrete. Only 5% of companies are generating significant value from AI. But that 5% is pulling away fast. "As high performers compound their advantages and AI capabilities continue advancing," BCG wrote, "the cost of strategic hesitation grows."
Product leaders have spent their careers building judgment about what to build and why. That judgment is exactly the skill AI can't replicate. But judgment without the tools is like Kasparov playing alone against a centaur team.
You already know what right looks like. You've spent years calibrating your sense for which problems matter, which approaches will ship, and which trade-offs your team can afford.
That's the centaur advantage. Your domain expertise is what makes AI useful. But only if you're willing to be a beginner at using it.
Twelve minutes is enough time to give up. It's also enough time to write your first AI-assisted PRD, compare it against your instincts, and start mapping the frontier for yourself.
The people who made that second choice are already a year ahead.