In 2023, most of us experienced AI as a spark.
ChatGPT made the possibility visible to everyone at once. It was the first epoch: sudden awareness, broad experimentation, and a lot of shallow usage that still felt magical.
What we are in now is different.
With Claude Code and Opus 4.6, we have entered what I call the Second Epoch. Others are calling it the "second moment." Either way, the shift is the same: AI is moving from novelty to operating model.
This is bigger than the first wave, and far less understood.

Why this moment matters more than the first
The first epoch made AI conversational.
The second epoch makes AI operational.
That distinction is everything.
In epoch one, people asked questions and got answers. In epoch two, people design systems where AI helps produce outcomes repeatedly: better specs, faster experiments, tighter feedback loops, and dramatically shorter paths from idea to shipped work.
This is why I believe we are underestimating the current moment. Most people still evaluate AI like a chatbot. But high-performing teams are no longer using AI as a chatbot. They are using it as a force multiplier embedded in how work gets done.

The biggest misconception: this is about job loss
It is not.
Yes, roles will evolve. That always happens with major technology transitions.
But if your primary lens is "who gets replaced," you will miss the far more important question: what becomes possible for people who previously could not build, test, or ship at this speed?
This is the unlock that matters.
Non-coders can now do meaningful first-pass work that used to require waiting in line for scarce engineering cycles. Product managers can move from concept to prototype in hours. Operators can turn messy inputs into decision-ready outputs in one working session. Founders can pressure-test ideas before burning a sprint.
That is not elimination. That is capability expansion.

The harness is everything
One idea I strongly agree with from recent X discourse is this: the model is not the whole story. The harness is everything.
You are not getting poor results because you picked the "wrong" frontier model. You are getting poor results because your environment is weak.
By environment, I mean:
- the context you feed the model
- the constraints you define
- the tools and files it can access
- the review loop you use
- the standards you enforce before output becomes action
A strong model inside a weak harness creates confident mediocrity.
A solid model inside a strong harness creates leverage.

This is why some teams appear to be moving 10x faster while others report that "AI didn't work for us." They are not experiencing a different reality. They are running different systems.

What changes for product and delivery leaders
If you lead product, engineering, or operations, the second epoch requires a mindset shift from tool adoption to workflow redesign.
Here are the practical moves that matter now:
1) Move from one-off prompting to repeatable playbooks
Stop treating each interaction as a blank page.
Build reusable prompts, input templates, and quality checklists for recurring tasks: PRDs, technical discovery, launch plans, user research synthesis, outbound messaging, and executive updates.
Repeatability compounds.
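A playbook can be as simple as a template plus a checklist that travels with it. A minimal sketch, with a hypothetical PRD playbook as the example:

```python
# Hypothetical sketch of a reusable playbook: a prompt template plus a
# quality checklist, so a recurring task never starts from a blank page.

PRD_PLAYBOOK = {
    "template": (
        "Draft a PRD for: {feature}\n"
        "Target user: {user}\n"
        "Constraints: {constraints}\n"
        "Output sections: Problem, Goals, Non-goals, Open questions"
    ),
    "checklist": [
        "Problem statement names a specific user",
        "Every goal is measurable",
        "Open questions are listed, not hidden",
    ],
}

def fill(playbook: dict, **inputs: str) -> str:
    """Render the template; the checklist stays attached for review."""
    return playbook["template"].format(**inputs)

prompt = fill(
    PRD_PLAYBOOK,
    feature="self-serve billing",
    user="ops managers",
    constraints="ship in one sprint",
)
print("Open questions" in prompt)  # → True
```

The same shape works for launch plans, research synthesis, or executive updates: swap the template and the checklist, keep the mechanism.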
2) Keep humans on judgment, push AI into throughput
Do not delegate strategic judgment.
Do delegate structure, first drafts, synthesis, alternatives, and artifact formatting. Let humans spend more time on prioritization, tradeoffs, sequencing, and market signal interpretation.
That is where your advantage lives.
3) Build tighter loops, not bigger plans
The teams winning in this moment are not writing bigger roadmaps. They are running tighter loops:
- hypothesis
- draft/prototype
- feedback
- revision
- decision
The cycle time is the strategy.
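The loop above can be expressed as a literal loop; this is an illustrative skeleton, not a prescribed process:

```python
# Illustrative sketch of the tight loop: each pass takes a hypothesis
# through draft, feedback, and revision, ending in a decision.

def run_loop(hypothesis: str, max_cycles: int = 3) -> list[str]:
    log = []
    for cycle in range(1, max_cycles + 1):
        draft = f"prototype v{cycle} of: {hypothesis}"        # draft/prototype
        log.append(f"feedback on {draft}")                    # feedback
        hypothesis = f"{hypothesis} (revised after cycle {cycle})"  # revision
    log.append("decision: ship or kill")                      # decision
    return log

steps = run_loop("users will self-serve billing")
print(len(steps))  # → 4
```

Shrinking `max_cycles`' wall-clock cost is the whole game: three fast cycles beat one exhaustive plan.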
4) Train people, not just tooling
Most organizations are over-investing in subscriptions and under-investing in capability.
If your team has access to advanced models but no shared method for using them, you have bought potential, not performance.
Make AI fluency operational: coaching, examples, peer review, and explicit standards.
5) Measure outcome velocity
Track how quickly your team gets from question to decision and from decision to validated learning.
The second epoch is not won on prompt cleverness. It is won on organizational learning speed.
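Outcome velocity can be measured with nothing more than dates. A minimal sketch, with made-up dates for illustration:

```python
from datetime import date

# Hypothetical sketch: track outcome velocity as days from question to
# decision and from decision to validated learning, per initiative.

def cycle_days(question: date, decision: date, validated: date) -> dict:
    return {
        "question_to_decision": (decision - question).days,
        "decision_to_learning": (validated - decision).days,
    }

metrics = cycle_days(date(2025, 1, 6), date(2025, 1, 9), date(2025, 1, 20))
print(metrics)  # → {'question_to_decision': 3, 'decision_to_learning': 11}
```

If those two numbers trend down quarter over quarter, the harness is working, regardless of which model sits inside it.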

Why non-coders should pay very close attention
If you are not a software engineer, this may be the most important technology window of your career.
For the first time, complex digital work is becoming legible and accessible to professionals in product, design, marketing, sales, customer success, and operations without requiring years of coding fluency first.
That does not mean expertise no longer matters.
It means expertise in your domain now pairs with AI to produce outputs that were previously gated by implementation bottlenecks.
In other words: your judgment has more reach than it did last year.
The strategic risk is not using AI imperfectly
The strategic risk is waiting until everyone else has built their harnesses.
In every meaningful platform shift, early winners are rarely those with the best raw technology. They are the ones who learn fastest how to integrate it into daily execution.
That is where we are now.
Not at the end of the curve. At the beginning of organizational differentiation.

My call
The release of ChatGPT marked the first epoch.
The release cadence and workflow depth unlocked by Claude Code with Opus 4.6 mark the second.
This second epoch is more consequential because it changes not only what AI can say, but what teams can reliably do.
So if you are evaluating AI today, don’t ask only: "How good is the model?"
Ask: "How good is our harness?"
Because in this moment, the harness is the product. And the teams that build it well will define the next phase of our economy.