Something Big Is Happening. The Path There Is Harder Than You Think.
Matt Shumer’s “Something Big Is Happening” essay has 60 million views and counting. My non-tech friends are texting me about it. If you haven’t read it, the short version: AI is about to eliminate most knowledge work, the timeline is one to five years, and if you’re not already using AI seriously, you’re behind.
He’s not wrong about the direction. He’s wrong about the path.
AI capability is accelerating exponentially. Enterprise deployment is not — because infrastructure, governance, and organizational friction scale differently than model intelligence. That’s the conversation we should be having.
I’ve spent 25 years in enterprise technology. I’ve watched every major platform shift — virtualization, cloud, hybrid infrastructure, and now AI. I currently build and operate AI systems on NVIDIA DGX hardware, advise enterprise technology vendors on go-to-market strategy, and run structured feedback sessions with CxOs evaluating AI infrastructure. I’m not an observer. I’m a practitioner. And the view from inside the machine is a lot messier than Shumer’s essay suggests.
The COVID Analogy Cuts Both Ways
Shumer compares this moment to February 2020, when most people hadn’t yet grasped what COVID would become. It’s a compelling frame. But it also reveals who’s actually been paying attention.
I have a friend who was tracking COVID months before February 2020. He wasn’t waiting for a viral essay to tell him what was happening. He was reading the primary data, watching the trajectory, and drawing his own conclusions while most people were still dismissing it.
That’s where practitioners are with AI right now. We didn’t need a 5,000-word wake-up call. We’ve been inside the complexity for years. Viral essays create awareness. They do not create competence.
The Demo-to-Production Gap Is the Whole Story
Shumer describes telling an AI to build an app, walking away for four hours, and coming back to find it done. I believe him. I’ve had similar experiences. But here’s what that anecdote leaves out: he’s describing a solo developer building a greenfield application with no legacy systems, no compliance requirements, no data governance, and no organizational stakeholders.
That’s not how enterprises work. That’s not even how most consumer software works at scale.
When I moved my Virtual CTO Advisor system from Google Cloud Platform to bare-metal NVIDIA DGX hardware, I discovered something that fundamentally changed how I think about AI architecture. The cloud had been running an invisible reasoning layer — making autonomous decisions about where and how to run workloads, handling auto-scaling, managing data movement, optimizing cost and performance. None of that existed on bare metal. I had to build it explicitly.
That experience became the foundation of what I now call the 4+1 Layer AI Infrastructure Model. On DGX, there is no invisible Layer 2C reasoning plane. If you want workload orchestration, you build it. If you want cost optimization, you design it. If you want data locality controls, you architect them. It’s five layers of complexity that most people never see because the cloud abstracts it away. And it’s the reason “AI can write code” doesn’t translate to “AI can replace your enterprise IT department” on any timeline Shumer is describing.
The gap between capability and deployment is where the real story lives. AI models are genuinely impressive. Deploying them into environments with regulatory requirements, data residency constraints, security policies, existing tech debt, and organizational politics? That’s a different problem entirely — and it’s the problem that actually determines the pace of transformation.
Consumer AI Has Its Own Friction
Shumer’s essay is aimed at non-tech people, but it glosses over the friction they experience every day. He acknowledges that people tried ChatGPT in 2023 and found it lacking, then argues the current models are “unrecognizable” compared with those early versions. That’s partially true. The models are better. But the experience gap is still real.
Consumers hit context window limits and don’t know why the AI suddenly forgot what they were talking about. They get confidently wrong answers and can’t tell the difference. They struggle with inconsistent outputs across sessions. They don’t know how to prompt effectively — and Shumer’s advice to “rephrase what you asked” and “give it more context” is essentially telling non-technical users to learn prompt engineering without calling it that.
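The "suddenly forgot" experience has a mechanical cause worth spelling out: chat systems can only feed the model a fixed token budget, so older turns get silently trimmed. Here is a minimal sketch of that trimming, assuming a hypothetical `trim_to_budget` helper and word counts as a stand-in for real tokenization — production systems use a model-specific tokenizer, but the effect on the user is the same.

```python
def trim_to_budget(messages: list[str], token_budget: int) -> list[str]:
    """Keep only the most recent messages that fit within the token budget.

    Token cost is approximated as whitespace-delimited word count here.
    Whatever falls outside the budget is simply gone: the model never
    sees it, which the user experiences as the AI 'forgetting'.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk newest-to-oldest
        cost = len(msg.split())
        if used + cost > token_budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Nothing in the interface tells the consumer this happened — there is no error, just a model that no longer knows what was said twenty turns ago.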
The paid-versus-free distinction Shumer draws is real, but it’s also revealing. He’s essentially saying the technology only works well if you know which model to select, you’re on the paid tier, and you’re skilled enough to push it into your actual workflows. That’s not a description of technology that’s ready to transform every knowledge worker’s job in one to five years. That’s a description of a power tool that rewards expertise — which is exactly what the last 25 years of enterprise technology have looked like.
Where Shumer Is Right — and Why It Still Matters
I want to be clear: I am not dismissing AI’s transformative potential. I’m building my entire business around it. I use Claude as a strategic thinking partner. I’ve built custom AI systems for architectural validation. I advise vendors whose entire futures depend on enterprise AI adoption. I believe this technology will reshape how knowledge work gets done.
But “reshaping” is not the same as “replacing,” and the timeline is not one to five years for most of the economy. It’s uneven. It’s nonlinear. It’s messy.
Coding is the canary in the coal mine — Shumer is right about that. But as Fortune’s Jeremy Kahn pointed out in his rebuttal, coding has something most knowledge work doesn’t: automated quality signals. Code compiles or it doesn’t. It passes tests or it doesn’t. Law, medicine, finance, and consulting don’t have compilers. They have judgment, context, relationships, and liability. Those aren’t details to be waved past. They’re the reason adoption will be slower and more complex than the hype suggests.
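The "automated quality signal" coding enjoys can be stated in a few lines. This sketch (the `quality_signal` name is mine, purely illustrative) reduces an entire test suite to a binary verdict an AI agent can iterate against — and there is no equivalent command you can run on a legal brief or a diagnosis.

```python
import subprocess
import sys

def quality_signal(test_command: list[str]) -> bool:
    """Run a project's test suite and reduce the result to pass/fail.

    A zero exit code means the checks passed. This cheap, objective,
    repeatable feedback loop is what lets AI coding agents self-correct;
    law, medicine, and consulting have no such oracle.
    """
    result = subprocess.run(test_command, capture_output=True)
    return result.returncode == 0

# e.g. quality_signal([sys.executable, "-m", "pytest"]) for a Python project
```

An agent can call this in a loop — generate, test, revise — thousands of times an hour. That loop, not raw model intelligence, is why coding is ahead of every other knowledge domain.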
Here’s what I’ve seen across 200+ enterprise AI infrastructure sessions: the technology is rarely the bottleneck. The bottleneck is the friction between domain expertise and implementation. It’s the CTO who can’t get the data team aligned with the infrastructure team. It’s the compliance officer who needs to understand what the model is doing before it touches customer data. It’s the vendor whose platform works beautifully in a demo and falls apart when it encounters real enterprise data gravity.
The Honest Version
Shumer says he wrote his essay because the people he cares about deserve “the honest version.” Fair enough. Here’s mine.
AI is transformative. It is not magic. The path from here to widespread transformation runs through infrastructure complexity, organizational change management, regulatory adaptation, and a workforce that needs to develop genuinely new skills — not just sign up for a $20/month subscription.
In every platform shift I’ve seen — virtualization, cloud, containers — the early narrative was about technical capability. The actual transformation was governed by operational economics and organizational readiness. AI will follow the same pattern. The winners won’t be the fastest adopters. They’ll be the ones who understand where authority sits, where risk lives, and how to design for both.
The people who will navigate this well are not the ones who read a viral essay and panic. They’re the ones who start building, start experimenting, and start understanding the actual architecture of these systems — not just what they can do in a demo, but what it takes to make them work in the real world.
Something big is happening. It has been for a while. And the gap between capability and deployment is still where the real story lives.
