Everyone Is Building Layer 2C — They Just Aren’t Calling It That
Over the last week, I noticed something interesting.
A surprising number of infrastructure, cloud, Kubernetes, and AI vendors all announced products solving variations of the same operational problem. The notable part is that almost none of these announcements were fundamentally about the models themselves. The industry conversation is already shifting away from raw model capability and toward operational control.
Google is talking about agent identity, runtime defense, and AI-aware IAM. Cloudflare is talking about versioning and traceability for AI agents. IBM is talking about an AI operating model. Kubernetes is evolving policy enforcement and resource orchestration for increasingly autonomous infrastructure. Broadcom and VMware are repositioning VCF as an operational platform for production AI. BASF is showing what production AI looks like when agentic systems operate inside a digital twin with real supply chain constraints.
Different companies. Different products. Different market segments. But the same architectural pressure is showing up everywhere. The industry is slowly realizing that enterprise AI is not fundamentally a model problem. It is a control problem.
The Industry Is Moving Beyond Layer 0 Obsession
For the past two years, most AI infrastructure conversations centered around Layer 0: GPUs, networking, storage fabric, memory bandwidth, clusters, accelerators, and raw compute capacity. Those things matter. Layer 0 is necessary fuel, but it is not the engine.
The harder enterprise problem begins once AI systems become integrated into operational workflows. That’s where the conversation shifts from whether the model can perform a task to whether the organization can trust the system operationally.
That is where Layer 2C emerges.
Layer 2C is the reasoning, governance, and authority placement layer inside the operational plane. It is where the enterprise has to answer uncomfortable questions about where control lives, who owns the decision, what gets logged, what can be replayed, what is policy-driven, what is model-driven, and what the AI can suggest versus what deterministic systems must enforce.
That is the part of the stack many vendors are now backing into, even if they are not using the same language.
Google: AI Agents Need Identity and Runtime Governance
Google Cloud announced new IAM capabilities specifically designed for AI agents, including agent identity systems, gateways, runtime defense, and policy enforcement.
Google Cloud IAM updates for AI agents
This is an important signal. Identity systems historically existed for humans, applications, and infrastructure services. Now Google is explicitly treating AI agents as operational entities requiring scoped permissions, runtime controls, policy boundaries, and governance.
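To make the idea concrete, here is a minimal sketch of what treating an agent as a first-class principal with scoped permissions could look like. This is not Google's API; the names (`AgentIdentity`, `authorize`, the `tickets:*` scopes) are hypothetical, purely to illustrate the pattern of a deterministic permission gate around a non-human actor.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A non-human principal with an accountable owner and explicit scopes."""
    agent_id: str
    owner: str                       # the human or team accountable for this agent
    scopes: frozenset = frozenset()  # e.g. {"tickets:read", "tickets:comment"}

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Deterministic policy gate: the agent acts only within granted scopes."""
    return action in agent.scopes

# A triage agent that may read and comment on tickets, but nothing else.
triage_bot = AgentIdentity(
    agent_id="agent-triage-01",
    owner="platform-team",
    scopes=frozenset({"tickets:read", "tickets:comment"}),
)

assert authorize(triage_bot, "tickets:comment")     # inside scope: allowed
assert not authorize(triage_bot, "tickets:delete")  # outside scope: denied
```

The key design point is that the boundary is enforced in deterministic code outside the model: whatever the agent reasons or intends, the permission check is the same every time.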
That is not a model conversation. That is a control-plane and authority-placement conversation.
Cloudflare: Git for AI Outputs
Cloudflare introduced Artifacts, a Git-like versioning system for AI agents.
Cloudflare Artifacts announcement
This one jumped out at me immediately because versioning is not just a developer convenience. Versioning is about replayability, auditability, traceability, and rollback. In other words, operational trust.
The second enterprises started asking what changed, why the AI did something, whether the result could be reproduced, and whether it could be reverted, the industry was inevitably going to rediscover deterministic operational controls.
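The mechanics behind those questions are old and well understood: an append-only, content-addressed history. The following toy sketch (not Cloudflare's implementation; `ArtifactStore` and its methods are invented for illustration) shows how hashing each agent output gives you replay, audit, and rollback almost for free.

```python
import hashlib
import json

class ArtifactStore:
    """Append-only, content-addressed history of agent outputs.

    Every output is hashed and recorded, so "what changed, why, and can we
    revert it?" is always answerable from the log itself.
    """
    def __init__(self):
        self._history = []  # list of (version_id, metadata, content)

    def commit(self, content: str, metadata: dict) -> str:
        version_id = hashlib.sha256(
            (content + json.dumps(metadata, sort_keys=True)).encode()
        ).hexdigest()[:12]
        self._history.append((version_id, metadata, content))
        return version_id

    def get(self, version_id: str) -> str:
        for vid, _, content in self._history:
            if vid == version_id:
                return content
        raise KeyError(version_id)

    def rollback(self) -> str:
        """Discard the latest version and return the previous content."""
        self._history.pop()
        return self._history[-1][2]

store = ArtifactStore()
v1 = store.commit("reorder 40 units", {"agent": "planner", "prompt_rev": "a1"})
v2 = store.commit("reorder 90 units", {"agent": "planner", "prompt_rev": "a2"})

assert store.get(v1) == "reorder 40 units"     # replayable
assert store.rollback() == "reorder 40 units"  # revertible
```

Note that the metadata travels with the content: recording which agent and which prompt revision produced an output is what turns a version store into an audit trail.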
Again, not a model problem. A governance problem.
Kubernetes Is Quietly Becoming an AI Operating Substrate
The Kubernetes v1.36 announcements were particularly revealing. Kubernetes introduced immutable admission policy enforcement, server-side sharded list/watch scalability, and continued expansion of Dynamic Resource Allocation.
Immutable admission policies in Kubernetes v1.36
Kubernetes server-side sharded list and watch
Kubernetes Dynamic Resource Allocation updates
Individually, these look like incremental infrastructure updates. Collectively, they tell a larger story. Kubernetes is evolving from container orchestration into a distributed AI infrastructure control plane.
The platform is increasingly handling accelerator scheduling, policy enforcement, resource arbitration, workload isolation, operational governance, and large-scale distributed coordination. That is Layer 2 behavior. And as AI agents become operational actors, the need for Layer 2C becomes more obvious.
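The shape of admission-style enforcement is easy to sketch. Kubernetes actually expresses these rules through its own policy resources, not Python, so treat the following as a language-neutral illustration of the pattern: an immutable set of policies that every workload request must pass before it touches the cluster.

```python
# Immutable policy set (a tuple cannot be appended to or reordered at
# runtime), mirroring the idea of locked-down admission policies.
POLICIES = (
    lambda req: req.get("gpus", 0) <= 8,                # cap accelerator requests
    lambda req: req.get("namespace") != "kube-system",  # protect system namespaces
)

def admit(request: dict) -> bool:
    """Admission gate: a request is scheduled only if every policy passes."""
    return all(policy(request) for policy in POLICIES)

assert admit({"namespace": "ml-team", "gpus": 4})       # within limits: admitted
assert not admit({"namespace": "ml-team", "gpus": 64})  # over the cap: rejected
```

The point is not the rules themselves but where they live: in deterministic enforcement machinery that evaluates every request the same way, regardless of whether a human, a controller, or an AI agent submitted it.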
VMware and the Shift from Virtualization to AI Operations
Broadcom and VMware released a flood of VCF 9.1 announcements this week. The messaging was notable because VMware is no longer positioning VCF primarily as virtualization, hybrid cloud, or SDDC. The messaging is now private cloud for production AI.
The interesting part is not the AI messaging itself. It is why VMware is moving there. Operational complexity is becoming the dominant enterprise AI challenge. Not model availability. Not token generation. Operational coordination.
That is exactly where Layer 2C lives.
IBM Is Calling It an AI Operating Model
IBM announced what it calls an AI operating model.
IBM Think 2026 AI operating model announcement
This may be the clearest enterprise signal of all. The industry is gradually admitting that AI is no longer just a tool. It is becoming an operational system that requires governance, orchestration, lifecycle management, runtime controls, policy frameworks, and organizational ownership.
In other words, an operating model.
BASF Shows What Production AI Actually Looks Like
One of the most important AI stories this week was not about a new model. It was BASF using digital twins and agentic algorithms to optimize thousands of supply chain decisions.
BASF digital twin and agentic supply chain modeling
This matters because it looks much more like real enterprise AI than most public demos. BASF is not turning an LLM loose on its global supply chain. The AI exists inside a governed operational environment with production models, inventory constraints, logistics workflows, simulation boundaries, and measurable business outcomes.
The digital twin becomes the operational system of record for reasoning. The agentic system reasons inside that environment.
That distinction is critical. The operational system still owns state, constraints, validation, and accountability. The AI contributes reasoning, optimization, and scenario exploration.
That is a fundamentally different pattern than turning an agent loose and hoping for the best. And it increasingly looks like where enterprise AI is actually heading.
Deterministic-Code-in-the-Loop Is the Missing Piece
This is where I think the industry conversation still gets fuzzy. Most vendors now recognize the need for governance, runtime controls, identity, policy enforcement, observability, and orchestration. But many architectures still implicitly assume the AI itself remains the primary execution engine.
That is where Deterministic-Code-in-the-Loop becomes important.
LLMs are probabilistic reasoning systems. Enterprise operational systems cannot be. That does not mean AI is untrustworthy. It means trust has to be constructed architecturally.
The model can summarize, classify, plan, generate recommendations, infer intent, and propose actions. But deterministic systems still need to own execution, state transitions, approvals, rollback, policy enforcement, evidence generation, auditability, and operational constraints.
That distinction matters enormously because many AI failures are not model failures. They are authority failures. The AI slowly drifts from an assistant making recommendations into a system making operational decisions nobody can fully explain.
That is authority drift.
The solution increasingly looks like AI reasoning wrapped in deterministic workflows, enforced by policy systems, validated through observable state, and operating within constrained execution boundaries.
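Stripped to its essentials, the pattern fits in a few lines. In this sketch (all names hypothetical), the model is only allowed to produce a `Proposal`; a deterministic `execute` function owns validation, policy enforcement, state change, and the evidence trail.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """What the model may produce: a suggestion, never a direct action."""
    action: str
    quantity: int
    rationale: str

MAX_QUANTITY = 100                        # hard operational constraint, owned by code
ALLOWED_ACTIONS = {"reorder", "transfer"}
audit_log = []                            # deterministic evidence trail

def execute(proposal: Proposal) -> bool:
    """Deterministic gate: validate, enforce policy, record evidence.

    The model proposes; this function decides. State changes only here.
    """
    approved = (proposal.action in ALLOWED_ACTIONS
                and 0 < proposal.quantity <= MAX_QUANTITY)
    audit_log.append({
        "action": proposal.action,
        "quantity": proposal.quantity,
        "rationale": proposal.rationale,
        "approved": approved,
    })
    return approved

# An in-bounds proposal executes; an out-of-bounds one is rejected.
# Either way, a replayable audit record is produced.
assert execute(Proposal("reorder", 40, "stock below safety threshold"))
assert not execute(Proposal("reorder", 5000, "model projected demand spike"))
assert len(audit_log) == 2
```

Notice what the model never touches: the constraint values, the approval logic, and the log. Those belong to the deterministic layer, which is exactly the authority-placement decision Layer 2C is about.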
In other words, Deterministic-Code-in-the-Loop is becoming the operational implementation pattern for Layer 2C.
BASF’s digital twin model may be one of the clearest real-world examples. The intelligence exists inside a governed operational system, not outside of it. The AI can reason, optimize, and recommend. But the deterministic operational environment constrains the problem, validates the choices, and ties outcomes back to business reality.
That is the pattern.
Not AI replacing operational systems. AI augmenting deterministic operational systems that still own execution authority.
That is where enterprise AI starts becoming trustworthy at scale. And if you step back and look closely, that is exactly where the entire infrastructure market seems to be heading.
