In an insightful recent video (https://www.youtube.com/watch?v=kVPVmz0qJvY), Nate B Jones hits the nail on the head about a major scaling bottleneck we are facing today. The core problem is stark but accurate: companies are eagerly adopting general-purpose AI agents to move faster, but in doing so they are papering over broken data, messy schemas, and undefined workflows. The result is an architecture that generates output at a 100x rate feeding an organizational structure that can only review it at a 3x rate.
This brilliantly explains a glaring paradox I’ve noticed across the landscape in 2026. Industry reports show that while AI code generation is pushing above 80% in many companies, DORA metrics (deployment frequency, lead time, etc.) refuse to move beyond a 20% improvement. We are writing code faster than ever before, but our actual ability to ship, maintain, and observe that software is bottlenecked by the humans left holding the bag.
As the CTO of Dialectica, and having spent the last decade scaling engineering organizations (like taking Orfium from zero to 800 people), I’ve seen firsthand that technology is only as good as the system it operates within. Nate’s video perfectly aligns with the principles I’ve been writing about on my blog. Here is why the agentic revolution requires us to fundamentally rethink our architecture, our teams, and our processes.
1. You Can’t “Vibe-Code” a Business Process
Nate warns against the temptation to point an agent at a CRM and ask it to just build one. He urges us to keep our core business logic deterministic, memorably comparing agent-driven workflows to “ripping up your railroad and sticking your train on the ground.”
This directly mirrors my reflections in From DeepGraphs to MCP: How I Predicted the Agentic Web a Decade Ago (and How I Got the Engine Wrong). A decade ago, during my PhD at NTUA, my research on DeepGraphs proposed that digital objects must broadcast their affordances via strict Finite State Machines. Today, the Model Context Protocol (MCP) gives agents incredible probabilistic flexibility, but it often lacks that deterministic safety. You cannot let an LLM intuitively “guess” the flow of a customer transaction. We must use AI for what it’s exceptional at—tool calling and text generation—while hardwiring the rails of our business logic.
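The "deterministic rails" idea can be made concrete with a small sketch: a finite state machine that broadcasts its legal next actions to the agent and rejects anything else. This is a minimal illustration, not code from DeepGraphs or MCP; the `OrderState` names and actions are hypothetical.

```python
from enum import Enum, auto

class OrderState(Enum):
    CART = auto()
    PAYMENT = auto()
    CONFIRMED = auto()
    SHIPPED = auto()

# Deterministic rails: the only transitions the business process allows.
# The LLM may *propose* an action; the FSM decides whether it is legal.
TRANSITIONS = {
    (OrderState.CART, "checkout"): OrderState.PAYMENT,
    (OrderState.PAYMENT, "confirm_payment"): OrderState.CONFIRMED,
    (OrderState.CONFIRMED, "ship"): OrderState.SHIPPED,
}

class OrderWorkflow:
    def __init__(self):
        self.state = OrderState.CART

    def allowed_actions(self):
        """Affordances broadcast to the agent: only the legal next moves."""
        return [action for (state, action) in TRANSITIONS if state == self.state]

    def apply(self, action: str) -> OrderState:
        """Reject any agent-proposed action the FSM does not permit."""
        key = (self.state, action)
        if key not in TRANSITIONS:
            raise ValueError(f"Illegal action {action!r} in state {self.state.name}")
        self.state = TRANSITIONS[key]
        return self.state
```

The agent still does what it is good at (choosing among `allowed_actions()` and generating text around them), but it can never intuit its way from cart to shipped without passing through payment.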
2. The “Rule of Two” is the Antidote to the Review Bottleneck
If our agents are producing at a massive scale, how do we solve the human review bottleneck? The answer isn’t a solo “10x developer” relying entirely on AI to pump out tickets.
As I outlined in my recent strategy, Building Software with the Rule of Two: A New Strategy for 2026+, the highest-leverage unit in the new AI-native SDLC is exactly two humans working with a swarm of agents. One human gets tunnel vision and burns out trying to evaluate the massive, sometimes hallucinated output of AI systems. Two humans, however, provide instant peer review, robust architectural debate, and the nuanced product judgment that AI currently lacks. They stop being the primary code-typists and instead become the conductors of the agentic orchestra, effectively matching the throughput of the machines they manage.
3. Engineering the 20x System
Finally, Nate drops a few critical commandments for AI deployment: audit before you automate, fix your data, redesign your org, and build observability.
These are the exact prerequisites for what I call the 20x Team. In The 20x Team Manifesto, I argue that collective impact comes from engineering the system, not just managing the people. You cannot drop an AI agent into a highly bureaucratic, low-trust environment and expect a 20x return. You need extreme ownership, psychological safety, and incredibly clean data feedback loops. AI is a cognitive multiplier, but zero multiplied by anything is still zero. If your underlying data schemas are garbage, your AI output will just be highly scalable garbage.
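The "fix your data" prerequisite can be made concrete with a deterministic quality gate: agents only ever ingest records that pass schema checks, and everything else is quarantined for humans. This is a minimal sketch; the `CustomerRecord` fields and plan names are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    email: str
    plan: str

VALID_PLANS = {"free", "pro", "enterprise"}

def validate(record: CustomerRecord) -> list:
    """Return a list of schema violations; an empty list means the record is clean."""
    errors = []
    if not record.customer_id:
        errors.append("missing customer_id")
    if "@" not in record.email:
        errors.append("malformed email")
    if record.plan not in VALID_PLANS:
        errors.append(f"unknown plan {record.plan!r}")
    return errors

def gate(records):
    """Split records into (clean, quarantined) before any agent sees them."""
    clean, quarantined = [], []
    for r in records:
        (clean if not validate(r) else quarantined).append(r)
    return clean, quarantined
```

The point is the ordering: the deterministic check runs before the multiplier, so the agent swarm amplifies clean data instead of scaling the garbage.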
The era of “moving fast and skipping permissions” is over. The winners in 2026 and beyond will be the teams that take the foundational work seriously: clean your data, build deterministic rails for your probabilistic agents, and empower your human engineers with the right team dynamics to architect the future.
Let’s build systems that last, not just scripts that demo well.