In my previous post, I explored how the Model Context Protocol (MCP) vindicated the architecture I proposed a decade ago in my PhD thesis. We established that the structure of the “Agentic Web”—servers broadcasting capabilities to generic clients—was the correct vision.
But we also established that the engine powering it changed. I bet on Formal Logic (Semantic Web, SWRL); the world chose Probabilistic Reasoning (LLMs).
We traded precision for flexibility. And for the last two years, that trade has paid off massively. LLMs can handle messy inputs, infer context, and “figure things out” in a way that rigid logic never could.
However, as we move from “Chatbots” to “Agents”—software that actually does things—we are hitting a wall. We are discovering that probability is not enough. When an agent is managing your bank account or deploying code, “99% confident” is 100% terrifying.
We need to fix the engine. We don’t need to abandon LLMs, but we need to anchor them.
This post is a proposal to upgrade the Model Context Protocol. It is a blueprint for MCP 2.0: a hybrid architecture that re-introduces the best parts of my “DeepGraphs” thesis—Structured Definitions, Finite State Machines, and Symbolic Logic—to create agents that are not just smart, but safe.
The Problem: The Hallucinating Agent
Currently, MCP works like a conversation between a polite stranger and a helpful librarian.
- The Server (Librarian): “Here is a list of tools I have. I have a `buy_book` tool and a `search_book` tool.”
- The Agent (Stranger): “Great, I’ll figure out which one to use based on their descriptions.”
The “glue” holding this together is English text (docstrings). The agent reads the description and guesses the correct workflow.
This fails in three critical ways:
- Context blindness: The agent doesn’t know when a tool is valid. It might try to call `buy_book` before `search_book` because nothing explicitly stops it.
- Semantic drift: One server’s `process_order` might mean “bill the customer,” while another’s means “ship the box.” Without a shared vocabulary, the agent is guessing at meaning.
- The Confidence Trap: When the LLM is unsure, it hallucinates. It might invent parameters that don’t exist or assume a workflow that isn’t there.
To fix this, we need to re-introduce structure. We need to move from Pure Probability to Neuro-Symbolic AI.
Proposal I: Structured Definitions (The Return of Vocabularies)
In the Semantic Web era, we used Ontologies (like schema.org) to define what things were. If I sent you an object tagged schema:Book, you knew exactly what properties it had (ISBN, Author, Title).
Today, APIs are “schema-less” in meaning. A JSON object is just a bag of data.
The Fix:
We should introduce Common Vocabularies into MCP Resource definitions. We don’t need the strictness of OWL (Web Ontology Language), but we need shared “Types” that exist above the server level.
How it looks in MCP 2.0
Instead of an MCP server simply saying:
```json
{
  "name": "amazon_product",
  "description": "A book object from Amazon"
}
```
It should reference a common vocabulary:
It should reference a common vocabulary:

```json
{
  "name": "amazon_product",
  "type": "http://common-vocab.org/Product/Book",
  "affordances": ["http://common-vocab.org/Action/Buy"]
}
```
Why this matters:
If an Agent has learned how to buy a Product/Book on one website, it immediately knows how to buy it on any website that uses that vocabulary. We stop training agents on specific APIs and start training them on concepts. This is true “Transfer Learning” for agents.
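As a sketch of what this transfer looks like in client code, the snippet below resolves a tool by its shared affordance URI rather than by its server-specific name. The `ToolDef` shape, the `common-vocab.org` URIs, and the tool names are all illustrative assumptions, not part of today's MCP spec.

```typescript
// Hypothetical tool definition carrying shared-vocabulary affordances.
// This shape is an assumption for illustration, not the real MCP schema.
interface ToolDef {
  name: string;          // server-specific name
  affordances: string[]; // shared vocabulary URIs
}

// Find any tool offering a given shared affordance, regardless of
// which server defined it or what it is called locally.
function resolveByAffordance(
  tools: ToolDef[],
  affordance: string,
): ToolDef | undefined {
  return tools.find((t) => t.affordances.includes(affordance));
}

// Two different servers, two different local names, one shared concept.
const amazonTools: ToolDef[] = [
  { name: "amazon_buy_item", affordances: ["http://common-vocab.org/Action/Buy"] },
];
const bookshopTools: ToolDef[] = [
  { name: "bs_purchase", affordances: ["http://common-vocab.org/Action/Buy"] },
];

const buy = "http://common-vocab.org/Action/Buy";
console.log(resolveByAffordance(amazonTools, buy)?.name);   // "amazon_buy_item"
console.log(resolveByAffordance(bookshopTools, buy)?.name); // "bs_purchase"
```

The agent's planning logic only ever mentions the `Action/Buy` concept; the local tool name is a lookup detail.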
Proposal II: The Protocol as a State Machine (FSM)
This is the core of my DeepGraphs thesis.
Right now, MCP is “stateless.” The server presents a flat list of tools. It is up to the Agent to remember that step A must happen before step B.
But complex systems are rarely flat. They are Finite State Machines (FSMs). You cannot checkout an empty cart. You cannot refund a transaction that hasn’t happened.
The Fix:
The MCP server should not just broadcast tools; it should broadcast State. The protocol should include a dynamic graph of valid transitions.
How it looks in MCP 2.0
When an Agent connects, the Server sends the current state:
```json
{
  "status": "active",
  "current_state": "BASKET_EMPTY",
  "available_tools": [
    {
      "name": "search_book",
      "transition_to": "BROWSING"
    }
  ],
  "disabled_tools": [
    {
      "name": "checkout",
      "reason": "Requires state BASKET_FILLED"
    }
  ]
}
```
If the Agent adds a book, the Server response changes the map:
- Old State: `BASKET_EMPTY`
- New State: `BASKET_FILLED`
- New Tools Unlocked: `checkout`, `empty_basket`
Why this matters:
This creates Guardrails. We are no longer relying on the LLM to “be smart enough” to know it can’t checkout. The protocol physically prevents the Agent from making a logic error. We reduce the “search space” for the LLM, making it faster, cheaper, and safer.
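A minimal sketch of such a server-side state machine follows. The state names and tools mirror the JSON example above; the transition table and `Session` API are assumptions for illustration, not a real MCP 2.0 interface.

```typescript
type State = "BASKET_EMPTY" | "BROWSING" | "BASKET_FILLED";

// Which tools are valid in each state, and which state each one leads to.
const transitions: Record<State, Record<string, State>> = {
  BASKET_EMPTY:  { search_book: "BROWSING" },
  BROWSING:      { search_book: "BROWSING", add_to_basket: "BASKET_FILLED" },
  BASKET_FILLED: { checkout: "BASKET_EMPTY", empty_basket: "BASKET_EMPTY" },
};

class Session {
  constructor(public state: State = "BASKET_EMPTY") {}

  // Only the tools valid in the current state are advertised at all.
  availableTools(): string[] {
    return Object.keys(transitions[this.state]);
  }

  // Any call that is not a valid transition is rejected by the protocol,
  // not by hoping the LLM knows better.
  call(tool: string): void {
    const next = transitions[this.state][tool];
    if (next === undefined) {
      throw new Error(`'${tool}' is not valid in state ${this.state}`);
    }
    this.state = next;
  }
}

const session = new Session();
session.call("search_book");   // BASKET_EMPTY -> BROWSING
session.call("add_to_basket"); // BROWSING -> BASKET_FILLED
session.call("checkout");      // only reachable once the basket is filled
```

Calling `checkout` on a fresh session would throw immediately: the invalid action never reaches the server's business logic, and the error message tells the agent exactly why.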
Proposal III: Hybrid Reasoning (Logic as the Fallback)
This is the “engine” fix.
Currently, we rely 100% on the LLM. If the LLM is 60% confident, it often still acts, potentially causing disaster.
I propose a Neuro-Symbolic Handshake.
- System 1 (Probabilistic): The LLM looks at the user’s request and the available tools. It forms an intent. “I think the user wants to buy Ulysses.”
- System 2 (Deterministic): Before acting, the Agent checks the Logic Rules provided by the server (my old SWRL rules, modernized).
The Logic Layer
The Server provides a lightweight logic file (perhaps in a simplified logic format or even Typescript/WASM) that defines constraints:
Constraint: IF `Action == Buy` AND `User.Age < 18` AND `Product.Category == Adult` THEN `Block`.
The Workflow:
- The User asks: “Buy this game.”
- The LLM (Probabilistic) selects the `buy_game` tool.
- The MCP Client (Deterministic) runs the Logic Layer locally.
- IF the logic passes, the request is sent.
- IF the logic fails, the Client halts and tells the LLM: “I cannot do that because the logic rule ‘Age Restriction’ was violated.”
Why this matters:
This allows the LLM to be creative with language and planning, while ensuring that execution obeys strict business logic. It handles the “edge cases” where probabilistic models fail.
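The handshake can be sketched in a few lines: the LLM proposes an action, and a deterministic rule layer vets it before anything is executed. The rule shape and field names below are illustrative assumptions that mirror the age-restriction constraint above.

```typescript
// The action the LLM *proposes* (System 1). Field names are assumed
// for illustration; a real protocol would define this schema.
interface ActionRequest {
  action: string;
  user: { age: number };
  product: { category: string };
}

// A deterministic constraint (System 2): returns true on violation.
interface Rule {
  name: string;
  blocks: (req: ActionRequest) => boolean;
}

// Server-provided constraint, mirroring the rule in the text:
// IF Action == Buy AND User.Age < 18 AND Product.Category == Adult THEN Block.
const rules: Rule[] = [
  {
    name: "Age Restriction",
    blocks: (r) =>
      r.action === "Buy" && r.user.age < 18 && r.product.category === "Adult",
  },
];

// Runs every rule before the request leaves the client. On failure it
// returns the violated rule's name, so the LLM can explain the refusal.
function vet(req: ActionRequest): { ok: boolean; violated?: string } {
  const hit = rules.find((rule) => rule.blocks(req));
  return hit ? { ok: false, violated: hit.name } : { ok: true };
}

console.log(vet({ action: "Buy", user: { age: 16 }, product: { category: "Adult" } }));
// { ok: false, violated: "Age Restriction" }
console.log(vet({ action: "Buy", user: { age: 30 }, product: { category: "Adult" } }));
// { ok: true }
```

Note that `vet` runs entirely on the client, with no model in the loop: the same input always produces the same verdict, which is exactly the property the probabilistic layer lacks.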
The “DeepGraphs” Architecture Reborn
If we implement these three changes, we transform MCP from a Tool-Calling Protocol into a Reasoning Protocol.
We arrive at a modern implementation of the architecture I described in 2017:
- The Map (Vocabularies): The Agent knows what things are, regardless of which server it’s talking to.
- The Path (FSM): The Agent is guided down valid paths, preventing hallucinated workflows.
- The Guardrails (Logic): The Agent is checked by deterministic rules before it can break anything.
The Path Forward
The AI industry is currently in its “Wild West” phase—fast, loose, and exhilarating. But if we want agents to run our infrastructure, handle our finances, and manage our data, “wild” isn’t good enough.
We need predictability. We need structure.
We don’t need to return to the rigidity of the 2010s Semantic Web. We don’t need 500-page XML schemas. But we can take the wisdom of that era—the understanding that data needs structure and actions need boundaries—and fuse it with the magic of modern LLMs.
We have the Engine (LLMs). We have the Protocol (MCP). Now, let’s build the Transmission that makes it actually drivable.