It’s strange how technology moves in circles. We often solve the same fundamental problems over and over again, just using different tools.

A decade ago, during my PhD at the National Technical University of Athens, I was obsessed with a single problem: The Dumb Client. The web was built on rigid, hard-coded interactions. If a developer wanted their app to buy a book, they had to hard-code the API calls for addToBasket, checkout, and pay. If the API changed, the client broke. If a new capability (a new “affordance”) appeared, the client was blind to it until a human updated the code.

I proposed a solution called DeepGraphs: a framework where software agents could dynamically discover “Affordances” (actions) on “Digital Objects” (resources) using semantic logic.

(You can find most of my research on Google Scholar, or some of my older posts on this blog.)

Fast forward to late 2024. Anthropic releases the Model Context Protocol (MCP). The tech world heralds it as the missing link for AI agents—a standard way for models to discover resources and tools dynamically.

When I read the documentation, I smiled. The architecture was hauntingly familiar. It was the same song, just played on a different instrument.

This post is a “post-mortem” of my PhD thesis in the age of Generative AI. It is an analysis of how we correctly predicted the architecture of the Agentic Web but completely missed the engine that would power it.


Part I: The Thesis (2014-2017)

The Quest for the Digital Object

My research began with a philosophical question: What is a Digital Object?

In my papers, specifically “The Quest for Defining the Digital Object” and “Delving Into Affordances on the Semantic Web”, I argued that we were treating digital entities (files, profiles, database rows) as static “things.” But in reality, a thing is defined by what you can do with it.

I borrowed the concept of Affordances from ecological psychology (J.J. Gibson) and design (Don Norman). A physical chair “affords” sitting. A physical button “affords” pushing. I argued that a Digital Object must explicitly broadcast its affordances to any agent that encounters it.

The “DeepGraphs” Proposal

To make this work for software, I proposed DeepGraphs.

The core idea was simple but radical for the REST API era:

  1. The Server is the Brain: The server shouldn’t just send data (JSON). It should send a map of what is possible next.
  2. Semantic Rules (The “How”): I used the Semantic Web stack—OWL (Web Ontology Language) and SWRL (Semantic Web Rule Language)—to describe these interactions.
  3. The Finite State Machine (FSM): The server would transmit a logic graph. For example, “If the user has a book in the Basket (State A), the Checkout action (Transition) becomes available, leading to Payment (State B)”.

The goal was to create a Generic Client—an agent that didn’t know what it was buying or how the API worked, but could figure it out by following the rules sent by the server.
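To make that concrete, here is a minimal sketch in plain Python (the thesis encoded this in OWL and SWRL, not Python; the bookstore states and affordance names are illustrative). The server transmits the state graph, and the generic client never hard-codes an API call; it only follows the affordances it is currently offered.

```python
# Minimal DeepGraphs-style sketch: the server ships a logic graph,
# the generic client discovers and follows affordances at runtime.
STATE_GRAPH = {
    "Browsing":       {"addToBasket": "BasketHasItems"},
    "BasketHasItems": {"addToBasket": "BasketHasItems", "checkout": "Checkout"},
    "Checkout":       {"pay": "Paid"},
    "Paid":           {},
}

class GenericClient:
    def __init__(self, graph, start_state):
        self.graph = graph
        self.state = start_state

    def affordances(self):
        """Ask the map what is possible right now."""
        return list(self.graph[self.state])

    def act(self, affordance):
        """Follow a transition, but only if the current state affords it."""
        if affordance not in self.graph[self.state]:
            raise ValueError(f"{affordance!r} is not afforded in state {self.state!r}")
        self.state = self.graph[self.state][affordance]
        return self.state

client = GenericClient(STATE_GRAPH, "Browsing")
print(client.affordances())   # ['addToBasket']
client.act("addToBasket")
client.act("checkout")
client.act("pay")             # 'pay' is only reachable via the Checkout state
```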

I envisioned an agent that could act like “Leopold Bloom” wandering Dublin—encountering new shops (APIs), discovering what they sold (Objects), and figuring out how to buy them (Affordances) on the fly.


Part II: The Lost Decade (2017-2023)

Why the Semantic Web Failed

So, why didn’t DeepGraphs take over the world in 2017?

The answer lies in the friction of implementation. The Semantic Web was brilliant, but it was academically heavy. To build a DeepGraph API, a developer had to write RDF, understand ontologies, and define SWRL rules. It was too much work.

Developers voted with their keyboards. They chose pragmatism over purity.

  • REST & JSON won because they were simple.
  • GraphQL emerged to solve the data-fetching problem, but it still required hard-coded clients.
  • The API Economy exploded, but clients remained “dumb.” A Stripe integration today still looks largely the same as it did in 2015—hard-coded logic reading static documentation.

We had the right architecture (dynamic discovery), but the wrong engine (Formal Logic). We were trying to make computers “understand” the world by forcing humans to write mathematically perfect descriptions of it.

Then, the AI boom happened.


Part III: The Era of Probabilistic Reasoning (2024-Present)

Enter the Large Language Model

LLMs solved the Semantic Web’s biggest bottleneck: Translation.

We no longer need strict ontologies to make machines understand that “purchase,” “buy,” and “checkout” mean the same thing. An LLM understands this probabilistically. We traded Precision (Formal Logic) for Flexibility (Natural Language).

This paved the way for the Model Context Protocol (MCP).

What is MCP?

Released by Anthropic, MCP is an open standard that allows developers to connect AI models to their data and tools. It solves the exact same problem DeepGraphs tried to solve: How does a generic agent connect to a specific system without custom code?

In MCP:

  1. Resources: These are my “Digital Objects.” File contents, database logs, API responses.
  2. Tools: These are my “Affordances.” Executable functions like git_commit or query_db.
  3. Prompts: These are the instructions that replace my SWRL rules.

When an MCP client connects to a server (e.g., a GitHub repository), it performs a “handshake.” The server says: “Here are the resources I have (Code), and here are the tools you can use (Issues, Pull Requests).”

The Agent (Claude, GPT-4, etc.) then looks at this menu and decides what to do.
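For flavor, here is roughly what that looks like on the server side, sketched with the MCP Python SDK’s FastMCP helper (decorator names per the SDK docs at the time of writing; verify against the current release). The bookstore server and its resource and tool names are hypothetical. Resources play the role of “Digital Objects,” tools the role of “Affordances.”

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("bookstore")

@mcp.resource("basket://{user_id}")
def get_basket(user_id: str) -> str:
    """The current contents of a user's basket."""
    return f"Basket for {user_id}: empty"

@mcp.tool()
def add_to_basket(user_id: str, book_id: str) -> str:
    """Add a book to the user's basket."""
    return f"Added {book_id} to {user_id}'s basket"

@mcp.tool()
def checkout(user_id: str) -> str:
    """Check out the items currently in the user's basket."""
    return f"Checkout started for {user_id}"

if __name__ == "__main__":
    mcp.run()  # after the handshake, clients list these tools and resources
```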


Part IV: The Comparative Analysis

DeepGraphs vs. MCP: The Mapping

The similarities are striking. If we overlay the DeepGraphs architecture onto MCP, it fits almost perfectly.

| Concept | DeepGraphs (2017) | MCP / Agentic AI (2025) |
| --- | --- | --- |
| The Core Entity | Digital Object | MCP Resource |
| The Capabilities | Affordances | MCP Tools |
| The Logic Engine | SWRL / OWL Reasoner | LLM (Transformer) |
| The Control Flow | Finite State Machine (FSM) | Chain-of-Thought / Agent Loop |
| The Interface | Hypermedia / HATEOAS | JSON Schema / System Prompt |

What I Got Right (The Architecture)

1. The “Server-Side” Definition of Agency

My thesis argued that the intelligence shouldn’t live in the client; it should be downloaded from the server.

In DeepGraphs, I wrote: “The server is responsible to address those affordances… guide the client into the map of its entry points”.

MCP does exactly this. The AI model is generic. It becomes a “GitHub Expert” only when it connects to the GitHub MCP server and downloads the tool definitions.

2. Dynamic Discovery

I predicted that for the Internet of Things (IoT) and smart cities to work, agents had to discover actions dynamically. “Agents should be decoupled by developers… interactions with new kind of objects”.

This is the defining feature of the “Agentic Web.” We are moving away from building specific apps (e.g., a flight booking app) toward building generic agents that discover the “Book Flight” tool when needed.

3. The Object-Affordance Duality

In “The Quest for Defining the Digital Object”, I concluded that you cannot define an object separately from its affordances.

MCP enforces this rigor. You cannot just dump text into an LLM context window (Object) and expect magic. You must pair that context with Tools (Affordances) to make it actionable.

What I Got Wrong (The Engine)

1. Formal Logic vs. Probabilistic Intuition

I bet on Logic. I thought the way to guide an agent was to give it a rule: Implies(Basket -> Checkout).

I was wrong. The world is too messy for strict rules. The winning engine was Probability.

Today, we don’t give the agent a rule. We give it a docstring: “Use this tool to checkout items in the cart.” The LLM uses its training on billions of internet interactions to intuit that after adding to a basket, checkout comes next.
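The contrast in code, roughly (the SWRL-style rule and the checkout function below are both illustrative, not from the thesis or any real API):

```python
# Then: a formal implication a reasoner had to fire, e.g. (SWRL-style)
#   Basket(?b) ^ hasItem(?b, ?i) -> affords(?b, Checkout)
# Now: the only "rule" is prose in a docstring (hypothetical tool function;
# the LLM infers the ordering from the description alone).
def checkout(cart_id: str) -> str:
    """Use this tool to check out the items currently in the cart.
    Only call it after at least one item has been added to the cart."""
    ...
```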

2. The Complexity of Standards

I leaned on the W3C Semantic Web stack (RDF, OWL). It was technically sound but usable only by PhDs.

MCP uses JSON-RPC and Markdown. It’s “dumb,” simple, and text-based. It won because a junior developer can write an MCP server in an afternoon.

What MCP Is Still Missing (The “DeepGraphs” Gap)

While MCP is a triumph of flexibility, it has introduced a new problem: Hallucination and Safety.

In my DeepGraphs proposal, the agent could not make a mistake. The Finite State Machine (FSM) enforced the path. You literally could not call the “Pay” API unless you were in the “Checkout” state.

Current LLM agents lack this. They are probabilistic. An MCP agent might try to call refund_payment before process_payment because it “feels” right.

The future of MCP lies in looking backward at DeepGraphs.

We are starting to see the re-emergence of FSMs in AI. Frameworks like LangGraph are essentially re-inventing the state machines I described in 2017. They are realizing that while the reasoning can be probabilistic (LLM), the flow must often be deterministic (FSM).
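A sketch of that pattern with LangGraph’s StateGraph (API names as of recent releases, so check the current docs; the payment nodes are hypothetical). The reasoning inside a node may be probabilistic, but the flow is deterministic: refund_payment is simply not reachable before process_payment.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class OrderState(TypedDict):
    paid: bool

def process_payment(state: OrderState) -> OrderState:
    # an LLM call could live here; the graph doesn't care how the node decides
    return {"paid": True}

def refund_payment(state: OrderState) -> OrderState:
    return {"paid": False}

builder = StateGraph(OrderState)
builder.add_node("process_payment", process_payment)
builder.add_node("refund_payment", refund_payment)
builder.add_edge(START, "process_payment")
builder.add_edge("process_payment", "refund_payment")  # refund only after payment
builder.add_edge("refund_payment", END)

graph = builder.compile()
print(graph.invoke({"paid": False}))  # payment runs first, then the refund
```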


Conclusion: The Vindication of the “Generic Agent”

Ten years ago, I wrote:

“The core proposal… is the DeepGraphs requirement meant for manipulative APIs, in a way that would make it sufficient and safe to reason over the client-server transactions instead of creating numerous specific clients.”

That sentence could be the mission statement for the AI industry today.

We are finally building the software I dreamed of. We aren’t calling them “Digital Objects” anymore; we call them Context. We aren’t calling them “Affordances”; we call them Tools. And we replaced my SWRL rules with Transformer attention heads.

But the vision remains unchanged: A web where machines don’t just read information, but understand how to act on it.

The “Dumb Client” is finally dead. Long live the Agent.
