We need to talk about AGI (Artificial General Intelligence).

As an investor and a CTO, I hear this term thrown around in every pitch deck and coffee chat. It has become a moving goalpost. One day it’s passing the Turing test, the next day it’s solving quantum physics. The hype is loud, but the definitions are usually fuzzy.

I recently read a fascinating paper titled “A Definition of AGI” by Dan Hendrycks and a team of researchers. It finally puts a concrete number on the chaos.

Here is the simple breakdown of what AGI actually is, and why we aren’t there yet.

The New Definition: The “Competent Adult”

Forget the sci-fi movies about AI taking over the universe (that’s Superintelligence). The paper defines AGI very simply:

AGI is an AI that can match the cognitive versatility and proficiency of a well-educated adult.

That’s it. It doesn’t have to be a god. It just has to be as capable as a smart human across a broad range of tasks.

The 10-Subject “Final Exam”

To measure this, the researchers didn’t just give the AI a coding test. They used Cattell-Horn-Carroll (CHC) theory, which is basically the gold standard for measuring human intelligence.

Think of it like a report card with 10 subjects, each worth 10% of the final score:

  1. General Knowledge: Does it know facts about the world, culture, and science?
  2. Reading & Writing: Can it comprehend complex text and write clearly?
  3. Math: Can it handle arithmetic, algebra, and calculus?
  4. Reasoning: Can it solve new puzzles it hasn’t seen before (“on-the-spot” thinking)?
  5. Working Memory: Can it keep track of info in the short term (like a phone number)?
  6. Long-Term Memory Storage: Can it actually learn new things and remember them tomorrow? (This is a big one).
  7. Retrieval: Can it pull up facts quickly without hallucinating?
  8. Visual Processing: Can it see, analyze, and generate images?
  9. Auditory Processing: Can it listen, transcribe, and understand rhythm/tone?
  10. Speed: Can it think and react as fast as a human?

To be AGI, a model needs to score 100%: it has to pass all ten subjects, just like a well-educated human would.
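The scoring rule is simple enough to sketch in a few lines of Python. Note that the domain keys and the individual numbers below are my own illustrative assumptions, not the paper’s actual rubric; the only thing taken from the paper is the structure: ten domains, each worth 10% of the total.

```python
# Sketch of the report-card scoring rule: ten cognitive domains,
# each contributing up to 10 points toward a total out of 100.

DOMAINS = [
    "general_knowledge", "reading_writing", "math", "reasoning",
    "working_memory", "long_term_memory", "retrieval",
    "visual", "auditory", "speed",
]

def agi_score(profile: dict[str, float]) -> float:
    """Sum the per-domain scores, capping each domain at 10 points."""
    return sum(min(profile.get(d, 0.0), 10.0) for d in DOMAINS)

# A hypothetical "jagged" profile: strong on knowledge, reading, and
# math, but near zero on long-term memory storage.
jagged = {
    "general_knowledge": 10, "reading_writing": 10, "math": 9,
    "reasoning": 7, "working_memory": 6, "long_term_memory": 0,
    "retrieval": 5, "visual": 4, "auditory": 3, "speed": 3,
}

print(agi_score(jagged))  # 57.0
```

Because every subject is weighted equally, a model can ace several domains and still be dragged down by a single near-zero one, which is exactly the “jagged profile” problem discussed below.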

The “Jagged” Reality Check

Here is the insight that matters for us as builders and entrepreneurs.

Current models (like GPT-4 and the GPT-5 tested in the paper) have a “jagged profile”. They are geniuses in some areas and complete failures in others.

  • The Genius Part: They are great at Knowledge, Reading, and Math.
  • The “Dumb” Part: They fail hard at Long-Term Memory Storage.

According to the paper’s framework:

  • GPT-4 has an AGI score of roughly 27%.
  • GPT-5 (projected/tested in the paper) hits roughly 57%.

The biggest bottleneck? “Amnesia.” Current AI scores nearly 0% on Long-Term Memory Storage. It cannot remember you or learn from its experiences over time without us feeding it the context again and again. It resets every time we start a new chat.

What This Means for Us

As a Greek guy, I love a good myth, but we need to ground our expectations in reality.

  1. Don’t fear the “God AI” yet: We are only about halfway there. The leap from 57% to 100% requires solving the memory problem, which is technically very difficult.
  2. Look for opportunities: As investors, we should look for startups solving the “jagged” edges. Who is solving the long-term memory/learning problem? Who is fixing the “hallucinations”?
  3. It’s about versatility: AGI isn’t about being the best coder in the world; it’s about being able to code, then write a poem about it, then recognize a song, and then remember it all next week.

The gap is closing, but the “human” part of AGI—the ability to learn and remember continuously—is still the missing piece.
