Despite all the debate around Artificial General Intelligence (AGI), the concept itself feels misguided. It doesn’t reflect how AI models are actually evolving.
AI systems have jagged edges: brilliant in some areas, clueless in others. They can ace science Olympiads yet miscount the r’s in “blueberry.”
So are they geniuses or idiots? Both. That’s why the AGI discussion misses the point. A model that makes silly mistakes can still be transformative when used in the right context with the right guardrails. We don’t need “general intelligence” to get extraordinary value.
We should stop fixating on when AI will become “general” or “super.” What matters is what these models can do, where they fail, and how we build systems that amplify their strengths while containing their flaws.