
AGI. What is it and are we there yet? A quick commentary.

Let’s get this straight: when people say “AI is taking over,” they’re usually talking about a parrot on steroids, not some sentient digital overlord.

That thing writing your sales emails? It’s narrow AI.
The app spitting out anime foxes in your likeness? Also narrow.
The headline whispering that AI is “smarter than humans”? Probably written by a narrow AI regurgitating Twitter threads.

We haven’t built general intelligence. Not even close.

Narrow AI (a.k.a. weak AI) is what we actually have. It’s powerful, profitable, and dumb as hell outside its lane.

  • ChatGPT? LLM trained on text.
  • Midjourney? Trained on pictures.
  • Tesla Autopilot? Trained on driving scenarios.

These systems don’t understand what they’re doing. They don’t think, reason, or care. They predict. They copy. They pattern-match. Give them a prompt, and they fire back a statistically likely response based on mountains of data. That’s it.
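
To make “statistically likely response” concrete, here’s a toy sketch of next-word prediction. A real LLM predicts sub-word tokens with a neural network trained on billions of examples; this little bigram counter (corpus and names invented for illustration) just makes the point that the output is driven by frequency, not understanding.

```python
# Toy sketch of "statistically likely" next-word prediction.
# A real LLM does this with a neural network over sub-word tokens;
# this bigram counter makes the same point: output = frequency, not thought.
from collections import Counter, defaultdict
import random

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# "Training": count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prompt_word: str, steps: int = 5) -> str:
    """Extend a one-word prompt by repeatedly sampling a likely next word."""
    out = [prompt_word]
    for _ in range(steps):
        counts = following.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(predict("the"))  # e.g. "the cat sat on the mat" -- plausible, not thoughtful
```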

They’re narrow because they can only do what they were built to do. You can’t ask Midjourney to plan your finances, and you shouldn’t ask ChatGPT to drive your car (yet). These are siloed savants, nothing more.

As IBM puts it, even ChatGPT and DALL·E are “narrow AI” because they’re locked into one trick: do the thing, but don’t understand it (IBM, n.d.).

General AI (AGI) is the goal everyone keeps name-dropping but no one’s actually built. It’s the idea of an AI system that can handle any task a human can, with the same kind of learning, reasoning, and adaptability.

In short, AGI = an AI that can:

  • Learn to do new things it wasn’t explicitly trained for
  • Understand context, nuance, and common sense
  • Solve problems across totally different domains
  • Maybe even reflect on its own existence (if you’re into the whole “machine soul” idea)

It doesn’t exist. Still.

As Anthropic bluntly puts it: “AGI is a marketing buzzword.” And they’re one of the labs trying to build it.

So why does everyone keep invoking it? Because companies are desperate for attention, and because narrow AI has gotten really good at faking it.

GPT-4 can pass legal exams. Claude 3 can write code. Gemini can solve problems across modalities. These things look smart. But they’re not general. They’re just really, really, really well-trained specialists.

Microsoft even published a paper saying GPT-4 shows “sparks” of general intelligence. Sure. And my toaster sometimes gives me existential vibes when it burns the crumpets. Doesn’t make it a philosopher.

Despite all the noise, there are still major blockers:

  • Reasoning: Current models fake logic. They don’t “think” step by step; they autocomplete.
  • Memory: Most models forget what happened five prompts ago, because everything has to squeeze into a fixed context window (see the sketch after this list). Real general intelligence needs long-term memory.
  • Adaptability: Humans can jump from poetry to plumbing. A model trained to summarise PDFs will fall apart if you ask it to play chess.
  • Embodiment: Humans have senses. A body. Physical intuition. AI has none of that.
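
On the memory point above, here’s a minimal sketch of why chat models “forget”: the whole conversation gets replayed into a fixed-size context window on every turn, and anything that doesn’t fit is silently dropped. The word-count “tokenisation” and the budget below are made-up stand-ins, not any real model’s numbers.

```python
# Why chat models "forget": the context window is finite, so older turns
# simply fall out. The budget and token counting here are illustrative only.
CONTEXT_BUDGET = 50  # hypothetical token budget

def build_context(history: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
    """Keep the most recent messages that still fit inside the budget."""
    kept, used = [], 0
    for message in reversed(history):   # walk from newest to oldest
        cost = len(message.split())     # crude stand-in for real tokenisation
        if used + cost > budget:
            break                       # everything older is "forgotten"
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [f"turn {i}: " + "blah " * 10 for i in range(1, 9)]
for line in build_context(history):
    print(line)  # only the last few turns survive
```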

DeepMind’s Demis Hassabis reckons AGI might show up within 5 to 10 years, but admits we’re still missing fundamental pieces like robust planning, real-world grounding, and memory.

Here’s a quick roll call of who’s sprinting toward the finish line (or cliff edge, depending on how you see it):

  • OpenAI: Claims they “know how to build AGI.” Scaling LLMs like maniacs. Sam Altman flip-flops between “we’re close” and “AGI won’t change daily life that much.” GPT-5 is rumoured to be agentic, meaning it can take actions rather than just answer questions (see the sketch after this list).
  • Google DeepMind: Wants to “solve intelligence.” Gato and Gemini are their flagship plays. They’re combining reinforcement learning with language models to build systems that might reason better. Hassabis talks big but walks carefully.
  • Anthropic: Founded by OpenAI rebels. Think of them as the “AGI, but please don’t destroy the world” startup. They built Claude. They’re big on alignment (AI doing what humans want), and not big on overpromising. One of their founders reckons AI could outmatch humans at most tasks by 2027. But again, that’s not AGI. It’s just very competent narrow AI.
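
For the “agentic” rumour in the OpenAI bullet above, here’s a rough sketch of what an agent loop looks like: the model’s reply is treated as an action, the action is executed, and the result is fed back in until the model says it’s done. Both `call_model` and `run_tool` are hypothetical stand-ins, scripted so the sketch runs on its own; a real agent would call an actual LLM API and real tools.

```python
# Minimal agent loop: act, observe, feed back, repeat.
# call_model and run_tool are hypothetical stand-ins, not a real API.
def call_model(transcript: str) -> str:
    """Scripted stand-in for an LLM call."""
    if "search_result" not in transcript:
        return "ACTION: search('what is AGI')"
    return "FINAL: AGI means human-level generality across tasks."

def run_tool(action: str) -> str:
    """Toy executor for the single 'search' action used above."""
    return "search_result: AGI = artificial general intelligence"

def agent(task: str, max_steps: int = 5) -> str:
    transcript = f"task: {task}"
    for _ in range(max_steps):
        reply = call_model(transcript)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        transcript += "\n" + run_tool(reply)  # take the action, record the result
    return "gave up"

print(agent("what does AGI mean?"))
```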

AGI changes everything, or nothing. Depends on who you ask.

  • Best case: AI helps us solve climate change, automate labour, discover new physics, and write better screenplays than Marvel.
  • Worst case: Unaligned AGI goes rogue. Doesn’t care what we want. Does what it wants. Skynet without the muscles.
  • Most likely case: We inch forward, building slightly smarter narrow AIs, while debating whether the next one counts as AGI.

Until then, don’t conflate your tool with a mind. LLMs are not people. Midjourney is not creative. Siri is not your assistant. These are narrow tools, not general thinkers.

If you’re building a business, teaching others, or just trying to stay ahead, know what you’re actually dealing with. Tools like ChatGPT and Claude are powerful, but they’re still just fancy hammers. Don’t treat them like architects.

Want to play in the AI space? Fine. Just stop pretending the calculator is alive.

AGI might be coming, but it’s not here yet. And when it does arrive, it won’t need your prompt. It’ll write its own.


References

Anthropic. (2023). Claude 2 and the path to safe AI. Retrieved from https://www.anthropic.com/index/claude

Anthropic. (2024). AI safety and alignment. Retrieved from https://www.anthropic.com/index/core-views

Hassabis, D. (2023, May 31). Why AI must be aligned with human values. Time. Retrieved from https://time.com/6280920/demis-hassabis-google-deepmind-ai

IBM. (n.d.). Types of artificial intelligence. Retrieved April 2025, from https://www.ibm.com/topics/artificial-intelligence

Microsoft Research. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. Retrieved from https://arxiv.org/abs/2303.12712

OpenAI. (2023). Planning for AGI and beyond. Retrieved from https://openai.com/blog/planning-for-agi-and-beyond

OpenAI. (2024). Our approach to alignment. Retrieved from https://openai.com/research/alignment

Vincent, J. (2022, May 12). DeepMind’s new ‘Gato’ AI model can perform over 600 tasks. The Verge. Retrieved from https://www.theverge.com/2022/5/12/23067476/deepmind-gato-generalist-ai-model-multimodal
