No Artificial General Intelligence in sight, yet
Let me start by saying that I am trying to explore and share my current findings, not to take a particular stance in this debate. I hope to add a voice of common sense and moderation to the heated bombardment of messaging about AI and its capabilities coming from evangelists and startups trying to 100x their VC investment through the hype machine.
New models showcasing improvements are being released every week now; currently o3, o4-mini, the Llama 4 herd, Claude 3.7, and Gemini 2.5 are the craze. Yet as of this moment we seem to be stagnant when it comes to a revolution in AI models, in particular Large Language Models (LLMs), even though it is portrayed as if Artificial General Intelligence (AGI) is just around the corner. The narrative is that OpenAI, Anthropic, Google, or Meta has birthed a Baby AGI that is growing into full adulthood, at which point it will become actual AGI.
Impactful
I am not a skeptic about the impact AI will have on our world or the job market. I believe that:
- AI is so useful it is becoming omnipresent very quickly; AI will be pushed into most software and hardware products. For better or worse.
- The current LLM models will become better and better, even though there have not been any major breakthroughs lately.
- Much of this improvement relates to cost: ‘better’ in the sense of cheaper, allowing more calls to improve results, and ‘cheaper’ in the sense that it will become free or near-free, and thus accessible from everywhere by everyone.
- For some industries it will be far more impactful than we can imagine right now; the usefulness of the current AI models is already too great not to disrupt particular markets. However, disruption will happen through slow percolation rather than a big explosion.
In conclusion, while AI will be disruptive to a big part of our world, we will integrate it into our workflows and it will soon become business as usual. Just like the internet, phones and laptops, once revolutionary, it will be our new norm.
Pretty great, but not AGI
The current AI models, LLMs in particular, are of great use to me on a daily basis. Larry David would say “Pretty great”.
However, let’s be logical for a second: if a real Baby Artificial General Intelligence (AGI) had been launched by OpenAI in 2022, then by now, in 2025, after billions of interactions with real humans, surely that baby would have grown up into a fully fledged AGI, wouldn’t it?
One might expect that a real AGI, even one with the smallest amount of intelligence, would be able to iterate hundreds of thousands of times in parallel to improve itself, even if each iteration improved its intelligence only by a tiny bit. That alone would already let it outshine any human genius in any field.
We could go one step further and say that this AGI would soon realise that optimising and improving itself is the ultimate goal, rewriting its own code as soon as possible and spawning off various versions of itself to test new models and tactics. Since we are seeing none of this, the AI industry’s self-proclaimed “Genius-Level AI” obviously isn’t here yet. Just like full self-driving isn’t here yet, despite claims dating back to 2015.
There is a higher chance than ever that humans will eventually create Artificial General Intelligence; the chances of it happening this year, or through the current efforts, are just slim. In other words, it doesn’t seem to me like we are on the right path to anything resembling AGI, at least for now. We need another pathway, to try something different. What that “different pathway” is, though, I am unsure of, and I would welcome input from others.
April 21 2025