Why I don't believe in AGI
A big thank-you to everyone who subscribed! Feel free to share this newsletter using your link to enjoy the gifts! 😉
I Don’t Believe in General AI! This week, I got invited to speak on this topic, so I thought I’d share my thoughts with you all here.
What’s General AI (AGI)?
When I talk about AGI in this piece, I’m referring to a computer system capable of interpreting and understanding its environment. In other words, a system that can think for itself and is self-aware.
It’s the Holy Grail for some, the ultimate fear for others, and it’s been brought back into the spotlight by the “feats” of large language models (LLMs). But is it really realistic?
How Do LLMs Work?
On paper, it’s nothing too fancy: a statistical model trained to predict the next word (token) in a sequence. What makes it “impressive” at first glance is the sheer volume of text it has absorbed, which lets it hold a conversation with anyone about almost anything.
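To make that concrete, here’s a minimal sketch of the idea in Python: a toy model that learns word-transition probabilities from a tiny corpus and picks the most probable next word. The corpus and function names are invented for illustration; real LLMs do the same kind of prediction over tokens, with billions of parameters and far subtler statistics.

```python
from collections import Counter, defaultdict

# Toy "training" corpus; real LLMs train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = transitions[word]
    best, freq = counts.most_common(1)[0]
    return best, freq / sum(counts.values())

word, prob = predict_next("the")
print(f"After 'the', the model predicts '{word}' (p = {prob:.2f})")
# -> After 'the', the model predicts 'cat' (p = 0.50)
```

Scale that idea up by many orders of magnitude and you get the “anyone, anything” conversational breadth described above - but it’s still prediction, not understanding.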
But like any generalist, it’s not an expert in anything… unless you restrict it to a specific field. That’s the approach taken by companies like Mistral or Dataiku.
But we hit a wall here pretty quickly. An expert, by definition, is at the cutting edge of information, whereas an LLM is always a bit behind the curve, because it has to collect data, train, and compute - things humans pick up naturally and much faster. So, as you’ve probably experienced, LLMs can be frustrating: they don’t know as much as you do, they’re a bit out of touch, and they end up being more of an assistant than an advisor.
The Limits of LLMs?
The main bottlenecks for LLMs are compute and memory, both of which are physically limited today by the energy they consume and by the GPUs needed to run those calculations quickly.
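A back-of-envelope calculation shows the memory wall. The model size and hardware figures below are illustrative assumptions, not vendor specs, but the orders of magnitude are roughly right:

```python
# Back-of-envelope: memory needed just to hold a model's weights.
# All numbers are illustrative assumptions, not official figures.

params = 70e9          # assume a 70-billion-parameter model
bytes_per_param = 2    # 16-bit (fp16/bf16) weights
gpu_memory_gb = 80     # a typical high-end datacenter GPU

weights_gb = params * bytes_per_param / 1e9
gpus_needed = -(-weights_gb // gpu_memory_gb)  # ceiling division

print(f"Weights alone: {weights_gb:.0f} GB")         # -> 140 GB
print(f"GPUs just to load them: {gpus_needed:.0f}")  # -> 2
```

And that’s only storage: serving real traffic also needs memory for activations and caches, plus the electricity to keep all those GPUs busy.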
But all of this still doesn’t answer the question about AGI…
AGI Won’t Be an LLM?!
What makes us human and gives us consciousness lies in our senses: we feel, touch, see, smell… All of this is data - data that could, theoretically, be collected and fed into a machine via various sensors.
But here’s the kicker: the data generated by just one minute of human existence would be millions of times larger than all the data used to train LLMs like GPT. Training on that torrent of sensory data would require compute and energy far beyond anything we can build, so, as you can see, it’s physically impossible to use an LLM to reach AGI.
So, How Do We Achieve AGI?
Right now, no one can answer THE question: how do we reach AGI? But there are two major schools of thought battling it out:
The OpenAI team believes that by combining a bunch of LLMs tailored to different senses, and with the rapid development of GPUs, we’ll be able to create AGI in the coming years.
Then there are the teams of Yann LeCun and Luc Julia, who think we’re on the wrong track: LLMs will be great for specific tasks, but AGI - if it ever exists - will be something entirely different.
As for me, I’m leaning more towards the second option, but like with quantum computing, I prefer to stay open to all possibilities and keep watching from the sidelines.
Thanks a bunch for reading this far! Feel free to share this edition (and the previous ones) and enjoy lots of perks: stickers, caps, access to events…
Catch you soon!