SDS 588: Artificial General Intelligence is Not Nigh
Welcome back to the Five-Minute Friday episode of the SuperDataScience Podcast!
This week, Jon begins a two-part series that dives into artificial general intelligence (AGI) and how it might, or might not, be only a few years away. Tune in to hear where Jon stands on this popular topic and hear him defend his hypothesis.
AGI refers to a single algorithm that could learn anything a human can. Given that the number of parameters in large natural language models has recently grown roughly tenfold every couple of years, proponents of the AGI-is-nigh hypothesis posit that within the next decade the number of parameters in a large language model could reach the number of connections between neurons (brain cells) in a human brain.
As a neuroscientist, Jon believes that the assumption that model scale is by itself largely sufficient to facilitate AGI is excessively generous. Human brains, and indeed the brains of even much cognitively simpler organisms like birds, reptiles, and other mammals, contain a vast amount of anatomical complexity that enables their broad and efficient learning capabilities. Modern A.I. systems make heavy use of the transformer architecture, replicating it over and over to remarkable effect, but this is meager complexity relative to the dozens of varieties of interacting anatomical components in the human brain.
In addition to artificial neural networks being relatively uniform and vastly simpler in their gross anatomical layout than biological brains, biological brains are also considerably more complex at a small scale, that is, at the cellular level.
Finally, besides A.I. systems having nowhere near the gross anatomical complexity or the cell-level complexity of biological nervous systems, biological brains also carry out massively parallel processing. Replicating that parallelism would require a vastly different computing approach, perhaps something like quantum computing, and such a vast scale of quantum computing is not yet visible on the technological horizon.
In summary, Jon argues that due to gross anatomical, fine-grained cellular, and parallel processing differences between biological nervous systems and A.I., the model-scale-alone AGI-is-nigh hypothesis doesn’t hold much water from his neuroscience perspective.
Join us next week, when Jon reviews leading A.I. expert Yann LeCun's learning-based perspective on why the model-scale AGI-is-nigh hypothesis doesn't seem credible to him either.
ITEMS MENTIONED IN THIS PODCAST:
- SuperDataScience Podcast Episode #565 with Jeremie Harris
DID YOU ENJOY THE PODCAST?
- Where do you stand on the AGI-is-nigh debate?
Podcast Transcript
(00:06):
This is Five-Minute Friday with Part 1 of how Artificial General Intelligence is Not Nigh.
(00:19):
In Episode #565 of the SuperDataScience Podcast, my guest Jeremie Harris argued that "artificial general intelligence", or AGI for short (a single algorithm that has the capacity to learn anything a human could), might be only a few years away. The crux of his argument, which is shared by many folks I've talked to off-air in recent months, is that large language models like GPT-3, PaLM, and LaMDA exhibit capabilities on tasks beyond what their designers expected of them, by virtue of having an order of magnitude more model parameters than their respective predecessors.
(01:04):
Given that the number of parameters in these large natural language models has recently grown roughly tenfold every couple of years, proponents of the AGI-is-nigh hypothesis like Jeremie posit that within the next decade the number of model parameters in a large language model could reach the number of connections between neurons (brain cells) in a human brain. So there is a point approaching in the future where large A.I. models could have as many model parameters as there are connections between neurons in a human brain. Since modern A.I. systems (artificial neural networks) are loosely inspired by the way biological brain cells work, these AGI-is-nigh proponents then make the big assumption that once scale on the order of human-brain connections is achieved in machines, an algorithm with human-level intelligence will emerge.
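To make the extrapolation concrete, here is a hedged back-of-the-envelope sketch in Python. The constants are rough, commonly cited figures (GPT-3's published parameter count and an often-quoted ballpark of about 100 trillion synapses in a human brain), not numbers from the episode:

```python
import math

# Back-of-envelope extrapolation of the scaling argument above.
# Both constants are rough, commonly cited estimates, not exact figures.
GPT3_PARAMS = 175e9      # GPT-3's published parameter count
BRAIN_SYNAPSES = 100e12  # ~100 trillion synapses, an often-quoted ballpark
GROWTH_FACTOR = 10       # "ten times greater"...
YEARS_PER_STEP = 2       # ..."every couple of years"

# Number of tenfold jumps needed to close the gap, converted to years.
steps = math.log(BRAIN_SYNAPSES / GPT3_PARAMS, GROWTH_FACTOR)
print(f"~{steps * YEARS_PER_STEP:.1f} years at 10x growth every 2 years")
# prints roughly 5-6 years
```

At that growth rate the crossover lands comfortably within a decade, which is why the AGI-is-nigh camp finds the timeline plausible; the question the rest of this episode raises is whether crossing it would mean much.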
(02:05):
As a neuroscientist, my instinct is that this big assumption, that model scale is by itself largely sufficient to facilitate AGI, is excessively generous. Human brains, and indeed the brains of even much cognitively simpler organisms like birds, reptiles, and other mammals, contain a vast amount of anatomical complexity that enables them to exhibit their broad and efficient learning capabilities. To name a few of the dozens of examples I could, we have specialized brain structures like the amygdala, which plays a leading role in processing emotions; the hippocampus, which plays a specialized role in memory formation; and specialized areas like the fusiform face area, which is critical to recognizing the faces of other humans specifically. We don't understand any of these areas well enough to model them effectively with a machine, and there are probably anatomical regions of the human brain that play a critical role in our intellectual capacities that we haven't even discovered yet. Modern A.I. systems make heavy use of the transformer architecture and then replicate that transformer architecture over and over, to remarkable effect, no question, but this is meager complexity relative to the dozens of varieties of interacting anatomical components in the human brain.
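The "one block, repeated" point can be seen in miniature with PyTorch's built-in transformer modules; this is only an illustrative sketch, and the layer sizes here are arbitrary choices, not taken from any of the models named above:

```python
import torch.nn as nn

# A transformer encoder is essentially a single layer design stacked N times.
# (d_model, nhead, and num_layers are illustrative, not from a real model.)
block = nn.TransformerEncoderLayer(d_model=512, nhead=8)
stack = nn.TransformerEncoder(block, num_layers=48)  # 48 copies of one motif

# Many parameters, but all arranged in one repeating pattern.
print(sum(p.numel() for p in stack.parameters()))
```

Contrast those two lines with a brain: there is no single repeating unit that, copied 48 times, gives you an amygdala, a hippocampus, and a fusiform face area.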
(03:30):
In addition to artificial neural networks being relatively uniform and vastly simpler in their gross anatomical layout than biological brains, biological brains are also vastly more complex at a small scale, that is, at the cellular level. Firstly, artificial neurons are computationally simpler than the biological ones that they mimic. And, secondly, biological brains have cells other than neurons that play a role in building and retaining memories and behaviors. Despite their role in intelligence, these non-neuronal support cells, with names like glial cells and oligodendrocytes, have not been modeled in A.I. systems in any meaningful, large-scale way.
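To illustrate the first point, here is a minimal sketch, in plain Python with NumPy and with made-up example numbers, of essentially everything a standard artificial neuron does:

```python
import numpy as np

def artificial_neuron(x, w, b):
    """A standard artificial neuron: a weighted sum plus a ReLU nonlinearity."""
    return max(0.0, float(np.dot(w, x) + b))

# Illustrative inputs, weights, and bias (not from any real model):
print(artificial_neuron(np.array([0.5, -1.0]), np.array([0.8, 0.3]), 0.1))
# prints approximately 0.2:  0.5*0.8 + (-1.0)*0.3 + 0.1
```

That one-line function is the whole model. A biological neuron's output, by contrast, depends on dendritic geometry, the dynamics of many ion-channel types, neuromodulators, and surrounding support cells, none of which appear here.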
(04:16):
Finally, on top of A.I. systems not having anywhere near the gross anatomical complexity or the cell-level complexity of biological nervous systems, biological brains also carry out massively, massively parallel processing. At any given point in time, your brain has countless brain cells processing information signals in parallel in a way that computer systems are nowhere near replicating. While it might be only a decade or so until there is an algorithm with as many model parameters as there are connections between neurons in a human brain, it would take a vastly different computing approach than the one that prevails today, perhaps one day something like quantum computing, to have anything like the massive parallel processing of a biological brain. And that kind of vast scale in quantum computing is not yet visible on the technological horizon.
(05:06):
So in summary, due to gross anatomical, fine-grained cellular, and parallel-processing differences between biological nervous systems and A.I., the model-scale-alone AGI-is-nigh hypothesis doesn't hold much water from my neuroscience perspective. For Five-Minute Friday next week, I'll review leading A.I. expert Yann LeCun's learning-approach-based perspective on why the model-scale AGI-is-nigh hypothesis doesn't seem credible to him either.
(05:39):
Until then, keep on rockin’ it out there, folks, and I’m looking forward to enjoying another round of the SuperDataScience podcast with you very soon.