
6 minutes

Data Science, Artificial Intelligence

SDS 590: Artificial General Intelligence is Not Nigh (Part 2 of 2)

Podcast Guest: Jon Krohn

Thursday Jul 07, 2022

Subscribe on Website, Apple Podcasts, Spotify, Stitcher Radio or TuneIn


Welcome back to the Five-Minute Friday episode of the SuperDataScience Podcast!

Last week, in Episode #588, Jon provided an overview of neuroanatomical arguments as to why “artificial general intelligence” (AGI for short, a single algorithm that has the capacity to learn anything a human could) will not be realized anytime soon. This week, he's reviewing points made by Turing Award winner and Chief A.I. Scientist at Meta, Yann LeCun, in recent, widely-read social media posts that throw further cold water on the idea that AGI is nigh.


In his post, Yann acknowledges that fundamentally new machine learning concepts are needed in addition to scaling model-parameter counts in order to achieve human-level generalizability with A.I. You can read Yann’s post for the details on the seven new concepts he says are needed for attaining AGI, but in brief the ones covered in this episode are:
  • Machine learning systems that learn how the world works by observing, the way babies do.
  • Machine learning systems that can make causal inferences.
  • An improved capability to deal with the uncertainty and unpredictability of real-world events.
  • The ability to predict how sequences of actions will impact the world in order to make long-term plans.
  • The ability to form abstract representations of the world hierarchically.
  • The ability to decompose complex tasks into a hierarchy of sensible subtasks.

In short, Yann believes that many groundbreaking new machine learning concepts are required to realize human-level learning abilities, and the ones he listed in his post are only the most obvious ones. This means not only that we are more than a decade away from realizing AGI, but that it is not possible to predict how long it will take us to realize AGI, if we can realize it at all.

ITEMS MENTIONED IN THIS PODCAST:
  • Yann LeCun’s social media post on the path to AGI
  • Episode #588 (Part 1 of 2)
  • Jon’s Calculus for Machine Learning YouTube course

DID YOU ENJOY THE PODCAST?
  • What are your thoughts on whether AGI will be realized soon? Do you agree or disagree with Yann LeCun’s explanation?
  • Download The Transcript
(00:06): This is Five-Minute Friday with Part 2 of how Artificial General Intelligence is Not Nigh.

(00:28): Last week, in Episode #588, I provided an overview of my neuroanatomical arguments as to why I don’t think “artificial general intelligence”, AGI for short (a single algorithm that has the capacity to learn anything a human could), is going to be realized anytime soon. This week, I’m reviewing points made by Yann LeCun in recent, widely-read social media posts that throw further cold water on the idea that AGI is nigh.

(01:01): Prof. LeCun is the Chief A.I. Scientist at Meta (one of the world’s largest tech companies) as well as a beloved professor at New York University. Alongside the other two so-called godfathers of deep learning, Geoff Hinton and Yoshua Bengio, Yann LeCun was awarded the prestigious Turing Award, analogous to a Nobel Prize but for computer science, for his contributions to the field of A.I. research. Yann has been succeeding at making major machine learning breakthroughs since the 1980s, for longer than I’ve been alive, so you may want to hold his opinions on AGI — which are based on his understanding of learning processes — in even higher regard than my own neuroscience-based opinions.

(01:45): So with respect to his post, Yann starts off by acknowledging that scaling up model architectures by orders of magnitude is helping us move in the direction of increasingly general model capabilities. And this is undeniably true. Scaling up model architectures is absolutely leading to big leaps in A.I. capability, particularly in recent months in models that incorporate natural language. However, Yann points out that fundamentally new machine learning concepts are needed in addition to scaling model-parameter counts in order to achieve human-level generalizability with A.I.

(02:22): Yann’s first big fundamental concept that we need to figure out to attain AGI is machine learning systems that learn how the world works by observing like babies. Today’s most advanced A.I. systems require massive data sets with billions of data points to learn their representations of the world, while even non-human babies can often develop an understanding of phenomena from one or a few examples instead of billions of them.

(02:50): Yann’s second big fundamental concept for attaining AGI is that we need to develop machine learning systems that can make causal inferences. Again, even non-human babies appear to be able to make some of these kinds of inferences about causality but the largest deep learning-powered A.I. systems of today cannot distinguish correlation from causality except in a few highly limited and narrowly applicable cases.
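To make that correlation-versus-causation point concrete, here is a minimal, illustrative Python sketch (every variable and number in it is invented for illustration and is not taken from Yann's post): a hidden confounder drives two variables, so they correlate strongly even though neither causes the other.

```python
import random

# Illustrative sketch (not from Yann's post): a hidden confounder z
# drives both x and y, so x and y correlate strongly even though
# neither causes the other. A purely correlational learner would
# happily predict y from x anyway.

random.seed(42)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]       # hidden common cause
x = [zi + random.gauss(0, 0.1) for zi in z]      # z -> x
y = [zi + random.gauss(0, 0.1) for zi in z]      # z -> y

def pearson(a, b):
    # Pearson correlation coefficient, computed from scratch.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a ** 0.5 * var_b ** 0.5)

print(f"corr(x, y) = {pearson(x, y):.3f}")  # near 1.0, yet x does not cause y
```

An intervention that set x directly would leave y unchanged; capturing that distinction is what a causal reasoner does and a purely correlational learner misses.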

(03:17): You can read Yann’s post, which is linked in the show notes, for all of the details on his other five new concepts that are needed for attaining AGI, for a total of seven, but in brief these are: a much better ability to deal with the uncertainty and unpredictability of real-world events; an ability to predict how sequences of actions will impact the world in order to make long-term plans; an ability to form abstract representations of the world hierarchically; and an ability to decompose complex tasks into a hierarchy of sensible subtasks.

(03:51): Finally, Yann implies that all of these new A.I. capabilities would probably have to be realized with gradient descent, a partial-derivative calculus technique you can learn about from my Calculus for Machine Learning YouTube course if you are not already familiar with it. The reason he says this is that all current large-scale machine learning approaches depend on gradient descent and there is no viable alternative kicking around as yet. This dependence on gradient descent is a significant constraint on all of the capabilities we need to develop, since it requires us to have continuous mathematical functions to perform calculus upon, and the real world, particularly real-world hierarchies, can be tricky to represent this way.
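For listeners who want to see what that dependence on continuity looks like in practice, here is a minimal gradient descent sketch in Python (the toy function, starting point, and learning rate are all illustrative choices, not anything from Yann's post or the course):

```python
# Minimal gradient descent sketch. We minimize an illustrative toy
# function f(x) = (x - 3)^2, whose derivative is f'(x) = 2 * (x - 3),
# by repeatedly stepping against the gradient.

def f(x):
    return (x - 3.0) ** 2

def df_dx(x):
    return 2.0 * (x - 3.0)

x = 0.0             # arbitrary starting point
learning_rate = 0.1  # illustrative step size

for step in range(100):
    x = x - learning_rate * df_dx(x)  # move downhill along the gradient

print(f"x converged to {x:.4f}, f(x) = {f(x):.6f}")  # x is approximately 3.0
```

The loop only works because f is continuous and differentiable, so its derivative is defined at every point visited; a discrete structure such as a hierarchy of subtasks has no such derivative, which is exactly the constraint described above.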

(04:36): In short, Yann believes that there are many groundbreaking new machine learning concepts required to realize human-level learning abilities, and the ones he listed in his post are only the most obvious ones. We don’t know how to tackle even these obvious ones today, and there could be countless more less-obvious or more-complex hurdles to realizing AGI once we figure some of the obvious ones out. This means that not only are we more than a decade away from realizing AGI, but it is not possible to predict how long it will take us to realize AGI, if we can realize AGI at all.

(05:12): In the meantime, until AGI arrives, my two cents is that narrower applications of A.I. will nevertheless have a massive, transformative impact on the way humans live their lives over the coming years and decades. And, for those of you listeners who are so inclined, there will probably be no shortage of challenging and exciting A.I. problems for you to tackle in your lifetime.

(05:36): All right, that’s it for today’s episode. Keep on rockin’ it out there, folks, and I’m looking forward to enjoying another round of the SuperDataScience podcast with you very soon. 
