106 minutes
SDS 729: Universal Principles of Intelligence (Across Humans and Machines), with Prof. Blake Richards
Subscribe on Website, Apple Podcasts, Spotify, Stitcher Radio or TuneIn
This week, Dr. Blake Richards demystifies artificial intelligence and its connection to human thought processes. Learn about the essence of intelligence, hear his take on the status of Artificial General Intelligence (AGI), and learn how AI research informs our understanding of the human brain. Plus, discover the potential future scenarios where AI and humanity might intersect.

Thanks to our Sponsors:
Interested in sponsoring a Super Data Science Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
About Blake Richards
Blake Richards is an Associate Professor in the School of Computer Science and Department of Neurology and Neurosurgery at McGill University and a Core Faculty Member at Mila. Richards’ research is at the intersection of neuroscience and AI. His laboratory investigates universal principles of intelligence that apply to both natural and artificial agents. He has received several awards for his work, including the NSERC Arthur B. McDonald Fellowship in 2022, the Canadian Association for Neuroscience Young Investigator Award in 2019, and a Canada CIFAR AI Chair in 2018. Richards was a Banting Postdoctoral Fellow at SickKids Hospital from 2011 to 2013. He obtained his PhD in neuroscience from the University of Oxford in 2010 and his BSc in cognitive science and AI from the University of Toronto in 2004.
Overview
In the captivating realm of artificial intelligence and human cognition, Dr. Blake Richards emerges as a trailblazer, offering a fresh perspective that challenges conventional wisdom and redefines the discourse on AI. According to Blake, intelligence lies in the ability to adhere to norms, a definition that pushes the boundaries of conventional thinking and paves the way for a new understanding of machine capabilities.
The conversation takes a thrilling turn as he explores futuristic possibilities, delving into brain-computer interfaces and raising thought-provoking ethical questions about AI's potential influence on human thoughts and behaviors. This exploration blurs the lines between humanity and technology, sparking profound reflections on our society's future.
From an evolutionary standpoint, Blake champions cooperation as the cornerstone of intelligence's evolution. He passionately argues that super-intelligent AI should actively seek collaboration with biological life, shattering pessimistic narratives and igniting a sense of hope. With a remarkable ability to simplify intricate concepts such as cost functions and backpropagation, Blake clarifies the intricate relationship between AI and human intelligence.
To navigate the complexities of AI without stifling innovation, Blake advocates for a multidisciplinary approach, encouraging us to draw insights from diverse fields. He champions independent model auditing, as is done in the financial industry, while discouraging rigid government regulation in favor of voluntary initiatives. In high-stakes contexts, such as military applications, Blake underscores the need for meticulous and responsible AI management and the importance of human oversight.
Altogether, Blake's insights encourage us to reevaluate our stance on these critical topics. You’ll be left with not only a wealth of knowledge but also a deep sense of optimism about the future of AI and its collaboration with humanity.
In this episode you will learn:
- Blake's research and his take on intelligence [09:56]
- How we can evaluate progress in artificial general intelligence [15:54]
- Blake's thoughts on biomimicry [20:57]
- Why Blake thinks the fears regarding AI are overdone [25:38]
- The most effective strategies to mitigate AI fears without hindering innovation [35:31]
- What steps can we take to ensure that AI supports human flourishing [45:23]
- The importance of interpreting neuroscience data through the lens of ML [55:08]
- Backpropagation, gradient descent and the brain [1:17:32]

Items mentioned in this podcast:
- This episode is brought to you by Gurobi
- This episode is brought to you by CloudWolf (30% membership discount included)
- Mila
- The Hippocampus as a predictive map
- A deep learning framework for neuroscience
- Fears about AI’s existential risk are overdone, says a group of experts
- One of the “godfathers of AI” airs his concerns
- Biologically feasible deep learning with segregated dendrites
- Dendritic solutions to the credit assignment problem
- Island by Aldous Huxley
- Piranesi by Susanna Clarke
- Neuromancer by William Gibson
- Dr. Strangelove
- Jon’s O'Reilly Deep Learning Course
- SDS special code for a free 30-day trial of O’Reilly: SDSPOD23
- Merantix A.I. Campus
Follow Blake:
- Twitter
- Bluesky: @tyrellturing.bsky.social
- Mastodon: @tyrell_turing@fediscience.org
- Google Scholar
- LiNC Lab
Podcast Transcript
Jon Krohn: 00:00:00
This is episode number 729 with Dr. Blake Richards, Associate Professor at McGill University and Core Faculty Member at Mila. Today's episode is brought to you by Gurobi, the decision intelligence leader, and by CloudWolf, the Cloud Skills platform.
00:00:20
Welcome to the Super Data Science Podcast, the most listened-to podcast in the data science industry. Each week we bring you inspiring people and ideas to help you build a successful career in data science. I'm your host, Jon Krohn. Thanks for joining me today. And now let's make the complex simple.
00:00:51
Welcome back to the Super Data Science Podcast. We've got another incredible episode for you today with the extraordinarily clever and extraordinarily lucid Professor Blake Richards. Blake is Associate Professor in the School of Computer Science and Department of Neurology and Neurosurgery at the revered McGill University in Montreal. He's also a Core Faculty Member at Mila, one of the world's most prestigious AI research labs, and it's also in Montreal. His lab investigates universal principles of intelligence that apply to both natural and artificial agents, and he has received a number of major awards for his research. He obtained his PhD in neuroscience from the University of Oxford and his bachelor's in cognitive science and AI from the University of Toronto.
00:01:34
Today's episode contains tons of content that will be fascinating for anyone. A few topics near the end, however, will probably appeal primarily to folks who have a grasp of fundamental machine learning concepts, like cost functions and gradient descent. In this episode, Blake details what intelligence is, why he doesn't believe in artificial general intelligence, why he's skeptical about existential risks from AI. He talks about the many ways that AI research informs our understanding of how the human brain works and how in the future AI could directly influence human thoughts and behaviors through brain computer interfaces. All right, you ready for this amazing episode? Let's go.
00:02:18
Blake, welcome to the Super Data Science Podcast. It's surreal to have you here and to be talking to you after all this time. I think the last time that we would've had a chance to cross paths would've been 2010, because I think that's when you finished up your PhD in Oxford.
Blake Richards: 00:02:36
That's right.
Jon Krohn: 00:02:38
But yeah, I knew you relatively well for someone that was several years ahead of me. You followed the same PhD in neuroscience program at the University of Oxford as I did. You were assigned as my mentor when I started the program, and so thank you for all the guidance back then. But on top of that, when we met, you were actually integral in my decision to go to Oxford at all. At the time, when I was an undergrad student in my final year, we had an amazing faculty member at my undergrad institution, Sukhvinder Obhi. He's now at McMaster. Sukh knew lots of people at UCL because he'd done his PhD there. He said to me, "I'm flying to London next week to spend time with my family. Do you want to come and meet UCL neuroscience professors?" I was like, "Absolutely. That sounds great."
00:03:39
I was staying in a dorm... Not a dorm. A hostel, that's the word. I was staying in a hostel with three bunk beds, so six people sleeping in a room in central London, right around where UCL is. I think it's called Bloomsbury is the neighborhood. It was an interesting experience spending... It was like Monday through Thursday meeting a who's who of amazing neuroscience researchers at UCL, and then Sukh said, "I also know somebody up at Oxford. I'm not going to go up there with you and introduce you to them, but if you want to take the bus up or the train up to Oxford, you can meet Matthew Rushworth."
Blake Richards: 00:04:19
Ah, yes.
Jon Krohn: 00:04:22
Matt Rushworth was un... It was a surreal experience getting to the experimental psychology department at Oxford University. He insisted on taking... I had this huge hiking backpack because I was backpacking through Europe after the London visit, and he insisted on carrying my bag, and then he'd set up the opportunity for me to meet with current PhD students, including you.
Blake Richards: 00:04:46
Oh, okay. Interesting. I don't actually remember this, but this all fits very well. So indeed. Great.
Jon Krohn: 00:04:57
Yeah, it would've been a random 15 minutes or something with me as you had a coffee or a tea in that horrible cafe.
Blake Richards: 00:05:09
That cafe was just the worst. The experimental psychology building in general was just abysmal.
Jon Krohn: 00:05:15
Yeah. I understand that now they've gutted it and are redoing it, which is long overdue.
Blake Richards: 00:05:20
I heard that, which is very overdue.
Jon Krohn: 00:05:23
But yeah, we sat there and you... Yeah, you convinced me. It was like... You probably more than anyone else influenced my decision to go to Oxford and study neuroscience, which was a really memorable, incredible set of years in my life. That was... Yeah.
Blake Richards: 00:05:45
Yeah. It clearly worked out well. I can stand by my initial enthusiasm that I launched at you. It was the right choice.
Jon Krohn: 00:05:54
Yeah, yeah, for sure. Anyway, I've gone on and on about how we know each other. Yeah. You've had an amazing journey since. You're now a leading neuroscience and AI researcher. Where are you based right now, Blake?
Blake Richards: 00:06:10
I am at McGill University where I'm an associate professor with a joint appointment in the departments of computer science and neurology and neurosurgery. I'm also a core faculty member at Mila, which is the Quebec AI Institute.
Jon Krohn: 00:06:24
Yeah, super famous Mila. It must be amazing. We're going to get into that a bit in this episode as we get into your research, but being in that kind of environment, really extraordinary. To make a bit of a connection here as well, two weeks ago, the episode that aired, episode number 725, we had Kim Stachenfeld on the show. Kim also does research at the intersection of neuroscience and AI. When I told her that I was going to be interviewing you shortly after her, she was so excited. She had so many great things to say about you. And then I understand from you that she had a really influential paper in this space. I wasn't even aware of that at the time of interviewing her, so maybe you can fill us in on that bit.
Blake Richards: 00:07:07
Yeah, sure. As I had mentioned to you earlier, Kim has had a huge impact on computational neuroscience and on my lab's research, in particular. When she was at Sam Gershman's lab at Harvard, she published a paper looking at the hippocampus, which is a critical brain region for memory and navigation. She linked the representations in the hippocampus to a particular form of reinforcement learning representation called the successor representation. This is something that had been invented by Peter Dayan back in the 90s. Kim's paper showed that a lot of what's going on in the hippocampus seems to actually resemble successor representations in many ways. This was for many, myself included, a real eye-opening paper that guided the direction of our research for a number of years.
00:08:12
I still have students studying successor representations and their neural network equivalents, successor features, both just for AI in general, but for understanding the hippocampus as... Understanding the functional role of the hippocampus in reinforcement learning. Indeed, Kim's work influenced my own lab greatly, but like I said, I could probably safely say hundreds of labs have been influenced by Kim's work in this area. She really is a real leader in this space.
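For readers who want a concrete handle on the successor representation Blake describes: under a fixed policy with state-transition matrix P and discount factor gamma, the SR entry M[s, s'] is the expected discounted number of future visits to state s' when starting from s, which for a known P can be computed in closed form as M = (I - gamma * P)^-1. The sketch below is a minimal, illustrative implementation on a made-up toy environment, not code from the paper discussed.

```python
import numpy as np

# Minimal successor representation (SR) sketch under a fixed policy.
# Hypothetical toy environment: a ring of 5 states where the policy
# always steps to the next state. Purely illustrative.
n_states = 5
gamma = 0.9

# Policy-conditioned transition matrix: P[s, s_next] = probability of moving s -> s_next.
P = np.zeros((n_states, n_states))
for s in range(n_states):
    P[s, (s + 1) % n_states] = 1.0

# Closed-form SR: M[s, s_next] is the expected discounted future occupancy
# of s_next starting from s, i.e. M = (I - gamma * P)^-1.
M = np.linalg.inv(np.eye(n_states) - gamma * P)

# Given a reward vector R over states, state values follow as V = M @ R, which is
# one reason the SR is a convenient representation for reinforcement learning.
R = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
V = M @ R

print(np.round(M, 2))
print(np.round(V, 2))
```

Roughly speaking, the argument in The Hippocampus as a predictive map is that hippocampal place-cell activity looks more like rows of a predictive map of this kind than like a purely spatial map.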
Jon Krohn: 00:08:48
That's wild. We didn't even end up talking about successor representations in her episode. We had so much to cover. That sounds amazing. We ended up talking a bit about... I don't know if this relates at all. It was the dopamine RPE. Is that related to this?
Blake Richards: 00:09:07
It's semi-related in so far as they're both examples of how reinforcement learning theory has helped us to interpret neuroscience data and shaped our understanding of the brain, but they are a slightly different question within that space.
Jon Krohn: 00:09:23
Nice. Well, that is super cool, and I'll have to dig into that more. I'll be sure to include Kim's paper if it ends up being that I didn't include that already in the show notes from her episode a couple of weeks ago. We did have a lot of her papers and talks in the show notes for that one already. But yeah, let's talk about your research, Blake. Just as in that episode, where we talked about the hippocampus a fair bit, I'm sure that really important brain structure will come up again here. Your research is at the intersection of neuroscience and AI, as you said. More specifically, yeah, your lab is at this renowned Montreal Institute for Learning Algorithms, Mila, which has been influential in AI globally for decades. Your lab investigates universal principles of intelligence that apply to both natural and artificial agents, something we've already managed to talk about a bit in this episode. I know that there isn't a good formal definition of intelligence, but maybe you could kind of give us your take on what the definition of intelligence is.
Blake Richards: 00:10:30
Sure thing. I think it's worth noting that I don't think that there is necessarily a unitary definition of intelligence. I am a firm believer in the idea that there are different types of intelligence, but the thing that defines different types of intelligence is essentially different norms, different definitions of what is good or bad. How I'm tempted to define intelligence is to say, once you receive some kind of norm, something that says this is what's desired, this is undesired, then intelligence is the ability to adhere to the norm. When we talk about an intelligent system, we're talking about a system that is somehow capable of adhering to some norm, whatever that norm may be. In the case of human beings, and maybe even animals more broadly, we can say there are all sorts of norms that we adhere to that are obviously endowed to us courtesy of evolution and the desire, not just the desire, the need, to survive and reproduce ourselves.
00:11:42
A lot of the norms that we are concerned with and that define our intelligence are more specifically related to behaviors that help to ensure our safety, help us to plan out the necessary things to achieve our goals of finding energy, to sustain ourselves, of raising our families, et cetera, et cetera. When we talk about artificial intelligence, I think it's worth noting that the... Even defining that itself then becomes a kind of funny exercise. Because as I said, there are different forms of intelligence depending upon the norms that you apply. Arguably, all of computer science is concerned with getting computers to do useful things according to some norm. But when we talk about artificial intelligence specifically as a subfield of computer science, what we're really talking about are systems that can adhere to the sorts of norms that people can.
00:12:43
That's what we mean by artificial intelligence is specifically those forms of intelligence that are in some ways kind of human-like. Then we're kind of narrowing down to a specific form of intelligence, that form of intelligence that can successfully reach goals in order to successfully... Well, do things like survive or obtain necessary resources, or more broadly, be part of a team, so do the things... This is another key part of human intelligence is the way that we adhere to social norms. When another person or an organization asks you to do something, you can do that thing. An intelligent person can do that thing that's being asked of them. Likewise, that's what we're looking for with artificial intelligence, is when we give it a request as part of its joint role in our broader organization, whatever it is, or society, that the computer is capable of fulfilling that request of adhering to that norm.
00:13:52
This is a very long-winded way of saying that that's kind of the definition that I think we want to operate with, is the ability to adhere to the kinds of norms that human beings can and to reach goals and plan out steps necessary to reach goals of the sort that human society demands of an intelligent agent.
Jon Krohn: 00:14:11
Gurobi Optimization recently joined us to discuss how you can drive decision-making, giving you the confidence to harness provably optimal decisions. Trusted by 80% of the world's leading enterprises, Gurobi's cutting-edge optimization solver, lightweight APIs and flexible deployment simplify the data-to-decision journey. Gurobi offers a wealth of resources for data scientists, including webinars like a recent one on using Gurobi in Databricks. They provide hands-on training, notebook examples, and an extensive online course. Visit Gurobi.com/sds for these resources and exclusive access to a competition illustrating optimization's value, with prizes for top performers. That's G-U-R-O-B-I.com/sds.
00:14:57
Very cool. I like that definition. The adhering to human-like norms started to sound pretty scary when you defined those mainly as survival and reproduction. But then once you got into the teamwork thing, I was like, phew.
Blake Richards: 00:15:12
Yes. Well, we can come back to that because...
Jon Krohn: 00:15:15
We will be.
Blake Richards: 00:15:17
... I think a lot of people get freaked out when they think about those particular norms with respect to artificial intelligence, but for me, it doesn't actually freak me out at all. I can describe later why.
Jon Krohn: 00:15:26
Yeah. Oh, for sure. We'll be digging into that absolutely. Quickly before... I don't know how quickly. We'll see. But I have a few questions for you before we get into fears about AI and AI alignment, that kind of thing.
Blake Richards: 00:15:43
Sure.
Jon Krohn: 00:15:44
With that definition in mind that you just provided us, which I'd love, which I hadn't heard before, how can we evaluate progress in AI? How do we know that we are going in the direction of artificial general intelligence today?
Blake Richards: 00:16:00
Right. Well, this comes back to this point that you need some metric to somehow represent the norms that you're trying to get the system to follow. In AI, the way that this is done typically is by having certain data sets and benchmarks and stuff like that. That's a perfectly good way of assessing it to some extent because it is a way of assessing at least, okay, for this specific metric, how is your system doing? Often those metrics are measuring something real. If you look at say, the ability of a system to categorize images and that's your metric, what percentage of the images does it successfully categorize? That is a norm that people successfully adhere to. You will recognize a dog versus a car when you look at it. If I measured your abilities using this metric, you would do quite well.
00:16:56
Then when we have computer systems that get better and better at these metrics, that's a perfectly feasible way of measuring their intelligence. But when it comes to that question of artificial general intelligence, this is where I really take a pause because per what I was saying, I don't actually believe in artificial general intelligence, per se. I think that intelligence is necessarily a multifaceted thing. There are different forms of intelligence. Really when we're talking about measuring artificial general intelligence, I think it's almost impossible. What you can do is you can have a huge collection of different metrics that you apply. You can ask for the oodles and oodles of different metrics we have, how does this system perform across all of them? We might be then willing to say that you get closer to something like artificial general intelligence the more and more of these metrics you see improvements on across the board.
00:17:51
Certainly I think that's not unreasonable. In the same way that we would say that a human being is generally intelligent if they can pass the SATs well and successfully, I don't know, write an essay that gets a positive response from the general public, or who knows what metrics you want to apply. You could have all sorts of different metrics that you apply to a person. Likewise, you could do the same to an AI. If they do well in it, you'd say it's more generally intelligent. But I don't think there's any way to measure the broader concept of artificial general intelligence as a unitary idea, or superintelligence. I think that doesn't actually even exist.
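As a toy illustration of evaluating a system across a collection of metrics rather than against a single "AGI score," here is a small sketch; the benchmark names, score ranges, and values are entirely hypothetical.

```python
# Illustrative only: aggregate a model's scores across many benchmarks
# instead of reporting one "general intelligence" number. The benchmark
# names and values here are made up.
scores = {
    "image_classification_accuracy": 0.87,   # fraction correct, 0-1
    "reading_comprehension_f1": 0.74,        # F1 score, 0-1
    "code_generation_pass_rate": 0.41,       # fraction of tests passed, 0-1
    "essay_quality_human_rating": 0.62,      # normalized rating, 0-1
}

# A simple unweighted mean gives one headline number...
mean_score = sum(scores.values()) / len(scores)

# ...but the per-task profile is usually more informative, since a system
# can be strong on some norms and weak on others.
profile = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(f"mean across benchmarks: {mean_score:.2f}")
for name, value in profile:
    print(f"{name:36s} {value:.2f}")
```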
Jon Krohn: 00:18:35
That makes a lot of sense. Yeah, even if we can't measure AGI with one metric, you are personally optimistic that the breakthroughs to achieve AGI, whether or not that's a great term, could be happening in our lifetime. What gives you that optimism?
Blake Richards: 00:18:59
Well, I think it's worth saying, as I said, I don't fully believe even in the concept of AGI, but here's what I will say. I have optimism that we will see artificial intelligence systems that get better and better across a broad swath of these metrics, such that you no longer have a system that can only do one of the metrics, can only recognize images, but systems that can recognize images, write poetry, whatever you want, of the sort of metrics that we would be inclined to measure them on.
00:19:31
Now, the reason I'm optimistic on that front is simply the data that I've received so far, which is that we've seen the models get better and better across broad swaths of metrics. Also the realization that so much of what matters for this kind of stuff is the question of the data that you train the system on and the scale of the system itself. I think that given the constant fire hose of data that our society is, I suspect that there will be more than enough data to train artificial intelligence systems that are broadly good at most of the things we would be inclined to measure a human being's intelligence on, and which, courtesy of their size, will be able to just generally keep improving on these things across the entire swath of tasks.
Jon Krohn: 00:20:35
Nice. We can talk about this in more detail later on when I have a whole bunch of questions, topic areas specifically about your research, both translations from AI to neuroscience, as well as the inverse neuroscience to AI. But I guess kind of relatively quickly for now, do you think that biomimicry, that replicating the kinds of things that we have in a biological brain, probably particularly in a human brain, do you think that mimicking those kinds of things is key to being able to replicate, or to being able to have a machine that can perform as well as us, or better than us on such a broad range of tasks?
Blake Richards: 00:21:18
I think if you're asking the question with respect to low level biology, the answer is no. We don't need the biomimicry at all. I think what is important is a sort of functional mimicry. There are certain functions that the brain can engage in, which are probably critical to some of our capabilities. If you want an AI system that can do as well as us more broadly, you need to give them these capabilities. An example that I like to trot out often is episodic memory. One of the things that's missing from current large language models, for example, is an episodic memory. Episodic memory refers to those memories that we have of our own lives, things that have happened to us, and they include details about the sensory experiences that we had, exactly what was said, where we were when it happened, et cetera.
00:22:13
Those episodic memories are critical for our ability to really place the things that have happened to us in a specific place in a specific time, and use that to plan out the right next steps for achieving the goals we have in our life. I think that it is assuredly the case that for large language models to get to the point where they can be as performant as human beings on as wide a range of tasks, you're going to need to endow them with something like an episodic memory. Will it need to look like the specific cellular mechanisms for episodic memory that we have in the human brain? No, I think not. But I think that the broad functional principle will have to be there. I think we've already seen that in AI throughout the history of AI, or at least the last 40 years, that the advances that were made were always about capturing certain functionalities.
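As a rough illustration of the functional principle Blake describes, rather than any specific mechanism he endorses, the sketch below is a toy external episodic store that records events with what, where, and when, and retrieves them by similarity to a cue. The embedding function is a stand-in, not a real encoder.

```python
import numpy as np

def embed(text: str, dim: int = 32) -> np.ndarray:
    # Stand-in embedding: a deterministic pseudo-random vector per string.
    # A real system would use a learned text encoder; this is illustrative only.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

class EpisodicMemory:
    """Toy episodic store: each entry records what happened, where, and when."""

    def __init__(self):
        self.events = []  # (description, place, time) tuples
        self.keys = []    # matching embedding vectors

    def store(self, description: str, place: str, time: str) -> None:
        self.events.append((description, place, time))
        self.keys.append(embed(f"{description} {place} {time}"))

    def recall(self, cue: str, k: int = 1):
        # Retrieve the k stored episodes most similar to the cue.
        q = embed(cue)
        sims = [float(q @ key) for key in self.keys]
        order = np.argsort(sims)[::-1][:k]
        return [self.events[i] for i in order]

memory = EpisodicMemory()
memory.store("chatted about successor representations", "a video call", "last week")
memory.store("visited the old experimental psychology cafe", "Oxford", "years ago")
print(memory.recall("what did we discuss on the call?"))
```

With a real encoder in place of the stand-in embed function, recall would return semantically related episodes; here the example only demonstrates the store-and-retrieve structure.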
00:23:10
Convolutions successfully capture some of the invariance properties of the visual system that are critical for the kinds of visual functions that we can engage in. Attention systems successfully capture some of the critical abilities to only process certain bits of information that our brains have as a function and which we now give to our AI systems via these attention mechanisms. But convolutions are not how the brain achieves invariance in visual representations. Softmax is not how our brains achieve attention mechanisms. That functional mimicry has been critical to AI progress, the biological mimicry really much less so, and I expect that pattern to continue.
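For readers who want to see what the softmax attention Blake mentions looks like functionally, here is a minimal scaled dot-product attention sketch in NumPy; it is the standard textbook formulation, with made-up toy inputs, not anything specific to the brain.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def softmax_attention(queries, keys, values):
    """Scaled dot-product attention as used in standard transformer layers."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # how relevant each key is to each query
    weights = softmax(scores, axis=-1)       # softmax turns scores into a weighting
    return weights @ values, weights         # weighted mix of values per query

# Toy example: 1 query attending over 4 items, each with a 3-d key and value.
rng = np.random.default_rng(0)
q = rng.normal(size=(1, 3))
K = rng.normal(size=(4, 3))
V = rng.normal(size=(4, 3))
out, w = softmax_attention(q, K, V)
print(np.round(w, 2))   # most of the weight goes to the best-matching items
```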
Jon Krohn: 00:24:02
Nice. Yeah, it sounds so sensible. Everything that you just said, that it seems patently obvious to me.
Blake Richards: 00:24:10
That's how I feel. Whenever I have debates with people, I'm like, "How are we even having a debate? This is fairly clear." But anyway.
Jon Krohn: 00:24:18
Yeah, that all makes perfect sense. As much as it might be a little bit jarring for our listeners, this kind of changing of gears, let's put a pin in that because I have lots that I want to talk about specific to AI and neuroscience. But before we get there, I want to talk about where this capacity, this ability to have this broad range of intelligence, being as intelligent as humans or more intelligent than humans on a huge collection of different metrics like you described, could lead. The thing that immediately caused me to ask you to be on the show, after years of having you on my list as somebody to ask, was that you recently had an article published in The Economist. It was called Fears about AI's Existential Risk Are Overdone. Right next to that, in the same Economist issue, the scientific director of Mila and one of the most well-known people in AI, Turing Award winner Yoshua Bengio, aired his concerns about the risks. I thought that was a really cool contrast to read side by side. Yeah, in your opinion, why are the fears about AI's existential risks overdone? How does your view differ from people who are big X-risk worriers?
Blake Richards: 00:25:48
The actual differences in opinion, when you focus in on the things that we truly disagree on, are relatively small. I'm perfectly willing to recognize that AI could both be used by human beings for a number of nefarious purposes, and it's entirely possible, maybe even one might say likely, that there will be instances in the not too distant future where AI systems, due to misalignment and due to emergent behaviors that researchers have not anticipated, do stuff that we don't like that could potentially have very negative consequences. Where I really split with many people, including with Yoshua, to some extent, though I agree with a lot of what he says, where I split with many people though is this question of, would there be a situation where a super intelligent rogue AI system literally wipes out humanity, or at least enslaves the entirety of humanity? I do not think this is a very likely scenario. I consider it a pretty wild hypothetical. I don't think it's something we should be focusing our discussions on, nor something that should be occupying more than a small niche slice of research.
00:27:06
And to be clear, I'm not saying no one should discuss this. I'm not saying no one should study it. I'm just saying that I have a sort of gut like, "Come on, really?" reaction, when I hear people talking about it being a massive priority that we should be pouring tons of resources into. And the reason I'm skeptical comes down to the following. I think that many people, when they make these statements, what they're doing is they're extrapolating, to return to something else we were discussing a moment ago, from the idea that, if an AI starts to adhere to norms of survival and reproduction, necessarily, the next logical step is that it's going to seek our extinction or seek to dominate us. And I think that this is a reflection of a fundamental misunderstanding of the nature of natural selection and how species interactions actually work.
00:27:58
And I think that's in part due to the fact that most of the people saying these things, with all due respect to all of my colleagues, are people coming from a pure computer science background, who don't actually know very much about biology and ecology and who don't really understand fully how natural selection works. And the reason I say this is, when you look at the actual things that natural selection tends to favor and how evolution works, it's not about dominance and competition between species. It's all about finding a niche that works. You will successfully reproduce if you find a niche that actually positions you in a complementary nature to all of the other species in the environment. So generally speaking actually, competition and dominance are the exception to the rule in natural selection, not the key force. Instead, it's actually mutualism and cooperation and complementary niches that are what evolution really favors.
00:29:02
The only time you have direct competition between two species, where there's some kind of quest for dominance in the ecosystem, is when the two species really occupy the same niche. If they've just happened to randomly evolve towards the same niche, and maybe one's an invasive species or something like that, then you will see competition between the species. And there will be potentially a sort of winner and a loser. But I think the key point there is they have to occupy the same niche. And this now brings me to why I don't fear it with AI. AI does not occupy the same niche as human beings. AI is not seeking the same energy inputs. AI is not seeking the exact same raw materials. And in fact, when you look at our relationship to AI systems, we occupy perfectly complementary niches. We are the critical determinant of most of the resources that AI needs.
00:29:55
We're the ones who produce the electricity. We're the ones who produce the computer chips, who do all the mining necessary to get the materials for the computer chips, et cetera, et cetera. I could go on with a big long list. I think that the idea that an AI system would ever seek to extinguish us is absurd. Any AI system worth its salt, that is adhering to the norm of survival and reproduction, would actually seek the preservation of the human species above all. And furthermore, I think that what any AI system, that was actually truly intelligent and able to adhere to these norms of survival and reproduction, would do is figure out the best ways to work in a complementary nature with human beings, to maximize our respective success at achieving our goals.
00:30:40
That's what natural selection and evolution would favor. That's what an instinct to survival and reproduction would favor. And I think that that's what we're going to see in our society. And I'm really pretty confident about that pronouncement. I don't say it with a hundred percent certainty, but I'm confident enough that I sleep very well at night having published an article telling everyone to chill the hell out about this topic. And I really do stand by this perspective.
Jon Krohn: 00:31:12
As we often discuss on air with guests, deep learning is the specific technique behind nearly all of the latest artificial intelligence and machine learning capabilities. If you’ve been eager to learn exactly how deep learning works, my book Deep Learning Illustrated is the perfect place to start. Physical copies of Deep Learning Illustrated are available in seven languages but you can also access it digitally via the O’Reilly learning platform. Within O’Reilly, you’ll find not only my book but also more than 18 hours of corresponding video tutorials if video’s your preferred mode of learning. If you don’t already have access to O’Reilly via your employer or school, you can use our code SDSPOD23 to get a free 30-day trial. That’s S-D-S-P-O-D-2-3. We’ve got a link in the show notes.
00:32:02
I love everything that you just explained, and again, it resonates with me perfectly. The interesting thing, for me at least, is that I have spent a fair bit of time, in recent years, reading about or having conversations with people about X risk. And nobody has explained it like you just did to me. And I suddenly feel like I'm going to be sleeping better at night, because that does make a huge amount of sense to me. Even humans, who are at the extreme end of not cooperating well with our environment and testing its limits, even we, it's like tons of people and more and more people all the time as a proportion are raising this flag about, "We've got to cooperate with this ecological system if we're going to continue to [inaudible 00:32:51]"
Blake Richards: 00:32:50
Exactly. Exactly. Sorry, I didn't mean to interrupt you, but that's a perfect example.
Jon Krohn: 00:32:55
No, not at all.
Blake Richards: 00:32:57
When we look at humans, I think part of the reason that there's this assumption that the AI will try to extinguish us all is because there has been a tendency, sometimes in human evolution, for humans to extinguish other species and to overstrain our capacity and not to act in a complementary way to other species. Though, for the record, we do act in a complementary way to many species. Raccoons are perfectly happy with humans, but regardless, I think that the key point here is that, if humans continue to behave like this, we will not be adhering to the norm of our own survival. We will eventually extinguish ourselves, if we continue to act in a non-complementary nature to other species on earth. And so, that would, arguably, be an example of human stupidity, not human intelligence.
00:33:47
And so, this is why, again, funnily enough for me, I see this in terms of both Yoshua, who I greatly admire, and my ex-professor and someone who's been a real mentor for me, Geoffrey Hinton, when they're talking about these issues, clearly, the possibility of superintelligence is part of what freaks them out and part of what leads them to these concerns. But I actually come to the opposite conclusion. The possibility of superintelligence is what makes me more confident that the AIs will eventually cooperate with us. That's what a superintelligent system would do. What I fear more, funnily enough, are dumb AI systems, AI systems that don't figure out what's best for their own survival, but which, instead, make mistakes along the way and do something catastrophic.
00:34:33
That, I fear much more. The analogy I always use is with the system in Dr. Strangelove. So in Dr. Strangelove, the nuclear holocaust that occurs is a result of a Russian doomsday device, that will automatically launch all of Russia's nuclear weapons if Russia's ever attacked. That's not a very smart system, that's not a superintelligence, but it leads to the end of the world, precisely because it's this overly narrow dumb thing. And that's actually what I fear much more than a rogue superintelligence.
Jon Krohn: 00:35:06
Yeah, so that makes a lot of sense, a ton of sense. I am with you a hundred percent, Blake, on everything you've said. And so, it seems like the kinds of things that we should be most worried about are these things that dumb AIs can be doing to us and things that they are doing to us today, privacy violations, disinformation. So yeah. What do you think are the most effective strategies to mitigate these kinds of issues, without hindering AI innovation?
Blake Richards: 00:35:36
Yeah, that's a really good question, and it's a very difficult circle to square, as it were. We don't have, right now, good mechanisms for dealing with this in AI, but I think probably our best starting point is to look at what's happened in other industries. It's worth noting that these concerns around the appropriate design of systems and testing of systems have been present in many other disciplines, whether it be engineering or finance or medicine or whatever have you. So I think we can look to some of these other disciplines to ask these questions. One of the things that jumps out for me immediately, and which I think many other AI researchers would resonate with, is the need for auditing, external auditing in AI. So ideally, AI models that are released to the public should be audited by some external body. Now, what that would look like, it could be done in many different ways.
00:36:37
One possibility, and I know some people would critique this, but I think it's worth considering nonetheless, is what they do in finance, which is that you actually have there be independent auditing agencies, who then will be paid by the companies to audit them. Now, why would they do that? Because it's a way of getting a mark on your product, that says this is a safe, reliable product. So in the same way that you want a rating from Moody's saying, "Yes, these guys are financially sound," you could get a rating from the AI auditing agency, that says, "Yes, these models have been thoroughly stress tested. They're not going to leak your private data or tell your child to kill themselves," or something like that, whatever have you. So that's one option. The other option would be some more restrictive regulatory mechanisms implemented by government, that force auditing and various stress testing on models. I think the trouble with that is you might start to really impair the nascent AI economy if you take that kind of approach.
Jon Krohn: 00:37:41
Like Europe.
Blake Richards: 00:37:42
Like Europe, yes, exactly. And so, I personally wouldn't advocate for that. I think we should first try these more voluntary auditing mechanisms, that would be driven by the desire to actually have your product be well certified. Now, the only thing I'd add to that is that we should probably have at least some legal mechanism for recourse in cases of obvious negligence. So it's one thing if you've got a model that's not doing anything too sensitive and it screws up and you didn't get it audited, and fine. But when you've got something that's actually playing a key role, say a self-driving car, where people could really get hurt if it does badly, then maybe you either need required audits, independently mandated by governments or whatever, or alternatively, you need to put in place legal mechanisms that put the responsibility on companies, such that they'll be on the hook if they don't get their systems audited appropriately.
Jon Krohn: 00:38:45
That makes a ton of sense. And I can poop on Europe and their regulatory regime, but with the stuff that they are coming up with, they're trying to make an effort to do that, where for things with lethal risk, there's much higher regulation than for your newsfeed.
Blake Richards: 00:39:04
Yep, exactly. And I think there are other things we can also say, which is that there are certain situations where AI should just not be deployed in an autonomous fashion ever. So the example I would give is the military. I personally would like, and I don't know if this is going to hold, unfortunately, but in an ideal world, AI systems would only ever be there to supplement human decision making in military applications. It would never be something where an AI is actually deciding to pull the trigger on something. That's the kind of scenario that actually makes me really worried. I would hope that no one's dumb enough to put autonomous AI systems in, say, nuclear chains of command vis-a-vis Dr. Strangelove, but even with things like fighter jets or whatever that are controlled by autonomous AI, you could imagine there being some situation that occurs that leads an autonomous AI to make a decision that then triggers a war.
00:40:05
These are the sorts of things that I do worry about, and I think it would be nice if we had... This is where I think some heavy government regulation is more appropriate, aligned with what Europe's doing. When people's lives are actually at stake, when you're talking about anything potentially lethal, then you got to come down with a heavier hand, as we see with medicine. You can't get a medicine approved by just getting some independent auditing, that you happen to pay someone for. You need to run through the FDA, and that's appropriate.
Jon Krohn: 00:40:38
Data Science and Machine Learning jobs increasingly demand Cloud Skills—with over 30% of job postings listing Cloud Skills as a requirement today and that percentage set to continue growing. Thankfully, Kirill & Hadelin, who have taught machine learning to millions of students, have now launched CloudWolf to efficiently provide you with the essential Cloud Computing skills. With CloudWolf, commit just 30 minutes a day for 30 days and you can obtain your official AWS Certification badge. Secure your career's future. Join now at cloudwolf.com/sds for a whopping 30% membership discount. Again that’s cloudwolf.com/sds to start your cloud journey today.
00:41:19
Yeah, that makes sense. The military one is certainly a worrying one, because for one, some of the biggest players in the military, like the US, often don't sign up to landmine treaties that the rest of the world signs up for. And yeah, like cluster bombs, and then, so there's that. And then, on the other hand, in the war right now in Ukraine, there's tons of drone warfare, hundreds of drones at least, I don't know if it's thousands, being destroyed every day, coming from both sides. And it makes a lot of sense for these drone operators to be trying to make the systems more and more autonomous, because as the drone gets further away from the operator, and you want it to be, because you're trying to get behind enemy lines, jamming and these kinds of things make it harder to control that thing from a distance. And so, if you're trying to make an effective killing system, that's taking out the artillery of your enemy, that's taking out the minesweepers of your enemy, you're probably going to try to have a machine vision system that is pulling the trigger, without that human operator needing to be there pressing the button, at that remote distance, that is likely to be interfered with.
Blake Richards: 00:42:50
I agree. And this is where I do stay up at night more is I think there is going to be a natural push amongst militaries across the world to incorporate more and more autonomous AI into their systems, for exactly the sorts of reasons you just described. And moreover, as we're seeing the world enter a new phase of geopolitical rivalries, that's going to add fuel to that fire. And that's where I think, ahh, if only we could all collectively kind of focus our efforts on reducing the risk of these sorts of uses of AI, that would be a really good use of our time.
Jon Krohn: 00:43:36
Pouring all of the money that we pour into war into education, nuclear fusion, into dealing with the consequences of climate change and trying to mitigate climate change. That would be nice. Come on.
Blake Richards: 00:43:50
It would be. It would be. It would indicate actual superintelligence amongst [inaudible 00:43:55]
Jon Krohn: 00:43:55
Yeah, we'll know that artificial superintelligence is here when we're all getting along.
Blake Richards: 00:43:59
Yes.
Jon Krohn: 00:44:03
So yeah, so you actually, on the note of that utopian future that we just gave a really brief glimpse of there, nobody wants to hear about utopias, they don't sell, nobody's going to buy our records, our Super Data Science records, Blake, if we're just talking about utopias, we need dystopias.
Blake Richards: 00:44:21
Okay.
Jon Krohn: 00:44:22
We need these Super Data Science records flying off the shelves. And so, you've mentioned the dystopian aspect of a fully automated economy before. So sometimes, in some scenarios, and I haven't done this in public, maybe I did it once in a publicly recorded talk, but privately, certainly, I often get really excited about how a fully automated economy and achieving nuclear fusion could mean things like no more war, no more threats of physical violence at all, everybody having a great education and feeling a sense of fulfillment and having all the nutrition that they need. It seems within reach in our lifetimes to have that kind of utopia. But there's also a potential... There's more routes to dystopia than utopia. There's more possible dystopias. So yeah, you've talked about a dystopian aspect of a fully automated economy. So yeah, so what are you most worried about there? And what steps can we take to ensure that the integration of AI into more and more parts of our economy supports human flourishing?
Blake Richards: 00:45:37
I think the best way to approach that, to some extent, is to really force ourselves to view AI systems as tools that supplement human capabilities. And in that respect, I am actually fairly bullish that that's how things are largely going to evolve. So indeed, I think a fully automated economy wouldn't be desirable and would look a lot like a dystopia. The reason I would say that is in part related to a... Well, I'll actually relate it to a Robert Heinlein quote I have here in my office that I really like to stick to. And what it says is that basically... Well, here, I'll read it if that's okay. It's a fun little quote.
Jon Krohn: 00:46:27
Please.
Blake Richards: 00:46:28
So "A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects."
00:46:54
So I like that quote, because it gets at what we were discussing with respect to intelligence. What defines a human being and what makes us intelligent agents really is our generality, our ability to do many different tasks and to adhere to many different norms that are necessary for our survival. And I think that human beings flourish when they have the opportunity to really have a rich life where they're doing many different things. And I'm not generally a Marxist, but I think many of Marx's analyses of our economy were correct. And I think, even just within the shift towards capitalism, we saw the alienation of workers as they became specialized cogs in machines. And I actually think it's worse for human emotional development if you're just kind of doing the same thing constantly. So where I think we could have a real problem that way is if you have a fully, fully automated economy, then what are humans actually up to?
00:47:55
And I worry about not having pressures on us to do many different things. I actually think, the more tasks we have, the more work we have to do, the better we are, the more we flourish. So I don't think we want to shift towards a fully automated economy. That being said, I think we've seen that human beings, with additional tools by their side, can continue to be generalists, who do a lot of different stuff, but they do it more efficiently and more productively. And that's where I see a more positive future for AI's incorporation into our lives. I think then that this is what we're really going to have to try to stick to. And where we're going to have to really be careful about it is specifically with respect to what we might call either white collar work or even just intellectual and creative enterprises. I actually think, between you and me, the full automation of the economy is never going to happen in the next few decades.
Jon Krohn: 00:48:51
I'll keep it between us for sure.
Blake Richards: 00:48:54
Yeah, that's right. Don't tell anyone else. I've said this publicly. That's a funny linguistic quirk, isn't it? Anyway, between you, me, the audience, and everyone who's read my Economist article, I actually don't think a fully automated economy is all that viable, in large part because robotics is not advancing at the same pace that AI is. Actual physical systems for manipulating the world are really difficult to design, and they don't benefit from scale and automatic optimization. You can't just backprop your way out of the difficulty of designing a good limb with sophisticated sensors to pick up and do very complicated manual manipulations of objects. That can only be done through careful, slow engineering. It's going to take us forever.
Jon Krohn: 00:49:50
Yeah, yeah, yeah. Although at least we have the ability to simulate. So to have a robot arm in a simulator, it helps a lot. But I see what you're saying. I see what you're saying.
Blake Richards: 00:50:02
It helps a lot. But even with the simulators, it's still difficult to get to the point where you've got something at the same level of sophistication as a human body. Because I think this is worth saying, evolution can be viewed as an optimization process itself. So our ability to manipulate the world physically is a result of an optimization process, a sort of automated optimization process, which was natural selection, and that's why they're incredibly sophisticated. Until we have the ability to do our own optimization like that, then I think we're going to constantly struggle to get to the point where robots are anywhere near as capable as human beings. So that's why I personally think physical tasks will probably be ones where we get robot supplements. So we already see that in factories, in farming, there'll be robots that will help with physical tasks, but you still need people there, very much so. Because you have to have that little primate, with dextrous hands, who can deal with basically any weird little situation [inaudible 00:51:05]
Jon Krohn: 00:51:04
Let the kids do it,
Blake Richards: 00:51:05
Let the kids do it, exactly. Your robot is just going to do its usual thing and then, occasionally, need some help from the kids. The much more concerning area, I think, of the economy is, as I said, sort of intellectual and creative aspects of the economy, where instead, we can just take advantage of scale, we can just take advantage of automatic optimization. We're already seeing that with the sort of image generation and text generation capabilities that are emerging in AI systems. And there, I think there's a real concern that you would have potentially AI systems that wholly replace human beings in certain aspects of the economy.
00:51:44
And that wouldn't be good. I don't think it would be good to ever have any part of the economy be fully automated, because we want people to have many different experiences and many different pressures on them. That's how they thrive. So my hope is that we don't see that, and due to some semi-conscious planning and also even just some natural human inclination to trust humans more than AI systems, we'll see these AI tools as supplements, as things that help artists and writers and lawyers and office workers be a hundred times more productive, but they're still going to be in there doing their stuff.
Jon Krohn: 00:52:24
That makes a lot of sense to me. There's an author, Aldous Huxley, who's super famous for one book in particular, Brave New World, although also lots of books on taking hallucinogens, and I think, if my memory serves correctly, the final novel that he wrote is called Island. And Island was written as his answer to the dystopia of Brave New World. So he tried to conjure up his idea of a utopia for this book, and there still ends up being drama. Because it's just this one island of utopia left, while this dystopian military dictatorship, that's the rest of the world, is starting to encroach on the one remaining utopic island. But in each chapter of the book, he tries to outline his ideal for education, and then, another chapter is the ideal for work, and another chapter is the ideal for spiritual fulfillment. And in his chapter on work, he has everybody rotating between tasks, so that you're...
Blake Richards: 00:53:45
Right. Yep. I think that's exactly it. This is sort of back to Marx's point, the alienation of the worker. That's probably what we, as human beings, evolved to be most comfortable with. When you're part of a small tribe, surviving in the wild, maybe you're going to have some specialization. Certain people in the tribe will do certain things more often than others, but you're all ultimately sharing all of the tasks at the end of the day. And that seems to be where we really do feel best. And I kind of see that in my own life even. When I just do the same thing constantly, it's not good. I feel better when I've got lots of different activities I get up to.
Jon Krohn: 00:54:25
Yeah, it makes a ton of sense to me. Cool. All right, so that kind of covers this section that I wanted to cover on AI alignment and existential risk, your Economist article in general. And so, that pin that we put in a while ago around biomimicry and the relationships between neuroscience and AI, I'd love to dig into that now. And so, let's start with how AI influences neuroscience, and then, later on, we'll go the other way.
Blake Richards: 00:55:00
Sure.
Jon Krohn: 00:55:00
So your research, as we already mentioned, is at the nexus of AI and neuroscience, and so, let's dive into that from the deep learning perspective. You've mentioned before, in other talks that you've given, the importance of interpreting neuroscience data through the lens of machine learning. Why is this a good strategy? It seems like it's something that's really obvious to you.
Blake Richards: 00:55:19
Yeah, so I think this is a good strategy, because I think that one of the things that neuroscience is not always great about adhering to is the principle that we should really be asking ourselves, "How can animals and their brains actually accomplish the things that they really do, not simply the things that we give them in the lab?" This is a critical idea in my mind because I think what has happened in neuroscience historically, and I get why, I see the reasons for this, and this is not a critique of neuroscience broadly, but what we've seen historically in neuroscience is there was always this attempt to design very well controlled experiments, a reasonable goal, where you have things that are simple enough that you can be sure that you're controlling all the variables and you can just change one little variable and you can see how things change and then you do your experiments this way.
00:56:27
The problem is that that typically involves these very artificial tasks for animals and people that don't resemble even a little bit what they actually do in their lives. And then when we come to model that and to think about how the brain is engaged in computations in order to accomplish different tasks, the way that we, I think, go wrong in neuroscience has been that, because we've laser-focused in on these very specific, tiny, dinky, little well-controlled tasks, we end up building models that are way too simplistic and which are not actually addressing the real problem. So I think it's important in neuroscience for us to occasionally step back and say, "Okay, I don't want to know how an animal selects between the left square or the right square based on the fact that 70% of the time the left square gives it water, but only 30% of the time the right square does. I want to know how an animal successfully navigates an environment when all it's receiving is this huge high dimensional input of visual and vestibular and somatosensory inputs."
00:57:33
I think that when we start to ask those questions, when we ask, "How could a brain even solve that? How could you do that task?", it feels at first, for many people in neuroscience I think, like, well, it's too hard. You're asking too big a thing right from the get-go. That's impossible for us to think about mathematically and concretely. But this is where machine learning comes in. Machine learning and AI have been concerned with exactly that problem: how do you engineer a system to successfully solve these kinds of tasks? To take the sort of high-dimensional inputs that humans receive and behave in appropriate ways according to the sorts of norms that humans follow? And what I think is super useful about that is when you start actually trying to build systems that do these things for real, you encounter certain fundamental problems that have to be resolved.
00:58:29
And that then lets us look back at the neuroscience and say, "Well, if this is a fundamental problem that has to be solved, then the brain has to somehow solve this problem as well." To give my favorite example of this, one of the issues when you're really doing a challenging task with high-dimensional inputs and high-dimensional outputs is you come to realize that there's this problem of credit assignment. So when you're engaged in some task and you make an error or you succeed at something, how do you figure out which of the things you did led to your error or led to your success? And AI researchers have long contended with this problem. It's been something that, from the get-go, they realized they had to deal with. If you're going to design a system to accomplish some task, you have to figure out which parts of the system were responsible for any errors, or which parts of the system got you some rewards.
00:59:29
And these are things that people in reinforcement learning and supervised learning, any area of machine learning and AI have contended with. So now when we go to the neuroscience, when we look at certain data, we can actually ask, how does the brain solve this problem? How does the brain solve credit assignment? And I think what we've seen in neuroscience over the last 30 years is that often, there are universal enough principles that the AI can actually indicate to us potential solutions that the brain might employ. We saw this most apparently with reinforcement learning. So I've already mentioned the credit assignment problem. Reinforcement learning is designed in part to answer the question of how do you assign credit for the rewards that you get later on to different states that you visited or different actions that you took. And reinforcement learning researchers have developed excellent systems for solving that credit assignment problem.
01:00:31
And then neuroscientists, looking at the signals in the brain, and in particular the signals in an area of our midbrain, which is an evolutionarily ancient part of our brain responsible for a lot of our most basic functions, when they looked at the signals in the midbrain from dopamine neurons, a particular neuromodulatory system in the brain, they observed, "Oh, these neuromodulatory systems, in particular dopamine, look like they're doing something really similar to the reward prediction signals used in reinforcement learning for credit assignment." And that's just, it's wild on some level, but there it is. The data is very clear.
01:01:12
Now people argue about whether the brain is actually doing the exact form of reinforcement learning that we use in AI, and it's probably not exactly the same, but they're close enough that we see many similarities between them. And we only got there because some researchers in the nineties, some neuroscientists, were asking themselves, "What is the problem that the brain would have to solve in order to do the sorts of things that it does, like reinforcement learning, learning from rewards?" And then saying, "Oh, well, the AI folks have this solution that they've developed to this credit assignment problem. What if we look at whether this solution is in any way being employed by the brain?" And that has been an immensely productive approach in neuroscience. Computational neuroscience, I think, is taking that inspiration from AI theory.
Jon Krohn: 01:02:05
And so I can't help but notice some of these similarities to the conversation I was having with Kim in the episode that came out two weeks ago. So we're talking about the dopamine reward system, we're talking about reinforcement learning. Now, you're talking about the credit assignment problem. She was talking about reward prediction error, the reward prediction error hypothesis. Are these in some way related?
Blake Richards: 01:02:33
They are, very much so. So within machine learning, the way that we address the problem of assigning credit for rewards to the actions and the states that a system visited is via, well, one of the ways we do it is via these temporal difference-based learning algorithms. These are learning algorithms where you're resolving the following problem. Let's say I want to learn to maximize my rewards in the environment. I want to maximize my rewards out into the future. I don't really care about just the immediate reward I obtain. I want to make sure that, with some discounting, because I don't care about the infinite time horizon, with some discounting, I am capable of maximizing rewards into the future. Now, when you try to estimate what we call the value of different states and different actions in the environment, you see that you've got a mathematical conundrum, which is that you want to know, on average, how much reward you are going to get out into the future from the state you're in or from the action you're taking.
01:03:48
The problem is that to actually do that calculation properly, you would need to wait forever to see what rewards you eventually get. And it's not an effective way to actually learn anything. So assigning credit for the rewards that you are going to receive in the future becomes this very challenging task. The solution that reinforcement learning researchers came up with is this temporal difference learning approach where you say, "Actually what we're going to do is we're not going to wait to receive all of the rewards into the future. We're going to use what's called a bootstrapping approach, and we're going to take the reward that we get in the immediate time, and then we're going to bootstrap off of our current estimate of how much reward we're going to get into the future. So we're going to say that our current prediction, as it were about the rewards we're going to receive is going to be contrasted to the actual things that we get from the environment."
01:04:48
This is where the reward prediction error comes from: in trying to resolve this credit assignment problem in a manner that doesn't require you to wait forever to see what rewards you get in the future, you come up with a reward prediction error that just falls out of the math. And it then just so happens that this reward prediction error, which helps you solve your temporal credit assignment problem by bootstrapping off of your current estimates of value, is very similar to what we see going on in the dopaminergic systems of the midbrain.
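To make that bootstrapping idea concrete, here is a minimal sketch of tabular TD(0) learning in Python. The toy chain environment, the reward placement, and the hyperparameters are all invented purely for illustration, not taken from the conversation; the key line is the reward prediction error, delta, which contrasts the bootstrapped target with the current value estimate instead of waiting for all future rewards.

```python
# A minimal, illustrative TD(0) sketch on a made-up five-state chain.
N_STATES = 5
GAMMA = 0.9      # discount factor (the "gamma" hyperparameter discussed below)
ALPHA = 0.1      # learning rate

V = [0.0] * N_STATES  # value estimate for each state

def step(state):
    """Move one state to the right; a reward of 1.0 arrives only at the very end."""
    next_state = state + 1
    reward = 1.0 if next_state == N_STATES else 0.0
    done = next_state == N_STATES
    return next_state, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        s_next, r, done = step(s)
        v_next = 0.0 if done else V[s_next]
        # Reward prediction error: (what we got + discounted bootstrap) minus
        # (what we currently predicted). This is the quantity that resembles
        # the dopamine signals described above.
        delta = r + GAMMA * v_next - V[s]
        V[s] += ALPHA * delta   # nudge the estimate toward the bootstrapped target
        s = s_next

print([round(v, 2) for v in V])  # converges toward GAMMA ** (4 - s): [0.66, 0.73, 0.81, 0.9, 1.0]
```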
Jon Krohn: 01:05:23
Cool. Cool. Cool. So I think I get it. So from my understanding of reinforcement learning, the machine system is trying to maximize reward, which is some function that we define. It could be maximizing the score in a video game, say. In a more real-world scenario, it might be the kinds of things that we think about, like maximizing a financial return or, for a human, having as much fun as possible, I guess, fun units. And so in the machine learning sense, you can very strictly define this reward maximization as trying to maximize that reward into the future, over however many future time steps, and typically we have a hyperparameter gamma-
Blake Richards: 01:06:21
That is usually what it is called, gamma. Yes.
Jon Krohn: 01:06:25
And so we use that to down-weight: the more time steps away in the future some hypothetical reward is, the more it gets down-weighted by the gamma hyperparameter. And so similarly, in a biological brain like ours, we are trying in some sense to do that same kind of computation. We're trying to figure out, we're trying to think, "Okay, if I take this job or that job, or I go on this trip or this other trip, will I in the first instance have more financial security and better hours? And in the second case, will I have a more fun time or a really interesting time on that trip?" But because it is too computationally complex to really work out for sure what the right career choice is, you take shortcuts. And so this dopamine reward prediction error hypothesis is like a model of how we try to take that shortcut.
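As a quick aside, this is the discounted-return calculation being described, in a tiny sketch with made-up reward numbers chosen only for illustration:

```python
GAMMA = 0.9  # discount factor; a reward k steps away is scaled by GAMMA ** k

# Hypothetical stream of future rewards, purely for illustration.
future_rewards = [1.0, 0.0, 2.0, 5.0]

# Discounted return: sum over future time steps of GAMMA**k times the reward.
discounted_return = sum(GAMMA ** k * r for k, r in enumerate(future_rewards))
print(round(discounted_return, 3))  # 1.0 + 0.0 + 1.62 + 3.645 = 6.265
```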
Blake Richards: 01:07:39
Yeah, it's roughly that. That's right. It's a model of how we try to bypass the need to actually wait and see what happens in the future by instead saying, "Well, we're going to use our current internal best guess for how good an action this would be to take."
Jon Krohn: 01:08:03
Oh, yeah. So yeah, that is a key distinction from what I was describing, because basically I was saying it's the computational complexity of trying to unroll all the outcomes that would be coming ahead. But yeah, you're saying actually it's also a matter of just literally not wanting to wait and find out for sure what the truth is, as opposed to rolling forward our computed guess of the truth?
Blake Richards: 01:08:30
Right. Now, the distinction that I think you were instead discussing, that distinction between kind of mentally rolling things out, like, "Okay, what's going to happen if I do X, Y, Z?", that's another example of an interesting general principle that AI has tackled and which then helps us to think about it a bit more in neuroscience. So one of the things that we see in AI is, precisely as you say, that ideally, in some world, you'd just really think through all possible trajectories you could take through the environment. So how are you going to sequence your actions in order to arrive at the exact goal that you want? But the experience in AI has shown us that this can be a very challenging approach. Not only is it computationally really inefficient, it requires a lot of compute to do all those roll-outs.
01:09:34
It's challenging because if your internal model of the world is incorrect, then you might be coming to the wrong conclusions anyway. So what we see in AI is that, to date, this approach of really trying to plan through all your steps has been successful in some domains, but not in others. And in fact, in most domains, what's been more successful is just what we call a model-free approach, where you simply try to estimate, for every state and action, what is the future reward that you would expect from this state and action? And then you select the actions and try to get to the states that maximize your future reward without actually planning it all through. This is actually quite effective, and it's how a lot of modern reinforcement learning works. But this is where it gets interesting when we look at the brain: we see systems in our evolutionarily ancient brain structures that look a lot like these model-free reinforcement learning systems.
01:10:39
So that temporal difference learning error we were just discussing, and a lot of the sorts of representations required to support it, seem to exist in the more evolutionarily ancient parts of the mammalian brain. But we also have the ability to do that rollout, to think about potential futures, to do what's called model-based reinforcement learning. And I think that what the AI has helped us to understand is that there are good reasons for the brain to have both systems in place, because you want to be able to trade off between the strengths and weaknesses of these two different systems. And so it's through the experience in AI that we come to kind of understand the teleology of the structures we see in the brain. Why would evolution have favored this sort of multiple-ways-of-solving-the-same-problem approach?
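For readers who want the distinction in code, here is a minimal sketch contrasting the two strategies. Every name in it, the states, actions, Q-values, and the little world model, is invented purely for illustration and isn't meant to describe any particular experiment or brain system.

```python
# Model-free versus model-based action selection, in toy form.
ACTIONS = ["left", "right"]
GAMMA = 0.9

# Model-free: act greedily on cached value estimates, with no planning at all.
Q = {("s0", "left"): 0.2, ("s0", "right"): 0.7}   # assume these were learned by TD

def model_free_action(state):
    # Just look up which action has the highest cached estimate of future reward.
    return max(ACTIONS, key=lambda a: Q[(state, a)])

# Model-based: mentally roll out possible futures using an internal world model.
def world_model(state, action):
    """Assumed internal model of the environment: returns (next_state, reward)."""
    return ("s_end", 1.0) if action == "right" else ("s_end", 0.1)

def model_based_action(state, depth=2):
    def rollout_value(s, a, d):
        s_next, r = world_model(s, a)
        if d == 0 or s_next == "s_end":
            return r
        return r + GAMMA * max(rollout_value(s_next, a2, d - 1) for a2 in ACTIONS)
    # Imagine taking each action, score the imagined future, then pick the best.
    return max(ACTIONS, key=lambda a: rollout_value(state, a, depth))

print(model_free_action("s0"), model_based_action("s0"))  # both choose "right" here
```

The trade-off described above shows up directly: the model-free lookup is cheap but only as good as its cached values, while the model-based rollout costs compute and is only as good as the internal world model.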
Jon Krohn: 01:11:31
The what ology?
Blake Richards: 01:11:34
Teleology, sorry. Yeah. The cause, what is the reason for something?
Jon Krohn: 01:11:40
Right.
Blake Richards: 01:11:41
I think it's rooted in theological terminology. With teleology, you were originally trying to determine, what is God's reason for this thing? But I use it scientifically to refer to why questions: why does our brain show this structure and not that one, et cetera? And I actually think that's a broad way of framing it: AI helps us answer why questions in neuroscience and reinterpret our data in light of these why questions.
Jon Krohn: 01:12:11
Very cool. That was fascinating. I love that. Yeah, you're coming up with all of these great lessons that we've learned about biological neuroscience from AI. So there's something that we have in machine learning systems, at least in supervised learning systems and, in another way, in reinforcement learning systems. We've been talking about maximizing reward in the reinforcement learning system: there's this quantitative thing that you're trying to maximize, a number of dollars or a number of fun units. And in supervised learning, we have a loss function or cost function, which we're trying to minimize. So can you expand on the role of these kinds of loss functions in the brain? My understanding is you have a bunch of papers on this idea, so tell us a bit more about that, and how close are we to accurately modeling this in the context of human learning?
Blake Richards: 01:13:21
So I think it's worth starting by tying loss functions to our previous conversation. Loss functions and reward functions are how we quantify the norms that we want an intelligent system to follow. And in AI, they are arguably one of the key tools that engineers have to design. If you look at modern AI practice, most of what AI researchers are working on is basically designing the architecture of the network and the loss functions that they're going to train it with, and everything else you handle through automatic differentiation and the like. So these loss functions are a key way of doing artificial intelligence, because they provide us with this quantification of our norms. In neuroscience, I think it's good for us to ask the question, what loss functions or reward functions is the brain using? And the reason this is a key question is because, in the absence of an answer to that question, I actually think it's very hard to understand what's happening in a neural circuit.
01:14:41
The reason I say that is that neural circuits are, like the artificial neural networks we use in AI, parallel distributed processing computers. They are designed in such a way that you've got billions of different units all doing slightly different things from each other simultaneously, and that's how the compute works. And when you look at a parallel distributed processing system, it can be very hard to understand what the hell it's doing. It's almost impossible. And I think this is one of the real challenges for neuroscience: the goal is basically to understand this device, this organ, that is almost by definition non-intelligible, because its computation is done in this distributed, parallel way that is not nice for analysis by human intuitions. And so that challenge is, I think, best resolved by instead asking the question, "Well, what would this system be optimized for?" Even if we can't identify what every individual neuron in the circuit is doing and how it's contributing to the computation, at least we can answer the question, "What was this circuit collectively optimized for?" And that will then allow us to interpret the data that we get from the cells with better understanding. So I view this as one of the key questions in neuroscience, actually: identifying the loss functions and the reward functions that ultimately shape our brain and shape the actions that we take.
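To ground the point that, in modern practice, the engineer's main levers are the architecture and the loss function, here is a minimal, generic PyTorch-style sketch; the network shape, the data, and the hyperparameters are arbitrary placeholders chosen only for illustration.

```python
import torch
import torch.nn as nn

# Random placeholder data, purely for illustration.
x = torch.randn(256, 10)           # hypothetical inputs
y = torch.randn(256, 1)            # hypothetical targets

model = nn.Sequential(             # lever 1: the architecture
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
loss_fn = nn.MSELoss()             # lever 2: the loss, a quantified "norm" to follow
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)    # how far current behavior is from the norm
    loss.backward()                # automatic differentiation assigns credit to every weight
    optimizer.step()               # follow the gradient downhill
```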
Jon Krohn: 01:16:16
Very cool. I love that idea. So yeah, basically, and again, this is another time in this episode that you've said something that seems abundantly straightforward and clear, and I guess I've had inklings of this thought but never articulated it as clearly as you just brought shape to it. This idea that the reason why the human brain is so hard to study is because it has an enormous number of parallel computations happening at the same time across quite a bit of space, and so trying to have an intuition about how that's working is probably futile. But the objectives of this system can be modeled, the outcomes can be modeled, relatively effectively. And cost functions or reward functions, we know, are our key tool in machine learning. So applying those to human learning or animal behavior seems like fertile ground.
Blake Richards: 01:17:27
Exactly. That's exactly it. That's a really nice rephrasing of my point.
Jon Krohn: 01:17:31
So in machine learning, so whether we're talking about a supervised learning system or the reinforcement learning system with cost and reward functions respectively, in either of those cases, we use backpropagation in order to adjust our model weights up and down the deep learning system. And so is there a kind of biological analog of backpropagation? It seems to be a question that has sparked a lot of debate in the neuroscience community.
Blake Richards: 01:18:03
Yeah, it has. And I ultimately am, for better or for worse, associated with one fairly, let's say, extreme position on that question, which is yes, I think the brain has some kind of approximation to backpropagation. Now, let me explain why I say that. This comes back to the question of credit assignment and the sort of general problems that systems have to solve. When you look at a parallel distributed computer and you're trying to figure out how to train it in a way that it gets better at the particular tasks that you've given it, i.e., better at minimizing the loss or maximizing the rewards that you've given it, it's very hard to know how to change your system.
01:19:00
With all of those billions of individual units acting in parallel, it's very hard to know what to change in the system, because it's all interconnected. It's this huge, gigantic network where everything kind of depends on everything else. And so figuring out why we got an error here can be very challenging. Now, it just so happens that we have a really nice solution in AI, and that is backpropagation. Backpropagation solves that problem for us. It tells us, "Okay, here was the set of synapses that were really responsible for this error. If you change them, you're going to reduce your error."
Jon Krohn: 01:19:36
And really quickly there, a synapse is where two neurons, two brain cells, connect with each other?
Blake Richards: 01:19:43
Yes, exactly. Exactly. And so we sometimes refer to the connections between units in artificial neural networks as synapses, or at least as synaptic weights. And what backprop does is give you a rule for updating the synaptic weights in your artificial neural network in order to guarantee that your loss will go down over time, or your rewards will increase over time. Now, the thing that I always come back to is that the brain faces this same problem. The brain is a parallel distributed processing system. It has to somehow solve the problem of figuring out which of the synapses and neurons were responsible for the errors or the successes. So I think most neuroscientists, not all of them, and this is something that bugs me sometimes, but let's just say the vast majority of neuroscientists, will recognize this as a real problem. They'll say, "Yes, indeed. The brain has to somehow figure out what physiological changes are going to actually help an animal get better at something in some way."
01:20:47
The question then becomes, is there any chance that the brain's solution to this problem bears any resemblance to backpropagation whatsoever? And that is less of a base assumption and more an empirical question that we can try to address. So the reason that I come down on the side of saying that the brain probably does have something kind of like backprop, or at least kind of like gradient descent, which is what backprop is, it's a version of gradient descent... And we can go over what that is if you want, just let me know.
Jon Krohn: 01:21:30
I think we can, really quickly. So gradient descent is something that is common to a lot of machine learning algorithms, and backpropagation is, I don't know if the right word is an extension of that or a specific case of that, for neural networks, where we need to be doing something special. And it's exactly what you're about to describe in the biological system, because the artificial neural network, unless it's an extremely shallow network, if it's a deep learning network, has many layers of artificial neurons. And so backpropagation is a special case of gradient descent because we need to be updating the model weights, the synaptic weights, across all of those layers. So I think that's probably a good enough explanation for it.
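Here, for concreteness, is a hand-written backpropagation pass through a tiny two-layer network, showing what "updating the synaptic weights across all of those layers" looks like in code. The data, shapes, and learning rate are arbitrary, made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))        # hypothetical inputs
y = rng.normal(size=(32, 1))        # hypothetical targets

W1 = rng.normal(size=(4, 8)) * 0.1  # "synaptic weights", layer 1
W2 = rng.normal(size=(8, 1)) * 0.1  # "synaptic weights", layer 2
lr = 0.05

for step in range(200):
    # Forward pass
    h = np.tanh(x @ W1)             # hidden-layer activity
    y_hat = h @ W2                  # output
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backward pass: send the error signal back through the layers, assigning
    # credit (blame) to every weight for the loss.
    dy = 2 * err / err.size         # dL/dy_hat
    dW2 = h.T @ dy                  # credit for layer-2 weights
    dh = dy @ W2.T                  # error signal propagated back to the hidden layer
    dW1 = x.T @ (dh * (1 - h ** 2)) # credit for layer-1 weights (tanh derivative)

    W2 -= lr * dW2                  # gradient descent on every "synapse"
    W1 -= lr * dW1

print(round(float(loss), 4))        # the loss shrinks from its starting value
```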
Blake Richards: 01:22:20
Okay, great. Actually, I'm realizing I should unpack it a little bit more for what I'm about to say. So gradient descent works according to the following principle. Let's say you've got some loss function that's measuring how badly you're doing at your task; it's measuring, say, your error. Gradient descent as a principle says, okay, the way that you're going to get better at this task is you're going to look at the slope of that loss function. So you're going to look at which direction in your synaptic weight space the error gets lower in, and which direction it gets higher in. Whatever direction the error is going lower in, you follow that direction. That's how gradient descent works, and it guarantees that your error is going to go down over time, because you're always updating your synaptic connections in a manner that follows that hill of your loss function. Backpropagation, as you said, is just one specific way of doing gradient descent in multilayered artificial neural networks.
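That slope-following principle fits in a few lines. This sketch uses a made-up one-dimensional quadratic loss, chosen only so the downhill direction is easy to see:

```python
# Minimal gradient descent on a toy loss with its lowest point at w = 3.
def loss(w):
    return (w - 3.0) ** 2

def slope(w):
    return 2.0 * (w - 3.0)          # derivative of the loss at w

w = -5.0                            # arbitrary starting weight
lr = 0.1                            # step size
for _ in range(50):
    w -= lr * slope(w)              # step against the slope, so the loss goes down

print(round(w, 3), round(loss(w), 6))  # w approaches 3, loss approaches 0
```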
01:23:22
What I really believe is not that the brain does backprop per se, but that the brain does some form of gradient descent. Now, why do I say that? I say that for two reasons. The first is that mathematically speaking, and this is a point that is sometimes not always well received and not always easy to understand, but I'm going to try my best to articulate it to you. Mathematically speaking, if your system is making small changes to itself, then necessarily if you're getting better at the task over time, you have to be going down that hill. You can't be getting better at the task while making small steps if you're not going down that loss function hill. It's just a mathematical guarantee you have to be going down the hill.
01:24:17
So then the question is, well, maybe the brain's going down the hill not because it's actually figuring out the slope of the hill and following that slope, but maybe it's doing something else. Maybe it's somehow got some way of randomly exploring the space and if things get better, it keeps going in that direction, and if things get worse, it doesn't go in that direction, for example. That could be a solution that doesn't involve an explicit calculation of which direction the hill is going down in.
01:24:49
What we've seen in AI, and this brings me to my second reason. So we know that the brain has to be at least following the slope, and then the question is, is it actively calculating the slope, or is it just that what it does happens to bring you down the slope? I think it actively calculates the slope. And the reason I say that is because one of the fascinating things we see when it comes to optimizing parallel distributed processing systems is that most algorithms for solving this credit assignment problem, for guaranteeing that you're going down the slope of your loss function, get worse as the network gets larger. It gets harder and harder, it takes more time to train your system, et cetera.
01:25:33
For example, that approach I just described of randomly trying stuff and then keeping what works and discarding what doesn't work, that just gets totally garbage when you get to really large networks, and it's because it's just too much space to explore. If you've got a billion synapses, you can't just randomly try different perturbations of those synapses and hope that some of them work. You've got way too much space to explore for that. So what we see is that most of the algorithms that people develop don't scale well. They actually get worse as the networks get bigger. The one exception is gradient descent.
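A toy version of that scaling argument can be simulated directly. In this sketch, which is purely illustrative and uses a made-up linear-regression task, a "randomly perturb and keep what helps" learner is compared against a plain gradient learner as the number of weights grows; the specific numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_problem(dim):
    """Hypothetical regression task with `dim` weights, invented for illustration."""
    w_true = rng.normal(size=dim)
    X = rng.normal(size=(200, dim))
    return X, X @ w_true

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

def train(dim, steps=200, lr=0.01, noise=0.01, method="gradient"):
    X, y = make_problem(dim)
    w = np.zeros(dim)
    for _ in range(steps):
        if method == "gradient":
            grad = 2 * X.T @ (X @ w - y) / len(y)    # follow the slope directly
            w -= lr * grad
        else:  # "perturb": try a random tweak, keep it only if the loss improves
            trial = w + noise * rng.normal(size=dim)
            if loss(trial, X, y) < loss(w, X, y):
                w = trial
    return loss(w, X, y)

# As the number of weights grows, the perturbation learner stalls while the
# gradient learner keeps driving the loss down.
for dim in (10, 100, 1000):
    print(dim, round(train(dim, method="perturb"), 3), round(train(dim, method="gradient"), 3))
```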
01:26:15
Gradient descent is fascinating because it gets better the bigger the network gets, and the reason for that, well, I should say we don't fully know all the reasons for it, but one of the reasons is that gradient descent is following this hill down. A problem that gradient descent can encounter is that, as it's following the hill down, if it gets to the bottom of a hill, it will stay there, because it'll say, okay, I've got no more slope to follow, we're at the bottom. But you could reach what's called a local minimum, i.e., you could reach, say, just a tiny little ditch on the top of your mountain. So you're actually doing terribly, but because it just so happens that all around you the slope goes up, you're going to stay where you are. And for a long time, researchers thought that this is why gradient descent was a terrible algorithm and you shouldn't use it. When I was a student in the late 1990s, that's exactly what I learned in AI class: you don't want to use gradient descent-
Jon Krohn: 01:27:13
Wow.
Blake Richards: 01:27:14
You get stuck in local minima, it's garbage.
Jon Krohn: 01:27:17
Oh, wow.
Blake Richards: 01:27:18
But as you add more dimensions, as you increase the dimensionality of the space that you're exploring, the chances that every single dimension is at a local minimum drop to a vanishingly small probability. And the vast majority of the time, you will never encounter a local minimum unless you're very close to the actual true minimum of your problem. Now, there can be points called saddle points, where you've got a minimum in one dimension but not in another, and they can be harder to escape, but there are tricks for escaping those. And so what we've discovered is that if you've got your tricks for escaping saddle points, then gradient descent just works better and better and better the more dimensions you add, because you reduce the number of local minima that might be bad, and the system just converges on beautiful solutions.
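To illustrate the saddle-point idea, here is a toy two-dimensional sketch. The function is invented for illustration: it has a saddle at the origin (a minimum along x, a maximum along y) and true minima at y = ±√2, and a tiny random kick stands in for the kind of "trick" that lets gradient descent escape.

```python
import numpy as np

def grad(p):
    x, y = p
    return np.array([2 * x, y ** 3 - 2 * y])   # gradient of x**2 + y**4/4 - y**2

def descend(p, lr=0.1, steps=200, noise=0.0):
    rng = np.random.default_rng(1)
    for _ in range(steps):
        p = p - lr * grad(p) + noise * rng.normal(size=2)
    return p

start = np.array([1.0, 0.0])            # y = 0: exactly on the ridge of the saddle
print(descend(start))                   # gets stuck at the saddle, roughly (0, 0)
print(descend(start, noise=0.01))       # escapes toward a true minimum, y near +/-1.41
```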
01:28:07
So, we have this issue that solving the credit assignment problem, figuring out which synapses to change, is challenging in a parallel distributed processing system. Most algorithms suck the larger your network gets, except for one, which is gradient descent. And what we see in brains is that there has been an apparent evolutionary pressure to increase the size of brains in mammals. Humans have stupidly large brains. It's worth being explicit about that. When I say stupidly large, I really do mean that literally, because our brains take an inordinate amount of energy. I think around 20% of our energy budget is expended just on keeping our brain alive, and when we're thinking, that increases even further. So we're spending some stupid amount of money, well, not money, energy, on this one organ.
Jon Krohn: 01:29:05
I guess money too.
Blake Richards: 01:29:06
Money too.
Jon Krohn: 01:29:08
Burning our money to fuel the neuron factory.
Blake Richards: 01:29:11
That's right, exactly. So we're burning a stupid amount of energy, and it also makes human birth a (beep) nightmare, right? It's funny watching other animals give birth, it's like a piece of cake. And then, if you've ever witnessed a human birth, you're like, that is not a piece of cake. And the reason is simply the size of babies' heads, right? So if you were just thinking about the survival of the human race, the most obvious thing to do would be to decrease the brain size, because why would you want all this energy expenditure and these horrible births? But if there's some reason that bigger brains help you, that's why you'd keep them, right?
01:29:53
Now, bigger brains are only going to help you if you can do something with them. So I think it's safe to say that whatever learning algorithm our brains use scales well. It works well with more neurons, and I think that's clear from the way that evolution has favored larger brains. But that being said, there's the question of, well, what if there's another algorithm other than gradient descent that scales well? And so this is where I will place my careful-scientist hat on and say, we don't know that the brain is approximating gradient descent, but I think it's a very viable hypothesis, because it is the algorithm that scales well with size, and we see an evolutionary pressure for size. Moreover, when we train artificial neural networks with gradient descent, they develop features that look a lot like what we see in the brain.
01:30:46
And, as a final thing, and this is part of what my lab's done, there's now lots of scientific evidence showing that there are mechanisms in place in the brain that could make gradient descent possible. So even though it sounds outlandish to some ears, there are literal physiological mechanisms that we have proven could be used to do gradient descent. Now, we don't yet have the experimental data to verify that the brain is doing gradient descent, that's TBD, but I am bullish on this question. We know that the mechanisms are in place for the brain to do it, we know that brains have representations that look like what you get out of gradient descent, and we know that gradient descent is the one algorithm we've discovered so far that scales well with size. Add it all up, and I'm willing to put more money on the "the brain does something like gradient descent" pot than the "the brain doesn't do gradient descent" pot.
Jon Krohn: 01:31:41
Nicely said. Yeah, nicely said, I'm in. I'm in.
Blake Richards: 01:31:48
Good.
Jon Krohn: 01:31:48
Show me what betting shop I can put all my money down on that. Very cool. I have so many questions for you on the inverse. So now we've talked about transfer of academic research from AI to neuroscience quite a bit. I have so many questions, actually, I have even more questions going the other way from neuroscience to AI, but there's just no way we're going to have time to fit it in right now. So I think, and I'm putting you on the spot a bit here, but just between you and me, maybe we can do a part two in the future where we specifically do the neuroscience AI thing. So I'm just going to skip past that section for now-
Blake Richards: 01:32:34
Sure.
Jon Krohn: 01:32:35
... and keep our audience excited about that for the future.
Blake Richards: 01:32:38
Sounds good.
Jon Krohn: 01:32:40
So, more quickly, and maybe it's something we can talk about more in that future episode as well: there are these emerging things coming out of AI and neuroscience research being combined. Specifically, in a talk for the Center for the Neurobiology of Stress, which has a really cute acronym, CNS, like the central nervous system, so, it's funny, you proposed a thought-provoking science fiction idea of directly propagating external cost functions into the brain via gradient signals.
Blake Richards: 01:33:20
Yes.
Jon Krohn: 01:33:22
And so that follows on pretty nicely from what you were just talking about, even if it is sci-fi like. Do you want to explain that idea more and whether you still think it's possible?
Blake Richards: 01:33:32
I still think it's possible. And sure, let me unpack that a little bit more. So as I said, I think that we now have clear evidence that, in principle, there are physiological mechanisms in the brain that could support gradient calculations, that could be figuring out, okay, this is the direction I have to change things in order to follow that hill of my loss function downwards. Now, if that turns out to be correct, that some of these physiological mechanisms we've identified are indeed doing that, then what you could imagine doing is replacing the brain's own gradient signals with gradient signals that you externally impose. So you'd then have a brain interface where you're stimulating particular dendritic segments, dendrites being the branches off of neurons. And one of the things that we've shown in our papers is that there are particular dendritic branches that could be supporting gradient calculations.
01:34:33
So let's imagine that you then have a way of stimulating these dendritic branches in the brain, and you can basically give the brain gradients for loss functions that the brain doesn't have, but which you have in some external system. That would then give signals to the brain that would let it actually adapt itself to get better, changing its synaptic connections to reduce this other loss function that you have provided it with. That would be very cool, potentially, for really seamless brain-computer interfaces. One of the things worth noting is that the way this works right now in brain-computer interfaces is that we rely indirectly on the reinforcement learning mechanisms that exist in the brain. We drop some electrodes into a person's or a monkey's brain, we try to train them to control, say, a cursor on a computer, and the way they get trained is just that the monkey will receive a little bit of juice when it moves the cursor to the correct area, or the person will get some personal satisfaction when they do that. And so as a result, what the person's brain or the monkey's brain is actually trying to do is maximize rewards, and we're giving rewards that indicate the kind of thing that we want. But that means the credit assignment is indirect.
01:35:53
As far as the brain is concerned, it's maximizing rewards, and those are the gradients that it's following, but we are giving rewards that try to direct it towards this other problem. But if, let's say, you were instead able to literally send the brain a signal about the explicit error on its cursor, like, "No, no, no, you overshot that to the left here," and you then run gradients for that error back through the brain using this sci-fi system I've described, you could imagine that you would get to the point where you could control that cursor as easily as you control your hands, and it just feels completely natural. It's completely seamless.
01:36:32
So I think there would be a lot of promise for this sort of sci-fi level brain-computer interface that I have dreamed of ever since I was a young kid reading Neuromancer. But I think the interesting thing about it as well is that it would bring with it a very difficult ethical conundrum, which is that part of what makes human beings naturally free, arguably, is that we all have our own internal cost functions and no one can force a different cost function on us directly. The way that we force other people to do things is by tapping into their existing cost functions. So either by saying, "Okay, you want rewards, so we're going to give you rewards for doing the stuff we want you to do," or, "You don't want pain, so we're going to give you pain if you don't do the stuff we want you to do."
01:37:34
But these are all indirect ways, as I discussed with the BCI, of getting the brain's reward function to ultimately do the heavy lifting for you, and it's just you imposing a particular reward function structure. But if instead you could really give direct gradients into the brain, you would have a situation where for the first time in human history, there could be people where they're receiving an external loss function that their brains have not calculated themselves, and that would mean that it would almost be like a situation where the slave doesn't realize they're a slave because it would feel like you were just doing what your brain wants to do. It's just you wouldn't be aware or maybe you would, depending on how it was set up, that that loss function wasn't coming from you internally, but coming from this external source.
Jon Krohn: 01:38:23
Yeah, we could allow the artificial superintelligence, via a brain-computer interface, to reprogram all of our error functions so that we live in peace and we stop trying to send AI-powered drones over the border to destroy the other humans on the other side. Let's now put a pin in our whole conversation for an indefinite period. There are so many more things that I'd love to ask you about the neuroscience-to-AI transfer of knowledge, as well as these kinds of frontier questions about what might be possible in the future as a result of more convergence in AI and neuroscience research. For now, I'll let you go. Before I do that, though, we always ask our guests for a book recommendation, and you kind of just gave one, Neuromancer. I don't know if you want to use that.
Blake Richards: 01:39:18
No, I feel like I shouldn't use that, because I would be shocked if any of your readers, or sorry, any of your listeners, haven't read Neuromancer. Who knows? Maybe. If you haven't read Neuromancer, go read Neuromancer. But another one that I want to give as a book suggestion, which I read not too long ago, is Piranesi by Susanna Clarke. And I give this as a recommendation because one of the things that I think has been fascinating in AI in recent years is the fact that large language models work as well as they do. Really, a large language model is just trained on all the text that can be found on the internet, and yet it seems to develop a pretty good understanding of the physical world, of human interaction, all these sorts of things. And the reason, arguably, is because contained within all that text is the sort of sum knowledge of the human species, and so the AI is ingesting the sum knowledge of the human species.
01:40:21
Anyway, I bring up this book, Piranesi, because I don't want to give too much away, but it's a sort of fun book that starts with this other dimension that is literally built off of the sum of ancient human knowledge having poured into this alternate reality. And then there are some characters who are sort of living in this world, as it were, this alternate world. And that's literally all I'm going to say because I don't want to give away more of what's happening. It's a beautiful book. It's fascinating. It also really touches on the question of madness and what it means to be sane, and I thoroughly, thoroughly recommend it.
Jon Krohn: 01:41:07
I love it. That sounds like a really great recommendation. Also, you had Dr. Strangelove earlier, which sounds like a pretty [inaudible 01:41:13]
Blake Richards: 01:41:13
Yeah, if you haven't seen that movie, you should go see Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, a Stanley Kubrick classic. Definitely worth seeing, especially I think in light of these AI debates, because it goes to show how back in the 1960s people were already thinking, "Well, what if we automated weapons systems? That would probably not turn out well." So, long before superintelligence was a concern, there were people realizing that we shouldn't be putting AI in charge of our weapons systems.
Jon Krohn: 01:41:44
Yeah. Sounds like some more very sensible advice from you in this episode, like a lot of your brilliant thoughts, and so well articulated that it was so easy to be on exactly the same page as you, at least for me personally, and I'm sure for a lot of our listeners out there as well. Blake, for people who want to keep up to date with your latest thoughts between now and your future Super Data Science episode, what are the best ways to do that?
Blake Richards: 01:42:13
Well, so if you had asked me this three years ago, I would've said, you can follow me on Twitter. I'm @tyrell_turing.
Jon Krohn: 01:42:20
What?
Blake Richards: 01:42:23
Yeah. I picked that user handle back when I was 18 years old, not on Twitter, on various social media, and then it just became my social media handle when I created the Twitter account a decade ago.
Jon Krohn: 01:42:38
You didn't think Blake Richards was sufficiently unambiguous?
Blake Richards: 01:42:41
No, no. I went with tyrell_turing. But it just so happens that you can find me on Bluesky and Mastodon under the same handle, tyrell_turing, and I'm pretty active on social media. I always release our research on it, and you can follow me there. You can also find out more about our research both at the Mila website and at our lab website, which is linclab.org.
Jon Krohn: 01:43:13
Awesome. Thank you so much, Blake. It has been a treat to catch up with you after all these years. Crazy that it's been so long. It doesn't feel like it. Yeah, very easy chatting with you as always. Thank you so much for taking the time and for making this amazing episode, and we'll catch up with you again soon.
Blake Richards: 01:43:32
Thank you, Jon. It was wonderful speaking to you. Your questions were very insightful and dead on. I had a lot of fun, and it's nice to see you again after so many years.
Jon Krohn: 01:43:46
Whoa, what a trip. In today's episode, Blake filled us in on how intelligence is the ability to adhere to a norm once that norm's defined, how AGI may not exist, but we can nevertheless have a huge collection of different intelligence metrics to get a sense of whether a machine is as broadly intelligent as a human. He talked about how biomimicry isn't essential to attain human-like intelligence, but a functional ability such as episodic memory formation probably is. He talked about how cooperation is the norm in evolution, and so a super intelligent AI should not pose an existential risk, but rather should seek complementarity and cooperation with biological life on earth. And he talked about how machine learning's cost and reward functions are wildly useful for modeling human behavior and how biological brains may even learn through a mechanism similar to the backpropagation of artificial neural networks.
01:44:40
As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, the URLs for Blake's social media profiles, as well as my own at superdatascience.com/729. And beyond social media, if you live in Germany, we could meet in person soon. At the time of recording this message, I'm still sorting out the details, but what I do know for sure is that I'll be at the iconic Merantix AI Campus in Berlin from November 13th to 17th. I'll likely be moderating a panel session on how to build a commercially successful AI startup, and I'll interview a number of exceptional AI entrepreneurs. I'll post more details on my LinkedIn and Twitter feeds as November 13th approaches, but as cool as it would be to meet in person at the Merantix AI Campus, don't worry, I'll also be recording all of the sessions as Super Data Science episodes.
01:45:28
All right, thanks to my colleagues at Nebula for supporting me while I create content like this Super Data Science episode for you. And thanks of course, to Ivana, Mario, Natalie, Serg, Sylvia, Zara, and Kirill on the Super Data Science team for producing another amazing episode for us today. You can support this show by checking out our sponsor's links, by sharing, by reviewing, by subscribing, but most of all, please just keep on tuning in. I'm so grateful to have you listening and I hope I can continue to make episodes you love for years and years to come. Until next time, keep on rocking it out there and I'm looking forward to enjoying another round of the Super Data Science podcast with you very soon.