44 minutes
SDS 838: Consciousness and Machines, with Jennifer K. Hill
Jon Krohn heads to Lisbon for an interview hosted by Bella Shing, Chapter Lead for Light DAO. He shares the stage with Regarding Consciousness podcast host Jennifer Hill, where the three discuss AI philosophy and consciousness.
About Jennifer K. Hill
Jennifer K. Hill is an Evolutionary Leader, entrepreneur, author, speaker, and podcaster. She has hosted popular shows with Deepak Chopra, Don Hoffman, and many other leaders worldwide on her Regarding Consciousness podcast. After exiting her first company in 2018, she co-founded a new company in the technology space, OptiMatch (om.app). It utilizes its proprietary algorithm and built-in AI to connect the right people for high-trust business relationships. She also recently received a Lifetime Achievement award from the Visioneers. When she is not speaking or building companies, she loves to give back and has built two schools in third-world countries.
Overview
Bella first wanted to ask Jon and Jennifer, two business leaders in AI, where they see the field developing in terms of conscious leadership practices. Jennifer talks about her experiences as an early adopter of ChatGPT, first in learning how to use it and then in teaching it to others. She found that the onus is on us to improve our communication skills, believing improved speaking and listening abilities are great ways to train ourselves to become more self-aware. Jon attributes communicating with AI to his improved communication skills in other walks of life. He says that spending time with a GenAI model that has shown ‘patience’ with him, even while he makes mistakes, makes him more inclined to give his time, energy and empathy to others throughout the day. Jon believes that AI interaction may actually make us more patient and polite.
Nevertheless, Jon is cautious about pursuing machines that possess infinite knowledge. He says that the current approach to observing and monitoring systems is the right one, and that collaboration between research groups will be the best way forward. One of the first steps in placing guardrails on AI may be to find distinct definitions for knowledge, consciousness, and intelligence that we can all agree on, and also how they can be measured.
Listen to the episode to hear about the platforms and technologies Jon and Jennifer are most excited about for the future, as well as how we might draw the line between human and AI consciousness as we enter a future of AI integration and augmentation.
Items mentioned in this podcast:
- SDS 748: The Five Levels of AGI
- SDS 770: The Neuroscientific Guide to Confidence
- Taming the Machine by Nell Watson
- Claude 3.5 Sonnet
- Thinking, Fast and Slow by Daniel Kahneman
- Web Summit
- Wait But Why by Tim Urban
- The Case Against Reality by Donald D. Hoffman
- OptiMatch
- Light DAO
- Alan Watts
- “Machines of Loving Grace: How AI Could Transform the World for the Better” by Dario Amodei
Follow Jennifer:
Did you enjoy the podcast?
- How would you define consciousness, intelligence, and knowledge?
Podcast Transcript
Jon Krohn: 00:02
This is episode number 838 with Jennifer K. Hill, co-founder and CEO of OptiMatch.
00:20
Welcome back to the Super Data Science Podcast. I am your host, Jon Krohn. I've got something different for you today. Instead of hosting today's podcast episode, I'm one of the guests. Last week, while I was at Web Summit in Portugal, Lucy Antrobus, who was on episode number 770, The Neuroscientific Guide to Confidence, asked me if I'd like to be one of the guests in an interview. The interview was part of a salon hosted by an organization called Light DAO, a decentralized autonomous organization that defines itself as a global community for conscious entrepreneurs and investors. In this context, my understanding is that conscious means something like socially responsible entrepreneurship and investment that balances financial results with social impact. In any event, the interview was hosted by Bella Shing, who is the Chapter Lead for Light DAO in Lisbon. She's also a film producer and entrepreneur.
01:16
Let's jump right to Bella as she introduces Jennifer and then myself. And then we'll be straight into the interview and audience Q&A, all of which centers around general questions related to AI and consciousness. It should be interesting to any listener. No particular technical background is required. All right, here we go.
Bella Shing: 01:35
I am excited to introduce both Jennifer Hill and Jon Krohn. So Jennifer, for those of you who do not know, sold her first company and has been a speaker on stages with 100,000 people in India. She has her own podcast called Regarding Consciousness, which features thought leaders like Deepak Chopra, Gregg Braden, Bruce Lipton, and many others. She's built two schools, in Nepal and in Senegal, and she's won a Lifetime Achievement Award from the Visioneers. She's also a proud member of the Evolutionary Leaders and The Octopus Movement. She's currently building her next venture, OptiMatch, software that uses a proprietary algorithm to align individuals, businesses, and communities to create more psychological safety, which is very exciting. And I've used it. It's brilliant and very simple and really easy to use and very accurate. She also coaches a handful of CEOs.
02:34
And Jon is the Chief Data Scientist and co-founder of Nebula. He's also the author of a bestselling book called Deep Learning Illustrated. He's also the host of Super Data Science. How many people have heard his podcast? Anyone here? Okay, a few. So it's the most listened-to and most downloaded podcast about AI and data science, and he is a wealth of information, and we're so excited to have him here. He also presents popular machine learning tutorials via Udemy and O'Reilly, with 100,000 students. He was a keynote speaker here at Web Summit, and we are very thankful to Lucy for connecting us and making this all happen. Thank you, Lucy. Yay.
03:20
So I'll start you guys off with some questions and I'm going to start with Jen. What do you think, because your work intersects with consciousness and executive development, how do you see the role of self-awareness and conscious leadership evolving with tech, with AI?
Jennifer K. Hill: 03:38
Thank you so much, Bella. Thank you, Jon, for being here, and to all of you who joined us tonight, wherever you've joined from. It's a pleasure to have you here with us. In terms of the intersection of consciousness and the way we operate as leaders and executives, I think the emergence of AI and different technologies is going to help us use our creative capacities. Amanda and I were just speaking about that when we were talking about what are some of our fears, our excitements, the things that light us up about using AI or different types of technology, and what scares us. And I think that, like all things, it's the consciousness with which we approach the thing that impacts its use for good or its use for evil.
04:22
In terms of being executives and leaders, we were just talking about this, Bella and Jon and myself, a moment ago. When I first started using AI about three years ago, I was an early adopter of ChatGPT and started teaching classes on it with a dear friend of mine. And as we started to teach it, I noticed something immediate, and that was that AI responds to you. And I think as leaders, we are going to learn how to be better communicators through how good or how bad the responses are that we get from AI, whether we're doing the programming or we're an executive using it to help us write articles, business plans, et cetera. We're going to learn how to be better prompters and communicators as executives because we're not going to be met with any resistance. It's not going to be your employee, whom you can have a fight with and say, "You are not listening to me." No, the onus is with you. You are not communicating in a way that lands.
05:23
So what I see happening for the future of technology and the intersection of that with our consciousness, it's going to help us to become more self-aware and more conscientious with how we communicate with one another, and to be able to utilize that for all of our best good.
Jon Krohn: 05:40
I love that. Can I add something onto that?
Bella Shing: 05:41
Of course.
Jon Krohn: 05:43
So I love how you touched on how your experience of communicating with the machine changed your experience of dealing with humans. For me, a big thing has been the response from the machine and how that has impacted my dealings with humans. Because of how the machines are trained, there's something called reinforcement learning from human feedback, where after the algorithm has already been trained to give accurate answers, this reinforcement learning from human feedback is a final training step where you provide it with the way it should be responding. So you include politeness in there. And so it means that the machines have an infinite amount of patience with you and kindness for you. And living in Manhattan, people don't have a lot of time for you. They're giving you the shortest possible answer, and this includes service industries, even when you're speaking to a physician.
06:52
It's more about getting you out the door quickly than providing you with a really high quality of care in their response, like, "I have some news that might not be the best news you want to hear." And so yeah, it was so interesting to me. I had never thought about it the way that you were just describing, Jennifer, where your communication to it, and your needing to be able to prompt the machine effectively, has helped with your communication. For me, most of my conscious awareness of how I've been affected by the machines is the inverse, where if I've just spent half an hour drafting a podcast script with a ChatGPT or a Claude or Google Gemini, it's been so polite to me the whole time, even when I make a mistake. It says, "I can see why you thought that. That was a really reasonable thing to think, but actually, most scientists would agree that it's this other thing."
07:54
And so that real politeness, it makes me, when I then take the elevator down to the street in Manhattan, I have a lot more time of day for everyone, because I've just been treated so politely for half an hour that all of a sudden, I have more time for people and I'm nicer to people in the street. So I don't know. That's a tangent. I'm glad I got some giggles out of people here.
Bella Shing: 08:16
That was great.
Audience: 08:19
[inaudible 00:08:19] machine is polite.
Jon Krohn: 08:20
Yeah, machines are making me more human.
Audience: 08:25
[inaudible 00:08:25]
Jon Krohn: 08:26
Yeah.
Bella Shing: 08:27
Yeah. Well, that's hilarious. All the women are like, "Oh, the men are going to get these sex bots." I'm thinking, "Actually, the women are going to be getting [inaudible 00:08:37]." It's so empathetic. It does a lot.
Jennifer K. Hill: 08:44
It'll cuddle.
Bella Shing: 08:44
And then it'll cuddle and it'll listen to you for hours.
Jon Krohn: 08:45
"Tell me about your day." Well, you joke, but actually, I think that's inevitably where we're headed. There's a brilliant futurist named Nell Watson that I had as a guest on my show, and she blew my mind with a conversation that now, it just seems so obvious to me that this is where things are headed. So imagine you put on VR goggles and you go to a bar, and in that bar, in this virtual world, some people that are there are other humans who have put on VR goggles, but some people that you're interacting there within the bar are AI systems. And you could imagine trivially today, based on the kinds of qualities of conversation that you might've had with any of these leading conversational AIs, you can imagine that that conversation would be quite compelling.
09:37
And so if you don't know which of these digital representations you're speaking to are human or machine, you might inadvertently find yourself gravitating much more towards the conversations with the machines because they remember everything you've said. And it is exactly the joke you just made. But that empathetic, it has so much time for you. It remembers everything you said. It's programmed to be friendly, whereas the other humans in the bar-
Jennifer K. Hill: 10:05
That's how you'll know it's fake.
Jon Krohn: 10:06
Right.
Jennifer K. Hill: 10:13
Sorry to interrupt.
Jon Krohn: 10:14
No, I love that. Exactly.
Jennifer K. Hill: 10:14
Ready player one? Here we come.
Bella Shing: 10:21
So that said, and given your exposure to all these amazing minds that are working in this field, what are you most excited about in terms of either framework or platform or technology, whatever it is that you think can actually really help humanity? I don't know if it's up level or evolve or become better more, whatever it is. What are you excited about?
Jon Krohn: 10:45
Everything in the coming decades is going to change dramatically, and it might happen so rapidly that it will have negative social impacts in the short term. Very suddenly now, as of March of last year, with the release of GPT-4 from OpenAI, for the first time, I was so blown away by the way these AI systems work. The day before GPT-4 came out, if you had asked me whether an artificial general intelligence (a broad term, but basically meaning a machine system that could do any of the cognitive tasks that a human can) could be achieved in our lifespan, I would've said, "Maybe not." It's tricky. It's going to be really hard to get to that level of intelligence. I don't know if that will be possible in our lifetime, for sure. Maybe it'll take 30 years.
11:51
The next day, GPT-4 came out, and it is so good, and it's improved a lot in the 18 months since, as have other systems. For me right now, Claude 3.5 Sonnet is my preferred go-to large language model, and it blows my mind every day, the things that it can do. And now I've completely switched camps, and I think that AGI could be realized in just a few years potentially. The biggest breakthrough recently is o1 from OpenAI. Who here has played with o1? So, getting towards half of the people in the room. The brilliant thing about o1 is what it does. Is anyone familiar with Daniel Kahneman's Thinking, Fast and Slow? A lot of heads are nodding.
12:38
So really quickly, Thinking, Fast and Slow is a really popular book by the recently deceased Nobel Prize-winning economist Daniel Kahneman. In Thinking, Fast and Slow, he summarizes research that he and Amos Tversky did over decades on how your cognition works, elucidating primarily the two thinking systems that you have: your fast-thinking system, what they call System One, and your slow-thinking system, what they call System Two. And I am using my fast-thinking system right now. I am stream-of-consciousness, spitting words out as they come to my head. I'm hoping that they're going to be good words, and hopefully I don't go off track soon. That's System One. System Two thinking is when you are sitting down with a piece of paper and writing some math, or working out your finances for the month, or you're drafting something and spending a lot of time going back and thinking, "Okay, I'm going to edit this past paragraph, I'm going to make that better. I'm going to get this whole flow better for this essay." You really think things through. You're going back, you're correcting things.
13:50
If you haven't experienced o1, then probably all you've experienced with these conversational AI systems is a System One kind of thing, where it just streams words out to you. It doesn't spend time thinking about a response first. It doesn't go back and correct things. o1 is like System Two thinking. It's analogous to that, where before it spits anything out to screen, it spends time thinking, and it'll break down the steps for you as to what it's doing behind the scenes. If you give it a relatively simple math problem, like secondary school partial derivative calculus, it might spend 10 seconds thinking before it starts spitting out answers to the screen. But when the answers come out to screen, they are correct almost every time, and they're beautifully presented. It's A+ work.
14:34
If you give it a very challenging math problem, a PhD-level mathematics problem that a PhD student would spend a week or a month, that kind of timeline, thinking over in order to be able to do some proof, if it can do it, which often it would be able to do these days, o1 might take a minute or two to give you an answer. So OpenAI, with the o1 algorithm, scaled up the inference-time compute when it's producing a result for you. And this creates much more accurate, much more beautifully presented answers to any problem. But there's no reason why it should just be thinking on a minute scale. It could be thinking on an hours, weeks, months scale. And so it might be the case that in 12 months' time, with OpenAI or a related hyperscaler, you'll be able to come to the machine and say, "I am looking for a cure for this kind of cancer."
15:39
And it goes off for a month, pores over every journal article imaginable on the topic, pulls things in from other fields in a way that a human expert never could. And it does this 24/7, far faster than you could think, for months. And then it comes back with a beautiful paper, some research suggestions on an experiment you might want to conduct in the real world. And so that is happening now, and it's going to be amongst us very soon. It's going to get cheaper. And it means that in a few years' time, just with the technology we have today, requiring no more advanced AI models in terms of weights and no extra data collection, just scaling up in terms of inference time, you will have what Dario Amodei, the CEO of Anthropic, has called a million Nobel Prize-winning intelligent brains in a data center, working 24/7, solving the world's problems.
16:45
And so everything will change in the coming years. Just our ability to come up with ideas is going to vastly accelerate, and so will everything across energy; nuclear fusion, we should be able to figure out much more rapidly. And so in addition to having abundant intelligence, we'll have abundant energy. And those two things combined, abundant intelligence and abundant energy, mean that we will be able to accelerate the development of embodiments of AI in the real world, that is, robotics. So this could be humanoid robots with two arms and two legs, but also just robot arms and any other kind of physical embodiment you can imagine, which is expensive and takes time. But if we have unlimited intelligence and unlimited energy, that will happen more rapidly as well. And so then the physical world begins to change rapidly because of AI.
17:36
And so yeah, this could happen stunningly rapidly, surprisingly rapidly. Now, I could talk about this for a very long time, but I feel like I've gone on way too long already. And I'm sorry for hogging the mic.
Jennifer K. Hill: 17:45
May I offer some positive resistance, from the opening talk of Web Summit? Were you there for that?
Jon Krohn: 17:49
I was not, unfortunately.
Jennifer K. Hill: 17:51
Yeah. Well, you weren't on it. They missed out. But all kidding aside, they had two experts in AI, one gentleman from MIT, and I forget the other gentleman, for anybody who was there. And one of the things that was fascinating was the talk about bumpers: with artificial general intelligence, there are no bumpers, and it can go in infinite directions. Those could be beautiful directions, where we have thoughtful robots and artificial intelligence. And to what Jon just explained, our human brains can't even fathom the world that we're about to get into, because we can't even think fast enough to imagine the cures, and also possibly the atrocities, that could happen.
18:37
So my question to you, Jon, would be: what sort of bumpers do we put in as we reach this event horizon of infinite knowledge at our fingertips? How do we funnel that? How do we channel that so it gets used for good and doesn't get put in the wrong hands and lead to world annihilation?
Jon Krohn: 18:55
It's a tricky problem, for sure. And it is interesting that you use that phrase, event horizon, there. There's a futurist, Ray Kurzweil, who a couple of decades ago popularized the term the singularity for that moment when machines eclipse human intelligence. I don't know, I'm not sure that it's going to be so sudden a moment. It'll probably overtake us in more and more things, like it has been in recent years, until we're kind of like, "Oh, it's actually smarter than us at everything now."
19:29
In terms of being able to constrain it and have guardrails, I am grateful that there are a lot of people in the world who are concerned about it now. And so research money, both private and public, is going into this effort to put guardrails on it. It is difficult. There's Tim Urban, who used to write a lot for a blog, he's the only writer on this blog, Wait But Why, an amazing blog. He's written about lots of fascinating subjects over the years. He hasn't been very active on it recently, but about a decade ago, he wrote a gigantic two-part blog post called The Road to Superintelligence. It's a fantastic introductory read on intelligence, even 10 years later, it doesn't really matter. And I'm getting some head nods.
20:26
One thing that Tim Urban does in that blog post that is really helpful for trying to frame this idea of us being so much less intelligent than these AI systems that are coming: imagine a staircase where each step is a level of intelligence, and we are, say, one step beyond chimps on it, and dogs and dolphins are like two steps below the chimps, and insects are like five steps below the dogs, that kind of thing. Today, the AI system is approaching us on that staircase on most kinds of cognitive tasks, but it could be the case that once it goes a step past us, it is able to help develop even better AI systems that rapidly go many steps on the staircase beyond us.
21:21
And so like you said, it could be that the way the system is intelligent is intelligent in a way that would be impossible for us to grasp, no matter how much time you spent, just as you could never explain partial derivative calculus to a chimp, forget a dog. We could be so many steps below it that the gap between us and AI is far bigger than the gap between a chimp and us.
Bella Shing: 21:47
So, I'm very curious, because for a lot of us who are in consciousness studies, we talk about intuition, we talk about body-based somatic intelligence. I know you're talking about the embodiment of robots. A lot of the choices that somatic healers or conscious leaders make are based off of what we would call instinct or intuition. We have a school, Andrew and I, called Coherence Education, and we actually have students learning how to remote view, to lucid dream, to talk to animals intuitively. And they've been able to do it quite dramatically. So I do have an argument as to whether or not dolphins are less intelligent than chimps, especially as AI is starting to decode the language of whales and dolphins and so on. And they're discovering that there's so much more to what they're saying to each other, and how they work as pods, than previously understood.
22:52
I just think that there's a possible false premise going on about intelligence and what intelligence is without embodiment. Without embodiment, meaning humans that are connected to something other than WiFi, something greater than that. And I think you were talking to a neuroscientist, I don't remember his name, Brian something, about how neurons work in the brain and how they're discovering it's more analog and all that. How they still don't really understand even how the human brain completely works. Why is there this assumption that artificial intelligence is going to automatically surpass human intelligence?
Jon Krohn: 23:43
I will get to that in one quick second. That's a really great question. To quickly wrap up the one from you, Jennifer, which is guardrails: research labs are trying to come up with ways to add guardrails. So for example, even though humans won't be able to keep up with the pace of intelligence, notwithstanding Bella's question, but assuming that that is still possible, people are working on training systems to observe and monitor. And maybe there'd be ways that the systems would end up collaborating and breaking free of the constraints that we put on them. There are lots of absolutely concerning, difficult challenges in developing ethical and well-constrained AI systems. No question. And I'm grateful that a lot of people are working on it. And if you talk to AI experts and try to get some sense of whether they think that the future will be bright or the future will be negative with AI, mostly...
24:49
And so we talk about things beyond cognitive intelligence, and your intuition, that kind of thing, and I guess emotion kind of falls into that. People who are generally optimistic and have been optimistic their whole lives and work in AI research feel like it's going to go well, and people who have been pessimistic their whole lives feel like it's not going to go well. And so it seems to have little to do-
Jennifer K. Hill: 25:12
Observer principle 101.
Jon Krohn: 25:13
And so that seems to have little to do with intelligence at all. So intelligence is a difficult word. There is not a lot of agreement, even among people who study intelligence as scientists, as to what exactly it means. My favorite definition of intelligence is that intelligence is whatever intelligence tests measure. So we've come up with a test, the IQ test. Except that if you retest, if you do IQ tests a lot, you'll do better at them, even though IQ tests are supposed to be a measure of your innate intelligence. If you practice them, you will get better at them. So they're not perfect, and that is a bit of a joke definition, but it is actually my preferred one, because there are so many different ways that you can define these things, like the thoughts about thinking intuitively; there are different kinds of things that could be brought into the picture.
26:24
But in terms of what intelligence tests measure, in terms of cognitive capability, the ability to write complete code that solves a problem, math problems, biological problems, chemistry, any of these, I guess when I'm thinking about intelligence, and when I'm saying so confidently that machines will be able to overtake humans, what I'm describing is basically something that would be assessed in a science or engineering PhD department.
Jennifer K. Hill: 26:54
And if I can piggyback on that, Jon, to what you said and to Bella's point, when we're talking about things such as remote viewing: for example, I don't know if many of you have heard, but non-verbal autistic children have been shown, in studies, scientific studies, to be telepathic with their parents. I remember when I first met Deepak Chopra, he told me about a study they had conducted where in one room, you had a non-verbal autistic child, and in another, completely separate, locked-off room was his mother, reading a passage of a book. Meanwhile, the child is writing down what the mother is reading in the other room.
27:29
So it gets into a bigger question of: are we talking intelligence or are we talking consciousness? And if I may add something on that, what would you say is the difference between what Bella is referring to, which to me falls more in the category of consciousness and less in the category of intelligence?
Jon Krohn: 27:45
Yeah, so consciousness is a tricky term, and I don't know if I could wade into it very effectively. I am out of date on the literature. There was a time, around 2006, 2007, when I did know a lot about consciousness research. I had a PhD application accepted at University College London to study the neural basis of consciousness in brain imaging machines like fMRI or magnetoencephalography. At that time, I was very interested in consciousness and had looked into it a lot. So that's 20 years ago now. In the literature that I looked into around things like remote viewing and telepathy, no well-controlled studies existed that showed that those things were possible. But I'm 20 years out of date, so I can't speak about that now.
Jennifer K. Hill: 28:40
May I just offer something? There was an interesting debate. On the last episode, I think it was the last or the second-to-last episode, that Deepak Chopra and Don Hoffman and I did, one of the questions that came up was: could AI become conscious? It's a question a lot of people ask, right? And I love my friend Don Hoffman's theory. Don Hoffman wrote a brilliant book called The Case Against Reality: Why Evolution Hid the Truth from Our Eyes, and mark my words, one day he'll get a Nobel Peace Prize.
29:05
And what Don's latest mathematical formulas for consciousness show is that you have this one thing that is consciousness, and some of it is projected as conscious agents into the theater of space-time, which gives us the experience of Jon and Bella and all of us sitting here, having this experience in the theater of space-time. However, you also have conscious agents that are not projected into the theater of space-time. And Don and I had an interesting debate about it, and he said, theoretically, what the math is showing him is that all things emerge from consciousness, and that consciousness itself is fundamental, which thereby means that AI, or any emergence of artificial intelligence, is natively, fundamentally conscious. So I don't know, that was an interesting debate.
Jon Krohn: 29:54
I don't know anything about that.
Bella Shing: 30:00
It's eight o'clock. So does anyone have a burning question for either? Yes. Hi there. Please say your name, please.
Audience: 30:12
Arman.
Bella Shing: 30:12
Hi, Arman.
Audience: 30:12
I want to build on what Bella was saying. I've been very interested in Howard Gardner's work on multiple intelligences. And when he started to tackle the question that you spoke about, Jon, his essential point was that, look, current theories of intelligence measure only one or two things. They might measure mathematical or numerical ability and the ability to reason, and he argued that there's a lot of different things that that ignores. He talked about eight different kinds of intelligence: naturalistic, interpersonal, intrapersonal, all of that stuff. And one of my concerns is that the people who've been involved in the envisioning and development of AI actually come only from cognitive fields.
Bella Shing: 31:02
Fair point.
Audience: 31:03
They don't come from fields that access other forms of intelligence, fields where people are naturalists or environmentalists or connectionists or whoever else. That's my assumption. So I'm wondering whether the kind of artificial intelligence that we're developing is a reflection of the narrow field that it draws from? That's, I guess, my question.
Bella Shing: 31:31
Good question. Back to you.
Jon Krohn: 31:33
Yeah. So the question is about different types of intelligence and Howard Gardner as a leader in the space. I think Howard Gardner, he's behind the idea of g as kind of a generalized intelligence, is that right?
Audience: 31:46
Multiple intelligences.
Jon Krohn: 31:47
Multiple. Yeah, I thought it was Howard Gardner, I could be wrong, that had this idea that while there were multiple intelligences, in people who tend to be strong at one, there's often a correlation across them. Anyway, I thought that was g, but it might not be Howard Gardner. Anyway, I'm digressing. It doesn't really matter for the purposes of the question. I'd like to really quickly, because I feel like I wasn't... And I realize this is now the second time I've done it, going back to something Jennifer said, but just as you were asking that question... We're talking about consciousness here, and I'm going to have to learn about the interesting things that you were describing there. I don't know anything about it.
Jennifer K. Hill: 32:25
I'll send you a paper on it.
Jon Krohn: 32:26
I would love that because it's a big gap for me, obviously. But I'd like to quickly just give a minute or two on my thoughts on consciousness and machines, which is this: every conscious instant that you have is obviously infinitesimally fleeting. You're having one right now, and it's gone. You're having one right now, it's gone. And there is nothing else that you can directly experience, except this instant of your consciousness. You can imagine what the future might be like. You can try to remember, to the best of your ability, things from the past, but all that you have right now, for sure, is the conscious experience of this moment. And so because of that... I love Alan Watts, reading him and listening to him, and he's been very helpful for me, for helping me be more and more aware of this presence and how beautiful and fleeting it is, and how precious your life is, experienced via this conscious experience.
33:42
Again, I'm 20 years out of date on the consciousness literature, but I don't think we've come too far in terms of understanding the neural basis of consciousness. How does it arise in your mind? How does the blob of tissue inside the bone of your skull somehow produce these colors and smells and tastes and thoughts and feelings? Yeah, 20 years ago, certainly neuroscientifically, there was very little understanding, and I suspect it is still quite the same way today. Again, narrowly, the neuroscience view in terms of what you can see in brain scans-
Jennifer K. Hill: 34:25
That's assuming that brains are related to consciousness, which is a different conversation.
Jon Krohn: 34:28
Yeah, [inaudible 00:34:31].
Jennifer K. Hill: 34:33
You would love Don's work on this. So actually, leading science is showing us more and more that, yes, you have the experience of touch, of seeing the color green, of smells, et cetera. Yet to your point, it's not measurable in the brain per se. And so now they're starting to separate the brain and consciousness.
Jon Krohn: 34:49
But it is, isn't it? Because with a brain, you can have very specific... If you have a small ablation, brain damage in a very small part of the brain, you can lose the ability to have a conscious experience of some specific thing while your brain is still able to process those things. So an example that I love is, in some cases of a very severe stroke, you sever something called the corpus callosum.
Jennifer K. Hill: 35:20
Of course, yeah, yeah. Or also in epileptic patients, they used to cut the corpus callosum to stop the electrical storms from crossing from the right to left hemisphere.
Jon Krohn: 35:27
Exactly, exactly. So very severe last-ditch treatment to try to prevent these very bad seizures. Did I say stroke?
Jennifer K. Hill: 35:36
Yeah, you said stroke. I think you meant seizures.
Jon Krohn: 35:37
Seizures. Seizures. I meant seizures.
Jennifer K. Hill: 35:37
Yeah, I had a feeling.
Jon Krohn: 35:37
My apologies.
Jennifer K. Hill: 35:37
Yeah, it's okay.
Jon Krohn: 35:44
So in people who have that corpus callosum cut in half, there's very few connections between right brain and left brain. And then you have two independently conscious hemispheres of the brain. So you can put a piece of paper between the person's eyes, and your left eye corresponds to right brain. And so when you show something to... And I'm going to butcher the sides of the brain in real time, but it would be something like, in a right-handed person, if you show something to the left side, to their left eye, they will perceive that consciously in their right brain. And so the right hand, I think, if I remember correctly.
Jennifer K. Hill: 36:34
Yeah, you got it right. So basically, on one side, the hand will actually grab an object, depending on which side of the brain you show it to. The hand will grab a key, for example, if you show it to the left side of the brain, but it doesn't interconnect. Again, Don's theories say that there's a 0% chance of seeing reality as it is, and you get more deeply into consciousness and stuff around that. In fact, ironically, Don Hoffman was my favorite professor at college, at UC Irvine. Then years later, I had the pleasure of doing these shows with him. I remember when Don would do these classes, he would say, "Okay, let's talk about consciousness and AI for a moment." I will always vividly remember this lecture. He said, "Let's say, Jon, God forbid, you lose a foot, and we give you a robotic foot. Are you still human?"
Jon Krohn: 37:18
Well, yeah. You get into interesting Ship of Theseus kind of questions.
Jennifer K. Hill: 37:21
So how many of you would think that Jon is still human if you give him a robotic foot? Show of hands, how many of you agree that he's still human? A foot. So if he cut off his foot, he's still human. Okay, most of us agree. Okay, let's say that Jon loses an eye, God forbid, and he now has a robotic eye, but it's still connected to his brain, to go back to the consciousness debate; his brain is connected now to this electric eye. How many of us in the room think that Jon is still human? Okay, most of us. Now, let's say that Jon actually loses a piece of his occipital lobe, not just the eye, but actually the occipital lobe of the brain. We replace that now with quantum chips and computers, and now he can see through this eye and through the occipital lobe of his brain, which has been replaced. Is he still human?
Bella Shing: 38:07
Cyber.
Jennifer K. Hill: 38:07
How many of you would say Jon is still human? So that's the question: where does AI fit in this world we're moving into? We're not that far away from it. And I interrupted your example, by the way.
Jon Krohn: 38:18
No, no, no, not at all. That was very interesting. And the Ship of Theseus thing, it's interesting even without any machines to think of how, theoretically, over a lifespan your cells slough off and new ones come. Your brain cells do remain largely fixed over your lifespan, but theoretically, we could in the future be able to replace brain cells, maybe with new biological ones that we grow in a vat, or with some kind of machine that we put inside the skull. And so you could completely replace them. And I still might be able to have this continuous conscious experience, even though-
Jennifer K. Hill: 38:57
Theoretically continuous conscious experience, depending on what we define as consciousness.
Jon Krohn: 39:01
I guess so. Yeah. It's hard for me to imagine-
Jennifer K. Hill: 39:04
[inaudible 00:39:05].
Jon Krohn: 39:04
What consciousness is without-
Audience: 39:09
Sorry. If you think about it... you already know what the Ship of Theseus is, the Theseus paradox. Basically, Theseus was... forgive me if I'm butchering this, but he saved the lives of a bunch of children, et cetera, and they kept the Ship of Theseus. The problem with the Ship of Theseus is that over time, timbers would rot. Right? And they kept replacing the pieces of the timber of the Ship of Theseus. But at what point, when you've replaced so much of the timber, is the Ship of Theseus still the Ship of Theseus? So if you think about it from a sense of transcendence-
Jennifer K. Hill: 39:51
And transhumanism.
Audience: 39:51
And transhumanism, there's two really interesting components to this. If I remove some of my limbs... We know that body memory, for example, is a real thing, that the body has its own intelligence, separate from the brain's intelligence. Which is why, for example, with a person who is hooked on morphine or hooked on some form of drug, like say heroin, the body has been taught for a long time, "I like that. That's cool. Keep having that." The body says, "I work this whole machine to make sure that it's cool." But then consciously, you have this idea, "It's not good for me anymore." The body goes, "I'm 90% of this job over here. What are you even telling me?" And it sends information. It's really interesting how this all happens. It's a little dance of hormones and peptides, et cetera, and everything gets sent down from the brain to the body, and the body says, "No, I don't think so. Catch you later." Okay? What's going on there? Right?
40:54
If I remove the two arms, the two legs, and the body is conscious of that, or remove some organs, what's going on there? Am I still me when that happens? Then you think about the concept of, okay, great, take a human being, consciousness, and somehow we figure out magically how to transfer that information into a robot or into-
Jennifer K. Hill: 41:16
The cloud.
Audience: 41:18
The cloud-
Jennifer K. Hill: 41:20
Black Mirror episode, if nobody ever saw that.
Audience: 41:23
The Ship of Theseus. Right? Am I me? Where does this consciousness go?
Jennifer K. Hill: 41:27
What even is [inaudible 00:41:28]?
Audience: 41:28
How much of it is biology? Is this brain actually where consciousness resides? Can you put that through?
Bella Shing: 41:37
Thank you.
Jon Krohn: 41:40
If there's anything that... I don't believe in a me or a self. I think everything is blurred, so yeah, I agree on those points, and the Ship of Theseus is really helpful, if people haven't heard of that before, for coming to that conclusion that you don't exist, that there is no you that is independent of the things around you; that includes the brain, the rest of your body, and the interactions between hormones, peptides, all those chemicals. But somehow, all of it does lead to this conscious experience that I'm having right now. I can vouch... I can't prove it to you, but I am having a conscious experience right now. So this is going way back to your question, talking about machines. I obviously need to read about this. It's-
Jennifer K. Hill: 42:28
Theory of conscious agents.
Jon Krohn: 42:29
Theory of conscious agents.
Jennifer K. Hill: 42:30
And that consciousness is fundamental, and that space-time only arises as an artifact of consciousness.
Jon Krohn: 42:35
So that's a complete mind shift from anything that I've ever thought about before, and I look forward to learning more about it. But yeah, from my limited perspective, I don't know. My instinct is that I don't anticipate that a calculator has consciousness. I don't anticipate that a desktop computer has consciousness, and it doesn't seem to me that any amount of making that more and more complex, having lots of servers working together, produces consciousness. There's other theories out there for sure, and lots more that I need to learn, but my very limited, narrow perspective is that there is something about biological organisms that leads consciousness to happen.
Jennifer K. Hill: 43:22
How do we know that we're not somebody else's science experiment or that we're not somebody else's computer program running? That's another question.
Jon Krohn: 43:28
Oh, for sure. There's lots of things that we'll never be able to know.
Bella Shing: 43:31
I want to thank the two of you for an amazing conversation.
Jennifer K. Hill: 43:34
That was wonderful, Jon.
Jon Krohn: 43:34
Yeah, thank you, Jennifer.
Jennifer K. Hill: 43:34
So much fun. I could geek out [inaudible 00:43:37].
Jon Krohn: 43:40
Right. I hope you enjoyed that unique episode. Plenty more ground to potentially cover from that episode in the future. To be sure not to miss any of our exciting upcoming episodes, subscribe to this podcast, if you haven't already. But most importantly, I hope you'll just keep on listening. Until next time, keep on rocking it out there, and I'm looking forward to enjoying another round of the Super Data Science Podcast with you very soon.
This is episode number 838 with Jennifer K. Hill, co-founder and CEO of OptiMatch.
00:20
Welcome back to the Super Data Science Podcast. I am your host, Jon Krohn. I've got something different for you today. Instead of hosting today's podcast episode, I'm one of the guests. Last week, while I was at Web Summit in Portugal, Lucy Antrobus, who was on episode number 770, The Neuroscientific Guide to Confidence, asked me if I'd like to be one of the guests in an interview. The interview was part of a salon hosted by an organization called Light Dao, a decentralized autonomous organization that defines itself as a global community for conscious entrepreneurs and investors. In this context, my understanding is that conscious means something like socially responsible entrepreneurship and investment that balances financial results with social impact. In any event, the interview was hosted by Bella Shing, who is the Chapter Lead for Light Dao in Lisbon. She's also a film producer and entrepreneur.
01:16
Let's jump right to Bella as she introduces Jennifer and then myself. And then we'll be straight into the interview and audience Q&A, all of which centers around general questions related to AI and consciousness. It should be interesting to any listener. No particular technical background is required. All right, here we go.
Bella Shing: 01:35
I am excited to introduce both Jennifer Hill and Jon Krohn. So Jennifer, for those of you who do not know, sold her first company and has been a speaker on stages with 100,000 people in India. She has her own podcast called Regarding Consciousness, which features thought leaders like Deepak Chopra, Gregg Braden, Bruce Lipton, and many others. She's built two schools, in Nepal and in Senegal, and she's won a Lifetime Achievement Award from the Visioneers. She's also a proud member of the Evolutionary Leaders and The Octopus Movement. She's currently building her next venture, OptiMatch, which is software that uses a proprietary algorithm to align individuals, businesses, and communities to create more psychological safety, which is very exciting. And I've used it. It's brilliant and very simple and really easy to use and very accurate. She also coaches a handful of CEOs.
02:34
And Jon is the Chief Data Scientist and co-founder of Nebula. He's also the author of the bestselling book Deep Learning Illustrated, and the host of Super Data Science. How many people have heard his podcast? Anyone here? Okay, a few. So it's the most listened-to and most downloaded podcast about AI and data science, and he is a wealth of information and we're so excited to have him here. He also presents popular machine learning tutorials via Udemy and O'Reilly, with 100,000 students. He was a keynote speaker here at Web Summit, and we are very thankful to Lucy for connecting us and making this all happen. Thank you, Lucy. Yay.
03:20
So I'll start you guys off with some questions and I'm going to start with Jen. What do you think, because your work intersects with consciousness and executive development, how do you see the role of self-awareness and conscious leadership evolving with tech, with AI?
Jennifer K. Hill: 03:38
Thank you so much, Bella. Thank you, Jon, for being here, and to all of you who joined us tonight, wherever you've joined from. It's a pleasure to have you here with us. In terms of the intersection of consciousness and the way we operate as leaders and executives, I think the emergence of AI and different technologies is going to help us to be able to use our creative capacities. Amanda and I were just speaking about that when we were talking about what are some of our fears, our excitements, the things that light us up about using AI or different types of technology, and what scares us. And I think that, like all things, it's the consciousness with which we approach the thing that impacts its use for good or its use for evil.
04:22
In terms of being executives and leaders, and we were just talking about this, Bella and Jon and myself, a moment ago: when I first started using AI about three years ago, I was an early adopter of ChatGPT and started teaching classes on it with a dear friend of mine. And as we started to teach it, I noticed something immediate. And what that was, was that AI responds to you. And I think as leaders, we are going to learn how to be better communicators based on how good or how bad the responses are that we get from AI, whether we're doing the programming or we're an executive using it to help us write articles, business plans, et cetera. We're going to learn how to be better prompters and communicators as executives because we're not going to be met with any resistance. It's not going to be your employee who you can have a fight with and say, "You are not listening to me." No, the onus is with you. You are not communicating in a way that lands.
05:23
So what I see happening at the intersection of the future of technology and our consciousness is that it's going to help us to become more self-aware and more conscientious with how we communicate with one another, and to be able to utilize that for all of our best good.
Jon Krohn: 05:40
I love that. Can I add something onto that?
Bella Shing: 05:41
Of course.
Jon Krohn: 05:43
So I love how you touched on how your experience of communicating with the machine changed your experience of dealing with humans. For me, a big thing has been the response from the machine and how that has impacted my dealings with humans. Because the machines are programmed... there's something called reinforcement learning from human feedback, where after the algorithm has already been trained to give correct answers, in terms of accuracy, this reinforcement learning from human feedback is a final training step where you provide it with the way it should be responding. So you include politeness in there. And so it means that the machines have an infinite amount of patience with you and kindness for you. And living in Manhattan, people don't have a lot of time for you. They're giving you the shortest possible answer, and this includes even in service industries, when you're speaking to a physician.
06:52
It's more about getting you out the door quickly than providing you with a really high quality of care in terms of their response, rather than giving you, "I have some news that might not be the best news you want to hear." And so yeah, it was so interesting to me. I had never thought about it the way that you were just describing, Jennifer, where your communication to it, and you needing to be able to prompt the machine effectively, has helped with your communication. For me, most of my conscious awareness of how I've been affected by the machines is the inverse, where if I've just spent half an hour drafting a podcast script with ChatGPT or Claude or Google Gemini, it's been so polite to me the whole time, even when I make a mistake. It says, "I can see why you thought that. That was a really reasonable thing to think, but actually, most scientists would agree that it's this other thing."
07:54
And that real politeness, it makes me... when I then take the elevator down to the street in Manhattan, I have a lot more time of day for everyone, because I've just been treated so politely for half an hour that all of a sudden, I have more time for people and I'm nicer to people in the street. So I don't know. That's a tangent. I'm glad I got some giggles out of people here.
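To make the RLHF step Jon describes concrete: the "final training step" he mentions typically involves fitting a reward model to human preference pairs (commonly with a Bradley-Terry loss) and then optimizing the language model against that reward. Below is a minimal, hypothetical sketch of just the preference-learning part, with made-up feature vectors standing in for response embeddings and a made-up "politeness" feature; it illustrates the principle, not OpenAI's or Anthropic's implementation.

```python
import numpy as np

# Toy reward model for RLHF-style preference learning.
# Each response is represented by a feature vector (a stand-in for an
# embedding); human raters label which of two responses they prefer.
rng = np.random.default_rng(0)

def reward(w, x):
    """Scalar reward for response features x under weights w."""
    return w @ x

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    """Fit weights so preferred responses score higher (Bradley-Terry).

    pairs: list of (x_preferred, x_rejected) feature vectors.
    Loss per pair: -log sigmoid(r(x_pref) - r(x_rej)).
    """
    w = np.zeros(dim)
    for _ in range(epochs):
        for x_pref, x_rej in pairs:
            margin = reward(w, x_pref) - reward(w, x_rej)
            p = 1.0 / (1.0 + np.exp(-margin))  # P(preferred beats rejected)
            # Gradient ascent on log p: grad is (1 - p) * (x_pref - x_rej)
            w += lr * (1.0 - p) * (x_pref - x_rej)
    return w

# Hypothetical data: feature 0 encodes how "polite" a response is, and
# the raters always prefer the more polite response in each pair.
pairs = []
for _ in range(50):
    base = rng.normal(size=3)
    polite = base.copy(); polite[0] += 1.0   # more polite variant
    rude = base.copy();   rude[0] -= 1.0     # less polite variant
    pairs.append((polite, rude))

w = train_reward_model(pairs, dim=3)
print(w)  # weight on the politeness feature ends up positive
```

Because the raters in this toy dataset always prefer the more polite response, the learned reward ends up favoring politeness, which is the mechanism Jon is gesturing at: the preference data bakes politeness into the objective the model is then tuned against.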
Bella Shing: 08:16
That was great.
Audience: 08:19
[inaudible 00:08:19] machine is polite.
Jon Krohn: 08:20
Yeah, machines are making me more human.
Audience: 08:25
[inaudible 00:08:25]
Jon Krohn: 08:26
Yeah.
Bella Shing: 08:27
Yeah. Well, that's hilarious. All the women are like, "Oh, the men are going to get these sex bots." I'm thinking, "Actually, the women are going to be getting [inaudible 00:08:37]." It's so empathetic. It does a lot.
Jennifer K. Hill: 08:44
It'll cuddle.
Bella Shing: 08:44
And then it'll cuddle and it'll listen to you for hours.
Jon Krohn: 08:45
"Tell me about your day." Well, you joke, but actually, I think that's inevitably where we're headed. There's a brilliant futurist named Nell Watson that I had as a guest on my show, and she blew my mind with a conversation such that now it just seems so obvious to me that this is where things are headed. So imagine you put on VR goggles and you go to a bar, and in that bar, in this virtual world, some of the people there are other humans who have put on VR goggles, but some of the people you're interacting with in the bar are AI systems. And you could imagine, trivially, today, based on the quality of conversation that you might've had with any of these leading conversational AIs, that that conversation would be quite compelling.
09:37
And so if you don't know which of these digital representations you're speaking to are human or machine, you might inadvertently find yourself gravitating much more towards the conversations with the machines, because they remember everything you've said. And it is exactly the joke you just made: it's that empathetic, it has so much time for you. It remembers everything you said. It's programmed to be friendly, whereas the other humans in the bar-
Jennifer K. Hill: 10:05
That's how you'll know it's fake.
Jon Krohn: 10:06
Right.
Jennifer K. Hill: 10:13
Sorry to interrupt.
Jon Krohn: 10:14
No, I love that. Exactly.
Jennifer K. Hill: 10:14
Ready Player One, here we come.
Bella Shing: 10:21
So that said, and given your exposure to all these amazing minds that are working in this field, what are you most excited about, in terms of either a framework or a platform or a technology, whatever it is that you think can actually really help humanity? I don't know if it's to up-level or evolve or become better, whatever it is. What are you excited about?
Jon Krohn: 10:45
Everything in the coming decades is going to change dramatically, and it might happen so rapidly that it will have negative social impacts in the short term. Very suddenly, as of March of last year, with the release of GPT-4 from OpenAI, for the first time, I was so blown away by the way these AI systems work. The day before GPT-4 came out, if you had asked me whether an artificial general intelligence, a broad term, but basically meaning a machine system that could do any of the cognitive tasks that a human can, if you had asked me whether that AGI system could be achieved in our lifespan, I would've said, "Maybe not. It's tricky. It's going to be really hard to get to that level of intelligence. I don't know if that will be possible in our lifetime, for sure. Maybe it'll take 30 years."
11:51
The next day, GPT-4 came out, and it is so good, and it's improved a lot in the 18 months since, as have other systems; for me right now, Claude 3.5 Sonnet is my preferred go-to large language model. It blows my mind every day, the things that it can do. And now I've completely switched camps, and I think that AGI could be realized in just a few years, potentially. The biggest breakthrough recently is o1 from OpenAI. Who here has played with o1? So, getting towards half of the people in the room. The brilliant thing about o1 is what it does... If people are familiar with Daniel Kahneman's Thinking, Fast and Slow... a lot of heads are nodding.
12:38
So really quickly, Thinking, Fast and Slow is a really popular book by the recently deceased Nobel Prize-winning psychologist, Daniel Kahneman. In it, he summarizes research that he and Amos Tversky did over decades on how your cognition works, elucidating primarily these two thinking systems that you have: your fast-thinking system, what they call System One, and your slow-thinking system, what they call System Two. And I am using my fast-thinking system right now. I am, stream of consciousness, spitting words out as they come to my head. I'm hoping that they're going to be good words and hopefully I don't go off track soon. That's System One. System Two thinking is when you are sitting down with a piece of paper and writing out some math, or working out your finances for the month, or you're drafting something and you're spending a lot of time going back and thinking, "Okay, I'm going to edit this past paragraph, I'm going to make that better. I'm going to get this whole flow better for this essay." You really think things through. You're going back, you're correcting things.
13:50
If you haven't experienced o1, then probably all you've experienced with these conversational AI systems is a System One kind of thing, where it just streams words out to you. It doesn't spend time thinking about a response first. It doesn't go back and correct things. o1 is like System Two thinking. It's analogous to that, where before it spits anything out to screen, it spends time thinking, and it'll break down the steps for you as to what it's doing behind the scenes. If you give it a relatively simple math problem, like secondary school partial derivative calculus, relatively simple, it might spend 10 seconds thinking before it starts spitting out answers to the screen. But when the answers come out to screen, they are correct almost every time, and beautifully presented. It's A+ work.
14:34
If you give it a very challenging math problem, a PhD-level mathematics problem that a PhD student would spend a week or a month, that kind of timeline, thinking over in order to be able to do some proof, if it can do it, which often it would be able to these days, o1 might take a minute or two to give you an answer. So OpenAI, with the o1 algorithm, has scaled up the inference-time compute when it's producing a result for you. And this creates much more accurate, much more beautifully presented answers to any problem. But there's no reason why it should just be thinking on a minute scale. It could be thinking on an hours, weeks, months scale. And so it might be the case that in 12 months' time, with OpenAI or a related hyperscaler, you'll be able to come to the machine and say, "I am looking for a cure for this kind of cancer."
15:39
And it goes off for a month, pores over every journal article imaginable on the topic, pulls things in from other fields in a way that a human expert never could. And it does this 24/7, far faster than you could think, for months. And then it comes back with a beautiful paper, some research suggestions, an experiment you might want to conduct in the real world. And so that is happening now, and it's going to be amongst us very soon. It's going to get cheaper. And it means that in a few years' time, just with the technology we have today... it requires no more advanced AI models in terms of weights, it requires no extra data collection. Just with the technology we have today, scaling up in terms of inference time, you will have what Dario Amodei, the CEO of Anthropic, has called a million Nobel Prize-winning intelligent brains in a data center, working 24/7, solving the world's problems.
16:45
And so everything will change in the coming years. Just our ability to come up with ideas is going to vastly accelerate. Across everything... across energy, nuclear fusion, we should be able to figure things out much more rapidly. And so in addition to having abundant intelligence, we'll have abundant energy. And those two things combined, abundant intelligence and abundant energy, mean that we will be able to accelerate the development of embodiments of AI in the real world; that is, robotics. This could be humanoid robots with two arms and two legs, but also just robot arms and any other kind of physical embodiment you can imagine, which is expensive and takes time. But if we have unlimited intelligence and unlimited energy, that will happen more rapidly as well. And so then the physical world begins to change rapidly because of AI.
17:36
And so yeah, this could happen stunningly rapidly, surprisingly rapidly. Now, I could talk about this for a very long time, but I feel like I've gone on way too long already. And I'm sorry for hogging the mic.
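One simple way to see why spending more inference-time compute helps, as Jon describes with o1: even a naive strategy of sampling many independent answers and taking a majority vote (often called self-consistency) turns an unreliable solver into a reliable one. The sketch below uses a made-up `noisy_solver` as a stand-in for one sampled chain of thought from a model; it illustrates the compute-versus-accuracy trade-off, not o1's actual (undisclosed) mechanism.

```python
import random
from collections import Counter

# Toy illustration of inference-time scaling: a noisy "solver" that is
# right only 60% of the time becomes far more reliable if we sample it
# many times and take a majority vote over the answers.
random.seed(42)

CORRECT_ANSWER = 42

def noisy_solver():
    """Stand-in for one sampled chain of thought from a model."""
    if random.random() < 0.6:                 # right 60% of the time
        return CORRECT_ANSWER
    return random.choice([40, 41, 43, 44])    # scattered wrong answers

def answer_with_budget(n_samples):
    """Spend more inference-time compute: sample n times, majority-vote."""
    votes = Counter(noisy_solver() for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def accuracy(n_samples, trials=500):
    """Fraction of trials where the voted answer is correct."""
    return sum(answer_with_budget(n_samples) == CORRECT_ANSWER
               for _ in range(trials)) / trials

acc_1 = accuracy(1)    # one-shot answer, "System One"-style
acc_25 = accuracy(25)  # more thinking before answering, "System Two"-ish
print(acc_1, acc_25)
```

With a single sample the answer is right only about 60% of the time; with 25 samples the majority vote is right almost always. o1's chain-of-thought search is far more sophisticated than majority voting, but the shape of the trade-off, more compute at inference for more reliable answers, is the same idea.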
Jennifer K. Hill: 17:45
May I offer some positive resistance from the opening talk of Web Summit? Were you there for that?
Jon Krohn: 17:49
I was not, unfortunately.
Jennifer K. Hill: 17:51
Yeah. Well, you weren't on it. They missed out. But all kidding aside, they had two experts in AI, one gentleman from MIT, and I forget the other gentleman, for anybody who was there. And one of the things that was fascinating was talking about these bumpers. With artificial general intelligence, there are no bumpers, and it can go in infinite directions. That could be beautiful directions, where we have thoughtful robots and artificial intelligence. And to what Jon just explained, our human brains can't even fathom the world that we're about to get into, because we can't even think fast enough to imagine the cures, and also possibly the atrocities, that could happen.
18:37
So my question to you, Jon, would be what sort of bumpers do we put in as we reach this event horizon of infinite knowledge at our fingertips? How do we funnel that? How do we channel that so it gets used for good and doesn't get put in the wrong hands and leads to world annihilation?
Jon Krohn: 18:55
It's a tricky problem, for sure. And it is interesting that you used that phrase, event horizon. A couple of decades ago, the futurist Ray Kurzweil popularized the term "the singularity" for that moment when machines eclipse human intelligence. I don't know, I'm not sure that it's going to be such a sudden moment; it'll probably overtake us in more and more things, like it has in recent years, until we're kind of like, "Oh, it's actually smarter than us at everything now."
19:29
In terms of being able to constrain it and have guardrails, I am grateful that there are a lot of people in the world who are concerned about it now. And so research money, both private and public, is going into this effort to put guardrails on it. It is difficult. It's like Tim Urban, who used to write a lot for a blog called Wait But Why?, he's the only writer on that blog. An amazing blog; he's written about lots of fascinating subjects over the years. He hasn't been very active on it recently, but about a decade ago, he wrote a gigantic two-part blog post called The Road to Superintelligence. It's a fantastic introductory read on intelligence, even 10 years later, and I'm getting some head nods.
20:26
One thing that Tim Urban does in that blog post that is really helpful for framing this idea of us being so much less intelligent than the AI systems that are coming is this: imagine a staircase where each step is a level of intelligence, and we are, say, one step above chimps on it, and dogs and dolphins are a couple of steps below the chimps, and insects are several steps below the dogs, that kind of thing. Today, AI is approaching us on that staircase on most kinds of cognitive tasks, but it could be the case that once it goes a step past us, it is able to help develop even better AI systems that rapidly climb many steps on the staircase beyond us.
21:21
And so, like you said, it could be that the system becomes intelligent in a way we simply cannot comprehend, the same way it would be impossible, no matter how much time you spent, to explain partial derivative calculus to a chimp, let alone a dog. And the gap between us and AI could end up far bigger than the gap between us and a chimp.
Bella Shing: 21:47
So I'm very curious, because for a lot of us who are in consciousness studies, we talk about intuition, we talk about body-based somatic intelligence. I know you're talking about the embodiment of robots. A lot of the choices that somatic healers or conscious leaders make are based on what we would call instinct or intuition. We have a school, Andrew and I, called Coherence Education, and we actually have students learning how to remote view, to lucid dream, to talk to animals intuitively. And they've been able to do it quite dramatically. So I do have an argument as to whether or not dolphins are less intelligent than chimps, especially as AI is starting to decode the language of whales and dolphins and so on. They're discovering that there's so much more to what they're saying to each other, and to how they work as pods, than previously understood.
22:52
I just think there's a possible false premise going on about intelligence and what intelligence is without embodiment. Without embodiment, meaning humans that are connected to something other than WiFi, something greater than that. And I think you were talking to a neuroscientist, I don't remember his name, Brian something, about how neurons work in the brain, how they're discovering it's more analog, and how they still don't really fully understand how the human brain works. Why is there this assumption that artificial intelligence is automatically going to surpass human intelligence?
Jon Krohn: 23:43
I will get to that in one quick second. That's a really great question. To quickly wrap up your question, Jennifer, about guardrails: research labs are trying to come up with ways to add them. So for example, even though humans won't be able to keep up with the pace of these systems' intelligence, notwithstanding Bella's question, but assuming that is still possible, people are working on training AI systems to observe and monitor other AI systems. Now, maybe there'd be ways that those systems would end up collaborating and breaking free of the constraints that we put on them. There are lots of absolutely concerning, difficult challenges in developing ethical and well-constrained AI systems. No question. And I'm grateful that a lot of people are working on it. And if you talk to AI experts and try to get some sense of whether they think the future will be bright or negative with AI, mostly...
24:49
And when we talk about things beyond cognitive intelligence, your intuition, that kind of thing, and I guess emotion kind of falls into that too: people who are generally optimistic and have been optimistic their whole lives and work in AI research feel like it's going to go well. And people who have been pessimistic their whole lives feel like it's not going to go well. And so it seems to have little to do-
Jennifer K. Hill: 25:12
Observer principle 101.
Jon Krohn: 25:13
And so that seems to have little to do with intelligence at all. So intelligence is a difficult word. There is not a lot of agreement, even among the scientists who study intelligence, as to what exactly it means. My favorite definition is that intelligence is whatever intelligence tests measure. So we've come up with tests, IQ tests, and they're not perfect: IQ tests are supposed to measure your innate intelligence, and yet if you take them a lot, if you practice them, you will get better at them. So that definition is a bit of a joke, but it is actually my preferred one, because there are so many different ways you could define intelligence, including the kinds of intuitive thinking you've described and the other things that could be brought into the picture.
26:24
But in terms of what intelligence tests measure, in terms of cognitive capability, the ability to write complete code that solves a problem, math problems, biological problems, chemistry, any of these: I guess when I'm thinking about intelligence, and when I'm saying so confidently that machines will be able to overtake humans, what I'm describing is basically what would be assessed in a science or engineering PhD department.
Jennifer K. Hill: 26:54
And if I can piggyback on that, Jon, to what you said and to Bella's point: when we're talking about things such as remote viewing, for example, I don't know if many of you have heard, but non-verbal autistic children have been shown in studies, scientific studies, to be telepathic with their parents. I remember when I first met Deepak Chopra, he told me about a study they had conducted where in one room you had a non-verbal autistic child, and in another, completely separate, locked-off room was his mother, reading a passage from a book. Meanwhile, the child was writing down what the mother was reading in the other room.
27:29
So it gets into a bigger question of whether we are talking about intelligence or about consciousness. And if I may add something on that: how would you differentiate what Bella is referring to, which to me falls more in the category of consciousness and less in the category of intelligence?
Jon Krohn: 27:45
Yeah, so consciousness is a tricky term, and I don't know if I can wade into it very effectively. I am out of date on the literature. There was a time, around 2006, 2007, when I did know a lot about consciousness research. I had a PhD application accepted at University College London to study the neural basis of consciousness using brain imaging machines like fMRI and magnetoencephalography. So at that time, I was very interested in consciousness and had looked into it a lot. But that's 20 years ago now. In the literature that I looked into around things like remote viewing and telepathy, no well-controlled studies existed that showed those things were possible. But I'm 20 years out of date, so I can't speak about it now.
Jennifer K. Hill: 28:40
May I just offer something? That was an interesting debate. In the last episode, I think it was the last or the second-to-last episode that Deepak Chopra and Don Hoffman and I did, one of the questions that came up was: could AI become conscious? It's a question a lot of people ask. Right? And I love my friend Don Hoffman's theory. Don Hoffman wrote a brilliant book called The Case Against Reality: Why Evolution Hid the Truth from Our Eyes, and mark my words, one day he'll get a Nobel Peace Prize.
29:05
And what Don's latest mathematical formulas for consciousness show is that there is one thing called consciousness, and some conscious agents are projected into the theater of space-time, which gives us the experience of Jon and Bella and all of us sitting here, having this experience in the theater of space-time. However, there are also conscious agents that are not projected into the theater of space-time. And Don and I had an interesting debate about it, and he said that, theoretically, what the math is showing him is that all things emerge from consciousness, and that consciousness itself is fundamental, which thereby means that AI, or any emergence of artificial intelligence, is natively, fundamentally conscious. So I don't know, that was an interesting debate.
Jon Krohn: 29:54
I don't know anything about that.
Bella Shing: 30:00
It's eight o'clock. So does anyone have a burning question for either? Yes. Hi there. Please say your name, please.
Audience: 30:12
Arman.
Bella Shing: 30:12
Hi, Arman.
Audience: 30:12
I want to build on what Bella was saying. I've been very interested in Howard Gardner's work on multiple intelligences. And when he started to tackle the question that you spoke about, Jon, his essential point was that, look, current theories of intelligence measure only one or two things. They might measure mathematical or numerical ability and the ability to reason, and he argued that there are a lot of different things they ignore. He talked about eight different kinds of intelligence: naturalistic, interpersonal, intrapersonal, all of that stuff. And one of my concerns is that the people who've been involved in the envisioning and development of AI actually come only from cognitive fields.
Bella Shing: 31:02
Fair point.
Audience: 31:03
They don't come from fields that access other forms of intelligence, the fields of naturalists or environmentalists or connectionists or whoever else. That's my assumption. So I'm wondering whether the kind of artificial intelligence that we're developing is a reflection of the narrow field that it draws from? That's, I guess, my question.
Bella Shing: 31:31
Good question. Back to you.
Jon Krohn: 31:33
Yeah. So the question is about different types of intelligence and Howard Gardner as a leader in the space. I think Howard Gardner, isn't he behind the idea of g, a kind of generalized intelligence, is that right?
Audience: 31:46
Multiple intelligence.
Jon Krohn: 31:47
Multiple. Yeah, I thought it was Howard Gardner, I could be wrong, who had this idea that while there are multiple intelligences, people who tend to be strong at one are often strong at the others; there's a correlation across them. Anyway, I thought that was g, but it might not be Howard Gardner. Anyway, I'm digressing. It doesn't really matter for the purposes of the question. I'd like to really quickly, and I realize this is now the second time I've done it, go back to something Jennifer said. Just as you were asking that question, you were talking about consciousness here, and I'm going to have to learn about the interesting things that you were describing. I don't know anything about it.
Jennifer K. Hill: 32:25
Send you a paper on it.
Jon Krohn: 32:26
I would love that because it's a big gap for me, obviously. But I'd like to quickly give a minute or two on my thoughts on consciousness and machines, which is this: every conscious instant that you have is obviously infinitesimally fleeting. You're having one right now, and it's gone. You're having one right now, it's gone. And there is nothing else that you can directly experience except this instant of your consciousness. You can imagine what the future might be like, you can try to remember things from the past to the best of your ability, but all that you have right now, for sure, is the conscious experience of this moment. I love Alan Watts, reading him and listening to him; he's been very helpful for making me more and more aware of this presence, and how beautiful and fleeting it is, and how precious your life is, experienced via this conscious experience.
33:42
Again, I'm 20 years out of date on the consciousness literature, but I don't think we've come very far in understanding the neural basis of consciousness. How does it arise in your mind? How does the blob of tissue inside the bone of your skull somehow produce these colors and smells and tastes and thoughts and feelings? Twenty years ago, certainly, there was very little neuroscientific understanding, and I suspect it is still much the same today. Again, that's narrowly the neuroscience view, in terms of what you can see in brain scans-
Jennifer K. Hill: 34:25
That's assuming that brains are related to consciousness, which is a different conversation.
Jon Krohn: 34:28
Yeah, [inaudible 00:34:31].
Jennifer K. Hill: 34:33
You would love Don's work on this. So actually, that's leading science showing us more and more, yes, you have the experience of touch, of seeing the color green, of smells, et cetera. Yet to your point, it's not measurable in the brain per se. And so now they're starting to separate the brain and consciousness.
Jon Krohn: 34:49
But it is, isn't it? Because with the brain, you can have very specific effects: if you have a small ablation, brain damage in a very small part of the brain, you can lose the ability to have a conscious experience of some specific thing while your brain is still able to process those things. So an example that I love is that in some cases of a very severe stroke, you sever something called the corpus callosum.
Jennifer K. Hill: 35:20
Of course, yeah, yeah. Or also in epileptic patients, they used to cut the corpus callosum to stop the electrical storms from crossing from the right to left hemisphere.
Jon Krohn: 35:27
Exactly, exactly. So very severe last-ditch treatment to try to prevent these very bad seizures. Did I say stroke?
Jennifer K. Hill: 35:36
Yeah, you said stroke. I think you meant seizures.
Jon Krohn: 35:37
Seizures. Seizures. I meant seizures.
Jennifer K. Hill: 35:37
Yeah, I had a feeling.
Jon Krohn: 35:37
My apologies.
Jennifer K. Hill: 35:37
Yeah, it's okay.
Jon Krohn: 35:44
So in people who have that corpus callosum cut in half, there are very few connections between the right brain and left brain. And then you have two independently conscious hemispheres of the brain. So you can put a piece of paper between the person's eyes, and your left visual field corresponds to the right brain. And so when you show something to... And I'm going to butcher the sides of the brain in real time, but it would be something like: in a right-handed person, if you show something to the left side, to their left visual field, they will perceive it consciously in their right brain. And then the right hand, I think, if I remember correctly.
Jennifer K. Hill: 36:34
Yeah, you got it right. So basically, the hand will actually grab an object depending on which side of the brain you show it to. The hand will grab a key, for example, if you show it to the left side of the brain, but it doesn't interconnect. Again, Don's theories say that there's a 0% chance of seeing reality as it is, and you get more deeply into consciousness and things around that. In fact, ironically, Don Hoffman was my favorite professor in college, at UC Irvine. Then years later, I had the pleasure of doing these shows with him, and I will always vividly remember this lecture from his classes. He said, "Okay, let's talk about consciousness and AI for a moment. Let's say, Jon, God forbid, you lose a foot, and we give you a robotic foot. Are you still human?"
Jon Krohn: 37:18
Well, yeah. You get into interesting Ship of Theseus kind of questions.
Jennifer K. Hill: 37:21
So how many of you think that Jon is still human if you give him a robotic foot? Show of hands, how many of you agree that he's still human? So if he cut off his foot, he's still human. Okay, most of us agree. Now, let's say that Jon loses an eye, God forbid, and he now has an electric eye, but it's still connected to his brain, to go back to the consciousness debate. How many of us in the room think that Jon is still human? Okay, most of us. Now, let's say that Jon actually loses a piece of his occipital lobe, not just the eye, but actually the occipital lobe of the brain. We replace that with quantum chips and computers, and now he can see through this eye and through the occipital lobe of his brain, which has been replaced. Is he still human?
Bella Shing: 38:07
Cyber.
Jennifer K. Hill: 38:07
How many of you think Jon is still human? So that's the question: where does AI fit in this world we're moving into? We're not that far away from it. And I interrupted your example, by the way.
Jon Krohn: 38:18
No, no, no, not at all. That was very interesting. And the Ship of Theseus thing is interesting even without any machines: think of how, over a lifespan, your cells slough off and new ones come. Your brain cells do largely remain fixed over your lifespan, but theoretically, we could in the future replace brain cells, maybe with new biological ones that we grow in a vat, or with some kind of machine that we put inside the skull. And so you could completely replace it, and I still might be able to have this continuous conscious experience, even though-
Jennifer K. Hill: 38:57
Theoretically continuous conscious experience, depending on what we define as consciousness.
Jon Krohn: 39:01
I guess so. Yeah. It's hard for me to imagine-
Jennifer K. Hill: 39:04
[inaudible 00:39:05].
Jon Krohn: 39:04
What consciousness is without-
Audience: 39:09
Sorry. If you think about it, you already know what the Ship of Theseus is, the Theseus paradox. So basically, Theseus slayed the Minotaur, forgive me if I'm butchering this, and he saved the lives of a bunch of children, et cetera, and they kept the Ship of Theseus. The problem with the Ship of Theseus is that over time, its timbers would rot. Right? And they kept replacing the pieces of the timber of the Ship of Theseus. But at what point, when you've replaced so much of the timber, is the Ship of Theseus still the Ship of Theseus? So if you think about it from a sense of transcendence-
Jennifer K. Hill: 39:51
And transhumanism.
Audience: 39:51
And transhumanism, there are two really interesting components to this. If I remove some of my limbs... We know that body memory, for example, is a real thing, that the body has its own intelligence, separate from the brain's intelligence. Which is why, for example, with a person who is hooked on morphine or some other drug, say heroin, the body has been taught for a long time: "I like that. That's cool. Keep having that. I work this whole machine to make sure that it's cool." But then consciously, you have this idea: "It's not good for me anymore." And the body goes, "I'm 90% of this job over here. What are you even telling me?" And it sends information. It's really interesting how this all happens. It's a little dance of hormones and peptides, et cetera; everything gets sent down from the brain to the body, and the body says, "No, I don't think so. Catch you later." Okay? What's going on there? Right?
40:54
If I remove the two arms or the two legs, and the body is conscious of that, or remove some organs, what's going on there? Am I still me when that happens? And then think about the concept of: okay, great, take a human being's consciousness, and somehow we figure out magically how to transfer that information into a robot or into-
Jennifer K. Hill: 41:16
The cloud.
Audience: 41:18
The cloud-
Jennifer K. Hill: 41:20
Black Mirror episode, if nobody ever saw that.
Audience: 41:23
Ship of Theseus. Right? Am I me? Where does this consciousness?
Jennifer K. Hill: 41:27
What even is [inaudible 00:41:28]?
Audience: 41:28
How much of it is biology? Is this brain actually where consciousness resides? Can you put that through?
Bella Shing: 41:37
Thank you.
Jon Krohn: 41:40
If there's anything that... I don't believe in a me or a self. I think everything is blurred. So yeah, I agree on those points, and the Ship of Theseus is really helpful, if people haven't heard of it before, for coming to the conclusion that you don't exist: there is no you that is independent of the things around you, and that includes the brain, the rest of your body, and the interactions between hormones, peptides, all those chemicals. But somehow, all of it does lead to this conscious experience that I'm having right now. I can vouch for it; I can't prove it to you, but I am having a conscious experience right now. So this goes way back to your question about machines. I obviously need to read about this. It's-
Jennifer K. Hill: 42:28
Theory of conscious agents.
Jon Krohn: 42:29
Theory of conscious agents.
Jennifer K. Hill: 42:30
And that consciousness is fundamental, and that space-time only arises as an artifact of consciousness.
Jon Krohn: 42:35
So that's a complete mind shift from anything that I've ever thought about before, and I look forward to learning more about it. But from my limited perspective, I don't know. My instinct is that I don't anticipate that a calculator has consciousness. I don't anticipate that a desktop computer has consciousness. And it doesn't seem to me that any amount of making that more and more complex, having lots of servers working together, produces consciousness. There are other theories out there, for sure, and lots more that I need to learn, but my very limited, narrow perspective is that there is something about biological organisms that leads consciousness to happen.
Jennifer K. Hill: 43:22
How do we know that we're not somebody else's science experiment or that we're not somebody else's computer program running? That's another question.
Jon Krohn: 43:28
Oh, for sure. There's lots of things that we'll never be able to know.
Bella Shing: 43:31
I want to thank the two of you for an amazing conversation.
Jennifer K. Hill: 43:34
That was wonderful, Jon.
Jon Krohn: 43:34
Yeah, thank you, Jennifer.
Jennifer K. Hill: 43:34
So much fun. I could geek out [inaudible 00:43:37].
Jon Krohn: 43:40
Right. I hope you enjoyed that unique episode. Plenty more ground to potentially cover from that episode in the future. To be sure not to miss any of our exciting upcoming episodes, subscribe to this podcast, if you haven't already. But most importantly, I hope you'll just keep on listening. Until next time, keep on rocking it out there, and I'm looking forward to enjoying another round of the Super Data Science Podcast with you very soon.