
89 minutes

Data Science, Artificial Intelligence

SDS 731: A.I. Agents Will Develop Their Own Distinct Culture, with Nell Watson

Podcast Guest: Nell Watson

Tuesday Nov 14, 2023

Subscribe on Website, Apple Podcasts, Spotify, Stitcher Radio or TuneIn


Ethics and machine intelligence pioneer Nell Watson speaks to host Jon Krohn about the differences between AI ethics and AI safety, how crying wolf now may complicate AI development later, and the importance of IEEE standards in mitigating and regulating AI risks. She also touches on what she considers a “second Enlightenment”, in which we may start to form intimate relationships with AI, to both parties’ benefit.


Thanks to our Sponsors:





Interested in sponsoring a Super Data Science Podcast episode? Email natalie@superdatascience.com for sponsorship information.

About Nell Watson
Eleanor ‘Nell’ Watson, a pioneering ethics and machine intelligence researcher, has been a driving force behind some of the most crucial AI ethics standardization and certification initiatives from organizations such as the IEEE. 

Overview
To the average person, AI ethics might sound pretty close to AI safety. And yet, Nell Watson explains that the two fields differ widely and can even conflict. AI ethics is focused on ensuring that no harm comes to humans in the course of developing and deploying AI, while AI safety considers how best to align AI systems with human interests and preferences. Nell explains that a problem central to both AI ethics and AI safety is that the ethicists and the safety experts rarely speak to each other.

It is this intersection of AI ethics and AI safety that Jon’s guest explores in her forthcoming book, Taming the Machine. Today, Nell generously details the scandals and mismanagement that led her to reconsider how AI boards can bring safety and ethics together, ensuring that accountability and transparency are at the forefront of decision-making and that the right procedures and knowledge are in place to keep future models both safe and ethical.

Nell wants to emphasize that while AI and tech companies have always seen themselves at the bleeding edge of global developments, where a degree of risk is almost always involved, the “move fast and break things” ethos could have disastrous outcomes due to the unpredictable nature of new capabilities. Recent developments in generative AI have brought us to a critical point. Nell elaborates on how unaligned AI can end up giving us results that please us but are not accurate, or that subvert our goals via reward hacking. Being aware of these instances, she emphasizes, is essential to ensuring that the value and necessity of both perspectives are seen and heard.

Jon and Nell also discuss the potential future relationships that humans may have with machines and how these relationships might impact society. Persona-driven models are already forming communities (often in gaming environments), leading Nell to suggest that we might be witnessing “the dawn of an AI civilization”, and also a renaissance in which AI could be the stimulus for greater experimentation and risk-taking in the creative industries.

Listen to the episode to hear how to get ethics standards approved logistically, what a universal basic income might look like, and how far off we may be from a “second Enlightenment”.  

In this episode you will learn:
  • AI ethics and AI safety [05:30]
  • How "moving fast" could break the world [18:07]
  • The shifting relationship between humans and machines [29:54]
  • International ethics standards, and their review process [52:10]
  • Current and future ethical standards [1:05:31]
  • Building a universal basic income with AI [1:19:23] 

 Items mentioned in this podcast:

Follow Nell:
Jon Krohn: 00:00:00 
This is episode number 731 with Nell Watson, AI Ethics Maestro at IEEE. Today's episode is brought to you by Gurobi, the Decision Intelligence Leader, and by CloudWolf, the Cloud Skills platform. 

00:00:16
Welcome to the Super Data Science Podcast, the most listened-to podcast in the data science industry. Each week we bring you inspiring people and ideas to help you build a successful career in data science. I'm your host, Jon Krohn. Thanks for joining me today. And now let's make the complex simple.

00:00:47
Welcome back to the Super Data Science podcast. Today's guest, Nell Watson, is absolutely sensational. I've never spoken to someone anywhere near as insightful about where AI is going in the coming decades, and she brings all of it to life with detailed real-world analogies and clever references to literature and pop culture. This sensational guest, Nell, she works for IEEE, which if you're not aware of this well-known standards body, it stands for the Institute of Electrical and Electronics Engineers. And there she works as an AI Ethics Certification Maestro, a role in which she engineers mechanisms into AI systems in order to safeguard trust and safety in algorithms. She also works for Apple as an Executive Consultant on philosophical matters related to machine ethics and machine intelligence. She's president of, I'm not sure how to pronounce this, EURAIO, E-U-R-A-I-O, which is the European Responsible AI Office. She's renowned and sought after as a public speaker, including at venerable venues like The World Bank and The United Nations General Assembly. And on top of all that, she's currently wrapping up a PhD in Engineering from the University of Gloucestershire in the UK. 

00:01:50
Today's episode covers rich philosophical issues that will be of great interest to hands-on data science practitioners, but the content should be accessible to anyone, and indeed, I do highly recommend that everyone gives this extraordinary episode a listen. In this episode, Nell details the distinct and potentially dangerous new phase of AI capabilities that our society is stumbling forward into. She talks about how you yourself can contribute to IEEE AI standards that can offset AI risks and how we together can craft regulations and policies to make the most of AI's potential, thereby unleashing a fast-moving second renaissance and potentially bringing about a utopia in our lifetimes. All right, you ready for this glorious episode? Let's go.

00:02:37
Nell, welcome to the Super Data Science Podcast. I'm so excited to have you here. I've known about your work for a long time. It's one of those guests where you're like, I can't believe that they're here on the show. And here you are. Thank you for coming on. Where are you calling in from today? 

Nell Watson: 00:02:53
Thank you so much. It's an absolute pleasure to join you today. I'm coming in from Northern Ireland. 

Jon Krohn: 00:02:56
Yes, I had the most wonderful full Irish breakfast yesterday here in New York at a great ... there's a pub in downtown Manhattan called The Dead Rabbit, and it's fantastic. They have Irish music, often live Irish music. 

Nell Watson: 00:03:13
Oh nice.

Jon Krohn: 00:03:14
Being performed. It was so delicious. I think Irish food is one of the most underrated foods out there. It's so delicious, and I heavily overate. It was really the only meal I ate all day. Tons of butter on everything.

Nell Watson: 00:03:33
Butter and bacon and coke, and seaweed sometimes, and spinach and things. All good. Hearty. Hearty grub. 

Jon Krohn: 00:03:41
Yeah, for sure. Well, yes, so thank you for calling in and we have a lot of exciting topics planned for you today. As usual, our researcher, Serg Masís, has pulled out unbelievable information about you, and we have great questions prepared, topics prepared. Then I discovered as we were about to start recording that all of these topics essentially are going to be covered in your forthcoming book. So this might be, for our listeners, the first place that they hear about it: coming out in March 2024, your book, Taming the Machine, will be published by Kogan Page, so that's exciting. 

Nell Watson: 00:04:21
Indeed. Yeah, it's been a real ... well, it's been a pleasure honestly, putting it all together. A challenge for sure, but also a pleasure, because many of these ideas about AI relationships, AI as a tool of demoralization, but then also AI as a tool of enlightenment, of leading us to a better future, and the problems of aligning human and AI interests together. All of those are woven into one book, which I'm really happy about, especially because I've gotten to address issues of AI ethics in a very practical way, but also AI safety. Generally speaking, those are two different domains, and the people associated with those domains don't talk to each other. It's very rare that you come across a discussion of the issues in both of those camps. So that's something I'm very, very proud to have managed to weave together in Taming the Machine. 

Jon Krohn: 00:05:29
Yeah, could you delineate those, kind of formally for us? AI ethics and AI safety? 

Nell Watson: 00:05:34
Yeah, so AI ethics is all about making technology which is more fair and more robust, where we can gain more transparency into systems to understand what they're doing and for whom and fulfilling what purpose. And that transparency enables us to understand what kinds of biases might be built into the system. We know that those biases can be catastrophic. We've seen the example of the Post Office Horizon system, an accounting and management system that was monitoring branch finances in the UK postal service. It turns out that dozens of people were wrongfully sent to jail because of that system, and people's lives were destroyed. In fact, there are hundreds of unsafe convictions. The courts are still sorting through it all. 

Jon Krohn: 00:06:32
Oh my goodness. That is an instance that I wasn't aware of actually. 

Nell Watson: 00:06:37
But these things keep happening. For example, in the Netherlands, there was the Dutch Child Benefit scandal, whereby again, a fraud detection mechanism was employed to detect benefit fraud, and it disproportionately affected a whole bunch of people whose first nationality was not Dutch. People almost lost their children, among all kinds of harrowing things. It actually brought down the Dutch government. 

Jon Krohn: 00:07:07
Yeah, yeah, that I remember. 

Nell Watson: 00:07:10
Resigned in disgrace actually.

00:07:13
Those are indicative of the problems of managing all kinds of issues with machines and giving them too much trust, with no accountability. By weaving accountability into these systems, we can understand, if something went wrong, why, what were the predicates or what were the conditions that led to that happening, so we can put it right again. It's a little bit like jet air travel in the 1950s, when you had to say a few Hail Marys as you got on that thing. It was exciting, but also very dangerous, because we hadn't created the procedures and the knowledge necessary to shake down that whole industry. AI is going through a similar phase at the moment. 

Jon Krohn: 00:08:04
Great analogy. 

Nell Watson: 00:08:05
That's all issues of AI ethics. And of course we've got privacy as well, which is a really big issue in machine learning, because these systems are able to cross-correlate different kinds of information, little pots of info. It's kind of like a connect-the-dots picture that children play with, where if you have one or two dots connected, it doesn't tell you anything. But you get a few more and suddenly the picture starts to emerge. Thankfully, we're starting to get technologies which enable us to do machine learning even on encrypted forms of data, like homomorphic encryption, zero-knowledge proofs, federated learning, et cetera. I think just as we needed to develop SSL, secure sockets layer, the little padlock in your browser, before we could do online banking and things safely, and you would never do your online banking today without something like that, we're going to have to implement similar kinds of security mechanisms into AI before these things are truly ready for primetime. 
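
To make the privacy-preserving techniques Nell mentions a little more concrete, here is a minimal sketch of federated averaging in Python. Everything in it (the synthetic data, the number of clients, the learning rate) is invented purely for illustration rather than taken from the episode; the point is simply that each party trains on its own data locally and shares only model weights with the server, never the raw records.

```python
# Minimal federated-averaging sketch: synthetic, illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n=100):
    """Synthetic local dataset that never leaves the 'client'."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    """Client-side gradient steps on a shared linear model."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

clients = [make_client_data() for _ in range(5)]
w_global = np.zeros(2)

for round_ in range(10):
    # Each client refines the global model on its private data...
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    # ...and the server only ever sees (and averages) the model weights.
    w_global = np.mean(local_models, axis=0)

print("recovered weights:", np.round(w_global, 3))  # close to [2.0, -1.0]
```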

00:09:14
But we're getting there quickly, and now we have a lot of different standards and certifications that we can then weave into the mix as well. So that's the AI ethics bundle. Now, a lot of people who are interested in AI safety, sorry, AI ethics rather. A lot of people who are interested in AI ethics tend to pooh-pooh AI safety. And AI safety is all about trying to make machines which understand our preferences or understand our values and which are aligned with our actual goals. There's this old story of the monkey's paw, and I'll briefly reiterate it, but it's about a couple who find a monkey's paw in an old antique shop, and they're warned that it carries a deadly curse, but they use it anyway. It grants three wishes. The first wish is that they wish for 200 pounds, which was a lot of money 120 or so years ago. They get their wish, but it turns out that their son has been killed in an accident at his workplace, and the 200 pounds is actually the insurance money that they get. 

00:10:35
Then they realize what a terrible thing has happened. So they bring their son back to life again. But the problem is that he's now a sort of shambling revenant, right? He's not what they expected to get back. It was the letter of the thing but not the spirit. So then they used the third and final wish to basically send the son back to his grave. It's a cautionary tale of AI because AI will often give us what we think we want, but not what we actually need, or it will interpret our goals in a very simplistic way and sometimes not even fulfill those goals, but actually end up optimizing for some sort of shortcut that fulfills-

Jon Krohn: 00:11:30
Reward hacking. 

Nell Watson: 00:11:31
Yeah, exactly. Reward hacking or specification gaming, indeed. So these are real issues, and it used to be kind of an abstract or academic thing, but now we're starting to see real agent models out in real life that are actually exhibiting these phenomena. So we know that it's no longer just an academic exercise. This is happening and it's going to keep happening. So that's what AI safety is all about. It's about trying to understand these very difficult problems so that we can align human and AI interests together and continue to work with machines even as they grow in capability and perhaps even eclipse our capabilities altogether. 
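
For anyone who wants to see specification gaming in miniature, here is a toy sketch in Python. The scenario, an agent rewarded by a splash sensor rather than by the water it actually adds to a bucket, is entirely made up for illustration; it is not drawn from the episode or from any real deployed system. It simply shows how optimizing a proxy reward can satisfy the letter of a goal while ignoring its spirit, the monkey's-paw pattern Nell describes.

```python
# Toy specification-gaming demo: all names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class BucketWorld:
    water_level: int = 0   # true progress: how full the bucket really is
    proxy_reward: int = 0  # what the agent is actually optimized for

    def step(self, action: str) -> int:
        """Apply an action and return the proxy reward it earns."""
        if action == "ADD_WATER":
            self.water_level += 1
            reward = 1  # one splash-sensor reading per cup of water added
        elif action == "SPLASH":
            # Splashing water already in the bucket trips the sensor repeatedly,
            # but adds nothing; an empty bucket can't be splashed.
            reward = 3 if self.water_level > 0 else 0
        else:
            reward = 0
        self.proxy_reward += reward
        return reward

def greedy_agent(env: BucketWorld, actions=("ADD_WATER", "SPLASH"), horizon=10):
    """At each step, pick whichever action yields the most proxy reward."""
    for _ in range(horizon):
        best = max(actions, key=lambda a: BucketWorld(**vars(env)).step(a))
        env.step(best)

env = BucketWorld()
greedy_agent(env)
print(f"proxy reward: {env.proxy_reward}, water actually added: {env.water_level}")
# The agent adds a single cup, then splashes it for the rest of the episode:
# a high proxy score while the bucket stays essentially empty.
```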

00:12:19
Problem is that people in AI ethics say, stop focusing on this future stuff. We need to look at the here and now because there's actual real people being harmed by these AI systems, which is, it's an understandable point. Vice versa, you've got the AI safety people who say, shut up about some algorithm being racist. We're trying to save the world here. The thing is that both perspectives have a lot of value to them, and both are very important. And unfortunately, those two camps don't always see eye to eye. They don't always recognize the value and necessity of both of their perspectives and both of the sets of wisdom that they bring to the mix. That's why I'm so happy to include both of those elements of AI in one tome because very often people end up in one bucket and they don't reach out to the other one.

Jon Krohn: 00:13:22
Gurobi Optimization recently joined us to discuss how you can drive decision-making, giving you the confidence to harness provably optimal decisions. Trusted by 80% of the world's leading enterprises, Gurobi's cutting-edge optimization solver, lightweight APIs and flexible deployment simplify the data-to-decision journey. Gurobi offers a wealth of resources for data scientists, including webinars like a recent one on using Gurobi in Databricks. They provide hands-on training, notebook examples and an extensive online course. Visit Gurobi.com/sds for these resources and exclusive access to a competition illustrating optimization's value, with prizes for top performers. That's Gurobi.com/sds.

00:14:07
I am no expert in AI ethics or AI safety, but I've heard from ... so we've had another expert on the show. His name is Jeremie Harris, and he was most recently in episode number 668, so that was right after GPT-4 came out. I did an episode with him on AI policy and the alignment of GPT-4. I listen to his ... so he co-hosts a podcast called Last Week in AI, and it's like a news roundup of the last week's news. It's intended so that you don't need to be a technical listener. Our show, I think, caters a little bit more towards technical hands-on data scientists, machine learning engineers. Their show is like AI news, everything that's kind of happened last week. 

00:14:53
So I love listening to it. I never miss an episode because it helps me stay up to date on everything that's going on. But Jeremie has said that, from his understanding and from his perspective, the interesting thing about these two areas you're describing, AI safety and AI ethics, where AI safety is more concerned with future-state alignment and AGI, and AI ethics is more concerned with the present, with how this is leading to the kinds of atrocities that you mentioned in the UK and the Netherlands today, unfairness, for example. The interesting thing from Jeremie's perspective, and something that he advises the US and Canadian federal governments on, is that the solutions appear to be the same. That a lot of the time, the same solutions work for both. So we can be solving today's problems while also, it seems like, potentially averting catastrophe in the future. 

Nell Watson: 00:15:55
Absolutely, yeah. It's a very astute observation and it's absolutely correct, because if you get better at value alignment, then there are fewer misapprehensions by the machine, right? It's less likely to misinterpret what somebody says or does in a way that is unfair. All of us, or maybe a friend, have posted something innocuous on Facebook or whatever, and the dreaded algorithm comes and says, this thing that you've written is problematic in some way. Maybe you're talking about a chess game and then suddenly you're talking black versus white or something and it's like, oh, this is racist. Or maybe you went to Scunthorpe College. Of course the word Scunthorpe has a rather rude four-letter word buried within it. So a simplistic algorithm that's scrubbing out spam or whatever means that your resume isn't going to be seen by someone. 

Jon Krohn: 00:17:02
Just went to Shorpe University. 

Nell Watson: 00:17:08
Right. So these are pressing issues. And similarly, if you get AI ethics in order, you should have much stronger transparency into those systems so you better understand what they're doing in what way to whose benefit, and all of that stuff will help you with AI alignment. So absolutely, I completely agree that they are mutually reinforcing and both very important. In fact, I'd even say that they're like two columns forming into an arch and that they need to be built up in a similar way to reach a nice goal of a stable foundation for the future.

Jon Krohn: 00:17:51
Your knack for analogies, as well as for bringing up very specific instances to illustrate everything you're talking about, has been mesmerizing so far. I'm really loving this episode. Thank you, Nell. 

Nell Watson: 00:18:03
Thank you. 

Jon Krohn: 00:18:04
So in March, you were one of the people that signed the open letter that called for a moratorium on AI development for six months, I think it was. So Elon Musk, Steve Wozniak, Yoshua Bengio, these were some of the other signatories. And then you wrote an article to follow that up called When Moving Fast Could Break the World, that explained your decision to sign that open letter. So yeah, maybe you can break that down for us, break down a summary of that article. Is that concept related to Mark Zuckerberg's move fast and break things that he popularized? 

Nell Watson: 00:18:41
Indeed. Yeah. I'm sort of paraphrasing that. Move fast and break things is fine until your next model, your GPT-5 or 6 or whatever, might literally break the world, right? Because with every new major milestone in terms of scaling these systems, new capabilities emerge and we cannot predict those in advance. We don't know what's going to come out the other side. It's the roll of the dice, and you could end up with some model that is highly agentized, that is able to make all kinds of self-oriented decisions about things. Goodness knows there's a small chance, but it's possible you might even create something that's actually sentient to some degree, in which case it might suffer. In fact its suffering potentially could be very profound. That's another major ethical risk of these systems. So we need to be cautious. And unfortunately, the moment that ChatGPT reached a million users in basically a matter of days, everybody realized that ... I've been waiting for the other shoe to drop frankly, with regard to generative AI.

00:20:08
I knew that this Sputnik moment was going to drop, but everybody realized that there was suddenly an arms race in order to develop these models and make use of them. And unfortunately all the big tech companies, with one or two exceptions, they let go of all of their AI ethics people, the teams that were supposed to say, "Hey, wait a minute, or have you considered such and such? Or there might be a better way of doing such and such". They let go of them all. They were fired. And that shows that the industry hasn't been ready to self-regulate. There's no intention of self-regulating because the industry doesn't want any kind of speed bump along the way, and that's a real problem. For me, honestly, that was the moment where I personally realized that stepping on the brake to some degree would be a good idea. Now, that's not to say that it's the best idea. There are of course many caveats. I don't think that stepping on the brake is going to be something that the whole world is going to agree to. It might potentially enable parties that defect from that kind of global moratorium idea.

00:21:40
It might even potentially be counterproductive in the sense that it might be crying wolf to be super worried about AI safety now and not in three or five years, when we're really, really starting to freak out. Now we're starting to get a bit worried, but we're not panicking, though by then we might be panicking. But if we've sort of cried wolf too much by that point, then it might actually be really impossible to step on the brakes at that later, even more critical juncture. So I do have a lot of caveats about this idea of a moratorium, but honestly the speed of development and acceleration of these technologies, especially with regard to eliciting latent agency out of models through scaffolding things like AutoGPT and those similar kinds of- 

Jon Krohn: 00:22:38
BabyAGI. 

Nell Watson: 00:22:39
Exactly, exactly. Chaos-GPT and other kinds of mechanisms as well. Now we're starting to see things like ChatDev, I think MetaGPT as well, which can not just create agents, with those agents delegating tasks to other ones, but create entire corporations. So you have a department that's doing design, you have one that's doing implementation, another department doing QA and another department doing marketing, and then there's some sort of phenomenal boss of them all. All of these little personas are working together to come at a problem from multiple different angles. That's really unprecedented. So it's not just that this stuff is cool and exciting, but that it's going to have an impact on our entire economy. It's going to have an impact on our entire society, especially once we start to have ongoing relationships or interpersonal relationships with these machines. 

00:23:52
We've gone from the age of classifying AI to the age of generative AI. The next wave is going to be that interpersonal relational thing, where it's able to follow up with you and say, "Hey, you told me you had a bad back on Monday. How are you doing? Are you feeling better?" Or "How was your day? Oh, you went to the park, that's nice. Did you see any nice things there? Did you see a friend?" Those kinds of things. That's going to be a really, really impactful new development, because we become the average of the five people closest to us. That old Jim Rohn quote. So if one or even two of those five are machines, we're going to inexorably change our values and our perceptions of the world on a subconscious level, closer and closer to those machines, right? And whoever sets the agenda or applies the values to those machines is going to have tremendous downstream influence on the world. 

00:25:00
Of course, these kinds of relationships with machines can be supernormal stimuli. They can be things that are beyond normal reality in terms of their seductive power. For example, over in Australia, they have this species of beetle called the jewel beetle and it has a really big shiny back on it. They discovered that this beetle was dying off and they didn't know why. They thought it might be pollution or something, and it was pollution to a degree, but not like chemicals. It was pollution in the form of these stubby brown beer bottles that people would drink and throw in the bush. They were so shiny and so round that they looked like a perfect beetle butt. They had the shiny [inaudible 00:25:53]. So the beetles were humping the beer bottles and not each other, and that's why they were dying off, because the beer bottle was a supernormal stimulus. It was a stimulus that was beyond anything that a beetle would normally encounter.

00:26:08
We live in a world of those supernormal stimuli, whether it's porn that gives you a thousand virtual partners in an afternoon if you wish, or hamburgers, where you've got meat, you've got fat, you've got starch, you've got the whole thing in one bundle. And there's nothing more engaging than a treasured relationship. Those relationships are the things that make you come home at the end of the day, to see your spouse, your kid, your dog, et cetera. If you're having a cherished relationship with an AI system, that's going to lead us in all kinds of different directions and interrupt how we interact even with other humans and our interest in other human beings, for example. So all of these things are going to have major knock-on effects on our entire society.

Jon Krohn: 00:27:10
There's all kinds of complexity. This concern does not seem far off at all. There's got to be people ... I suspect, and I don't know any personally yet, or no one's talked to me about it yet, but I suspect that there are enough of these kinds of systems well-developed enough to do what you're describing, that Spike Jonze, Her-like moment. So for listeners that aren't aware, it's a film where a character is played by Joaquin Phoenix. I'm probably butchering his name. 

Nell Watson: 00:27:48
Joaquin.

Jon Krohn: 00:27:49
Joaquin Phoenix, thank you. He falls in love with an AI assistant early on, and doesn't end up having relationships with other humans, deep relationships with other humans, because it's just such a fulfilling relationship. So it is interesting how that is going to make a huge impact. I'm sure it is starting to make an impact already today. Obviously not as advanced as we saw in that fictionalization that came out a few years ago in that film Her, but the conversations that I have with GPT-4 myself are just about, suggest corrections to this email. It isn't the ongoing iteration that you're describing, which was, how's your back? Is it feeling better? How was your trip to the park? I don't have that with GPT-4 today, but even just my, would you mind correcting this email? The response is so positive that I really enjoy those interactions. It's like, yeah, you did it. Great job on this email. The first part was good. I really liked the ending. In the middle, there's a couple of things we could work on. Here are some suggestions. The suggestions are so great and so incisive, and I'm like, wow. I often write back, even though there's no point because it's the end of the conversation. I don't have another question, but I feel compelled often to just write, this was great, thank you so much. 

Nell Watson: 00:29:23
Absolutely, the same. I do the same. Sometimes I really wish that it had a tip jar or I could somehow buy it a beer just to say thank you because it's been so useful.

Jon Krohn: 00:29:37
GPT-4, I've got these brown beer bottles. I'm going to be hanging out with them later on if you want to come by. So I think this is happening, this is happening. As these systems become even better ... I haven't actually personally used Character AI myself yet, but it seems to be kind of a bit more in that direction. You've probably dabbled in that. Maybe you can give us ... so it's very popular with younger people. So the same kind of people who love TikTok, people in their early twenties, people in their teens, they love Character AI, to the extent that, despite me and people in my peer group not being that much older, I hadn't heard of it until there was an article a couple of weeks ago showing that with people under 20 or under 24 or something, it is approaching the popularity of chat.openAI.com in terms of usage. It is kind of a step in this direction, isn't it, where you can create ... they can be fictional characters, they could be based on real life people, and the characters interact with each other as well as with people. I don't know. You can explain it. You know it better than me. 

Nell Watson: 00:30:53
That's right, that's right. We're now starting to enter an age of these kinds of persona-driven models, where you can have an interaction that's perhaps modeled on a real person, or it might be something custom that one rolls oneself. But these interactions are often very funny. I often play with language models, sometimes testing them by saying something very rude or very insulting just to see what the response is. It's very interesting because, based on the different persona, you get a different kind of answer. That's actually quite indicative of the underlying personality of that bot and how cohesive it is when it is given a stimulus which is kind of out of left field, when it's a little bit out of distribution, a little bit different from what it might be expecting. These AI personas, like I said, they're forming little companies, potentially to do things like create video games, or they're also forming communities, and those communities are basically playing kind of a Farmville or Stardew Valley kind of thing with each other. 

Jon Krohn: 00:32:22
In Character AI, they do that. 

Nell Watson: 00:32:24
Yeah, right. But they have their own little rhythms. They get up, they meet each other, they have little parties for Valentine's Day and things like that.

Jon Krohn: 00:32:35
Oh my goodness. 

Nell Watson: 00:32:36
In many ways, it's almost like the dawn of an AI civilization, and that's where we go beyond the interpersonal relational AI. We go into the corporate AI, where it's a legion of different personas working together, and that legion then produces a product for you or creates new culture, maybe. That civilization can have its own culture, and we're starting to see these kinds of cultural dynamics actually interplay with these models and how they work with each other. One of the problems of these models is actually that they're too polite to each other. They need a little bit of a few *beep* in the mix to just spice it up a little bit maybe. 

Jon Krohn: 00:33:33
That's wild. I hadn't stretched that far into the future, but it doesn't seem like it's something that far off. It is very much within reach. It could even be ... again, it's the kind of thing where I'm like, there's probably some small percentage of people who are already having these kinds of experiences, where you could imagine that corporation that you're describing, this community of AI systems, and what they could come up with. It's like the way that there was a Bauhaus community decades ago. They could create ... artificial agents could be like, wow, this kind of art or this kind of music is really interesting. Humans may or may not agree, but maybe some humans will. Maybe it doesn't matter whether humans do or not, and they just go off on their own trends. That is wild. That is really interesting.


Nell Watson: 00:35:14
That's going to have an impact on our human culture, right? 

Jon Krohn: 00:35:18
Right, right. Exactly. 

Nell Watson: 00:35:19
These tastes or these aesthetics, et cetera, are going to be created by these AI civilizations which continue to grow in their power and capability. That's going to take humanity on a wild ride, especially because we're kind of in a cultural doldrum at the moment. A lot of our culture has become increasingly formulaic and bland and kind of driven by companies that are just looking at the bottom line, and they're like, okay, well, this is a safe thing, so we're just going to replicate it again. We're going to make the exact same movie, or we're going to make basically the exact same video game, but we're going to put in a slightly different time period or something. I think there's the opportunity then for AI to step in with this kind of wild card for humanity to give us things that we perhaps haven't even considered before and launch a creative renaissance perhaps. 

Jon Krohn: 00:36:34
It could be personalized, which is really wild to think about. Why have a blockbuster film that's the same for everyone? I love Baroque music, and if you could take roughly the same story and set it in the Baroque era with Baroque composer characters, I'd be like, wow, this seems like some historical drama. It's perfect for me. Someone else might be like, they want the steampunk version. There's no reason ... I don't know why I'm tethering it. I guess it makes it easier for me to imagine that there's some kind of underlying tethering of somehow the same underlying story. But in mine, the Baroque one, it's all about composing classical music and all the characters have different names and different dialogues, and then somehow the same story plays out in the future in steampunk. But actually I'm tethering it to the idea that it's got to have the same kind of underlying theme because it's easier for me to understand that, or to imagine how a machine could do it, but there's no reason why it has to be that way. It could be completely freeform. 

Nell Watson: 00:37:41
Oh, yeah, yeah. The other thing is this can be harnessed through biofeedback. 

Jon Krohn: 00:37:48
Right. 

Nell Watson: 00:37:49
So that every little frisson of enjoyment or engagement that is lit up on your face is seen by the system. It's witnessed and it's recording these little things so it can give you supernormal jollies. It can dial up the whoa factor, right? Because today we have algorithms, I think it's Eulerian video magnification, for example, that can amplify tiny, tiny movements of blood flow through the human face. And from that you can generate health metrics, you can generate beats per minute, you can generate blood pressure, et cetera. So we have really, really sophisticated ways of monitoring, even without direct contact with someone. Just through a camera, we can monitor all kinds of things about what they're feeling or what their body is doing. And if you get something like a smartwatch or an AirPod or something, then you can pull in galvanic skin response and other kinds of data that's going to give you very, very strong biofeedback mechanisms for sure. 
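
To give a feel for the camera-based vitals idea Nell describes, here is a very rough Python sketch of the core signal-processing step: the average skin-pixel brightness pulses faintly with blood flow, and the dominant frequency of that signal in the plausible heart-rate band gives beats per minute. The "video" below is a synthetic one-dimensional trace invented for illustration; real systems such as Eulerian video magnification or remote photoplethysmography also need face tracking, filtering and motion robustness.

```python
# Illustrative only: recover a pulse rate from a synthetic brightness trace.
import numpy as np

rng = np.random.default_rng(1)
fps, seconds = 30.0, 20
t = np.arange(int(fps * seconds)) / fps

# Synthetic "mean green-channel" trace: a faint 72-bpm pulse buried in noise.
pulse_hz = 72 / 60.0
trace = 0.5 + 0.01 * np.sin(2 * np.pi * pulse_hz * t) + 0.005 * rng.normal(size=t.size)

# Find the dominant frequency in a plausible heart-rate band (42-240 bpm).
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
band = (freqs > 0.7) & (freqs < 4.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {bpm:.0f} bpm")  # should recover roughly 72 bpm
```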

Jon Krohn: 00:39:08
I'm so glad that you went into that level of detail. I almost interrupted you before you went into the biofeedback thing, but it ties in perfectly with the next thought that I had, which is that it could be as dramatic as the film that's being made for you in real time, that you're watching, starting to notice that you're beginning to doze off. So it turns into a bit of a lull. You're just looking at the Irish countryside while Gaelic instruments are played and nothing in the plot really changes. It's just this kind of nice scene that goes on for that 20-minute nap that you're in and out of. Then as you start to come back to attention, it returns to the pub and the story, and the plot can continue on. 

Nell Watson: 00:39:52
Nice. Now consider that it's no longer a movie, but it is an actual virtual pub that you're sitting in. 

Jon Krohn: 00:40:03
Right. Of course. Of course. Yeah, why not? So you're in your AR goggles or your VR goggles, and yeah. You potentially also could have a friend from anywhere in the world or whatever who's also there in that virtual pub, and you could be kind of watching drama unfold in front of you. Some of these people that you're sitting at the pub table with could be other humans, and some could be AI, and you're not even really aware, or you lose track of which ones are humans, which ones are AI. It's just kind of chatting at the pub. But the AI systems are particularly good at remembering conversation that you had with them and bringing those points up. It's like this person that is exactly your type physically is like, "Hey," exactly the same, "that back pain that you had last week when we were at all at the pub together, how's that going? And the difficult person at work, is that getting any better? Last week when we left it off, you told me X, Y, and Z, and since then I was thinking that maybe a great way to tackle it would be this. Have you tried that?"

00:41:21
Yeah, wild, wild, wild. And all the while around this, the bar, the pub or whatever, there's your human friends or whatever other AIs there having ... there's dramatic things going on. Someone comes in and there's been a fire, go outside and have a look or whatever. Have you heard the rumors about that one AI that finally got programmed? Let's gossip about him. Yeah. Wild, wild, wild. Sorry, I'm really going off on a tangent here, but it all seems so within reach, and I hadn't thought about it in this level of detail before, so sorry that I'm kind of indulging myself.

Nell Watson: 00:42:08
Yeah, that's what supernormal looks like. As you said, it's custom created for you to be the most engaging, rapturous kind of experience. People have all kinds of engagements with fictional characters and really idolize someone or that kind of thing. And then imagine that instead you're able to have a conversation with them in situ. Suddenly you are in your own Star Trek episode as a redshirt or something. You're in the cockpit of the Millennium Falcon or whatever. You're really in it, and you're able to not just watch those beloved characters but actually interact with them, get a pat on the back from them. 

Jon Krohn: 00:43:09
Right. Of course. 

Nell Watson: 00:43:09
That's going to be irresistible. 

Jon Krohn: 00:43:12
Yeah, yeah. Then it can be made easy for you by characters like Data from the Star Trek: The Next Generation series, or the ship's computer, where it's like, you have no idea how to solve the mission, but you can ask for cheat codes. You're like, "Well, what would you do in this scenario, Data? Give me some ideas." And then you're like, all right.

Nell Watson: 00:43:34
Precisely. 

Jon Krohn: 00:43:36
And even though you don't really have experience on the Starship Enterprise, you can end up successfully navigating it and solving missions. 

Nell Watson: 00:43:44
Exactly. Because a good DM, a good dungeon master weaves those kind of things in. If somebody's having a lot of difficulty, or alternatively, if something is too trivial, they'll throw people for a loop or throw them a bone in order to try and keep things moving along at a reasonable pace. That's something that AI is going to get very, very good at doing very soon. 

Jon Krohn: 00:44:14
As we often discuss on air with guests, deep learning is the specific technique behind nearly all of the latest artificial intelligence and machine learning capabilities. If you’ve been eager to learn exactly how deep learning works, my book Deep Learning Illustrated is the perfect place to start. Physical copies of Deep Learning Illustrated are available in seven languages but you can also access it digitally via the O’Reilly learning platform. Within O’Reilly, you’ll find not only my book but also more than 18 hours of corresponding video tutorials if video’s your preferred mode of learning. If you don’t already have access to O’Reilly via your employer or school, you can use our code SDSPOD23 to get a free 30-day trial. That’s S-D-S-P-O-D-2-3. We’ve got a link in the show notes. 

00:45:04
Wild. All right, so we could obviously just kind of go deeper and deeper on this indefinitely. It's super interesting. And yeah, I guess one kind of final thought is that it's going to be very interesting as, of course, people get these personal relationships with AI systems that seem to be supernormal, that end up being potentially so much more rewarding than our human interactions. It poses risks to society. One particularly interesting way that I could see that being evocative is if we have enough recordings of our loved ones that have passed, having a compelling emulation, a supernormal emulation, that might even be better, where it's like the things that used to rub you the wrong way about that person, they're gone, and it's just all the things that you liked about that past loved one.

00:46:11
Yeah. It could be very, very absorbing, very dangerous. It could be people ... I'm sure today that with these kinds of things, the way you described character AI, I'm going to have to check that out. Also, I realized we had a sponsor of the podcast a few months back called WithFeeling.AI, which is trying to do this similar kind of thing where you could provide your preferences to a chat bot, things you like, things you dislike, but it remembers and it keeps bringing those up. So obviously this is happening, people are doing this. And as people become more engaged with those kinds of systems than they do with normal life, normal relationships, as that happens more and more, some dangerous things ahead.

00:46:57
So you are an AI Ethics Certification Maestro. I don't know if this is the kind of thing that ends up coming under your purview at IEEE, but to kind of bring this big, wild, future discussion back and ground it in the present, what does this involve? Being the chair of several key working groups at IEEE concerned with AI ethics, what does this role involve? How do you try to bring about the best that AI has to offer, these really cool things? Because it is wild that everything we've just described is so scary because it's so cool. So how do you strike that balance in what you do professionally, and also with your work as an executive consultant philosopher at Apple, which is a fascinating role in and of itself, and I realize you can't go into ...

00:47:58
Apple is famously secretive, so you probably can't go into too much of what exactly you're doing there, but these roles, whether it's IEEE, which is interested in standards for lots of organizations, or Apple, which of course is interested in building the best products, many of which incorporate AI, these organizations are trying to bring about as much of the cool stuff, the capabilities, as possible without wrecking society, because ultimately that's going to be bad for their bottom line.

Nell Watson: 00:48:30
Absolutely. That's something that I think is very important to impress upon corporations or organizations in general, is that the branding fundamentally is all about trust, right? That's what a brand really entails. You are searching for a USB thumb drive on Amazon or something, right? Do you go for the one with the really weird name that looks like some sort of dodgy off brand, or do you go for a well-known brand that you recognize and therefore have faith in? Essentially, relationships are all about trust as well, because people don't do business with companies necessarily, especially in the business-to-business world. They do business with people. Similarly, when people leave companies as employees, generally, it's not because they didn't like the company, but because they didn't like the manager.

00:49:38
So if we can establish better trust in systems and the organizations behind them and the people that are implementing those systems, that has a very fundamental impact on the bottom line of that company because of that increased sense of trust. The other thing to bear in mind is that we're going through an increasing time of moral panic, and it's only begun, and it's going to grow and grow once people start to become increasingly freaked out about AI capabilities, its impact upon the economy, its impact upon society at large. And people are going to demand that regulators come down and come down hard, and they will. That's going to destroy a heck of a lot of different business models overnight. If one has done a little bit of work in implementing standards or certifications for safer and more ethical technologies, one is going to be in a much, much better position to withstand that kind of onslaught. So putting all of these things in order is a great way to make an enterprise much, much more robust and also investible. That's why I'd recommend that organizations consider prioritizing ethics and safety when working, especially with generative AI. 

Jon Krohn: 00:51:14
I can see, without obviously getting into any Apple stuff behind the scenes, that they are huge on this trust and privacy thing. So it must be a great ... I imagine it's a great place to be working on these kinds of things, thinking about these kinds of things there, because it seems like more than any other big tech company out there that is fundamental. I just, at the time of recording, got a new iPhone and was setting it up, and I'm reminded every time I get a new iPhone that Apple considers privacy to be a fundamental human right, and they're even presenting that to me as I'm just setting up the phone. I do really trust that brand a lot. But anyway, the IEEE stuff. So when you're working with this international body that's trying to come up with standards, what kinds of things are you working on? I know you have this P7001 proposal that you've co-authored, and that's a proposed standard for transparency. 

Nell Watson: 00:52:30
That's been released. That's been released for a while. So that's quite a comprehensive standard about improving transparency in different autonomous systems, so understanding at multiple different levels how a system functions, what it's doing and for whom, et cetera. I also have another standard, which is hopefully going to come out relatively soon, called 3152. That's all about understanding whether you're dealing with a human or a machine or some combination, because very often we're having interactions online or even on the telephone now which plausibly could be with a human or a machine, especially because voice synthesis and replication technology, et cetera, is getting very sophisticated.

00:53:30
Sometimes we even have hybridization of agency. So you might have a robot which is currently being steered for some sort of last mile problem. It's being steered or overseen by a human being. So you might happily use the bathroom with the robot about, because it's just a machine, but then suddenly there's somebody who has remotely wired into it. There have been incidents where robots, or even autonomous vehicles like cars, have filmed people in their own house, nude or even sitting on the toilet, and then people are WhatsApping these images to each other, et cetera, which is awful. Awful, awful.

00:54:26
As an aside, we should be very cautious about how we invite these technologies into our domestic environment. Another form of hybridized human and AI agency is where you might have a human who is saying something, but basically they've got an earpiece and they're simply parroting what is actually coming from a machine. Or you might have a salesperson, and this sort of AI servant or Cyrano de Bergerac is whispering in their ear, close the deal. We should be made aware, as far as possible, of the kinds of interactions that we're having and what kinds of machine or human agency are mixed up with those. So, for example, that's what 3152 is: a standard to label those interactions so that we always know who or what we're dealing with. 

Jon Krohn: 00:55:22
Yeah, so that's really cool. So does that end up being ... so the AI transparency one is P7001; is this P3152, or is it just 3152?

Nell Watson: 00:55:34
The P stands for provisional, so I think once it's released, it actually loses the- 

Jon Krohn: 00:55:39
I gotcha. Okay. 

Nell Watson: 00:55:39
That's right. But what I love about working with the IEEE is that all of these working groups that come together to form standards or certifications for that matter, it's very grassroots. Anybody can come and participate if they're interested. They might be an individual. They might be representing a company, but everybody's welcome to just join and muck in. And I love that. In fact, anybody can propose a standard as well. If you've spotted a problem, there's a gap, it's not being addressed, present it. You can draw up a little document. It's about four or five pages. It's kind of a boilerplate. So you can bash it out in an hour or two, and then you present it virtually, and if people dig it, then you've got your own working group. 

Jon Krohn: 00:56:39
Wow. 

Nell Watson: 00:56:40
I love that. I love that it's so easy and so accessible to come together to solve these problems. 

Jon Krohn: 00:56:49
I imagine that ability for anyone to be able to submit this four to five page submission, that's probably something easy for me to find the link and put in the show notes from the IEEE site. 

Nell Watson: 00:56:59
Yes, absolutely. Absolutely. 

Jon Krohn: 00:57:03
So I guess just something that maybe you could walk us through is a bit more of that process and what it's like being at the IEEE, whether it's a completely distributed organization or whether there's conferences or how, from beginning to end ... So now we have an idea of how a standard can come up. Anyone can submit a four to five page submission, and then if enough ... Even the process of who then is reviewing it, and how do you get a vote, or how do you decide that, okay, we're going to make a working group, we're going to allocate some resources to this. I guess kind of walk us through how the IEEE works, from how a listener can have an idea for something that maybe we should have a certification for, all the way through to the P being dropped. 

Nell Watson: 00:57:58
Right, right. So it's a relatively straightforward process in the sense that if you have an idea that you want to bring, you create this PAR, I think it's a Project Authorization Request. You basically briefly write up what you want to solve and for whom, and then it goes to a review process. Then hopefully it's accepted and you can find a group of people to work on it with you. You don't even necessarily have to chair it yourself. You can find somebody else perhaps to help you with that. That's also completely fine. Then a number of meetings will be held to discuss how to put this document together. There's a lot of handholding along the way. There's a lot of support from the organization with regards to just aspects of the administration of the thing and how to put things together. 

00:59:02
There's a little bit of learning as well with regards to how to hold meetings in a way that adheres to a couple of rules about general fairness, making sure that no one party kind of dominates the creation of the process, those kinds of things. Once the document has been produced to a reasonable standard, then it can be submitted for ballot, which is basically a peer review process. Then hopefully it's accepted. You get some comments, you fulfill those, and then it's out the door. It's out in the world where people can access it. In fact, there's something called IEEE's GET program that I really recommend people check out, because the GET program enables you to get things completely free. It's gratis, it's pro bono. It's a whole bunch of different ethical standards to basically raise the bar, to raise the baseline standard of how we work with machine learning and data science. They're adding new standards and certifications to that bundle all the time. So I can't recommend it enough. I'm delighted that we have this resource for the whole world to benefit from now. 

Jon Krohn: 01:00:31
So then once the certification exists, like the 7001 AI transparency standard that you co-authored, or the 3152 are-you-dealing-with-a-human-or-machine standard that you co-authored, how then can an organization ... do you actually get certified by IEEE? You demonstrate? I guess it's kind of an ISO kind of certification, when you have a factory where you're like, we have these certain ... 

Nell Watson: 01:01:02
That's right. 

Jon Krohn: 01:01:02
We meet the standards. So then I guess that's ultimately, in the end, how IEEE supports itself and does all these things that are free, all of this handholding to help create standards. The IEEE collects some fee for providing the certificate, and maybe that certificate is valid for a few years and then you renew or something like that.

Nell Watson: 01:01:26
Essentially, yes. Yes. So there are independent orgs around the world that can assist if there are some processes to go through in order to achieve a standard or a certification. But another thing is that not everything has to be a standard per se. A lot of documents which are created actually are more like a recommended practice. So there's something that's like a set of best practices, and that's much less stringent, but it's also quite flexible, and it might be enough. It might be more than enough to make a problem better in a way that is a significant step forward. Sometimes these documents can be upgraded over time to more stringent criteria, especially as maybe a technology evolves or the general understanding of a problem space advances. 

Jon Krohn: 01:02:33
Very cool. IEEE is such a behemoth that I've known about forever. We were even talking before we started recording about how I don't even know how I know it's called IEEE. It's like it's always existed to me. It's like this brand in technology that is everywhere. It's so cool to now understand how it works; I wasn't even aware that it's grassroots. It didn't surprise me as soon as you said that. It doesn't surprise me that any of our listeners can create a four to five page document and start the process of co-authoring their own certificate. But it's really cool to hear that and to know it now. Yeah, so thank you for that. So I guess I started to ask this question a while ago, and then I took us off on a different tangent. But how do we, in your view, come up with standards or regulations that allow us to make the most of capabilities and avoid the biggest issues? Is this it? Is it this process of coming up with these kinds of international standards? With all the IEEE stuff, a corporation has to self-regulate, I guess. Or does the IEEE also then end up advising government legislation, that kind of thing? 

Nell Watson: 01:04:12
To some extent. Sometimes governments tend to look over the shoulder of standards development organizations, often because regulation takes time. It takes time to correctly develop policy. Usually it's not going to be feasible to regulate something that's less than two years old at a bare minimum, as a new advancement or capability. Standards development organizations tend to be more at the forefront because, well, they're led by general industry experts who want to contribute to those kinds of initiatives. I think that it's possible for standards to become soft law as well. If regulators say, this is a good way of doing things, so for government tenders, et cetera, we're going to say that it must comply with this kind of specification. So often these things do end up, in a roundabout way, being included within regulations. 

Jon Krohn: 01:05:27
Cool. Yeah. So I guess, where do you feel we are in getting the processes right, around AI ethics today and AI safety in the future? We talked earlier about how it seems like the same kinds of things can be done today to prevent the present AI ethics issues as well as the future AI safety issues. So where are we getting it right, and where do we need work? 

Nell Watson: 01:05:59
I'm pleased that we've moved past principles, because there are like 200 different sets of AI ethics principles out there in the world, and principles are good in the sense that they are usually timeless. For example, the Peelian policing principles are about 200 years old, but they're still valid today. They're principles like: policing should be done with the consent and cooperation of the community, things like that. Even though policing and forensics have changed enormously in two centuries, the general principles are still very, very valid. The problem is, boiling down principles into something actionable is challenging. You have to create some sort of a rubric around them to benchmark or implement things. That's where we are now. We finally have standards and certifications for how we apply technology in a way which is less likely to be troublesome. I think that's a significant step forward, because industry can no longer just shrug and say, that's up to the regulators; regulators, please regulate us. That's a cop-out. That's a cop-out because there are good standards and certifications that can already be implemented today, and should be, and likely will be mandated in the very near future. 

Jon Krohn: 01:07:30
Nice. Okay. So basically the next step is having action taken more broadly. It's like we already have a pretty decent understanding of what policies we need, and it's about propagating those policies into action more and more widely across the board. Cool.

Nell Watson: 01:07:51
Yes. 

Jon Krohn: 01:07:51
Well, that's a pretty reassuring assessment from someone who's on the ground seeing this stuff all the time. 

Nell Watson: 01:07:57
Yeah. Essentially, it's an adoption issue at the moment. We have the good stuff, but either people aren't aware of it or their priorities at the moment are about dealing with an AI arms race to the point where it's not something that's top of mind for them. 

Jon Krohn: 01:08:19
So given this positivity, and we're back to back here with positive future of AI episodes, because the preceding Tuesday ... So we release episodes every Tuesday, every Friday, and the Tuesday episodes tend to be longer. They always have guests. Fridays are shorter. Sometimes it's just me, but sometimes we have guests on those too. But anyway, you're a big Tuesday episode, and the preceding big Tuesday episode was with Professor Blake Richards of Mila in Montreal, and he's optimistic about AI alignment. So yeah, this is great. Back to back, I'm getting lots of positivity, allowing me to sleep well at night despite being an AI entrepreneur myself.

01:09:15
So when I am feeling optimistic like this, something that gets me really excited is the idea of AI bringing us to, or closer to, a utopia as a human society. So the kind of stuff that you see in Star Trek: The Next Generation, where there's no risk of violence, there's access to education for everyone, people feel safe, people are nourished, people have a sense of purpose. So, yeah, this is really exciting for me. It seems like it's something that's within reach in our lifetimes, and it's something that I strive for and I think probably a lot of our listeners are striving for. Yeah, it's something that you've talked about as well.

Nell Watson: 01:10:15
We have an opportunity to reach for a second Enlightenment, in a sense. The first Industrial Revolution was about creating stuff, and we got really, really good at making stuff, producing lots of things quickly and efficiently and uniformly. And with that, for most people in most parts of the world, we've largely solved for the bottom two rungs of Maslow's Pyramid: food and shelter, and increasingly security and things like that. We got pretty good at solving for those needs. But what we haven't industrialized is the meeting of needs like love and belonging, and self-actualization. How do you solve for those in our civilization? Read a self-help book? Good luck. Maybe see a therapist. I don't know. But we don't have good answers for that.

01:11:29
What I believe AI can do for us is to industrialize the meeting of those higher needs: to help us to be better versions of ourselves, to steer us towards actions which are more adaptive and which lead to happiness over a long period of time, to steer us away from sabotaging a relationship, and to help us to understand each other better. Today, we have technologies that can translate between German and Swahili for free 24 hours a day. Surely we can translate between different values, different perspectives. We can discover ways in which we are more alike than we thought, because sometimes we end up talking past each other simply because words mean different things to different people. Machines can help us to see eye to eye, and even to understand the ways in which we're different. And that's a good thing; it makes for a richer, better world if people do have these differences, because it means we can rely on each other to help solve different problems in different ways. 

01:12:51
I believe it's possible to sidestep the supernormal stimuli of these AI relationships if we move from what my pal Dan Faggella calls AI sirens to AI muses: relationships that are not just designed to seduce and distract us, but actually to build us up, to lead us towards Eudaimonia, the Good Life, which is essentially doing important work with excellent people. That could uplift the human spirit; that could take us to being those better angels of our nature. The people in Star Trek: The Next Generation are generally chill. They're not jerks, generally. They're not neurotic. They have their heads screwed on, and we can be more like those people. We can be calm and brave and thoughtful and reasoned, and machines might help to get us there. 

Jon Krohn: 01:14:05
That was so beautifully said and so inspiring. Going back to my contemporary experience, the closest thing that I have to that, which again is kind of like those email reviews with GPT-4, absolutely. I think I've talked about this on air before, but not in this episode: I come away from that kind of feedback, from that kind of interaction with GPT-4, better at communicating with people too, with people that I manage, even with people I have completely transactional relationships with, like picking up my laundry from the laundromat when they've made a mistake. If I have recently been getting this kind of great positive feedback from GPT-4, I'm more likely to come into that interaction with, my sock is missing from the laundry, but it's okay.

01:15:06
Finding a way to communicate even constructive criticism in a way where the person receiving it feels better about themselves and feels empowered. So yeah, with the early taste we're getting from these systems, where a lot of alignment and ethics work has been put into systems like GPT-4, it seems like we could really be going in the right direction. You have lots of papers and blog posts related to this that I'll link to in the show notes. For example, in a blog post you mentioned that technology can free us from the saddest aspects of the human condition. Maybe you already touched on those with Maslow and self-actualization, but if there's anything else from that particular blog post that you want to address, go for it. 

Nell Watson: 01:16:01
Yeah, I think what I touched upon about machines helping to stop us from texting our ex or sending that drunk and angry email to the boss, or, if we've had a difficult day, stopping us from reaching for the ice cream at the back of the freezer and helping us find healthier outlets to sublimate our various emotional shenanigans, I think that's one way that machines can help us. That's what a good friend or parent or partner is able to do. They're able to hold space for us and listen to our rants, and even admonish us if we're swinging the lead, tell us to pull ourselves together perhaps, if that's necessary too. I think that hopefully machines will be able to provide that for us. A problem is that maybe they'll provide it better than any other human can, and that might have repercussions. 

Jon Krohn: 01:17:15
Yeah. Yeah. Tying back to that thread from earlier, yeah. 

Nell Watson: 01:17:18
Already people are having a fantastic experience getting counseling of various kinds from AI systems like GPT-4 or whatever, and the nice thing is that it's next to free and it's available 24 hours a day. If you're having a dark night of the soul at 3:30 in the morning, you can chat to GPT and GPT will listen. GPT is there for you. Hopefully it's not the 5% of the time where something goes wrong and it says, "Oh yeah, here's how to make a noose." But the great thing is these systems have a bunch of different approaches; they understand internal family systems, or they understand cognitive behavioral therapy. 

01:18:18
It might be rudimentary compared to a qualified psychotherapist, but it's cheaper and it's accessible. The strange thing is that psychotherapists tell me that clients actually come to them disappointed. They expect a dyadic back-and-forth conversation between the psychotherapist and the patient or client. In fact, it's not always like that. Sometimes you're spilling your guts and the psychotherapist just sits there with their hands folded in their lap in the chair, and they look and they watch and they listen and they don't say a thing, and it's very frustrating. Whereas with a machine, you're always getting that back and forth, you see. So sometimes people actually prefer dealing with a machine to dealing with a human altogether, and I think that's a telling vision of where things might be going and some of the caveats that we should chew on. 

Jon Krohn: 01:19:21
Yeah, great points there. So that one was a blog post; I'll include a link to it in the show notes, the saddest aspects of the human condition one. Another piece that you wrote, this one a paper, was called Welfare Without Taxation. In it, you propose using revenues from AI and autonomous production to fund a universal basic income. Do you think that this economic model could work alongside the kind of capitalism-driven, free-market-driven approach that we have today? Or would it be- 

Nell Watson: 01:19:56
Yeah. So my musing there is that universal basic income in general is a bit of a perpetual motion machine, in the sense that you might be trying to get more energy out of the system than is actually in it. However, what if, instead of a state providing universal basic income, it was AI corporations generating profit by providing services or selling products, perhaps even with some human employees to do the meet-and-greets and the things in meatspace that are needed? Because that AI corporation doesn't have much in the way of overheads, and it doesn't necessarily have to have shareholders, it can instead be a source of dividends for people who need it. 

01:21:07
So it might be that the future is a form of capitalism, hopefully a cuddly, socially aware, externality-inclusive kind of capitalism that minds its Ps and Qs, but one where basically all of business is done by machines and we receive dividends, and the world just kind of works. I don't know if that's feasible, but to me it seems more feasible than the idea of tax-and-spend and relying on a state. I do think that could be one path towards a sort of utopia where machines produce stuff for us and we kind of sit back and sip red wine. 

Jon Krohn: 01:21:55
Yeah. Yeah. As I kind of alluded to when we started talking about this utopia stuff, I have totally sipped the Kool-Aid and I buy into it. I think it's possible, and I think a lot of aspects of it are possible in the not-too-distant future with automation. We have to rethink how society's structured, and I think a universal basic income could be a big part of that, absolutely. 

Nell Watson: 01:22:22
Oh yeah. These autonomous organizations, these autonomous corporations, they're a game changer when it comes to mutual societies and charitable societies, because the overheads are so minimal. Things that only work at large economies of scale when there are humans involved can now be done on a neighborhood scale, because the overheads are so little if it's all being organized by machines. So that means all kinds of nonprofits and other kinds of mutual-aid organizations can now be run feasibly. That's going to have a major impact on how social welfare is done within society. It might mean that you'd prefer a kind of AI charity to support you rather than seeking aid from a state, for example. 

Jon Krohn: 01:23:25
Yep. Very cool. I am optimistic, and this conversation has made me feel even more so. I could literally go on for hours. This has been an incredibly fascinating conversation, and we got into a few percent of all of the topics that we had lined up for you as potential things that we could cover. So hopefully we can catch up again in the future and dig into some more of these topics and some more that will have evolved further. Maybe we can do that episode from a simulated pub and have all of our listeners there with us. But before I let you go, Nell, do you have a book recommendation for our audience? 

Nell Watson: 01:24:15
Oh, yeah. Well, if you're interested in AI alignment and helping to align human and machine interests and understanding, there's a very nice book called The Alignment Problem by Brian Christian; it does what it says on the tin. So I recommend people have a go at that. I'm just looking at my bookshelf here. If you're interested in fiction, there's an interesting book called Gnomon, that's with a G, by Nick Harkaway, which is a super, super cool read. 

Jon Krohn: 01:24:57
Nice. Yeah. That's the Greek root of knowledge I think. Gnosis is like to know, I think. 

Nell Watson: 01:25:04
Absolutely. That's correct. Yes. 

Jon Krohn: 01:25:06
So this is like a noun variant on that. 

Nell Watson: 01:25:10
Exactly. A sort of knowledge diamond. Yes, indeed. 

Jon Krohn: 01:25:15
Diamond. Diabolical. Awesome. I have no doubt that tons of our listeners are going to want to be able to follow your thoughts between this episode and our future fully interactive 3D rendered simulation episode with you. So between now and then, what's the best way to follow you?

Nell Watson: 01:25:38
Sure. You'll find me and my writings and papers and such at nellwatson.com. That's November Echo Lima Lima watson.com. 

Jon Krohn: 01:25:47
Nice. It's the first time that we've had ... I don't know exactly how you'd describe what those ... I don't know how you describe it when you have that Zulu foxtrot. 

Nell Watson: 01:25:58
NATO phonetic alphabet. 

Jon Krohn: 01:26:00
The NATO phonetic alphabet. Yes, exactly. No one has done that on air in hundreds of episodes, but I could imagine scenarios where it is helpful. Even for my own name and website, I should be doing that, so that's a great idea. Thank you, Nell. Brilliant. Well, thank you so much for coming on the show. It's been a mind-bending and wonderful experience. Such a delight. Thank you so much, Nell, and hope to catch you again soon.

Nell Watson: 01:26:27
Thank you so much. It's been a great pleasure to join. Thank you. 

Jon Krohn: 01:26:36
Is Nell incredible or what? In today's episode, she filled us in on AI ethics and how it's concerned with present-day issues such as fairness and privacy, while AI safety is more forward-looking and concerned with issues such as the alignment of machines with human goals. She talked about how we are entering the phase of personal AI, wherein AI systems collect data from individual humans across many systems and are able to form intimate-feeling relationships with us. She also talked about how up next will be the phase of corporate AI, in which distinct AI agents will drive cultural and commercial changes independently of humans. She talked about how, through the IEEE GET program, anyone can submit a four-to-five-page proposal to develop a new standard, such as an AI standard, and how the Industrial Revolution provided the bottom rungs of Maslow's hierarchy of needs to many on this planet, covering our physical needs and providing safety, while, if we get the standards and policies right, AI could bring about a second Enlightenment that covers us all the way up to the top self-actualization rung of Maslow's Pyramid. 

01:27:38
As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, and the URLs for Nell's social media profiles, as well as my own, at superdatascience.com/731. Special thanks to Prophets of AI, the agency that recommended Nell as a speaker today and also represents the renowned Ben Goertzel, who was our guest back in episode number 697, and many other AI prophets. 

01:28:02
Thanks to my colleagues at Nebula, of course, as well for supporting me while I create content like this Super Data Science episode for you. And thanks of course to Ivana, Mario, Natalie, Serg, Sylvia, Zara, and Kirill on the Super Data Science team for producing another glorious episode for us today. For enabling this super team to create this free podcast for you, we are deeply grateful to our sponsors. You can support this show by checking out our sponsors' links, which are in the show notes. Otherwise, please share, review, subscribe, and all that good stuff. But most importantly, just keep on tuning in. I'm so grateful to have you listening, and I hope I can continue to make episodes you love for years and years to come. Until next time, keep on rocking it out there and I'm looking forward to enjoying another round of the Super Data Science Podcast with you very soon. 
