
125 minutes

Data Science, Artificial Intelligence

SDS 565: AGI: The Apocalypse Machine

Podcast Guest: Jeremie Harris

Tuesday Apr 12, 2022

Subscribe on Website, Apple Podcasts, Spotify, Stitcher Radio or TuneIn


Mercurius co-founder Jeremie Harris joins Jon Krohn for a lengthy discussion that explores the potential dangers that may arise as Artificial General Intelligence progresses at faster rates than ever before.


Thanks to our Sponsors






About Jeremie Harris
Jeremie co-founded SharpestMinds, a data science mentorship marketplace where you can learn one-on-one from professional machine learning engineers and data scientists for free until you get a job; he has since left the company. Today, he works on AI safety at Mercurius, which he recently co-founded. He's briefed senior political and policy leaders around the world on long-term risks from AI, including senior members of the Canadian Cabinet, the U.K. Cabinet Office, and the U.S. Departments of State, Homeland Security and Defense.

Overview
What happens when AI systems become as intelligent as humans? That's the topic of the hour in this week's episode, where Jeremie Harris and Jon Krohn discuss how AI could potentially destroy the world.

After covering the differences between the Canadian and American VC ecosystems, Jeremie and Jon dove straight into Artificial General Intelligence (AGI), largely defined as a machine's ability to understand or learn any intellectual task that a human being can. Jeremie pinpoints GPT-3, in particular, as a turning point in the progression toward AGI, mainly because it displays a wide range of capabilities and marks the first time we've seen general reasoning from a single system.

Through his new AI safety startup, Mercurius, Jeremie has briefed senior political and policy leaders around the world on long-term risks from AI, including senior members of the U.K. Cabinet Office, the Canadian Cabinet, as well as the U.S. Departments of State, Homeland Security and Defense. Alongside his co-founder and brother, Ed, he primarily asks where AI is headed when systems become 10x bigger every year. At what point does it start to pose a risk to society? And when do these systems start acting human-like? Jeremie thinks it's very likely that AGI will be developed by 2030 or 2035.

Ultimately, Jeremie says that the main problem is two-fold: technical AI alignment and policy. First, how can you make sure that an agent more intelligent than humans doesn't optimize for objectives that could easily prove destructive? As for the policy problem, Jeremie suggests we approach it by ensuring global coordination, especially with countries that don't care as much about safety. "How do we ensure that they don't develop AI in an unsafe context?" he highlights.

In the long term, the world may come to treat AI as belonging to a risk class similar to nuclear and biological weapons. But how do we get there? At Mercurius, Jeremie and his team have found that effective communication begins by connecting these potential risks to issues that are unfolding today.

Despite all of this doom and gloom, Jeremie remains hopeful about the future of AI. He says that, if done correctly, AGI may be the only approach with a robust chance of stopping potential catastrophes, and could ultimately become a panacea for humankind and life on the planet.

If you find AI safety a compelling topic, Jeremie suggests exploring it in greater depth by reading books like "The Alignment Problem" and "Superintelligence," and by reviewing some of the more skeptical literature as well. For more on Jeremie's day-to-day work at Mercurius (hint: it involves a lot more cold emails than you'd think), and to hear him shed light on the value of a data science mentor, tune in to the episode.

In this episode you will learn:
  • Why mentorship is crucial for data science career development [15:45]
  • Canadian vs American start-up ecosystems [24:18]
  • What is Artificial General Intelligence (AGI)? [38:50]
  • How Artificial Superintelligence could destroy the world [1:04:00]
  • How AGI could prove to be a panacea for humankind and life on the planet. [1:27:31]
  • How to become an AI safety expert [1:30:07]
  • Jeremie's day-to-day work life at Mercurius [1:35:39] 

Items mentioned in this podcast: 
 
Jon Krohn: 00:00:00
This is episode number 565 with Jeremie Harris, Co-founder of Mercurius. This episode is brought to you by Neptune Labs, the metadata store for MLOps and by Einblick.ai, the collaborative way to explore data. 

Jon Krohn: 00:00:18
Welcome to the SuperDataScience podcast, the most listened-to podcast in the data science industry. Each week, we bring you inspiring people and ideas to help you build a successful career in data science. I'm your host, Jon Krohn. Thanks for joining me today. And now let's make the complex simple. 

Jon Krohn: 00:00:50
Welcome back to the SuperDataScience podcast. Today, we've got Jeremie Harris on the show. He's one of the sharpest and most interesting people I've ever met. And I do not say that lightly. His thoughts and work on AI could dramatically alter your perspective on the field of data science and the bewildering, perhaps even frightening, impact you and AI could make together on the world. Jeremie is Co-founder of Mercurius, an AI safety company. He's briefed senior political and policy leaders around the world on long-term risks from AI, including senior members of the UK Cabinet Office, the Canadian Cabinet, as well as the US Departments of State, Homeland Security and Defense. He's host of the excellent Towards Data Science podcast. He previously co-founded SharpestMinds, a YCombinator-backed mentorship marketplace for data scientists. He dropped out of his quantum mechanics PhD to found SharpestMinds, but he does hold a master's in biological physics from the University of Toronto. 

Jon Krohn: 00:01:49
Today's episode is deep and intense, but as usual, it does still have a lot of laughs and it should appeal broadly, no matter whether you're a technical data science expert already or not. In this episode, Jeremie details what artificial general intelligence is. That's AGI for short. He talks about how the development of AGI could happen in our lifetime and could present an existential risk to humans, perhaps even to all life on the planet. He also talks about how, alternatively, if engineered properly, AGI could herald a moment called the singularity that brings with it a level of prosperity that is not even imaginable today. He talks about what it takes to become an AI safety expert yourself in order to help align AGI with benevolent human goals. He talks about his forthcoming book on quantum mechanics and he lets us know why, in his opinion, almost nobody should do a PhD. All right, you ready for this epic, mind-blowing episode? Let's go. 

Jon Krohn: 00:02:54
Jeremie Harris, welcome to the SuperDataScience podcast. It's awesome to have you here. Where in the world are you calling in from? 

Jeremie Harris: 00:03:02
Well, I'm guessing it's Ottawa. Yeah. So Ottawa, Canada. Sunny Ottawa, Canada, slightly cold out. We were just talking about this, but Ottawa is actually warmer than New York City right now. So I feel like I have one rare one on you here today. 

Jon Krohn: 00:03:14
It is an unusually cold and snowy day here after a couple of brilliantly warm days. Jeremie, why did you say that you think you're in Ottawa?

Jeremie Harris: 00:03:23
I'm just tired, confused. I think we all are at this point. 

Jon Krohn: 00:03:27
Experimenting with hallucinogens again. 

Jeremie Harris: 00:03:29
It's either too much caffeine or not enough caffeine. And I find it when you're in that gray zone, it's easy to fall off the rails. 

Jon Krohn: 00:03:39
Got you. So the way that we know each other is that I have been aware of you for a long time. So I came across a Towards Data Science podcast episode that you host, and I was blown away by your eloquence, by your depth of knowledge on a range of topics, and I made a note. I was like, "I've got to find a way to get introduced to Jeremie Harris and get him on the podcast. The audience would love him." And then fortuitously, in conversation with Ken Jee, whose episode came out about a month ago, episode number 555. God, he got a good episode number didn't he? 

Jeremie Harris: 00:04:19
He did. We got to talk about that by the way, I have a clause in my contract. 

Jon Krohn: 00:04:27
You're going to have to wait for 666 next year. 

Jeremie Harris: 00:04:30
Oh God. 

Jon Krohn: 00:04:34
Audience members will figure out why later. It'll be self-evident. So Ken organically brought you up as somebody that would be great to have as a guest on the show because he has his own podcast, Ken's Nearest Neighbors, and you were a guest on Ken's Nearest Neighbors. And yeah, he highly recommended you as somebody that I should definitely talk to. And I was like, "Oh wow, great." But then I just felt like I could reach out to you myself and I did and you responded right away. And now we're here recording less than a week later. 

Jeremie Harris: 00:05:03
I'm so glad you did. I mean, Ken's amazing. You're amazing. I'm really looking forward to this chat. There's so many, I don't know. We had our pre-chat earlier and the topics just seemed so cool. 

Jon Krohn: 00:05:13
It was ridiculous. It was the longest pre-chat. So for listeners, you probably aren't aware of this. It's probably not surprising that before the episode, I chat a bit with the guest before we start recording. So, I find out that it's warmer in Ottawa and important details like that. 

Jeremie Harris: 00:05:29
So we can script it into the intro, which is very important. 

Jon Krohn: 00:05:32
Right. And yeah, I mean, we come up with a rough conversation structure just to make sure we're aligned and that I'm covering all the topics that are interesting in the guest's life right now and in so doing, we blocked off... So typically I block off 90 minutes on somebody's calendar. I knew that I was going to have a lot to talk to Jeremie about, so we blocked off two hours, but it took us 90 minutes to press the record button because the conversation was so gripping. So hopefully we didn't waste it all. 

Jeremie Harris: 00:06:05
I'm sure we'll cash it out here. 

Jon Krohn: 00:06:08
No, I wouldn't let you talk. That's something I do with guests. Jeremie's really good because Jeremie's a podcast host so he knows not to be spoiling the good material. But with a lot of guests, they start explaining to me everything they're going to be saying on air. I'm like, "I'm just trying to get to the record button." 

Jeremie Harris: 00:06:28
And sometimes it's best when it's fresh or it's often best when it's fresh like that. So yeah, the next time it's like this they've rehearsed a bit and it's not quite, yeah. 

Jon Krohn: 00:06:37
That's when you get the audible yawns from me. 

Jeremie Harris: 00:06:43
I'm glad to hear you don't edit those out. Authenticity is still important. 

Jon Krohn: 00:06:48
I don't think that's ever happened on air. I hope it's never happened on air. All right. So as we mentioned, you are the host of Towards Data Science. Towards Data Science has had some amazing guests over the years, Ben Goertzel, who is one of the people who define artificial general intelligence, this idea of an algorithm that could have all the learning capacities of an adult human. And you've also had Jeremy Howard, who's famous for creating the fast.ai library that makes it fast to create deep learning models in PyTorch. And yeah, you release episodes every week on Wednesdays. And that Towards Data Science name is well known probably to a lot of listeners because when you Google almost any data science topic, you find that a Towards Data Science blog post is top of the results. So yeah, so we don't need to talk too much about Towards Data Science, but I wanted to make sure that listeners were aware of it. If you love listening to Jeremie today, which I'm sure you will, you can get him every Wednesday. Yeah. Anything else about Towards Data Science? 

Jeremie Harris: 00:07:53
No, I think that's a fantastic pitch. I wish I could take credit for it by the way, but it's a brand that predates me quite a bit. Yeah, I've just been recording episodes for the last two years or so, and hope you guys enjoy listening to them too. 

Jon Krohn: 00:08:07
Well, but you're underselling it there because yes, the Towards Data Science brand from their Medium blog is not your brand, but the podcast and the podcast audience has always been yours. So that's a little bit that we can talk about is that you started it as the SharpestMinds podcast. 

Jeremie Harris: 00:08:25
Yeah, actually that's true. So we actually had this name that I still really like for it called Mind the Machines. I was like, "Ooh, Mind the Machines. That's a good name." So anyway, so we started with five episodes under that. They were all about helping people transition into data science and analytics roles, interviewing people in those fields, getting career advice, that sort of thing. And then Ludo, who's the Founder of Towards Data Science and who I'd known through the Toronto universe, came up to me. He's like, "Hey, why don't we just join these two together under the TDS brand?" And that was where we started to do it as a co-exercise, which we're really happy about. 

Jon Krohn: 00:09:02
Very cool. Well, it is a great podcast. It is a data science podcaster's podcast, if I dare say so myself as a data science podcaster. You do such a great job so I definitely recommend listeners check that out if you're looking for another podcast to listen to in the industry. So I mentioned that I thought it might have been called the SharpestMinds podcast before it became Towards Data Science. And I thought that because you co-founded a company called SharpestMinds five years ago, and I thought maybe the podcast had the same name. And so SharpestMinds offers a unique training model for data scientists and engineers. It was YCombinator backed so clearly other people thought that there were a lot of legs in this approach. What is this unique training approach, Jeremie? 

Jeremie Harris: 00:09:49
Yeah, it's a great question. There's this trend in EdTech in the last two years towards these things called income share agreements. So the idea here is you don't pay upfront for your education and you don't take out a loan. Instead, you work for free with a school or some educational program. And then in exchange, you repay them a percentage of your salary after you get hired. And so this idea is meant to align the incentives between the educational institution and the student. They don't get paid until you actually get hired. And if that's your end goal, then they're investing in that outcome. So we liked this model, but one of the problems with it, it suffers from the same problem that a lot of startup investing suffers from. So you place a lot of bets on a lot of different people, and that's really cool for you as an investor, but unfortunately means you stop caring about every individual student or every individual startup, right?
 
Jeremie Harris: 00:10:47
So if one fails, you're like, "Eh, whatever, I'll make it up in bulk." And we saw this happening with bootcamps. There were a lot of high-profile, embarrassing outcomes for some of these income share based bootcamps. And as we started to see the direction this was taking, we went, okay, there's got to be a better way to do this. Because income share agreements, it just sounds like such a good idea. Don't pay for your education until you actually get the thing your education's meant to tee you up for. And so we were wondering how do you do this in a way that doesn't suffer from this aggregation effect? And we landed on mentorship. So one-on-one mentorship where your mentor will get paid if you personally get hired and they won't get paid if you don't. 

Jeremie Harris: 00:11:26
And so it promotes a cool, shared struggle, intimacy thing where you're working one-on-one with somebody in the trenches and they know all your specific ins and outs. They're a professional data scientist, machine learning engineer, or data analyst, and it gets around another one of the really fundamental challenges with education in EdTech, which is that there are strong incentives that push those who can't do to teach. So if you go to a boot camp or something and you're watching the instructor yammer on about whatever they're talking about, in the back of your mind, you have to think if they were really genuinely this amazing at what they do, why would they not get paid three times the salary plus stock by working at Google or Facebook or Meta or whatever. 

Jeremie Harris: 00:12:11
And so this is an actual, I mean, it's just an economic reality and it's embarrassing and awkward to talk about, but that was another thing that had us thinking, okay, how do we find a way to motivate people who currently work at Facebook, who currently work at Netflix, to work with real aspiring data scientists and so on? And that was part of the idea. So give them an income share arrangement: obviously, if you're looking for a job, you can't afford that kind of talent right now. But if you bet on your future success through income share, that's something that unlocks that door. So that was the philosophy that brought all those pieces together. 
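
To make the mechanics concrete, here is a minimal sketch of how such an income share agreement could be modeled in Python. The specific terms used (a 10% share, a 24-month window, a $50,000 salary floor, a $12,000 cap) are hypothetical illustrations, not SharpestMinds' actual terms.

```python
# A minimal, illustrative model of an income share agreement (ISA).
# The terms below are hypothetical, not SharpestMinds' actual terms.

def isa_repayment(annual_salary: float,
                  share: float = 0.10,
                  months: int = 24,
                  salary_floor: float = 50_000,
                  cap: float = 12_000) -> float:
    """Total repaid under a simple ISA: nothing until the mentee is hired
    above the salary floor, then a fixed share of salary for a fixed
    number of months, capped at a maximum total."""
    if annual_salary < salary_floor:
        return 0.0  # not hired, or hired below the floor: the mentor earns nothing
    total = annual_salary * share * (months / 12)
    return min(total, cap)

print(isa_repayment(0))        # 0.0: no job, no payment
print(isa_repayment(60_000))   # 12,000.0: 10% of 60k for two years lands exactly at the cap
print(isa_repayment(90_000))   # 12,000.0: 18k before the cap, limited to 12k
```

The key design point, as described above, is that the mentor's payoff is zero unless the individual mentee succeeds, which is what keeps the incentives aligned one-on-one rather than in aggregate.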

Jon Krohn: 00:12:43
So cool. I love it. What was it like getting that off the ground? Was it easy? 

Jeremie Harris: 00:12:47
No. In a word, no. 

Jon Krohn: 00:12:50
Was founding your startup easy? Did you get rich quick? 

Jeremie Harris: 00:12:54
We had a tremendous start from the very beginning. It was great. There was so much demand. I wish that were the case. I mean, I wish that had happened or actually, I don't know that I wish that because we learned so much by getting the (beep) kicked out of us. Before we went in that direction, we tried so many ideas, many of which had nothing to do with EdTech at all. So we started, for example, this company called [Yazabi 00:13:20], which was after I dropped out of grad school and my brother left after finishing his PhD. So we were these two clueless physicists and we were like, "What can we do with these skills?" And we knew how to code so we were like, "Let's use that to do some shit." And decided to make the world's most elaborate and overdeveloped restaurant recommendation engine, which, if you're wondering about whether you should make a restaurant recommendation engine, a good tip is to not do that. 

Jeremie Harris: 00:13:46
It's a product that no one actually wants. We were thinking like, "Oh man, I keep having these long discussions with my girlfriend or whatever about where we should go for dinner. I bet everybody else has this problem." And they do, but our app was not the solution to that problem and it took us way too long to figure that out. So we went through a lot of cycles of embarrassing failure after embarrassing failure, and eventually learned to talk to users. And that's what brought us more and more into this space of AI because so many of our friends in grad school were looking for upskilling, were looking for jobs; physicists don't have the best job prospects innately. They need something more to get to the point where they're employable. And anyway, so that was what we ended up doing to get to EdTech and in that space. 

Jon Krohn: 00:14:33
So you're having a conversation with one of your early adopters of your restaurant platform and you're like, "What else do you need in here? What else do you need in your restaurant discovery journey?" They're like, "You know what? To eat at these restaurants, I'm going to need a high paying AI job." 

Jeremie Harris: 00:14:49
Yep. That's exactly how it happened. 

Jon Krohn: 00:14:54
99% of machine learning teams are doing awesome things at a reasonable scale with, say, about four people and two production machine learning models. But most of the industry best practices that we hear about are from a small handful of companies operating models at hyperscale. The folks over at neptune.ai care about the 99%, and so they are changing the status quo by sharing insights, tool stacks and real-life stories from practitioners doing ML and MLOps at a reasonable scale. Neptune have even built a flexible tool for experiment tracking and model registry that will fit your workflow at no scale, reasonable scale, and beyond. To learn more, check them out at neptune.ai. That's neptune.ai. 

Jon Krohn: 00:15:43
Cool. Well, is mentorship crucial do you think? I mean, I'm leading the witness here with this question, because I have a feeling that I know what the answer is, but you can elaborate. So is mentorship crucial for career growth or career entry into data science? Or a better way of phrasing that would be, is that more so the case in data science or software engineering than in other kinds of fields? 

Jeremie Harris: 00:16:13
That's a really good question. I think honestly the answer is no, mentorship isn't crucial, period. I know that's not the full question you were asking, but some people just have the personality where, the way they motivate themselves, they can look up blog posts and do personal projects without any kind of oversight, power through all the obstacles in their way, and that's great. If that's you, you have the right to recognize that and you do not necessarily need a mentor. You might not even need a boot camp. So different solutions work for different people for sure. I think one of the things that makes data science so conducive to mentorship is how fast it moves. So imagine even if you did a PhD in machine learning or a master's in data science, which is a really common thing nowadays, and you graduate from this program. By the time you graduate, there will be libraries in use that did not exist when you started the program. 

Jeremie Harris: 00:17:08
And quite often, these libraries end up being really important. All of a sudden, Streamlit, everybody's using Streamlit. This becomes just the bar and so you need to keep up with this stuff. You need to know what's relevant, what isn't, and in this giant space of so many different tools that are constantly evolving and disappearing, you need to figure out which ones you should actually focus on. And so I think having the advice of somebody who's actually in the industry to help you navigate that landscape is really important in a way that's not in... If you think of a field like nursing or something, the way that nursing works hasn't evolved quite as fast as software development or data science. So for those more technical fields, I think you really do benefit a lot, or disproportionately, from mentorship. Not to say that it's not valuable in all contexts, but I would say the scope of it is much bigger. The scope of the value you can create through mentorship is much bigger in a space where things move quickly, where you can ping people who are actually living and breathing this stuff professionally. 

Jon Krohn: 00:18:08
Nice. Yeah. It makes sense to me and yeah, I certainly am the kind of person who is motivated by a deadline or by knowing that a call is coming up. I too easily procrastinate, thinking I don't have the energy for that and I'm saving my energy to do it perfectly. But if I know that I'm going to have to be interviewing Jeremie Harris at 1:00 PM on a Wednesday, then I've got to be ready for it or almost ready. 

Jeremie Harris: 00:18:35
Yeah, no, that's true. But the accountability piece is really important as well. I think this is one of the biggest reasons why bootcamps are even a thing. If you think about what you learn in a bootcamp, it's nothing that you couldn't learn through blog posts most of the time. There are sequences of blog posts that you could follow or you could do something on DataCamp or Dataquest. There are all these great asynchronous things. The reason you do a bootcamp quite often is for the accountability. And to me, at least for my personality type, a mentor is like the ultimate form of accountability because they know what your specific goals are. They can tell you, "Hey look, at the pace you're going right now, your goals are just unrealistic. You cannot continue at this pace and expect to get where you need to go." Whereas getting that attention and customization in a bootcamp context is a little bit tougher. 

Jon Krohn: 00:19:24
Yeah. That's absolutely right. You can't get that from, "Okay, I've got this series of blog posts and at the end of it, it says that I'm going to be an AI engineer." That is vastly different from an actual AI engineer's perspective. Somebody could have just titled the GitHub repo that gave you all the blog posts that way; it doesn't necessarily correspond to reality. So even if you were timing it correctly and you were like, "Okay, there's 52 articles that I need to read and then I'll be an AI engineer. So I'll do one a week and at the end of that, I'll be an AI engineer." But that might not correspond to the reality in a way that a mentor can let you know about. 

Jeremie Harris: 00:20:03
Well, and there's also this trickle down phase. So every time you complete a bootcamp, you pay this tax at the end. They'll always tell you something like, "Oh, now is the beginning of your journey in a way. It's not the end." And you're like, "Wait, wait a minute. When I gave you $20,000 upfront to begin this thing, I distinctly recall the pitch's sounding a little different." 

Jon Krohn: 00:20:25
That's exactly right. 

Jeremie Harris: 00:20:26
Undergrad is a little bit like this too, but that gap is a function of the fact that the education lags reality. And so, the more you talk to people who are actually doing this stuff and building real systems in the real world, the smaller that gap becomes. And so when you're trying to make that last leg of the journey, often that's where people get stuck, they spin their wheels because they're like, "I have data science skills that were relevant 18 months ago. Nobody wants to talk to me. What's the problem?" And anyway, that's part of the challenge too. 

Jon Krohn: 00:21:00
Yeah. That's a really good point. It is interesting how many of these full-time bootcamps are out there that are eight weeks, 12 weeks. And the idea being that when you get to the end, you're ready for a data science career, even if you didn't have any programming or math skills before. And yeah, the reality is that learning the appropriate programming and math skills takes time. As you say, even if you did an undergrad in data science where, for four years, except for maybe your summers, you were living and breathing the requisite linear algebra, probability theory, calculus, algorithms and data structures, and Python coding skills, if you spent four years doing that, you'd graduate from the undergrad and most people would still have a lot of learning on the job to do. 

Jeremie Harris: 00:21:47
Yeah. And it's tough. Right? So this is one challenge that we ran into at SharpestMinds that I think the whole EdTech ecosystem suffers from, but nobody wants to admit it. It's this issue that if a student fails to succeed, who's to blame? Right? So if a student doesn't get hired, they go through your program and it doesn't work for them, whose fault is that? And this is not as straightforward a question as it might seem. So here's the thing that's going to really suck to hear, but in some ways, because I'm not officially at SharpestMinds anymore, I can say this, although SharpestMinds is actually very open about this sort of thing. We had a culture of talking about this openly. Some students don't actually put in the work. And so when you look at graduation rates from different schools, bootcamps or whatever, and these people aren't actually getting hired, you have to factor that in. It's inescapable that the school and the student have to work together, it's a partnership, to succeed. And placing all the burden on the school is unfair and placing all the burden on the student is unfair. But the ambiguity between who owns what in that ecosystem, that's where profiteering happens. That's where people sell expensive courses that don't get you anywhere. It's a moral ambiguity and I think it's substantive. 

Jon Krohn: 00:23:06
Yeah. Because looking at an individual data point, a particular student in a particular program, there is no way to know whether it was the student or the program that didn't provide the opportunity for this person to get a job in, say, data science. Perhaps it could be both. And so, yeah. So you're right. So that does leave a lot of room for profiteering. Well, I completely empathize with the situation that you're describing. It sounds like SharpestMinds came up with a really cool solution and it's probably reassuring to hear for anybody out there who has started their own startup that in the beginning, this wasn't necessarily the idea that you had and that it organically developed into this. 

Jon Krohn: 00:23:57
Now as a part of your startup growth, you participated in not one but two startup incubators. In 2016, 2017, you were in the Creative Destruction Lab. And then in 2018, you were admitted into what is the world's best known startup accelerator, YCombinator in Mountain View, California. So having experienced both of these, would you say that there is a big difference between the Canadian and the US startup ecosystems? And related to that, I guess, the country or ecosystem that you founded a startup in, as well as the particular accelerators, what advice would you give to your younger self if you could do it all over again? 

Jeremie Harris: 00:24:44
Oh man, there's a lot to say here. So I'll just start by saying the Canadian and American ecosystems are wildly different. The most obvious way this manifests is probably in the behavior of investors. So Canada has a population of very risk-averse investors who don't actually understand how value is created in startups at a high level. This isn't all of them. There are some great investors in Canada. I really should flag that. But the default situation in Canada, if you're raising money from Canadian VCs, is objectively worse than in the US. And this is something that became crystal clear when we transitioned from the Toronto startup ecosystem to the Silicon Valley startup ecosystem, as you mentioned, at YCombinator. Just to give you an example. So when you apply to YC, you do not get questions like how much money are you making right now? I mean, you might get these questions, but they're secondary. The main questions you get at YC are things like, what do you know about your users that nobody else knows? 

Jeremie Harris: 00:25:53
Why are you so passionate about this? How long have you guys known each other? What's the worst fight you've had? These are very foundational questions that speak to founder dynamics, relationships, mission focus. They do not speak to bottom line. That's a totally secondary thing. And it's something that we've not only absorbed through YCombinator, but when we started to do angel investing in YC startups after, you find yourself asking the same questions because they are obviously the right questions to ask. In Canada, the first question you will almost always get is what is your MRR? Your monthly recurring revenue. So there were investors in Canada, at the time we went through this was pre-inflation pre-market hysteria. People were like, "Yeah, I don't invest in companies that are making less than $10,000 of MRR." 

Jeremie Harris: 00:26:42
Now this is their way of basically removing risk from the equation. They're being more careful. The problem is though, when you take that mindset, you can ask that question, Sequoia can ask that question, Andreessen Horowitz can ask that, every single investor on the planet can ask how much money a startup is making. There is zero differentiation in terms of your ability to tell what's a great startup and what's a bad startup based on that metric. The entire value that you offer as a capital allocator, as a person who bets on outcomes as an investor, is your ability to do the intangible analysis. The stuff like: who are these founders? What do they want? Where do they get meaning from? Are they going to quit? What's their dynamic like? Are they going to break up? And so on. 

Jeremie Harris: 00:27:25
It does not come from saying like, how much money are you making? What's your MRR? I don't know, what's your growth rate been? This and that. These are good secondary questions to add context, but they don't speak to the core of what the business is. And so the best startup founders are ones who think in terms of the relationship between the founders and the mission of the company, and the best investors are the ones who ask questions about that sort of thing. And Canadian investors simply don't tend to do that, and we certainly saw that. I think the Creative Destruction Lab is a great program. I think it could be tweaked in really valuable ways to make it even better. But you certainly saw that mentality manifest with those investors, who were all asking the same questions. And one tell of the noobish Canadian-style investors: they'll try to tell you what to do with your company. 

Jeremie Harris: 00:28:14
So they'll tell you things like, "Oh, you should go after this customer or this ecosystem." The philosophy though, when you're investing in a company, is you don't know what to do with your money. That's why you are giving your money away. That's the point of being an investor. I've got a giant, Scrooge McDuck-like vault filled with cash and no idea where to deploy it. If I knew where to deploy it, if I knew what a good investment was, I would fricking put in that investment. I don't; I trust the startup founders to allocate that money in a way that turns it into more money better than I can. So why would I ever bother telling them what to do with their own company? This is insanity. So that reality is recognized in the Valley in a deep and visceral way, because that ecosystem is built on top of founders who build great companies and have that experience. They exit those companies, and then they start to reinvest knowing what they know about how to build companies. Because Canada doesn't have that same concentration of talent and experience, you tend to see more of the armchair investing stuff, and that's getting better. Toronto's starting to have really successful exits. We're seeing companies like Shopify, for example, infuse the whole ecosystem with a lot more wisdom and sharper investing practices. But I would say we still have a long way to go before we quite get to that point. 

Jon Krohn: 00:29:27
Really, really, really great points. I loved that entire segment. Thank you. I mean, that was so much valuable information for me and for listeners on starting a startup, because you took that question about, say, geographic differences and made it a much broader, much more useful answer about how to invest and how to evaluate your own startup. So thank you so much, Jeremie. That was awesome. 

Jon Krohn: 00:29:59
Einblick is a faster and more collaborative way to explore your data and build models. It was developed at MIT and shown to reduce time to insight by 30 to 50%. Einblick is based on a novel progressive engine, so no matter the data size, your analysis won't slow down. And Einblick's novel interface supports the seamless combination of no-code operations with Python code. This makes Einblick the go-to data science platform for the entire organization. Sign up for free today at einblick.ai. That's E-I-N-B-L-I-C-K.ai, E-I-N-B-L-I-C-K.ai. 

Jon Krohn: 00:30:41
And to add a stat there, and this might be a few years old, but to give people a sense of the size of the difference between the Canadian and the American ecosystems, and who knows, maybe Shopify has changed this in recent years, but the amount of venture capital that's available in the US is a hundred times larger than the amount of venture capital in Canada. And yes, the US is bigger, but from a population perspective, it's about 10 times bigger. So that's 10 times more venture capital money per capita. And actually I once said the same thing to somebody at an Oxford alumni event. And so this was an Oxford alumna and she decided to go back to Toronto afterward. I decided not to go back to Toronto. I decided to go to New York. And I was like, "Oh yeah, one of the reasons is because there's a lot more capital here." And I said what I just said, the thing about 10 times per capita, and she was like, "Well, I think you might need to adjust the number of people in [inaudible 00:31:44]." 

Jeremie Harris: 00:31:43
No, I want a number in per capita, per capita. 

Jon Krohn: 00:31:50
Per capita squared. 

Jeremie Harris: 00:31:52
I wanted people per capita ideally would be the... 

Jon Krohn: 00:31:57
That might even things up then. Oh, it would almost get worse is you have to be... After the inverse of geographical area, so we'll take per capita divided by land, by square miles of land. That's fair. 

Jeremie Harris: 00:32:17
I think we're coming up with a good formula here. But, by the way, I should mention too, there's one story, I think, that really kind of brought this to a head in my mind. It made me realize, "Oh shit, these are different universes". And do you mind if I just go off on this tangent very briefly? 

Jon Krohn: 00:32:32
Please. Yeah. 

Jeremie Harris: 00:32:35
Okay. Because this is the ultimate Canadian versus American startup picture. So, we had this investor in Canada. I will absolutely not mention names. I think just as a preface, everybody makes stupid mistakes investing. I have made a bunch of stupid mistakes in investing. My philosophy around it has changed and all that. And I strongly suspect that this person's will have as well. However, this is one of the kind of famous "10k MRR" investors in the Canadian ecosystem. Who's like, "Hey, come to me when you're making 10k MRR, $10,000 a month of recurring revenue". And we hadn't at the time that we got to know him and he sort of kept buzzing around. One very common thing investors will say is like, "Oh, let me know how I can be helpful. Let's go for coffee, let's grab coffee". They're basically trying to just keep the relationship going so the moment that you get any kind of traction, they're there to tell you, "Oh, but we have a relationship that goes way back, come on, let me invest this and that". So it's this very kind of arm's-length, weird date kind of thing. Anyway, it doesn't matter. 

Jon Krohn: 00:33:34
Right. 

Jeremie Harris: 00:33:35
He kept telling us like, "Oh yeah, 10k MRR and I'll invest, blah, blah, blah". So we applied to YCombinator, we get in and then we announce that we got in. And then this guy... Once you get into YC, you're basically kind of set in terms of investments, you're one of the belles of the ball and, deservedly or not, investors will look upon you much more favorably. And so immediately this guy who effectively just got scooped by YCombinator comes back to us and he says, "Hey, can I invest in you? We'll take you up for a tour. We'll do this thing around". And I don't think investors recognize that investing with conviction is the only thing that gives you the big opportunities, because everything looks like a diamond in the rough before it looks like it's worth a hundred million dollars or something. And you cannot be using benchmarks like 10k MRR. It ends up just kind of pissing people off or making them less likely to take your money down the road if you don't show that conviction and believe in them. But anyway. 

Jon Krohn: 00:34:36
Nice. Well, so after YCombinator, maybe even before YCombinator, but certainly after YCombinator, you experienced a lot of growth. SharpestMinds continues to experience a lot of growth today, but you've left. You've gone off and co-founded something else, what's up with that? 

Jeremie Harris: 00:35:01
Yeah. So this was not done in error. It did indeed happen. It was intentional. So the story [crosstalk 00:35:08] 

Jon Krohn: 00:35:07
Didn't have enough coffee that day. 

Jeremie Harris: 00:35:09
Yeah, that's true. It's always something about the coffee. The story here goes back to 2020. So in 2020, actually it goes back even further cause my brother and I have always been interested in AI safety. So AI safety broadly is this area of concern where [crosstalk 00:35:25] 

Jon Krohn: 00:35:24
Since we were babies. Our parents have always said AI safety has been... We just couldn't stop talking about it but the first words out of our mouths were [crosstalk 00:35:35] 

Jeremie Harris: 00:35:34
AI safety. 

Jon Krohn: 00:35:36
Yeah. I was trying to come up with something more clever, but yeah. 

Jeremie Harris: 00:35:38
Yeah. So you're calling my comment done, is what you're technically [crosstalk 00:35:45] Well. 

Jon Krohn: 00:35:47
Already storming off. 

Jeremie Harris: 00:35:48
With that generous intro... Yeah. No, but it has been in our veins in a visceral deep way. Our family is a long line of AI safety people going back generations. 

Jeremie Harris: 00:36:01
And when we first started tracking this stuff it was 2015 or something. We literally read this blog on Wait But Why, this really great blog on AI. 

Jon Krohn: 00:36:14
Oh yeah. 

Jeremie Harris: 00:36:15
You might have seen it. 

Jon Krohn: 00:36:17
Yeah. So Tim Urban, who writes Wait But Why, I'd been super into him for many years, just because he writes really interesting, generally scientific stuff. And he wrote this... you're going to talk about it too, I'm kind of scooping what you were going to say. 

Jeremie Harris: 00:36:35
No, this is great. 

Jon Krohn: 00:36:35
He wrote a two-part blog post on AI. And at that time I was already a data scientist. So I'd been a data scientist for about a year. The stuff I was doing was definitely not AI data science. It was some relatively simple regressions on some marketing data; it wasn't going to change the world. But reading those blog posts, I was like, "Oh my goodness. There's something really big happening here". And that's how I ended up joining a machine learning startup. So I left a very comfortable job where things were going really well. And reading that two-part blog post series by Tim Urban on Wait But Why was a big part of me being like, "I've got to get out of this comfortable corporate job and do something where I'm building real AI", whatever that means. Real AI, we were not working on real AI. And then a couple years later I actually met him and he extremely kindly provided a review of my book that is on the back of my book, Deep Learning Illustrated. 

Jeremie Harris: 00:37:46
Oh that's so cool. 

Jon Krohn: 00:37:47
Really kind. He still responds to my emails. Really nice guy. 

Jeremie Harris: 00:37:53
Wow. 

Jon Krohn: 00:37:54
Hi Tim. There's no way he's listening, but Tim, if somehow you are listening, you always have an invite on this podcast. I know that you were recently on Lex Fridman. And yeah, we're basically as big as him. I mean it's a data science podcast [crosstalk 00:38:10] Jeremie too. 

Jeremie Harris: 00:38:10
It is the same thing. 

Jon Krohn: 00:38:12
Towards Data Science, SuperDataScience, Lex Fridman, they're always mentioned in the same breath. We're basically the same level of popularity. So, you should just be on all three of our shows now. 

Jeremie Harris: 00:38:20
Yeah. I haven't had time to stop by Lex's podcast yet, but I think it's just a scheduling thing. 

Jon Krohn: 00:38:27
Yeah. You've gotten a bit big for it probably, really. 

Jeremie Harris: 00:38:32
Yeah. That blog post though, the Wait But Why post, is actually a great read, first off, but it's fascinating to hear that it was kind of transformational for you too. It was the big waking up moment for me and for my brother to this idea that AI... It's not just that it might transform industry and blah, blah, blah. And people throw around the word disruption and okay, cool, cool. But that it might represent a genuine threat or a genuine risk, a catastrophic risk even, to human civilization. And it's easy to hear that and think, "Oh, Terminator, this and that". But it's important to recognize that the leading AI labs on planet Earth today, the DeepMinds, the OpenAIs, the Anthropics, have a significant contingent of people on their staff who stay up at night to work on problems related to catastrophic risk from AI. And DeepMind actually was founded explicitly for that reason, essentially to get a head start on artificial general intelligence research in the hopes of having a safety-minded organization get there first. And so anytime you have a scenario like that, even if you find yourself going, "Oh, this sounds ridiculous", it's worth asking, "Okay yeah, but why are some of the brightest minds in this space looking so intently at this problem class? What is it about this that animates them and gets them to work on this all day?". 

Jeremie Harris: 00:39:57
And anyway, in 2020, we kind of got a... I can get into the risks associated with this, but I think it's helpful to frame it in historical terms: there was a big breakthrough. So for a long time, AI systems were narrow. So you trained an AI system to do one task, like recognize faces in images, and it would only be able to do that one task. Like, a face-tagging AI could not do your taxes for you. And as long as that was true, everyone was like, "Okay, AI has this crushing, crippling, embarrassing vulnerability. It can't do general reasoning, it's super myopic, super narrow". And what happens in 2020? Well, from OpenAI, we get this system GPT-3, and GPT-3 is of course a glorified autocomplete AI system. Exactly like the autocomplete that runs on our phones. It predicts the next word that a person might write in a series of words. And it does it super, super well though. And it turned out [crosstalk 00:40:53] 

Jon Krohn: 00:40:53
To interrupt you very briefly, if people want to hear a ton about GPT-3, we had a nearly two-hour episode recently, number 559, with Melanie Subbiah, who was one of the first authors on the GPT-3 paper, and it gives you a really fascinating deep dive into this algorithm, which you... Just continue, Jeremie, continue explaining. 

Jeremie Harris: 00:41:15
I'm actually glad that you're flagging that because it is a really important one to understand. I'll go so far as to say it's an important algorithm to understand from the perspective of human history and what it really represents in terms of this pivotal moment in the history of AI and the history of human technology. So you have these narrow systems, right. They can recommend ads for you to click on, recommend movies for you to watch on Netflix, but it's very narrow capabilities. And then GPT-3 comes along, it's an autocomplete AI, but it turns out to be capable of way more than just autocomplete. This one system that was trained to do autocomplete can translate between languages, it can write coherent multi-paragraph essays that are so human-like that a human cannot tell them apart from human writing. It can answer questions, it can do basic web design, it can code, all these capabilities that would literally [crosstalk 00:42:10] 

Jon Krohn: 00:42:09
Arithmetic was another one, simple arithmetic. 

Jeremie Harris: 00:42:12
Yeah. And actually that one's really interesting philosophically too, to kind of double click on, but yeah, exactly. So you have all these tools, this whole panoply of different capabilities that this one system has, and it's the first time really that we have this kind of general reasoning from a single system. And how did this happen? I mean, just to kind of give the brief overview, I know that other podcasts will have, I'm sure, a lot more interesting information about GPT-3, but one way to tell the story of AI is to look at the three different ingredients that go into AI. So you've got data, which is kind of the information that a system learns from, it's the textbook that we learn from. If you want to learn calculus, you need a textbook, you need the data. But even if you have a textbook, you can still fail to learn calculus if you don't actually study it. And that's where processing power is important. That's the effort that the AI expends to crunch through that data and actually learn from it. But even if you have a textbook and even if you have the work ethic to crunch through it, you can still fail to learn calculus if you're a bird. And you have a brain that's just too small to accommodate all the information in that textbook, all those insights. And so what happened with GPT-3 was, for a long, long time the history of AI was the history of increasing processing power. Thanks to Moore's Law, computers were getting cheaper and cheaper. Processing was getting cheaper and cheaper. A fixed academic budget could buy you twice as much compute power every two years, everything was fine and dandy. In 2012 deep learning increased that even more. It basically gave companies a reason to throw tons of money on top of that at processing power. 
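
For readers who want to see the "glorified autocomplete" idea in code, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and uses the openly available GPT-2 model as a stand-in for GPT-3 (which is not publicly downloadable); the prompt and the top-5 cutoff are arbitrary choices for illustration.

```python
# Minimal sketch of next-token prediction ("glorified autocomplete").
# GPT-2 via Hugging Face transformers stands in for GPT-3 here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The three ingredients of modern AI are data, compute, and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the logits at the final position into a probability distribution
# over the vocabulary: the model's guess at the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Generating text is just this step applied repeatedly: pick a next token, append it to the prompt, and ask again, which is the sense in which the system is "exactly like the autocomplete that runs on our phones."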

Jeremie Harris: 00:43:45
But OpenAI realized, "Hey, wait a minute, this whole time we've basically been training bird brains. What if we try scaling this up massively? What if we build a system that has way more processing power, that we've trained on way more data, using way, way more parameters in our neural network, and what will we get?". And they didn't really know going in. I mean, this was known as the scaling hypothesis and it was super fringe before GPT-3, or at least before GPT-2. The idea that you could just scale things and that somehow would solve this problem of narrow AI was laughable. I mean, it was a fringe weirdo thing to believe. All of a sudden 2020 comes along, and now we have a system with about 2% of the number of parameters as there are synapses in the human brain. 

Jeremie Harris: 00:44:30
So in a way, I mean, this is a really shitty comparison, but something like 2% of the scale of the human brain, if you wanted a really quick label for this massive system. All of a sudden, it unlocks all these capabilities, general purpose reasoning. And so since then, we've seen a scaling race across the industry and AI systems are being built 10 times bigger every year or so. So we're very quickly approaching very, very powerful thresholds. And [Ed 00:45:00] and I were watching this and we were kind of saying like, "Well, GPT-3 can do some crazy impressive things. We also know that systems are getting 10 times bigger every year. Where does this lead? At what point do we get to something so impressive that it starts to pose a risk? It's essentially a system that has general purpose reasoning abilities that are effectively human-like and we need to start thinking about what risks that might entail". So that was the reason GPT-3 was such a catalyst. We were like, "Whoa, scaling seems to work. There's no obvious reason that it needs to end. So you could keep making these systems bigger and bigger and GPT-3 already has these crazy capabilities". That was kind of our "Aha" moment, where we're like, "Okay, we've thought about AI safety a lot. That was really our life's goal and SharpestMinds was a way of kind of getting some practice with the technology, learning how to build startups and initiatives. But now is probably the time to take this seriously because things seem to be moving very fast". 
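
To give a rough sense of what "10 times bigger every year" means in absolute terms, here is a purely illustrative back-of-the-envelope projection. It assumes GPT-3's roughly 175 billion parameters as the 2020 starting point and a perfectly sustained 10x annual growth rate; real model sizes have not followed such a clean curve.

```python
# Purely illustrative: what a sustained 10x-per-year scaling trend would
# imply, starting from GPT-3's ~175 billion parameters in 2020.
params = 175e9
for year in range(2020, 2026):
    print(f"{year}: ~{params:,.0f} parameters")
    params *= 10
```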

Jon Krohn: 00:45:58
That is super cool. And the way that you were able to describe all of that was amazing. I particularly loved the way that you gave that calculus example with, "You need to have the data, you need to have the processing power and then the size". Now, while you were talking, I did quickly double-check that 2% figure, because it is something that I talked about in that episode, and it's 0.02%. 

Jeremie Harris: 00:46:18
0.2%. Yeah, definitely smaller than 2%, but yeah sorry. So got my number [crosstalk 00:46:26] 

Jon Krohn: 00:46:28
For me, I was suddenly very concerned because I was like, 2% is getting really close in capacity. I was like, "I'm going to need to start getting my doomsday prepper kit together a lot faster". 

Jeremie Harris: 00:46:40
Right. So I think one key ingredient though is exponentials have this funny way of making an order of magnitude like that not matter all that much. So it's 0.2% one year. But because these systems are getting 10 times bigger every year, it becomes 2% the next year. So you basically push back your freak out by 12 months and depending on who you are, that might make you feel better or worse. But the reality is that we're entering a regime of processing power and scale and capabilities that is obviously like we've never seen before, but we have reasons to suspect that this regime just keeps going. So OpenAI published in late 2019, this paper on scaling laws for language models, and there have been a bunch of scaling laws papers since, that just seem to show these power curves that don't bend, they just keep going. And it's an interesting question, philosophically is this a principle of nature, a law of nature almost, that when you scale these three things together, you just get more nuance, more learning ability, more general purpose reasoning. Right now it's not that that's going to lead us to AGI in and of itself, but you start to wonder with a couple of hacks with a couple of extra things, self play, reinforcement learning piled on top, how much do you need before you start unlocking some real things? And there's a lot of different views on this. 
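
For anyone who wants to sanity-check the 0.02% versus 0.2% exchange above: the figure depends on which estimate of human synapse count you use, and commonly cited estimates span roughly 10^14 to 10^15 synapses, so the comparison is a loose order-of-magnitude analogy rather than a claim of equivalence. The rough sketch below also illustrates Jeremie's point that a single 10x year shifts whichever figure you start with by an order of magnitude.

```python
# Rough, assumption-laden arithmetic behind the parameters-vs-synapses talk.
# GPT-3: ~175 billion parameters. Human synapse count estimates commonly
# range from ~1e14 to ~1e15, which is why both 0.02% and 0.2% get quoted.
gpt3_params = 175e9

for synapse_estimate in (1e14, 1e15):
    fraction = gpt3_params / synapse_estimate
    print(f"assuming {synapse_estimate:.0e} synapses: {fraction:.3%} of synapse count")
    # One year of 10x scaling multiplies that fraction by 10.
    print(f"  after one 10x year: {10 * fraction:.3%}")
```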

Jeremie Harris: 00:48:06
Some people think it's imminent in the next three years. I would say those views are probably on the alarmist side, but it's pretty mainstream to talk about 10 years. That's not a lot of time. By 2030 some say, by 2035. But in the scheme of things, from the standpoint of human civilization, we're talking about something that is effectively imminent, whether you say imminent on the order of a couple months or a couple years or decades, we haven't been around that long and this is going to hit us sooner rather than later on that time scale. 

Jon Krohn: 00:48:39
Yeah, it's not a certainty because there could end up being some... It could be something that we're missing, but yeah, I agree with you that it seems likely AGI is possible in our lifetimes, if not soon. 

Jeremie Harris: 00:48:59
Yeah. And actually that question of whether or not it's possible is a big source of contention. I think more and more people are sort of buying into the thesis that, "You know what, if we can just literally replicate the human brain, we would have an AGI". If you assume that the interesting computations that happen in the human brain are purely physical, that it's materialism, there's nothing magical going on, then surely we can replicate that. And if we replicate it on a silicon substrate, then we can run those computations way faster. A human brain thinking 10,000 times faster than a regular brain is something to think about. 

Jon Krohn: 00:49:40
That's right. I guess in our lifetimes, we don't know for sure that the way we're approaching it, with the kinds of model parameters that we have, will get us there. Although artificial neurons in a deep learning system loosely mimic the way that biological brain cells are connected, there's a lot more nuance in the way that biological brain cells connect, even beyond the action potentials, the electrical signals that go between the neurons themselves. There's also lots of support tissue around them that appears to be increasingly instrumental in the way that a brain works and how memories are formed, for example. And so there could end up being some missing pieces in the way that we design the systems today, which would mean that getting closer to something structured the way a human brain processes information could require a re-architecting from scratch that just scaling up to the right number of parameters might not resolve. 

Jeremie Harris: 00:50:48
Yeah, that's very true. And actually I think this brings up a really important distinction. One of the reasons I don't spend much time really thinking about the human brain as an analog is that I do expect us to approach AI from a direction that's completely foreign to us. So when we think about intelligence, we've approached intelligence historically from two different angles, where humans have intuitions about intelligence that come from two different sources. The first one is evolution. So we all have this instinct about the level of intelligence of an amoeba compared to an ant, compared to a fly or a cat and a human. And we have the sense that the kinds of mistakes that creatures make along that direction are very distinct. You see a cat, for example, failing to plan ahead in the same way that a human does. 

Jeremie Harris: 00:51:38
And the ability to plan ahead seems to kind of increase generally as you climb that ladder towards intelligence in the evolutionary direction. There are a couple of other interesting features that seem to arise, like dexterity, things like that, the ability to sense your environment and so on. So that's interesting, but then we have another direction that we approach intelligence from, besides evolution, and that's child rearing. So we have a sense of how intelligence evolves between the formation of an embryo and then an infant, a toddler, a teenager and so on. So we have this experience with the kinds of mistakes that get made by systems as they climb that ladder too. And lo and behold, those two kinds of mistakes, the evolutionary and the child-rearing mistakes, are totally different. If you took the lessons from child rearing and tried to apply them to predicting what the evolution of intelligence would look like, you would make a lot of mistakes. You would start to think, "Okay, well, the stupider species probably wouldn't even be able to feed themselves at any point," and that's just not a classic mistake that you see in an evolutionary context. And so these two different approach vectors lead to very different paths and trajectories. You can't really use one to inform the other. 

Jeremie Harris: 00:52:51
When we look at approaching intelligence through artificial means, we're looking at applying a kind of optimization pressure on a system that looks nothing like the optimization pressure that leads babies to become adults or that leads species to become more intelligent. And so we should expect to be surprised. We should expect that we will have no fire alarm for AGI. We'll have no way of saying, "Oh, we're about to do it, we're about to do it". How close is GPT-3 to AGI? How close would GPT-4 be? I don't know. Nobody can tell us that any more than they could tell us how close a chipmunk was to human level intelligence, if all they'd ever seen was child rearing. I hope that makes sense, but that's sort of part of what informed my [crosstalk 00:53:34]. 

Jon Krohn: 00:53:33
Definitely makes sense. Yeah. You have done a lot more thinking about this than I have, and I'm really enjoying this conversation; it makes me feel I have a lot more learning to do. We did define AGI earlier, and that's kind of the key thing: sometimes in this conversation, Jeremie or I say AI, and what we mean is AGI. This is actually in the Tim Urban blog post, which I'm going to make sure to include in the show notes. It's a brilliant introduction to this idea that artificial intelligence can be broken up into three categories. And we've talked about two of those categories already in the program. So artificial narrow intelligence is what most machine learning algorithms or AI approaches have been doing in past decades, where they can only do some very narrowly defined task, like identifying that a dog picture is a dog picture, not a cat picture. 

Jon Krohn: 00:54:42
And so the point that Jeremie has already been making is that AGI, artificial general intelligence, is something that we're getting closer to in recent years. And we've talked a lot about the GPT-3 example, which had a lot of emergent generalization capabilities that were not anticipated, but there are other research groups that have been working very explicitly on creating more and more general intelligence, Google DeepMind being perhaps the foremost example that I can think of. They, for example, have been increasingly creating a single algorithm that can play more and more games and more and more styles of games. So at first the state of the art was a single algorithm that could play many Atari video games well, but now it's a single algorithm that can play a lot of Atari video games extremely well, as well as board games extremely well, even though these are completely different kinds of paradigms. So they are kind of methodically moving in that direction. 

Jeremie Harris: 00:55:45
And we're seeing this as well with StarCraft 2 and Dota 2, more and more kind of going off the board and into these more complex games that involve navigating resources and planning very far ahead, that sort of thing. Sorry. Yeah, I just... 

Jon Krohn: 00:55:59
Yeah, exactly. And I have been doing research and having conversations with somebody whose episode we are recording in a few weeks. So the episode won't be live for about a month from when this episode airs, but we're having an interview with Alexander Holden Miller, who is a research engineer at Facebook AI Research. And he is going to be in an episode that we film live at the Machine Learning conference in New York, which unfortunately, if you're hearing this for the first time, will already have happened by the time his episode airs. We've already done the recording, so you won't be able to check it out live, but you will be able to check out the recording when it comes out. And his research group at Facebook AI Research in New York is focused on a board game called Diplomacy. Jeremie, have you ever heard of Diplomacy? 

Jeremie Harris: 00:56:55
I have actually, I think in this context. 

Jon Krohn: 00:56:57
Okay. So I hadn't heard of it. It looked vaguely familiar to me, but what Diplomacy is, is it's kind of like Risk. So you have a map of Europe and you're trying to conquer Europe. And a big difference between Risk and Diplomacy is that Diplomacy can involve a lot of communication and strategy; alliances form in a very explicit way that can only happen in conversation with other players in the game. And so right now that FAIR team, the Facebook AI Research team, is explicitly focused only on a version of the game where there is no conversation, called the No-Press version, but their objective is to use that as a stepping stone towards having an algorithm that can play the Press version. And that is actually really crazy, because you'd have a board-game-playing agent that can engage in negotiations with other players around the table and grasp, maybe or maybe not understand in the way that you and I seem to understand, what other players' intentions are. That kind of capability is a really staggering step towards AGI. 

Jeremie Harris: 00:58:28
Yeah. And actually, there's a reason that language is so interesting from this standpoint, and this is part of what animated OpenAI's thesis here: we're going to do massive scaling, we're going to spend 10 million dollars building a giant system, so what are we going to train it to do? Why did they pick language? They picked language because language is the way that humans encode everything that we know, or most of the things we know. Most of the concepts that we think about can be expressed in language, and there's actually a really interesting tie into the history of linguistics and the study of linguistics. If you go back to Jacques Derrida, who is one of these postmodern thinkers, one of the things that he talked about was the idea that you can't really define a word without reference to other words. So every word... if you take an apple, tell me what an apple is. 

Jeremie Harris: 00:59:15
You can only describe to me what an apple is with reference to other words. And so there's a sense in which words, a vocabulary, a language, form a structure that's kind of grounded in nothing. It's all these different connections; they're not anchored to anything, but they're all related to each other. And that structure encodes everything that we know about the world. And so there's a sense in which, when you look at an AI system that's mastered language, if you have a system that's able to fill in the blanks really, really, really well, and you write "Jack and Jill went up the blank," then in order to fill in that blank, that AI system must have learned a whole bunch of stuff, not just about grammar or syntax, but about culture and about logic. What are things that people tend to go up? 

Jeremie Harris: 01:00:00
"Oh, a hill is one of those things". What does this pattern match to? "Oh, its kind of aphorism, it's a saying, okay". So there are all these kinds of bits of knowledge that the system needs to develop. And if it can master that, there's a sense in which you can say it has come to understand something. You can argue about whether GPT-3 deeply understands, I don't know, arithmetic, as you mentioned earlier, or how to code this or that. But as certain point, this just is a word game. It can certainly perform these tasks. What does it mean to understand something? Well, I don't know that we necessarily even need a definition of it. Let alone, are we going to come up with a definition of it in time to inform our thinking about general intelligence? 

Jon Krohn: 01:00:41
Yeah. And that's because of our inability to scientifically define concepts like consciousness, which I think are tied intimately to the concept of understanding. So, our inability to define consciousness, and certainly our complete inability to explain how consciousness arises, despite everybody experiencing consciousness, as far as I know. You could all be zombies and I'm the only one who isn't, but that's a discussion for another day. I kind of lost my train of thought there. 

Jeremie Harris: 01:01:25
No, but I think you're actually right to bring that up... So there's this whole ecosystem of words that are really fuzzy, like consciousness, and to some degree even free will and experience. And there are questions around those. Ilya Sutskever, one of the co-founders of OpenAI, got into this Twitter spat by saying, "Hey, I think some of our AI systems might be just a little bit conscious," and everyone went, "Yo, you can't just say that. What do you mean, conscious?" and then Twitter just exploded. I think this is one of those classic issues: if you're going to use the term conscious, you've got to be really clear about how you're defining it. It's not obvious that... So I have this hot take on consciousness, which is that we simply lack the physical laws to explain it right now. No amount of [equationing 01:02:18], no amount of differential equations or whatever, is going to make subjective experience arise out of math. 

Jeremie Harris: 01:02:25
I think this is actually a philosophical hard boundary, like the boundary between what things are true and what we ought to do about those things. I think this is just one of those hard boundaries. You can't get consciousness or subjective experience out of math. And I think there's just another set of rules that we're missing that you have to stack on top of physics in order to get there. But do we really want to let our lack of knowledge about what that set of rules might be prevent us from reasoning about the dangers of these systems, whether or not they're conscious? To me, that's a dangerous proposition. I think we very often find ourselves seduced into these kinds of questions when we should be thinking more at the object level: "What are these systems capable of doing and what are the risks associated with them?" But that's sort of more of an AI hot take. 

Jon Krohn: 01:03:17
Right. So I hope that we are starting to instill in the listener an appropriate level of fear around the uncertainty of AGI developments. And so with Mercurius, your new company, your AI safety startup, you have been going around scaring the leaders of various governments. 

Jeremie Harris: 01:03:43
Yeah. I think [crosstalk 01:03:44] 

Jon Krohn: 01:03:44
Reasonably so. So for example, you've advised the Canadian federal cabinet, the British cabinet and the US Departments of State, Homeland Security and Defense. Has anybody pooped themself? 

Jeremie Harris: 01:03:59
Well, so I should probably explain what the risk is. I think I've been gesturing at it without actually pinning this thing down. 

Jon Krohn: 01:04:08
Oh, right. I just thought it was nebulously scary. You can define it better? Great, please. 

Jeremie Harris: 01:04:10
Yeah, absolutely. No, I think there's very specific reasons to be concerned about what happens when AI systems broadly become as intelligent as a human. So one of the things that a human being can do is AI research, right? So, humans can obviously build AI systems. If you build an AI system that is as capable, broadly, as a human being, then it will be as capable as a human being at AI research. 

Jon Krohn: 01:04:41
Oh, yeah. 

Jeremie Harris: 01:04:41
So- 

Jon Krohn: 01:04:41
When I started defining AI and AGI, I didn't talk about ASI. So, now's a good time for you to do that [crosstalk 01:04:46]. 

Jeremie Harris: 01:04:47
Yeah. I don't find it terribly useful to even distinguish between the two, because I suspect that the moment- 

Jon Krohn: 01:04:52
Right. 

Jeremie Harris: 01:04:52
We have AGI, we will have ASI. 

Jon Krohn: 01:04:54
Right, exactly. 

Jeremie Harris: 01:04:55
Yeah. 

Jon Krohn: 01:04:55
Exactly. 

Jeremie Harris: 01:04:56
This seems to me to be like a relic of old debates that focused more on, "Well, what if we simulate the brain really well? Okay, we'll have human-level intelligence. But then, how do we do better than that?" I think the path we're on with scaling is such that if you have an AGI with $10 million of compute, or $100 million, then you'll have an ASI at $200 million. It's pretty straightforward. You just scale it more and you get a more powerful thing, right? So- 

Jon Krohn: 01:05:24
Right. 

Jeremie Harris: 01:05:24
I- 

Jon Krohn: 01:05:25
What it stands for, by the way, and I'm not telling Jeremie this, but for the listener, if you're not aware- 

Jeremie Harris: 01:05:29
No, sure. 

Jon Krohn: 01:05:30
It's artificial superintelligence. So going back to those definitions that I started on and then got distracted from: artificial narrow intelligence is the ability to complete some specific, narrow task. AGI, artificial general intelligence, is the ability to learn the same kinds of things as an adult human can learn. And then, ASI, artificial superintelligence, is beyond that. And I don't even know if you can describe... Maybe you can, Jeremie, because you have a better understanding of the space than I do, but what is an example of an ASI capability? 

Jeremie Harris: 01:06:08
Yeah, I can't. Partly the issue is that nobody can even define what intelligence is. You'll actually see this; people are just not aligned on it. But yeah, I mean, the boundary between artificial superintelligence and general intelligence is just murky, which is why I don't expect it to really matter in practice. So one of the problems that you run into with any kind of AI development is that we generally train AI systems with a loss function, a thing that they try to optimize, an objective, basically a number that they try to make go up. And that's the whole training process. You give them this number and then they f*** around to make the number go up. If they f*** around the right way, the number goes up and they go, "Great! Okay, I'm going to keep f****** around in that direction." And they keep f****** around, and eventually the number goes up, and good. 

Jeremie Harris: 01:07:00
That's kind of what AI training is. Now, the problem with this has to do with something called Goodhart's law. Okay, so if you pick any number and you make it go up super high, you will eventually destroy the world. There is no number that you can make go up super high without destroying the world. So, let me try to make that more concrete. Back in the early 1900s, someone might have said, "Okay, well, I think that a really good number to make go up is the value of the stock market, because the stock market tells us how well off the average American is. And therefore, we should make it go up and we'll have a happier society." The problem is the moment you define a metric, people find ways to hack that metric. And so, when the stock market became this focus for politicians and bureaucrats, and all of society started orienting itself around optimizing for the stock market, you get into things that are not necessarily great, things like printing money, like governments buying certain kinds of bonds, and things like inflation and inequality result. 

Jeremie Harris: 01:08:15
It's not to say that the stock market is a bad thing in and of itself. It's just that when you myopically optimize for this one number, you start to generate side effects as people find clever ways to hack the system. And you can see that, I mean, in standardized testing, teachers teaching to a test. So we have this intuition that, "Okay, we'll just use test scores to measure how well our teachers are doing. And if that number goes up, then we'll call our education system a good education system." But then, teachers realize, "Oh, shit. If I just teach to the test, if I teach my students how to handle the specific questions that are going to be on the exam, rather than questions that make them good at general thinking and better people, then I'll be better off." And so there again, you have this kind of collusion by an intelligent system to kind of hack that metric. 
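As a toy illustration of Goodhart's law in code, and not something from the episode, the sketch below uses two invented functions: a proxy metric that an optimizer relentlessly pushes up, and the true objective it was meant to stand in for, which eventually collapses:

```python
def true_value(x: float) -> float:
    """What we actually care about: improves at first, then collapses as x is pushed too far."""
    return x - 0.05 * x ** 2

def proxy_metric(x: float) -> float:
    """The number we told the optimizer to make go up; it never stops rewarding 'more x'."""
    return x

x = 0.0
for step in range(1, 41):          # naive hill-climbing on the proxy
    x += 1.0                       # more x always means a higher proxy score
    if step % 10 == 0:
        print(f"step {step:2d}: proxy = {proxy_metric(x):5.1f}, true value = {true_value(x):6.1f}")
# The proxy climbs forever; the thing we actually cared about peaks around x = 10, then goes negative.
```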

Jeremie Harris: 01:09:03
So, the same happens in AI. You give an AI system any reward function, any objective, to optimize, and as it optimizes that function, it will have side effects. It'll find dangerously creative solutions to make that number go up. We see this all the time in computer vision. A classic example is you have a bunch of images of cows in green pastures, and the AI system is trained to recognize the animal, "Oh, it's a cow. Oh, it's a horse," or whatever. But the AI system learns that, "Oh no, green pastures are always associated with cows, and it's easier to optimize by learning to recognize a green pasture than a cow." And so now you have a system that is a green pasture detector rather than a cow detector. 
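The cow-versus-pasture story is an instance of shortcut learning, and it is easy to reproduce on synthetic data. This sketch, again mine rather than the episode's, assumes scikit-learn and NumPy are installed; the features and probabilities are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # scikit-learn, assumed installed

rng = np.random.default_rng(0)

def make_data(n: int, spurious_corr: float):
    """Feature 0 is a weak 'real' cow signal; feature 1 is background greenness,
    which tracks the label with probability spurious_corr."""
    y = rng.integers(0, 2, n)                                   # 1 = cow, 0 = horse
    real = y + rng.normal(0, 1.5, n)                            # noisy genuine signal
    green = np.where(rng.random(n) < spurious_corr, y, 1 - y).astype(float)
    green += rng.normal(0, 0.1, n)                              # slight noise on the background cue
    return np.column_stack([real, green]), y

X_train, y_train = make_data(5000, spurious_corr=0.98)          # cows almost always on green pastures
X_iid, y_iid = make_data(2000, spurious_corr=0.98)              # test set drawn the same way
X_shift, y_shift = make_data(2000, spurious_corr=0.5)           # correlation broken: cows on brown fields too

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy while the pasture shortcut holds:", round(clf.score(X_iid, y_iid), 3))
print("accuracy once the shortcut breaks:        ", round(clf.score(X_shift, y_shift), 3))
# The model leans on greenness, so accuracy collapses toward chance under the shift.
```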

Jeremie Harris: 01:09:46
And you deploy that in the real world and if it's a self-driving car or something, that could cause some real problems. So, this dangerously creative solutioneering that AI systems engage in is really what you want to look out for. If you start to build a system that is so good at optimizing a metric, it will find clever hacks that you never thought of that can make that number go up. If you just naively said, "Oh, I think the world would be better if everybody was happy and smiling," I mean, expect a super-intelligent AI to graft a smile on everyone's faces in a super terrifying, dystopic setup. The ultimate example of this is something called a paperclip maximizer which you may have heard of, yeah? 

Jon Krohn: 01:10:29
[crosstalk 01:10:29]. 

Jeremie Harris: 01:10:29
Yeah, I don't know if it's useful for me to mention that or... 

Jon Krohn: 01:10:32
No, please because I'm sure probably most people haven't heard of it. Yeah. 

Jeremie Harris: 01:10:38
Yeah. No, of course. So we're invited in this scenario to imagine a world, let's say, 15 years from now, where there's this artificial general intelligence that's developed at a paperclip factory. And the paperclip factory people go, "Oh, good. An artificial general intelligence? We'll be rich. We'll get it to optimize for the number of paperclips we produce." And so, they do. And this paperclip generator ends up realizing a couple of things quite quickly. So first off, it can only generate paperclips if it's still around, if it's still turned on. And as long as there are annoying little humans running around who might want to turn it off for whatever reason at some point, it has a damn good reason to prevent those humans from ever turning it off. If it gets turned off, no paperclips. The reward function doesn't go up. 

Jeremie Harris: 01:11:25
So right away, it has an incentive to stop, or kill, or disable somehow all the humans in its immediate area. It also has incentives to get smarter, because no matter what you're trying to accomplish, you're always better off if you're smarter at accomplishing that task. And so, it has reasons to convince people to give it access to more processing power, more GPUs, more and more everything, to get more competent at what it does. It also has an incentive to gather resources. So paperclips take iron presumably to make, I'm not a paperclip expert, maybe it's steel or whatever, but the bottom line is you need raw materials. There's iron in the ground, there's iron in the moon, there's iron in people's blood. Oops. And now, you have this myopic optimization, very competent optimization. That's the point. It's super competent optimization. The problem is it was for a number that makes you go, "Oh, f*** no! I didn't mean that." And by the time you say that, everything is over. 

Jeremie Harris: 01:12:17
So once you start building these systems that are optimizing for narrow metrics like that, you are inviting this category of risk. These little goals by the way, that I've mentioned, the AI wanting to make sure that it continues to exist, accumulating resources, making itself more capable and intelligent, these are known as instrumental goals. So, there's this concept of highly powerful AI systems always converging on the same set of goals. You can expect a superintelligent AI never to want itself to be turned off, because no matter what its reward is, it's always going to be better off being turned on, it's always going to be better off being more intelligent, having more resources, right? We all do this. Money is an instrumental goal for many humans. I may not know why I want a million dollars right now, but I can tell you I'd be happier if I had it. So, there's a sense in which these instrumental goals are the real risk class. And preventing the instrumental goals from getting in the way of human flourishing is the central goal of AI alignment research. 

Jon Krohn: 01:13:15
Mm-hmm (affirmative), that was incredible. Hearing you say things like you just said makes me feel like I should make the podcast only about AI safety, but then that's what you've already done. 

Jeremie Harris: 01:13:34
No, I honestly think more people should be talking about this. And I think you'd be in a great position to do it too because I think there are a lot of different arguments and angles, and people are convinced by different things. So, there are some people who think that AGI isn't possible. There's some people who think that if AGI happens, it's not going to be a concern because instrumental convergence, for whatever reason, isn't a real risk. There's so many different perspectives on this, and I think having people tackling this question from different angles with different prior beliefs is super valuable. Yeah, I think it'd be great. 

Jon Krohn: 01:14:10
Yeah. Once you kind of frame... So that episode that we had a few episodes ago, episode 559, where we were talking about GPT-3 and what it might be capable of next. As somebody who can understand what's going on to some extent, I'm not a GPT-3 expert like Melanie is, but it does seem like AGI in our lifetimes is likely. And then, given that it's likely, and given this thing that you've described... So as soon as we have AGI, that AGI can maybe trivially create artificial superintelligence. And because we've never had an artificial superintelligence algorithm, and because we, in our individual brains, don't have enough processing power to imagine what could happen, we should start trying to build some kinds of safeguards around that. And again, maybe it's a futile exercise, but we might as well try. And going back to that Tim Urban blog post series on AI, there's a really good way that I always come back to for explaining why, even though we can't imagine what ASI is capable of, we should definitely be wary of it. 

Jon Krohn: 01:15:43
He describes it as a staircase, where each step of the staircase is a different level of intelligence. And so you've got worms at the bottom of the staircase, bugs. And as you climb up the staircase, you've got your chipmunk that you talked about earlier, you've got your cat, your monkeys, and your great apes, chimpanzees and bonobos. And then, of all of the species that are alive on the planet today, we've got humans at the top of this staircase. And we know that humans are really big dicks to everyone lower down. We step on bugs, almost all of us. It seems like in parts of India, they don't step on bugs, but in most of the world, we're just stepping on bugs; we don't even care about them or their intelligence for a second. 

Jon Krohn: 01:16:37
Most of us don't just kill dogs or monkeys, but some people do. And some people will even capture and torture other humans who just happen to be not as strong or as smart as them, or didn't have some kind of informational advantage at some point in time, or some geographic advantage. And humans have been doing that to other humans in really sick ways for millennia. Thankfully, on a per capita basis, it happens less and less, but as we see in countless salient events in the world today, humans are still trying to attack each other and imprison each other, and change ideologies: "Now, you're not thinking about your government the right way. You've got to do it our way. And if you don't listen, I've got bombs and stuff." Why, then, would something that's even smarter than us also just happen to be benevolent? 

Jeremie Harris: 01:17:41
Right. And actually, I think there's an important aspect to this too, which is humans are probably a more optimistic case. I hate to be doom and gloom about this, but a large part of the selective pressure that evolution has exerted on humans has had to do with forcing us to cooperate with each other. So if you think about great human civilizations, right? There's a story of genetic selection, but there's also a story of cultural selection. Cultures and societies that were able to foster coordination and cooperation between individuals within them tended to become dominant. That's what we see. If you want to tell the story of the West through that lens, you certainly can. You can say like, "Oh, well, the reason that the US is so dominant today is because it figured out how to manage power transitions in a peaceful way, get people to cooperate at a massive scale." 

Jeremie Harris: 01:18:29
That's great. But what are the selective pressures? Again, back to this idea of how are we approaching AI or how are we approaching intelligence through AI as contrasted by evolution? Well, we're certainly not approaching it through a vector that overtly requires cooperation. As far as I can tell, what we're doing is we're cranking the knob as hard as we can on scaling with objectives that are really detached from anything to do with cooperation. There's interesting research being done at DeepMind, for example, on cooperation between reinforcement learning agents, that sort of thing. That's great. But it's not a question of will we be able to make a safe one? It's a question of will we be able to prevent a bad one from being made? And these two questions are distinct. And to some degree, the ability to influence what kind of AI we end up with is both a function of how well you can solve the technical alignment problem, right? 

Jeremie Harris: 01:19:23
How can you make sure that you build an agent that's more intelligent than you, but that doesn't do something like maximize paperclips at some point? That's a technical problem, AI alignment, and then there's a separate policy problem. How do you ensure global coordination? How do you make sure that countries that maybe don't care as much about safety, or groups that don't care as much about safety, don't get an edge on AI development, don't end up developing something in an unsafe context, even if we have the technical solution to this alignment problem? So that's where policy and technical AI safety- 

Jon Krohn: 01:19:58
Wow. 

Jeremie Harris: 01:19:58
Are two important sides of this equation. 

Jon Krohn: 01:20:02
Whew. All right, that's heavy, man. But I can see now why... This is going back. I mean, it was an hour ago? I don't know. A long time ago, I asked you why you left SharpestMinds, even though things were going well. And now, people are starting to understand. The more that you think about this problem, the more you see how massive it is for the future of us, our kids, our grandkids, our planet. Huge. But nonetheless, you founded it as a company. So, what's the commercial angle for Mercurius? 

Jeremie Harris: 01:20:41
Yeah. No, that's a really good question actually. This is one of the key things, right? Because we left SharpestMinds with this sense of mission, like we've got to do something about this, and if we keep navel-gazing about this stuff and don't do anything, we can expect default outcomes to materialize, and we're not sure that we like the path that that might lead to. So at first, it was really just, how can we leverage our network? We're startup founders, so we're wired to... 

Jon Krohn: 01:21:06
You've got to get to 10K MRR, right? 

Jeremie Harris: 01:21:09
That's right. How do you get to 10K MRR with this stuff? So, a lot of this was leveraging networks to ask, how can we be helpful? How can we figure out what's missing in this ecosystem? And really, we landed on: somehow we're going to have to get to the point where governments around the world are aware of this problem. That's a really big slice of this pie, this AI policy problem of making sure that decisions are being made with a view to this risk class. Even if you think it's a 1% risk, even if you think it's a 10% risk, we're talking about something so significant that it's worth looking at, it's worth thinking about very deeply, and it's worth establishing institutions to deal with. 

Jeremie Harris: 01:21:50
So we started to look at, "Okay, what are some things that would have to happen in order for there to be one day, an international conference on the risk of AI alignment-related catastrophes, for example, where world leaders gathered to talk about this thing?" At some point, that's going to have to happen. We're going to have to start thinking about these things as, potentially at least, belonging to a risk class similar to nuclear weapons, similar to bioweapons, that's all kind of in the same constellation of things. So, how do we get there? You can't just grab a sitting minister and be, "Dude, paperclips and shit!" And get them to go, "Whoa, paperclips? We..." No. You might be able to convince them, but then they have to [crosstalk 01:22:36]- 

Jon Krohn: 01:22:36
Oh, I know what we need to do. We need to build an AGI that kidnaps their families and shows them that they really need to be concerned. That's- 

Jeremie Harris: 01:22:48
And- 

Jon Krohn: 01:22:48
That's how we do it. 

Jeremie Harris: 01:22:48
And that's the business model. 

Jon Krohn: 01:22:53
Extortion. 

Jeremie Harris: 01:22:56
I mean, you got to convince people. Ideally not by doing that, but who knows? I'm not going to write anything off. But yeah, so what we ended up doing is realizing, "Okay, we got to find a way to connect this to stuff that's happening today." So we're not facing AI alignment risk right now, but we are facing risk from malicious use of AI and we're facing risk from AI accidents. And you can think of AI alignment as just an extension of accidents. I mean, if a paperclip generator is really just... It's a factory on autopilot that just went, "Oops, I did something bad." What we find is we start a conversation about malicious use of AI, which by the way, is this massive field that just exploded largely with GPT-3. And then, you transition, segue into accident risk. And from there, you can start to set up institutions that are able to handle that broader risk class. 

Jeremie Harris: 01:23:53
So it's really about climbing that ladder of present-day worries and then moving your way to alignment over time. And then, you can actually charge for this stuff, so you can afford to, for example, put together a dashboard that tracks AI capabilities with malicious use potential. We put that together, and you can see it at aitracker.org. So if you're interested in that, you can kind of see how we're thinking about some of the more recent models and how they could be misused. And then, there's AI accident risk as well that we fold into that as a risk class. You can get companies, you can get governments to pay attention to these things and therefore to pay you to work on them. And then, you can use that money to subsidize AI alignment research, which is also what we're doing, and make sure that you're plugged into that as well. So anyway, long-winded way of answering the question, but basically, it's about translating things into today's risks in a way that extends naturally. 

Jon Krohn: 01:24:47
Really great answer. And yeah, I can see how you could make the case clear that there are risks today. It's not some future hypothetical paperclip risk; today we have this problem, these systems are being misused. And also, you could probably stir up a bit of great power conflict and say, "Look at what those guys are doing over there. And what are we doing over here?" 

Jeremie Harris: 01:25:15
Now, here's a twist that you kind of want to avoid. With great power rivalry, when it comes to AI capabilities, when you point to another country and you say, "Hey, look at what they're doing with AI. You need to be tracking this," all this stuff gets bucketed together in people's heads. It's like, "Oh, we need to keep up. We need to spend more." Now, the problem is the AI alignment problem remains unsolved. And so, if you're not very careful, what you end up doing is encouraging people to race even harder and faster to build out these- 

Jon Krohn: 01:25:46
Oh... 

Jeremie Harris: 01:25:46
Capabilities. So- 

Jon Krohn: 01:25:47
Damn. 

Jeremie Harris: 01:25:48
It becomes a very delicate balance. This is really where I think malicious use and public safety is an important part of the equation. The things that Russia and Canada and New Zealand and China and the United States all have in common is they don't want one psychopath using an augmented version of GPT-4, maybe one that EleutherAI eventually makes available in the open-source, to carry out massive-scale spear phishing attacks that are super customized, but that get sent out to a million people. They don't want people generating super-viral fake news with fabricated images of Donald Trump choking Jeffrey Epstein to death. They do not want these things. We all agree that we want some level of prevention of public safety risk from AI. And that's a place where countries can actually start to coordinate and collaborate on these things, start to thaw a little bit some of the competitiveness around this technology and do it in a way that creates institutions that can be helpful for this longer-term class of risk. So, sort of trying to thread the needle there a little bit. Obviously, it's a multifaceted problem, but this is the solution that we've come up with so far. 

Jon Krohn: 01:26:58
Okay, cool. I don't know why I say cool. Cool isn't a great transition, but we're going to stick with that anyway. Okay, cool. So despite all of these potential risks, something else that I've known since reading those Tim Urban blog posts all those years ago is that there is the capacity for the development of AGI to be a really positive thing in the history of humankind, maybe in a way that we can't even understand today. 

Jeremie Harris: 01:27:30
Yep. 

Jon Krohn: 01:27:30
So, are you optimistic that AI could be used more responsibly in the future? Do you have some optimism yourself that we could end up having really great abundance and people don't have to go hungry, and somehow we can live forever without using all the resources on the planet, and we're just happy and have inner peace and such? 

Jeremie Harris: 01:27:53
A hundred percent. I mean, I think that is the promise of AGI. It literally solves all our problems, as long as we can tell it what problems to solve, as long as we can... It's kind of like it'll answer any question that you ask it. It's Aladdin's genie. You've got to be careful what you wish for. But if you can absolutely hit the bullseye, then you do have a panacea. I mean, you have the solution to every problem that humanity's ever faced. And one other angle, I mean, you could take the hard kind of AI-positive approach and say, "Well, we have no shortage of catastrophic and existential risk sources for human civilization." Eventually, one of these things is going to get us. Whether it's a leaked virus that's been super-engineered through gain-of-function research, an asteroid, a gamma-ray burst, or some crazy climate-change-compounding catastrophe that vaporizes us through some weird effect, eventually somebody is going to do something really stupid, or nature is going to throw us an absolute curveball. 

Jon Krohn: 01:28:58
Yeah. Even if not, eventually, entropy just gets us. 

Jeremie Harris: 01:29:01
Right, yeah. If you're looking at the heat death of the universe, eventually entropy gets us. That's true, or the sun engulfs us or whatever. So at some point, we're going to need an out, and AGI actually offers us that out. It is the only way I can think of that you could guarantee that you're not going to get some kind of pathogen that's released. If you think about it, the amount of money that it costs to destroy the world is just decreasing incredibly rapidly with every technological advance, and this is a wildly unsustainable process. So at some point, we're going to need to stop that process, and AGI is, again I think, the only approach that I'm aware of that has a robust chance of actually doing that. It's got a lot of risks associated with it as well. And so, I think it's a complicated risk calculus. It's not obvious where things should fall, but I certainly think that we can't lose by investing more, by being more cautious about AI alignment-type risks, and by being aware of the risk class to begin with. 

Jon Krohn: 01:30:06
Wow. So if like me, lots of listeners out there are thinking, "Holy crap, I've got to do something about this right now," how does one become an AI safety expert? 

Jeremie Harris: 01:30:19
Yeah, I definitely wouldn't call myself an AI safety expert. And I think that's one thing that- 

Jon Krohn: 01:30:23
Just how do you get started? What do you recommend? 

Jeremie Harris: 01:30:26
Yeah. No, absolutely. I guess I'm more just giving a vague signal of how the community thinks about itself. There's so much uncertainty in the space, right? That everything is hedged all the time. A lot of what I've said is hedged. "I don't know if AGI is possible. I don't know if it leads to this risk or whatever. 10%, 1%, blah, blah, blah." But I think the first thing would be to not trust, but verify. So, if you find my story compelling... 

Jon Krohn: 01:30:55
Ah, the old KGB slogan. 

Jeremie Harris: 01:30:56
There you go. Yes, exactly. You might just want to pick up a book called The Alignment Problem by Brian Christian. This is a really good book to read to get a high-level sense of this risk class. Jon, you mentioned Superintelligence by Nick Bostrom, also a really good read as a primer. It's slightly out of date now, but it gives you all the concepts that you need. And then, take a look at some of the more skeptical literature on this as well. I've tried to make a point of interviewing skeptics on my podcast. Melanie Mitchell was one, Oren Etzioni is another; they're high-profile people who say, "Well, maybe we ought not worry about this quite so much." 

Jeremie Harris: 01:31:40
I'm unconvinced by their arguments, but that doesn't mean that you will be too. I think you should listen to those and think, "Hmm, do I find this actually credible? Do I think they have satisfactory answers to the arguments?" Having done that, you'll probably have a pretty good thesis about where you think the risk comes from. And once you have that thesis in mind, you can start looking at organizations that are working on the thesis that you like, the risk class that you find compelling. So, there are a bunch of organizations. Maybe we can link to some when the episode is published. Oh, sorry. 

Jon Krohn: 01:32:15
Yeah, you could tell I was inhaling like that. I didn't mean to cut you off with my inhale. All I was going to say is a great resource potentially to get started with is the 80,000 Hours post by Ben Todd on being an AI safety researcher. 

Jeremie Harris: 01:32:29
Perfect. 

Jon Krohn: 01:32:29
So Ben Todd was in episode number 497 of this program, and it was one of the most popular episodes of 2021. Now, Ben Todd is not a data scientist. He's an expert in cultivating an impactful career. So, 80,000 hours is the number of hours that you have in a typical career, and the 80,000 Hours organization that he co-founded is interested in helping you find a meaningful career. They are big proponents of AI safety research as being perhaps the single most impactful career that you could choose to have today, to such an extent that associated groups like the Effective Altruism Forum have blog posts, which I'll also link to, with titles like, "Does 80,000 Hours focus too much on AI risk?" So yeah, they are very bullish on AI risk as a career, and Ben Todd is an incredible writer and speaker. And so, I'll be sure to include this in the show notes. But you were going to refer listeners to something else before I rudely interrupted with my inhale. 

Jeremie Harris: 01:33:39
I think my inhale detector was being overly sensitive there. Yeah, my recommendation actually. So, 80,000 Hours is a great place to start your journey. You might be interested to know about a variety of different organizations that are working on related projects. So obviously, OpenAI and DeepMind both have very active AI alignment programs. And then, there's also Anthropic, which was recently founded by former OpenAI VP Dario Amodei and his sister. Anyway, they're a really great organization. They're focusing almost entirely on safety-related things, and they already have, I think, a couple of safety-related papers out. Besides that though, I think it's worth checking out organizations like MIRI, the Machine Intelligence Research Institute. They're one of the most hawkish. If you're looking for a really pessimistic take on AGI that keeps you up at night, you may or may not want to check out MIRI. They very much go straight for the jugular in terms of, "We don't have a solution to the alignment problem," doom and gloom, that sort of thing. So that's a bit of a darker path, but there are other organizations like Ought. Ought is a startup that's doing AI safety-related stuff; I think they're trying to productize some things as well. Anyway, there's a whole ecosystem around this space and a lot to dig into. I would recommend slowly working your way through the AI alignment literature if you really want to contribute to the technical safety stuff. And a good paper to start with is called Concrete Problems in AI Safety. It was written in, I think, 2016, but it's a really good starting point for this sort of deep dive. 

Jon Krohn: 01:35:26
Awesome. Really great resources there. Thank you so much. I hope to check as many of them out while I still can. So, what's the day-to-day like as the co-founder of an AI safety startup? What's that like? Do you think it's different from other kinds of startups? 

Jeremie Harris: 01:35:46
Yeah, it's definitely weird. A lot of my time is focused on figuring out how to explain these things to people who are non-technical. So, I do a lot of cold emailing. You'd be surprised. I'm just constantly cold emailing politicians and people in the bureaucracy and in public safety and national security-minded orgs, and it's, I don't know, standard founder schlep stuff. You're just constantly asking people to talk to you. And that's one of the weirder things, because while we are building product, I don't build the product. My function is more to educate, and that means I'm sending a lot of cold emails, as it happens, and it's pretty exhausting, but I- 

Jon Krohn: 01:36:30
"My machine's got your family." 

Jeremie Harris: 01:36:32
That's right. That'll be my next one. Hey, I'll definitely get sales- 

Jon Krohn: 01:36:36
[crosstalk 01:36:36] subject line. And then the body can be, "Not yet, but maybe soon." 

Jeremie Harris: 01:36:44
Yeah. That's right. You leave them wanting more. 

Jon Krohn: 01:36:51
Yeah. Cool. Well, that makes a lot of sense. And then, so we haven't talked about it, but your co-founder is your brother, Edouard. And so he's doing a bit more of the technical stuff, and you're doing a bit more of the communication stuff. And this is your second company now co-founding together, so obviously it's not going too badly. You guys get along reasonably well. The Y Combinator questions, they must have given you five stars when they were reviewing the compatibility. How long have you known each other? How long have you been working on the [crosstalk 01:37:21]. 

Jeremie Harris: 01:37:20
Actually. You know what, though? In that ... Back to the Canadian versus American thing, we were told in Canada that being brothers was a detriment. People would look at- 

Jon Krohn: 01:37:31
Oh. 

Jeremie Harris: 01:37:32
And then we went to YC and they were, "Oh no, we love sibling co-founders." 

Jeremie Harris: 01:37:35
And so, anyway, even with little things like that, everybody overfits to their own experience so much in investing that it's just- 

Jon Krohn: 01:37:42
Right. 

Jeremie Harris: 01:37:43
But, I mean, actually just to round that out: so what we do is, essentially, Ed does, as you say, the technical stuff. He does the hands-on AI work, so he collaborates with researchers at a lot of top labs, and we're building out a tool called AI Tracker. I mentioned aitracker.org; that's the public-facing version of it. We're building this out as a tracker for AI risk for organizations and governments and stuff. So it's a way of us starting to educate people about this risk class in a way that's software-driven and scalable. So- 

Jon Krohn: 01:38:19
Cool. Yeah. Really nice. I love that you're doing this. I'm really inspired and it's nice to hear that you guys are starting to get it off the ground. So, clearly, Jeremie, you're capable of doing a ton proficiently. So hosting a great podcast, Towards Data Science,- 

Jeremie Harris: 01:38:38
That's what my mom says. 

Jon Krohn: 01:38:43
... now onto co-founding your second company. And so I was wondering if you had any particular tools or tips for people being productive or effective? And I have a specific question because I noticed one of your emails that you sent to me, it had this little message at the bottom that was sent from Superhuman. 

Jeremie Harris: 01:39:02
Oh, yeah. 

Jon Krohn: 01:39:03
And I was, "What is that?" And so I clicked on Superhuman and I noticed that it was an email client, email inbox. And I was wondering, did you ever use Google Inbox? 

Jeremie Harris: 01:39:18
Is that Gmail? That's different from Gmail? 

Jon Krohn: 01:39:20
No, it's different. So Google Inbox, when it existed... those were the greatest days of my life. Approximately from 2015 to 2017, it was a product that they offered. It worked with your existing Gmail email address, but it was a completely different user interface. It was an extremely simple UI, which was nice, and it looks like Superhuman has that, but it also did some things which I have not been able to find in any other email client since, and I have tried desperately to find some that would do this. So, for example, one of the things that I really loved was that it would, like I see Superhuman does, use some AI algorithm, probably AGI, to predict which of your messages are high priority or not. 

Jeremie Harris: 01:40:27
Oh. 

Jon Krohn: 01:40:27
And so Inbox would only show you your high-priority emails in real time. Everything else, you could set to be notified about on some delay. So I'd have this priority inbox where I really only get half a dozen emails a day, maybe a dozen, where somebody's actually written me an email and I need to take action on it or be aware of it soon; most emails are updates, things that I don't need to deal with urgently. And so everything else, I had it set so that it would be sent to me in a separate inbox on a 24-hour cycle. So at 8:00 a.m. every day, this secondary inbox would be populated with these other emails, but for the rest of the 24 hours of my life, I could just happily have my inbox open, totally unaware that these messages- 

Jeremie Harris: 01:41:30
Right. 

Jon Krohn: 01:41:31
... have been coming in, which I can ignore anyway. So it created this level of Zen, because it meant that I could, once a day, open up my email inbox and look into that non-urgent folder. And then there was another thing you could do that I've never seen anywhere else. You could select just the messages that you'd like to actually read, and then you press this button, sweep, and it would sweep all the other ones away. It would just archive them; maybe you'll need them again someday. And so from all these low-priority ones, I'm left with just a handful that I actually want to read. And then there's "Snooze" functionality, which Gmail has picked up, and even the web version of Outlook does this now. We use Outlook at work. And you can "Snooze" messages, which is great. That's a great way for me to manage, because then I don't need to create a to-do. I can just "Snooze" it to Monday or "Snooze" it to later in the week or "Snooze" it to the weekend. It'll show up again and I can deal with it then. I don't have to separately be managing "To Dos." 

Jon Krohn: 01:42:26
Now, Gmail does this, and Outlook does this, at least in the web app. But one thing that none of these clients do today that Inbox used to do is save the "Snooze" time period that you selected. So if I snoozed to next Monday and, after I'd snoozed, the person writes me another email, with any of the existing clients it's all of a sudden in my inbox again, like any other email. But the old Google Inbox would say, "Do you just want to re-snooze this back to the date that you snoozed it to?" And so I just press that and, boom, it's gone again. I don't have to worry about it again until next Monday at 8:00 a.m. or whatever. And you could also attach notes to yourself on each of these. So I could get a priority email where I'm like, "Oh, I don't need to worry about this until Monday, but I don't want to forget that, when I reply, I need to make sure I mention this." And so I could attach a note and then "Snooze" it. Anyway, so all these kinds of things. Does Superhuman replicate any or all of these, please? 

Jeremie Harris: 01:43:33
So that's interesting. I would say Superhuman has a very opinionated philosophy on how to do email and their view is that you need to ... Your goal in email is to empty your inbox every day. 

Jon Krohn: 01:43:45
Right. 

Jeremie Harris: 01:43:45
You, literally, send a response and then you mark an email as done. And it's like a "Task". Emails are just like a "Task" in Superhuman. 

Jon Krohn: 01:43:54
Okay. 

Jeremie Harris: 01:43:55
The reason that I like Superhuman actually is tied very closely to the Gmail story. So Paul Buchheit, who basically invented Gmail and is one of the partners at Y Combinator, gave us this talk... Or did he give us a talk? Somebody did. Anyway, about his philosophy. In the early days of Gmail, there was this view that, okay, every screen change has to take 0.1 seconds or less. Now, if you're familiar with today's Gmail, you know that is not how Gmail works today. It's this heavy, slow, clunky, painful, exhausting process, a lot like Google Drive. You open that tab and you see that little thing loading or whatever your browser shows, and it just takes forever and it's painful. Superhuman goes back to that philosophy and says, "Everything should be super fast. Everything should be done from your keyboard. You should not have to use your mouse or your trackpad." 

Jon Krohn: 01:44:49
A hundred percent. Yeah. 

Jeremie Harris: 01:44:50
So that's really where it comes from. It's about speed and- 

Jon Krohn: 01:44:56
And mice. That's a whole other thing. Mice are unbelievably cognitively taxing. 

Jeremie Harris: 01:45:01
Yeah. It's a context switch every time. You move from your keyboard to your mouse and back. 

Jon Krohn: 01:45:07
And it's also just the act, the precision that's required, the motor skill required. I don't have a study to back me up here, this is just personal experience, but there's a cognitive load to needing to navigate and accurately click on that spot, relative to just being able to tap away on the keyboard. So that all sounds really good to me. And that is also something that Inbox was really big on: mouseless operation. 

Jeremie Harris: 01:45:37
Now, by the way, in terms of tools, the general question, one thing I actually would do is tie that into the AI conversation. OpenAI's playground for GPT-3 is super useful. So when I do writing, especially for longer projects, I actually have used GPT-3 with Prompt Engineering to help me write and it's a super useful tool. So if you're looking for- 

Jon Krohn: 01:46:02
Wow. 

Jeremie Harris: 01:46:02
... a really nice hack ... Actually I did this recently. Hopefully, none of my wedding guests see this, but with my fiance, we were wondering, "Oh, man, we got to have party favors at our wedding." And we're, "What's an original party favor idea?" And you can plug this into Google and get a bunch of really [inaudible 01:46:18] blog posts. Or you go to GPT-3 and write a sentence like, "I just went to the best wedding. They had the most original party favors. What they did is blank." And then you let it complete it for you. So just with some practice- 

Jon Krohn: 01:46:31
Wow. 

Jeremie Harris: 01:46:31
... at Prompt Engineering, you can really get a lot of value from these systems and I highly recommend it. I mean, it's better than a thesaurus or a Google in many cases. 
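If you would rather try the same trick programmatically than in the Playground, something like the following worked with the OpenAI Python library as it existed around the time of this recording; the model name, the parameters, and the client interface itself have likely changed since, so treat it as a sketch rather than current documentation:

```python
import os

import openai  # the pre-1.0, completions-style OpenAI client from around the time of this episode

openai.api_key = os.environ["OPENAI_API_KEY"]  # bring your own API key

# The same blank-completion trick Jeremie describes, done via the API instead of the Playground.
prompt = (
    "I just went to the best wedding. They had the most original party favors. "
    "What they did is"
)

response = openai.Completion.create(
    model="text-davinci-002",  # an early-2022 model name; swap in whatever is current
    prompt=prompt,
    max_tokens=60,
    temperature=0.9,           # higher temperature for more varied, creative completions
    n=3,                       # ask for a few candidate ideas
)

for choice in response["choices"]:
    print("-", choice["text"].strip())
```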

Jon Krohn: 01:46:41
That is super cool. So you called it GPT-3 Playground? 

Jeremie Harris: 01:46:45
Yeah. 

Jon Krohn: 01:46:47
So that is a specific URL. So how is that different from ... So to get full access to the GPT-3 API, you need to apply and you might need to wait a long time to get approved? 

Jeremie Harris: 01:46:57
So that has changed actually. 

Jon Krohn: 01:46:59
Oh. 

Jeremie Harris: 01:47:00
Yeah, this ties into, by the way, what we were talking about earlier with the race dynamics here. So OpenAI comes out with GPT-3 and, at first, as you say, they're like, "Yo, only carefully screened people can access this thing, and we will check the queries that you give it, and we're not..." There's all kinds of safety stuff. And then a bunch of people started to replicate GPT-3, and this company AI21 Labs in Israel, for example, came up with Jurassic-1 Jumbo, a direct competitor to GPT-3. They offer an API as well. And now all of a sudden OpenAI is going, "Oh, shit, they're offering it to people with no screening. People are bleeding off our platform." And the cynical take here is that it's a race to the bottom: "We have no choice but to do this." Fortunately, OpenAI is really security- and safety-minded, so they do have, I'm sure, a bunch of algorithmic checks and things like that, and they feel comfortable releasing it in this context, but it's something to keep in mind. They're now releasing this so you can actually go to the OpenAI Playground and use the tool to generate completions. You don't have to pay 70 bucks a month or whatever to access a GPT-3 tool. 

Jon Krohn: 01:48:05
That is big news for me. 

Jeremie Harris: 01:48:09
Yeah. It's a nice lifestyle upgrade. I mean, any time you're thinking about an original idea ... One thing that I really was curious about was can GPT-3 write jokes? So I tried this and it got some things that sounded like jokes. Some of them were slightly funny. There was one that was really good,- 

Jon Krohn: 01:48:31
[inaudible 01:48:31]. 

Jeremie Harris: 01:48:31
... and I realized it had just plagiarized it from someone. So it's a bit choppy,- 

Jon Krohn: 01:48:36
Oh. 

Jeremie Harris: 01:48:36
... but someday soon that'll happen. 

Jon Krohn: 01:48:39
Overfitting? 

Jeremie Harris: 01:48:40
Yeah. 

Jon Krohn: 01:48:41
This is super great. I happen to have access to the API. All that effort that went into my application for nothing. But this is really great because, in the past, even in Melanie Subbiah's episode, it was a shame to be talking about all this functionality and not be able to let listeners know that they can be accessing it. So this is a huge change. It's great to now be able to just say, "Yeah, go to OpenAI Playground and use it, try it out. Do it right now." Going to kidnap your local politician. 

Jeremie Harris: 01:49:13
[inaudible 01:49:13], hopefully. 

Jon Krohn: 01:49:13
Hopefully. Fingers crossed. Okay. So you mentioned earlier in the show how you'd been doing a PhD. So you'd been doing a PhD in quantum mechanics at the University of Ottawa and, specifically, you were working on paradoxes in quantum mechanics. So you ended up not finishing it. You said your brother did finish his PhD and so your mother loves him more. 

Jeremie Harris: 01:49:48
[crosstalk 01:49:48].
 
Jon Krohn: 01:49:50
But so you left early. It doesn't seem to have hindered you in any way. It seems you were certain that founding a startup was the right thing for you. And so what guidance do you have for people on whether they should do a PhD or not? I guess, particularly, if they're thinking, "I want to get really deep into, say AI, and there's two different ways I could do that: One way is I could try to do a PhD in AI somewhere or I could found a startup." And so what are the trade-offs of choosing one path or the other to make a big impact with AI? 

Jeremie Harris: 01:50:26
Yeah, that's a really good question and I have some pretty spicy takes about this, so I apologize in advance. So when it comes to grad school, this is almost always the wrong choice. There's a very narrow range of cases where it does make sense, but the reason that this is generally a bad idea is that university programs take many years and, as we alluded to earlier, over that time the entire field can change. So if you placed a really big bet on ... Even if you placed a really big bet on convolutional networks in 2015, which would've seemed an obvious bet to place, here we are, whatever it is, six, seven years later, and now all of vision is being done by ... Anyway, we're heading towards a world where plausibly all of vision will be done by transformers and your niche technical expertise in [inaudible 01:51:16], impressive though it may be and hard-earned though it may be, is not irrelevant, but much less relevant. 

Jeremie Harris: 01:51:23
Versions of this happen in all PhD programs. I would say if you're going to do a PhD in almost anything, don't do it unless you're just crazy passionate, you're so into it that you would do this for no money, because you probably will end up doing just that. And it's also that the academic environment is frustrating to different degrees for different people. I had a personality where it just wasn't a good fit. But certainly when it comes to staying relevant, if it's a career move, really second-guess that. It's not at all obvious that a PhD is going to be a good career move, unless you are taking a PhD at MIT, the University of Toronto, Berkeley, Stanford, Carnegie Mellon, these kinds of institutions. If you're not there, then you're going to get a PhD that does not give you value for money. And if you can get in there, by the way, you probably already have a whole bunch of publications in [inaudible 01:52:21] and ICLR and you don't need to hear me say this. 

Jeremie Harris: 01:52:23
So if you're thinking about it, it's probably the wrong idea, and master's degrees are roughly the same. Think of a master's degree as a two-year bootcamp. You could probably do a bootcamp regardless of what you're into. I generally see master's students struggle about as much with postgraduate employment as bootcamp grads, candidly. There isn't a ton of difference. They all have that same gap that we talked about. It's all got to be overcome somehow. 

Jon Krohn: 01:52:52
Right. 

Jeremie Harris: 01:52:53
So, I'm, generally, more bearish, I would say, on grad school than most people. 

Jon Krohn: 01:53:01
Well, and given your experience with SharpestMinds, that might not be unwarranted guidance. So definitely some sobering advice for people thinking about doing PhDs there. Okay. So here's another specific question related to your experience with quantum mechanics, and also tying it into your experience with AI. So Moore's Law is expected to hit its limits in a few years, and this potentially limits advancements with AI. It means that conventional transistor-based computing can only get a little bit smaller and more powerful, at least in terms of how much space it takes up; we could still have more and more compute. But, anyway, Moore's Law is going to top out in a few years. So quantum computing is expected to overcome some of these challenges. So what hardware advancements do you think will keep up with the massive computing demands of AI moving forward? 

Jeremie Harris: 01:54:00
Yeah, that is a really good question. So one thing I would say is Moore's Law ... So Moore's Law talks about the number of transistors you can place on a circuit. So, basically, how densely can you pack compute onto a chip? There are a lot of ways to do this besides just making transistors smaller. We're at the point now where we're making three-nanometer transistors. And, just for context, a hydrogen atom has a size that's around one angstrom, or a tenth of a nanometer. So, basically, we're at the point where we're making features that are about 30 hydrogen atoms across. Sorry, it's not that the transistors themselves are that size; the feature size, the resolution you're able to get when you manufacture these things, is. So it's pretty small. We're definitely starting to push quantum limits where, for various reasons, things get fuzzy and you literally cannot push it further. 
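
For anyone who wants the arithmetic behind that figure, here is the back-of-the-envelope version; the hydrogen-atom width is a rounded approximation.

```python
# Rough check of the scale mentioned above: a 3 nm feature size
# expressed in hydrogen-atom widths (~1 angstrom = 0.1 nm each).
feature_size_nm = 3.0
hydrogen_width_nm = 0.1

atoms_across = feature_size_nm / hydrogen_width_nm
print(f"A 3 nm feature spans roughly {atoms_across:.0f} hydrogen atom widths")  # ~30
```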

Jeremie Harris: 01:54:54
So one strategy is, of course, to keep going smaller, that's going to give diminishing returns. Another is to start to think about how you stack things together, to start to think about how you organize, for example, not the 2D structure, but the 3D structure of these transistors to make these chips, to make them more efficient. So that's actually one direction that people are going in as you see more and more focus on specialized computing. TPUs are a great example of this. I mean, this is really where people go, "Okay, we're making custom circuits now for AI hardware. We're going to focus on the application, not trying to make general-purpose computing, but computing that's focused for this deep learning application." So once you have that prior, once you're able to say, "Okay, we're just going to focus on computing for deep learning," all of a sudden you realize like, "Oh, shit, all these things we've been doing to make general-purpose computing work, we don't need to worry about all these details and we can start to pack even more efficiency into our systems." 

Jeremie Harris: 01:55:51
So I think that's actually an underrated source of improvement. There are a lot of companies that are innovating in that direction. And, frankly, in the absence of quantum computing, I would still think that we could push scaling much, much further, just purely based on this. I think we might be able to push scaling and similar trends as far as they need to go for all of the relevant outcomes to materialize. Quantum computing is an interesting wild card. You can't use quantum computing for every computation. I think that's a really important thing to flag. There are certain problems you can solve with quantum computers really, really well. In fact, the classic example is the traveling salesman problem. So the idea is, if you're a salesman and you have to hit up 20 different locations to sell your widgets, you've got to think about, "Okay, what is the most efficient route that I can trace through the city to hit all 20 locations and end up back at the starting point?" 

Jeremie Harris: 01:56:49
So there's no easy algorithmic solution to this. You can't just do a bunch of math and be, "Oh, I've got to go this and this and this." You actually have to just do trial and error in a way. And so there are a whole bunch of traveling salesman algorithms that work with different levels of efficiency. But in quantum computing, you can think of it as you can simultaneously activate all of these solutions at the same time. Activate them all in a bag, close your eyes, reach into the bag and you can pull out the optimal solution in one shot. That's roughly what quantum computers let you do. They only let you do that for certain problems, problems like the traveling- 

Jon Krohn: 01:57:24
Right. 

Jeremie Harris: 01:57:24
... salesman problem. 

Jon Krohn: 01:57:25
Right. 

Jeremie Harris: 01:57:26
So if you can find a way to frame your machine learning problem so that it looks like that problem, then you can drive a quantum advantage. But this does not happen for free. There's some algorithms that are easier or harder to make work with quantum hardware. And I suspect we'll be able to find these hacks. I mean, I really think humans are very clever and I don't think we're going to be limited by that problem in a big way, but it's not the case that you can simply throw quantum computing at any old neural network in the standard way and get the result you want out. 
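
To make the traveling salesman example concrete, here is a minimal classical brute-force sketch: it simply tries every route and keeps the shortest, which is exactly the kind of trial and error that blows up as the number of locations grows. The city coordinates are invented for illustration, and real quantum or heuristic approaches look nothing like this.

```python
# Brute-force traveling salesman: enumerate every ordering of the stops,
# measure each round trip, and keep the shortest. With n cities this checks
# (n-1)! routes, which is why the approach collapses long before 20 stops.
from itertools import permutations
from math import dist

# Hypothetical city coordinates, purely for illustration.
cities = {
    "A": (0, 0), "B": (2, 1), "C": (5, 3), "D": (1, 4), "E": (4, 0),
}

def route_length(route):
    # Total distance visiting the cities in order and returning to the start.
    stops = list(route) + [route[0]]
    return sum(dist(cities[a], cities[b]) for a, b in zip(stops, stops[1:]))

start, *rest = cities  # fix the starting city so rotated copies of the same loop aren't recounted
best_route = min(
    ((start,) + perm for perm in permutations(rest)),
    key=route_length,
)

print("Best route:", " -> ".join(best_route + (start,)))
print("Length:", round(route_length(best_route), 2))
```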

Jon Krohn: 01:57:59
Right. 

Jeremie Harris: 01:57:59
So a bit of a wild card, I would say. 

Jon Krohn: 01:58:03
Cool. Well, great answer. And another really thoughtful response. You're clearly demonstrating a lot of depth of knowledge in a particular area. I've really appreciated having you on the show, Jeremie. So we would love to have a book recommendation from you, though you've already had some in the show. So you talked about The Alignment Problem, talked about Superintelligence. So I don't know if you have a book recommendation beyond that, but I have one for the audience. So, Jeremie, a few years ago you wrote a blog post called You Probably Live in a Parallel Universe: Here's Why. And this blog post was extremely popular and it got the attention of mainstream publishing. And so now that has led to you publishing your first book, which sounds super cool. Tell us about it. 

Jeremie Harris: 01:59:02
Sure. Yeah, no. Thanks for bringing that up. Yeah. Available at fine bookstores everywhere soon, I'm sure. So, essentially, this is a book about quantum mechanics. Quantum mechanics is a theory that makes a bunch of great predictions. We are sure that these predictions are really, really effective. They can predict numbers down to 10 decimal places of precision, but people disagree about what stories we should tell ourselves about what's going on under the hood in quantum systems. Some people think that quantum mechanics is somehow steeped in consciousness, that it's deeply linked to the idea of consciousness. Other people say that quantum mechanics predicts the existence of parallel universes. Other people think that it predicts that we live in a universe where the future is set in stone at the moment of the Big Bang and it's fully determined and there's no free will and blah, blah, blah. 

Jeremie Harris: 01:59:58
And so what this book is about is really just exploring what those different perspectives actually mean for our sense of self. And it's written in a bit of a lively, comedic tone, I would say, because that's just how I tend to write, but it's designed to explore these pretty fundamental questions: if we take these ideas seriously, at face value, what do they imply about our society? What do they imply about our laws? And what was really cool in peeling the layers of the onion back here is you start to realize, "Wow, so much of what we take for granted is based on a foundation of quicksand." If you look at certain interpretations of quantum mechanics, you end up looking at a world where free will doesn't exist, and if free will doesn't exist, the foundation, the justification, for a lot of our laws disappears. If you look at a model that predicts parallel universes, what does this mean for our understanding of identity and counterpart-hood? If there are other versions of ourselves in the multiverse, what does that imply about the value of our life, the nature of mortality and so on? There are all kinds of interesting thought experiments you can do in that context, as well. And then the question of consciousness, of course, comes up a lot too. 

Jeremie Harris: 02:01:13
So, anyway, it's a big, high-level picture of what is in it for me? What does this theory say about me and how should I change the way I think about myself? The story of human self-understanding through the lens of this very powerful theory. 

Jon Krohn: 02:01:28
Super cool. Can't wait to check that out. So beyond that in your other book recommendations, I don't know if you're itching to share any other book recommendations with us? 

Jeremie Harris: 02:01:38
Depending on what mood you're in, I guess, Atomic Habits I think is always a good one. I'm sure your audience will be familiar with it. Atomic Habits is great. The Great CEO Within, if you're a startup founder, is a really great one. It's a free resource. Check it out. It was really eye-opening for us. I mean, I would say that's pretty much what comes top of mind right now. 

Jon Krohn: 02:02:02
Yeah, those are great. All right. So if you want to be notified about Jeremie's book when it comes out, there is a signup link that I will be providing in the show notes so you can check that out. I signed up for it earlier today when I found out that this book was coming out. So that's one way to stay in touch with you on what you're doing, but it's really just about the book. So how else should people follow you? You've got the Towards Data Science podcast. Anything else? Social media? Anything like that? 

Jeremie Harris: 02:02:33
I'm on Twitter and now I'm going to have to quickly look at ... I'm pretty sure I'm @jeremiecharris. J-E-R-E-M-I-E-C. And then H-A-R-R-I-S. I'm pretty sure that's the one. But anyway, if you look me up, first name, last name. I'm on Twitter. You should find me. 

Jon Krohn: 02:02:49
And we'll be sure to include that in the show notes, as well. Jeremie, this has been an epic journey. I feel like a changed person from a few hours ago when we started talking. My beard is longer, I'm balder and I'm much more scared about the algorithms that I work on every day. Thank you so much for being on the show, Jeremie. I absolutely loved it and I hope we'll have the pleasure of your company again soon. 

Jeremie Harris: 02:03:17
Oh, thanks so much. I really appreciate it. It was a ton of fun. 

Jon Krohn: 02:03:26
Wow. What a mind-bending episode. Told you Jeremie was a smart bugger. In today's episode, Jeremie filled us in on the income-sharing agreement structure of the SharpestMinds data science and engineering mentorship program that he founded. He talked about how founding team dynamics can play a larger role in startup success than monthly recurring revenue. He talked about how the artificial superintelligence that could arise immediately once we attain artificial general intelligence could end up destroying the world no matter what it's designed to optimize for. He talked about how, if we are, nevertheless, careful with the development of AGI and hit the bullseye, it could prove to be a panacea for humankind and life on the planet. And he filled us in on how you can make steps towards playing a role in aligning AI with human goals yourself by reading books like The Alignment Problem and Superintelligence. 

Jon Krohn: 02:04:14
As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, the URLs for Jeremie's Twitter profile, as well as my own social media profiles, at superdatascience.com/565. That's superdatascience.com/565. If you enjoyed this episode, I'd greatly appreciate it if you left a review on your favorite podcasting app or on the SuperDataScience YouTube channel. I also encourage you to let me know your thoughts on this episode directly by adding me on LinkedIn or Twitter and then tagging me in a post about it. Your feedback is invaluable for helping us shape future episodes of the program. 

Jon Krohn: 02:04:49
Thanks to my colleagues at Nebula for supporting me while I create content for you and thanks, of course, to Ivana Zibert, Mario Pombo, Serg Masís, Sylvia Ogweng and Kirill Eremenko on the SuperDataScience team for managing, editing, researching, summarizing and producing another incredible episode for us today. Keep on rocking it out there, folks. And I'm looking forward to enjoying another round of the SuperDataScience podcast with you very soon. 
