
40 minutes

Data Science, Artificial Intelligence

SDS 852: In Case You Missed It in December 2024

Podcast Guest: Jon Krohn

Friday Jan 10, 2025

Subscribe on Website, Apple Podcasts, Spotify, Stitcher Radio or TuneIn


AI security, LLM engineering, how to choose the best LLM, and tech agnosticism: In our first “In Case You Missed It” of 2025, Jon Krohn starts the year with a round-up of our favorite recent interview moments. He selects from interviews with Andrew Ng, Ed Donner, Eiman Ebrahimi, Sadie St. Lawrence, and Greg Epstein, covering the latest in AI development, touching on agentic workflows, promising new roles in AI, and what blew our minds last year.
 

It’s our first “In Case You Missed It” of 2025, and Jon Krohn starts us off with a round-up of our favorite clips from December 2024. In interviews with Andrew Ng, Ed Donner, Eiman Ebrahimi, Sadie St. Lawrence, and Greg Epstein, we cover the very latest in AI development, touching on agentic workflows, promising new roles in AI, and what blew our minds last year.

This is a helpful episode for developers looking to navigate the new market in AI. Andrew Ng (Episode 841) walks us through choosing the best LLM for your projects and why you shouldn’t shy away from a hybrid approach. With Ed Donner (Episode 847), Jon explores new ground for AI recruitment, what a day in the life of an LLM engineer might look like, and how to reasonably manage a business’ security needs. In Episode 843, Eiman Ebrahimi carries this torch further, alighting on the realities of the current state of data security and how those working with data can manage security even with the sliding goalposts of open-source and cloud-based systems.

2024 was a prominent year for developments in AI, so our last two selected clips give us time to reflect on the year that has passed. In a clip from Episode 849, Jon and Sadie St. Lawrence share their picks for the most astonishing developments in AI in 2024. And finally, in Episode 845, Greg Epstein gets us thinking critically about the way we perceive technology and its enduring role in our lives. He notes there is a growing fundamentalist belief in technology and advocates for agnosticism concerning the imminence of artificial general intelligence (AGI) and beyond.

As always, remember that you can listen to or watch all of these episodes in full on the superdatascience.com website, or wherever you get your podcasts.

Jon Krohn: 00:00:00
This is episode number 852, our In Case You Missed It in December episode.

00:00:19
Welcome back to the SuperDataScience podcast, I'm your host, Jon Krohn. This is an "In Case You Missed It" episode that highlights the best parts of conversations we had on the show over the past month.

00:00:30
All right, let’s start with how AI developers might worry, understandably, about the costs that go into using an LLM. In my first clip from December, Andrew Ng explains why worrying too much about cost can mean missing out on building fantastic models. This clip is taken from episode 841.

00:00:50
If you're an enterprise, should you be thinking more about always trying to use the latest and greatest LLM, or about grabbing the best agentic workflows? It seems like there's kind of a trade-off there between cost and efficiency. Because yes, while costs have gone down dramatically, say by 80%, you could save a lot of money by working with GPT-4o mini instead of GPT-4o. And so, if I can be using that cheaper GPT-4o mini and getting better results by leveraging a more effective agentic workflow, do you think that's the way to go for the most part?

Andrew Ng: 00:01:25
You know, I feel like as a general suggestion, I would say don't worry about the price of the LLM to get started. And for development purposes, it's honestly not that expensive. I still do a fair amount of coding myself, right? And sometimes I'll be spending all day on a weekend coding, for many hours, experimenting, and then I find that at the end of the day, I just ran up like a $5 OpenAI bill. Right?

Jon Krohn: 00:01:54
Yeah, yeah, yeah.

Andrew Ng: 00:01:55
And now, it is possible, there are some agentic workflows that can get more expensive. It is possible to run up, you know, tens of dollars, maybe low hundreds of dollars. But it's actually cheaper than you would think. And so my advice would be: the hardest thing is just building something that works. That's still pretty hard. So use the best model, build something that works. And after you have something, if we're so lucky as to build something so valuable that its usage is too expensive, that's a wonderful problem to have. A lot fewer people have that problem than I wish. But when we have that problem, we then often have tools to lower the costs. But I think that a lot more people are worried about the price of using these generative AI APIs than is necessarily the case. And the most important thing, I would say, is use the best model, use the latest best model, just build something that is valuable. And only after you succeed like that, and only if it turns out to be expensive, then work on the cost optimization after that.

Jon Krohn: 00:03:01
Okay, nice. And then, if you are lucky enough to get to that stage, maybe there's a balance of experimenting both with lower-cost options, like moving to GPT-4o mini, say, or experimenting with different agentic workflows, and just trying to see which gets you the best results for your use case?

Andrew Ng: 00:03:20
Yeah, yes. And just to be clear, there are teams that have, you know, found that they were spending too much money on these, and they spend time optimizing it. So you can use cheaper models, you can take a smaller model and do something called supervised fine-tuning to optimize it for your own workflow. So there are multiple tools. But I think using these other tools to optimize costs before you've, you know, first built something valuable will most likely be premature optimization. And I would shy away from that.

Jon Krohn: 00:03:49
Andrew’s advice is all about maintaining a balance between low-cost options and the more expensive agentic workflows and then finding that sweet spot between them. What’s amazing about GenAI is that we really can tailor it to our needs with the “supervised fine-tuning” that Andrew talks about. That being said, Andrew also notes how important it is to lay the groundwork first: fine-tune only after you’ve got a model that works at all.
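For anyone who wants to try the cost-optimization step Andrew mentions, here is a minimal sketch of supervised fine-tuning a smaller model through the OpenAI fine-tuning API, assuming the openai Python client and a hypothetical JSONL file of chat-formatted examples from your own workflow; the exact model snapshot name and file format are assumptions, so treat this as a sketch rather than a recipe.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical JSONL file: one {"messages": [...]} chat example per line,
# drawn from the workflow you want the smaller model to specialize in.
training_file = client.files.create(
    file=open("my_workflow_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a supervised fine-tuning job on a cheaper base model
# (the snapshot name below is an assumption; check what is currently fine-tunable).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)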

00:04:12
Andrew also makes it clear that the key to developing a working, marketable AI tool is to listen to the market, and we continue this thread with my guest from episode 847, that’s Ed Donner. The job market is mushrooming for AI and LLM engineers, but not everyone knows what the roles entail. I wanted to know what AI and LLM engineers are expected to do in their day to day. As it turns out, they need an ear for the market, as well.

00:04:39
You talked about how it's a hybrid of data science, software engineering, and ML engineering. What does it involve in terms of day-to-day tasks? What are the responsibilities of this AI or LLM engineer?

Ed Donner: 00:04:51
So the first thing that an AI engineer has to do is select which model, which LLM they're going to be using for a problem. It turns out this is probably the most common question that I get asked, and one you probably get asked a lot too: what's the best model? What's the best LLM? Of course, the answer is there isn't one best LLM. There's the right LLM to use for the task at hand. You have to start by really understanding the requirements. The first step is to drill into the business requirements and use that to guide your decision process. Usually there are at least three major categories of things that you're looking at. First of all, you're looking at the data. What's the quality and quantity of data? Is it structured? Is it unstructured?

00:05:41
You really get a sense of the data you're working with. Then you look at the evaluation criteria. What are you going to be using to decide whether or not this model is fit for purpose, whether it's solving your problem? I'm not so much thinking there of model metrics like cross-entropy loss. I'm thinking of business outcome metrics. In our case at Nebula, are the right people being shortlisted for the right job? But generally, it's thinking about what you're trying to accomplish with your commercial solution and finding the metrics for that. Then the third category is the non-functional stuff: the budget, how much can you spend on training? How much can you spend on inference? What's your time to market? Do you need something next month or can you spend six months building this?

00:06:24
This really will help steer whether you're working with closed source or open source and help you make a lot of these decisions. But often the first step before you do any of that, before you build any LLMs is building a baseline model, something which often isn't an LLM at all. I don't know if you remember back in the day at Untapt, before we were working with deep neural networks at the very beginning, we actually started with a heuristic model that was just like janky code with a bunch of if statements, but it gave us a starting point. I don't think we ever put that in production, but it gave us something we could use and measure against our outcomes. Then after that, I remember you built a logistic regression model, if you remember that.
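As a toy illustration of the "janky if-statement" baseline Ed describes (the keywords and scoring below are hypothetical, not Untapt's or Nebula's actual logic), a rule-based scorer like this sketch gives you something measurable before any LLM is involved:

def heuristic_match_score(profile_text: str, job_description: str) -> float:
    """Toy rule-based baseline: fraction of skill keywords shared by profile and job."""
    # Hypothetical keyword list; a real system would derive these from the job posting.
    keywords = ["python", "machine learning", "sql", "software engineer", "statistics"]
    profile = profile_text.lower()
    job = job_description.lower()
    hits = sum(1 for kw in keywords if kw in job and kw in profile)
    return hits / len(keywords)

# Rank candidates with the heuristic, then compare against real business outcomes later.
print(heuristic_match_score(
    "Senior software engineer, strong Python and SQL",
    "Hiring a Python software engineer with SQL skills",
))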

Jon Krohn: 00:07:10
Yeah, yeah. It's interesting because the time at Untapt when we were building these natural language processing models, these NLP models to figure out who was a great fit for a given role, covered the same period of time that deep learning burst onto the scene and became easy to use. So, while it is potentially a good idea to be building a baseline model and testing that before using some big LLM, which might be overkill for the use case that you have, another constraint a lot of people often have is how much data you have. Although LLMs have turned that on its head, because LLMs can be quite performant with a small amount of data. You can even fine-tune them with a relatively small amount of data.

00:07:57
But it used to be the case historically that if you had fewer data, you would use a simpler model. So, at the very beginning of the Untapt platform, before there were any users, you don't have any real data to work with, so it makes sense to say, okay, let's use a heuristic approach. Now today it's interesting because you could actually just ask an LLM to write the code, or to rate the profile directly. Back then it was feature engineering that you had to do, where you wrote functions to pass over whatever document was being fed into the model to pull out, okay, is "software engineer", is that character string, mentioned in this description? Then okay, we'll put a binary yes in the software engineer column. So, obviously, that's really simplistic, but it did actually go some of the way, even that heuristic model.

Ed Donner: 00:08:45
It helps you, it gives you that baseline, it gives you the sense of this is the low bar, and then as you work towards building a more nuanced LLM, you can see the benefits. You can see the improvement you make on that baseline.

Jon Krohn: 00:08:58
I don't think today you would recommend building a heuristic natural language model.

Ed Donner: 00:09:03
No, no, but a traditional machine learning model perhaps to start with. It's good to start with-

Jon Krohn: 00:09:07
Logistic regression model.

Ed Donner: 00:09:09
... logistic regression. Right, right. For sure.
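The next rung on the ladder Jon and Ed describe, hand-engineered binary keyword features feeding a logistic regression, can be sketched in a few lines of scikit-learn; the toy profiles and labels below are invented purely for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy data: candidate profiles and whether they were a good fit for a role.
profiles = [
    "software engineer with python and sql experience",
    "marketing manager focused on brand strategy",
    "machine learning engineer, deep learning and python",
    "office administrator and scheduling assistant",
]
good_fit = [1, 0, 1, 0]

# Binary keyword features, mirroring the hand-written feature engineering described above.
vectorizer = CountVectorizer(
    binary=True,
    vocabulary=["software engineer", "python", "machine learning", "sql"],
    ngram_range=(1, 2),  # needed so two-word terms like "software engineer" are counted
)
X = vectorizer.fit_transform(profiles)

baseline = LogisticRegression().fit(X, good_fit)
print(baseline.predict(vectorizer.transform(["python software engineer, knows sql"])))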

Jon Krohn: 00:09:13
Cool. All right. So, yeah. So, I interrupted you as you were saying that before you even select an LLM, you build a baseline model that you test, to give yourself an easy baseline, and then selecting the LLM is the next step.

Ed Donner: 00:09:23
Well, so you first have to choose whether you're going to go the closed-source route or the open-source route. That's a big decision point, and I would say that almost always the first answer is start closed source.

Jon Krohn: 00:09:38
Exactly.

Ed Donner: 00:09:39
Like begin, of course, with a model like GPT-4o mini, with something that-

Jon Krohn: 00:09:44
I would recommend beginning with the most expensive model to start, because in the beginning, when you're doing a bunch of prototyping on your own, the costs are going to be so trivial. Use the most beefy model, use that full GPT-4o, see if you can do it there, and then maybe reassess once you're thinking about going into production and you think you're going to have a lot of users. But actually, in our interview with Andrew Ng recently, he said just leave it on that really expensive model, because for almost all proofs of concept that you build, even if you deploy them into production, you are going to be very lucky if it costs you tens of dollars running in production initially. So you don't need to worry about saving dollars or tens of dollars by switching to GPT-4o mini. Anyway, it's just one perspective.
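In code, the approach Jon describes here amounts to keeping the model identifier in one place, so you can prototype on the strongest model and swap in a cheaper one later if costs ever warrant it. A minimal sketch, assuming the openai Python client (the prompt is just a placeholder):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Prototype with the most capable model; change this one constant to "gpt-4o-mini"
# (or similar) only after the prototype proves valuable and costs actually matter.
MODEL = "gpt-4o"

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Summarize this candidate profile in one sentence: ..."}],
)
print(response.choices[0].message.content)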

Ed Donner: 00:10:33
No, for sure. Of course, it makes total sense. I think there are some situations where you would move to open source, and maybe some where you would start with open source. Obviously, the one that's most common, and the one that guided us at Nebula, is where you have a vast amount of proprietary data with nuanced information captured in it. We want to fine-tune a model that we believe will be able to outperform the frontier because we have this proprietary data set, and that's obviously a great reason to do it. You would still probably start with GPT-4o, but then you would use that to train a model. Another situation that's very common is if you have private data.

00:11:17
You have data that's sensitive, and you're not comfortable sending that data to a third party. You're not comfortable for it to leave your infrastructure at all, despite some of the guarantees that you might get from an OpenAI enterprise agreement. In those cases, you would want to use open source, keep the data local, and run it on your own models. There might be some situations where, at inference time, you are very focused on API costs, and so you can reduce the costs by running an open-source model. Then the final thing I can think of is if you're trying to build models to run on device or without a network connection; then again, of course, you would need to work perhaps not with an LLM but with an SLM, a small language model like a Llama 3.2 or something like that.
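For the private-data and on-device cases Ed mentions, a small open-weights model can run entirely on your own hardware. A minimal sketch, assuming the Hugging Face transformers library and a Llama 3.2 checkpoint (the model ID is an assumption and the weights are license-gated; any small instruct model would work the same way):

from transformers import pipeline

# Everything below runs locally; no text is sent to a third-party API.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # assumed model ID; requires accepting Meta's license
)

prompt = "Classify this support ticket as billing, technical, or other: 'I was charged twice this month.'"
print(generator(prompt, max_new_tokens=32)[0]["generated_text"])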

Jon Krohn: 00:12:02
Ed touches on a real concern for users of GenAI tools: How safe are our data? How much do we trust the user agreements laid out by tech companies? How can a business use open-source models safely, and should we expect to settle for less secure structures if we want to keep using LLMs? In episode 843, I asked Eiman Ebrahimi what he thinks about this “trade-off” between security and efficiency.

00:12:28
Looking to the future and the next big trends that we have coming in AI: as we make the shift from generative AI systems like LLMs being so effective, we are getting more and more into agentic AI, where we're trusting those generative systems to work independently, as opposed to just being called upon by us to provide some information. So, with agentic AI, or whatever other kinds of shifts you see in the future, how does that relate to the security-efficiency trade-offs that we've been talking about throughout the episode?

Eiman Ebrahimi: 00:12:59
Yeah, I think it's interesting to observe that the space of applications around LLMs and AI is very quickly not going to be, oh, there's an LLM, there's an application, we just have to protect that. The LLM is part of a broader system, potentially of agents. That seems more and more to be the narrative of how the market is evolving to make use of these models. And what that means from a data-security perspective is, again, we need to think differently about, oh, everything's just going to live on a system that's going to be right next to where the data lives. Because if you've got agents, those agents are dealing with different data sources, potentially in different places. Some of them will be on-prem, some of them will be in a private cloud. Some of them may be in a public cloud served to you by an application provider that needs to run things multi-tenant in order to make their business model work.

00:14:00
So suddenly the thinking about data exposure among these systems will need to be different. And I think it's not just us that is innovating in this space. There's a lot of innovation actually happening in the homomorphic encryption space, and it needs to be considered: where is it applicable? In fact, I think it was just a few weeks ago that Apple announced some new homomorphically encrypted versions of things like information retrieval that they're embarking upon. And there are bits and pieces of the problem that may be solved by being able to do something in homomorphically encrypted mode. In fact, Stained Glass itself is a great, great application to run in homomorphic mode. Because you can imagine, if you're taking plain-text information and turning it into a transformed representation, doing that operation under complete encryption is fantastic. You do that in a completely homomorphic manner, but then you release the rest of the computation, which is potentially very complicated and challenging to implement in a homomorphic fashion, to run on whatever hardware is accessible and most efficient to run it.

00:15:21
So when you ask the question of what data security will look like: I think data security will need to evolve. In these agentic systems, with more complicated use of models as components of a larger system solving a problem, we'll need to focus on where those different components run, what the acceptable exposure parameters of those systems are in terms of the data you need to send them, and how you can manage that in a programmatic manner. And Stained Glass, we believe, is a big unlock for that sort of broader system, and needs to and will combine with these other technologies.
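As a toy illustration of the homomorphic-encryption idea Eiman refers to, computing on data without ever decrypting it, here is a minimal sketch assuming the open-source python-paillier (phe) package, which supports addition over ciphertexts; this is purely conceptual and is not Protopia's Stained Glass or Apple's scheme.

from phe import paillier  # python-paillier: additively homomorphic encryption

public_key, private_key = paillier.generate_paillier_keypair()

# The data owner encrypts values before sharing them.
enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)

# An untrusted party can add the ciphertexts without ever seeing 17 or 25.
enc_sum = enc_a + enc_b

# Only the holder of the private key can decrypt the result.
print(private_key.decrypt(enc_sum))  # 42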

Jon Krohn: 00:16:06
And so, tying it to something that you discussed earlier in the episode: you talked about how, when doing research, you want to be looking ahead to problems that are 5, 10 years away. And you just highlighted there, again, a way that the kinds of solutions you're developing at Protopia will solve the problems of the future.

Eiman Ebrahimi: 00:16:22
Yeah. And we look really deeply into partnerships in order to facilitate delivering these cutting edge solutions in the fastest manner. I think one of the things that we see across the ecosystem is that from the largest of the businesses that have made all of this possible, all the way to the startups that are very active in this space and building a lot of very important technology, partnering and being able to deliver broader solutions is really essential to actually delivering value. And so we've spent a lot of time, again, across the stack. From the provider to the builders of the foundation models themselves, to the application providers, building on top of that, finding ways that we can unlock the use of data from the topmost user all the way down to the infrastructure that needs to crunch on that data in order to create value. How do we plug in across that stack is a big portion of being able to again, deliver the larger value that the industry really does need to survive.

Jon Krohn: 00:17:37
For anyone who is in the long game of developing AI systems and products, it’s essential to find these partnerships. I’m sure we’ll be seeing more agile startups working with established tech powerhouses in the coming years.

00:17:48
In one of our final episodes of 2024, we share in episode 849 the biggest flop of the year in AI and also the biggest wow moment of the year. Surprisingly, between me and my guest Sadie St. Lawrence, one company was responsible for both the biggest flop and the biggest wow moment in 2024.

00:18:06
Let's move on now to our wow moment of the year. Maybe I'll go first on this one since you got the last one. And so, for me, I've already alluded to this: it's o1 from OpenAI. It's interesting that expectations were so high for them that they could simultaneously be both the disappointment of the year and the provider of our biggest wow moment.

Sadie St. Lawrence: 00:18:28
Yeah, I think that just shows how high expectations were, are, and continue to be within AI. I think that all of us in AI now are almost TikTokified. I don't even know if that's a word, but in terms of wanting that quick dopamine hit, if something isn't happening this week, or something isn't wowing us or blowing us away, we just write it off. So it's interesting that you have them as your wow moment when it's also my disappointment, because I think it really just ties into how expectations are high and we are looking for that next dopamine hit in AI every single week, if not every day.

Jon Krohn: 00:19:10
What's your wow moment?

Sadie St. Lawrence: 00:19:11
So my wow moment is not necessarily from an overall use, but just from a human level of when I just listened to it, and this should give you a key to what it is, but I was just truly impressed. And that was with the NotebookLM and the podcasting.

Jon Krohn: 00:19:29
That's number two. That's my number two.

Sadie St. Lawrence: 00:19:30
And the reason why it just was so human to me, and that's why it wowed me is their expressions, the way that they talked. It felt like you and I talking on a podcast. So just from a human level, is it going to change the world? I don't know, but I just thought it was cool. And so that was my wow moment.

Jon Krohn: 00:19:54
Absolutely. I almost had that as my number one as well. And we did an episode of this podcast, number 822, which came out in late September. In that episode I expressed how blown away I was by NotebookLM, and I also aired, in its entirety, a 12-minute podcast episode about my PhD dissertation, which is so boring, but these fake podcast hosts did manage to make it seem exciting. And so I included it in full in the episode, and people were blown away. That must be one of my most commented-on posts of the year, with a large number of people reaching out and saying, "Wow, I hadn't heard of this, or I hadn't used this, and now I have used it and it blew me away. Here's what I tried." So yeah, that was really cool. I think it was a wow moment for a lot of people.

Sadie St. Lawrence: 00:20:49
Yeah. I'll add one sub-wow moment in there which may not get talked about.

Jon Krohn: 00:20:53
Sub-wow.

Sadie St. Lawrence: 00:20:54
I hope we have a sound effect for that too, sub-wow, or maybe it's its own sound effect. But I recently got a Tesla, and the full self-driving on that is incredible. And I was just blown away, because as a kid, my mom was like, "Hey, you really need to learn to drive and do all these things." And I told her one day, "I will have somebody who drives me around." I did not think it would be a robot with full self-driving, but here we are today. So just to have childhood memories of saying something and then to be living it today is truly incredible.

Jon Krohn: 00:21:29
That also, I've got to add my sub-wow moment, which is-

Sadie St. Lawrence: 00:21:32
Sub-wow.

Jon Krohn: 00:21:32
 ... which is Waymo. I had my first Waymo experience this Northern Hemisphere summer, and that was really cool, because I think that's another level of autonomy beyond Tesla's full self-driving, right? With Tesla's full self-driving, you need to have somebody sitting behind the wheel. But to have the Waymos now in San Francisco, and at the time of recording also in Scottsdale, Arizona, I think, you can just use the Waymo app and a driverless car comes up, picks you up, you get in it and it drops you off. I almost want to make that my biggest wow moment of the year. I don't know how I didn't think of it right off the bat, but I mean, because of that physical presence, because that's...

00:22:22
I come back to the Waymo example a lot when people find out I work in AI. A lot of people completely outside of AI will say things like, "Oh, contentious," and I'm like, "Really? Oh, I wasn't aware it was so contentious." And they're like, "Well, yeah, I'm a creative, or I have lots of friends who are creatives," and I can see that, okay, yeah, I can see why it's so contentious. But for me, I guess, I'm so often seeing big changes and benefits. The Waymo example is one that I come to frequently to say, as opposed to something that's happening on your computer screen, this is a physical, very obvious manifestation of AI. When you experience that, when you call a Waymo car, get into it, and it drops you off somewhere, and you see the steering wheel spinning all on its own and making great driving decisions, that makes it clear that in the future, in the not-too-distant future, we don't need drivers. We don't need human drivers.

00:23:42
In the United States, in most states, the number one job is truck driver. And there are tons of related jobs that support the truck driver, people working in cafes along the roadside and that kind of thing. You don't need that. Self-driving cars don't need cafes or motels. And so it's going to make a really big impact. And given the upheaval that will be caused by this, there are things that we need to be doing as a society in terms of retraining people, because this AI shift should end up being just like all other automations in the past: it should provide people with more interesting work than ever before. And I mean, there's talk about this time being different, but all past increases in automation have led to more employment and lower unemployment. So I don't know. I've touched on a lot of topics there, but I haven't let you speak for a long time. So, Sadie.

Sadie St. Lawrence: 00:24:45
No, I was really lucky this year to hear one of the co-CEOs of Waymo talk, and one of the things that she said was, "We are building the single best driver." And I found that really interesting, because she talked about how they have a fleet of over a hundred thousand cars out driving, but she talked about it as one driver. They talk about it as a single brain, a single driver brain, and they're only building one. And that just resonated with me so much, because it really gives us perspective on what intelligence and machine intelligence can do at scale, right? You only need to build one of the best single drivers and you can change a whole industry. And so I think that's something just to think about. Get really specific in the models that you're building and the domain that you're building for, because when you do that at scale, it's incredible.

Jon Krohn: 00:25:46
From my and Sadie’s round-up of last year’s highs and lows in AI, we move to a novel perspective of tech’s future. In episode 845, I speak to Greg Epstein about his book, Tech Agnostic, in which he takes aim at “global technology worship” and advocates for a more moderate approach to emerging technologies. In this clip, Greg and I discuss the problems of adopting technology with the kind of fervour we might consider unenlightened or even dangerous in other contexts.

00:26:11
So again, Tech Agnostic is the book that we're talking about here. And in 2003, the famous technologist and author Jaron Lanier wrote that, "Artificial intelligence is better understood as a belief system instead of a technology."

00:26:30
AI still hasn't delivered on sci-fi promises of autonomous robots with intelligence indistinguishable from ours. But in the last two years, for me, the watershed moment was GPT-4's release in March of last year. That blew my mind. It made me think, okay, this might happen in our lifetime, and some people think it's going to happen soon, very soon. You mentioned how Ray Kurzweil said the singularity is near, and now it's nearer. He predicts that we'll have artificial general intelligence before 2029. Sam Altman, OpenAI's CEO, thinks next year, 2025.

00:27:08
So your book Tech Agnostic delves into the implications of portraying tech as a messianic force like this. Could you elaborate on these kinds of narratives, especially regarding the singularity as being this kind of religious event that's coming soon? I guess it's kind of like, what is it? It's like the end of days?

Greg Epstein: 00:27:29
Yeah. I mean, it is very much like the end of days in Revelation, which is the English word for the Greek word apocalypse.

Jon Krohn: 00:27:40
I didn't know that. Revelation means apocalypse. Wow.

Greg Epstein: 00:27:44
Yeah, or apocalypse means revelation. The new world will be revealed. And for some, that's good, that's going to be fun and pleasurable, and for others, that's going to be utter doom and disaster, right?

Jon Krohn: 00:28:02
So given what you've just said, and how this coming of the singularity and AGI can be perceived as this kind of end of days, why do these narratives exist? Why do we choose to believe them? And actually, I've got to say that for me, I don't know if we're going to have AGI next year, like Sam Altman thinks, or in 2029 like Ray Kurzweil thinks, but since GPT-4 came out, I think it's probably likely in my lifetime. And I also do believe that whenever it arrives, it won't be one event, because I think it's oversimplistic to say that it's one event.

00:28:44
But gradually over time, over the coming years or maybe over the coming decades, AI systems seem like they could become so much more intelligent than us that we might start to just trust them to run off with their own processes, because we're like, "Well, you know what? They should just be governing, because we've run all these simulations and they do it way better than us. We've run these small pilots, they do it way better than us." And then AGI systems go off and maybe very rapidly create artificial superintelligences that are far beyond us. They could theoretically have a kind of intelligence that is far beyond what could be explained to us, in the same way that, no matter how much time I spend trying, I can't explain partial derivative calculus to a chimpanzee. A chimpanzee is almost as smart as us, but I have no chance of explaining partial derivative calculus to one. And in that same way, the artificial superintelligence that could come very hot on the heels of artificial general intelligence may have ways of understanding the world that just don't make sense to us.

00:29:56
But so for me, I guess without really realizing it until we had this conversation today, I suppose in my mind I am kind of thinking of the singularity as a kind of religious event, because it's so difficult to predict what's going to come past it. And I'm just kind of hopeful, I guess, as an optimistic person, that it's going to be good for most people on the planet, if not maybe even everyone.

Greg Epstein: 00:30:24
Let's just take a look at what you've just described and analyze it as though we were breaking down the content and the structure of what you just laid out in a divinity school. What you've laid out sounds very much like the structure of a kind of new religious vision for society. You're talking about, my words now, I'm paraphrasing you very loosely because I'm trying to make a point, but it sounds to me like a kind of slow-moving, multi-stage rapture, where the people who are good, who should benefit from this new all-powerful technology, this force greater than us, than any of us, and all of us even, will rise in some literal or metaphorical way to benefit from such a thing. Anybody that does not, I suppose, deserve to benefit from such a thing, maybe because their behavior is bad, they're mean, they're naughty, maybe they're not the right race or gender, I don't know. In other words, the all-seeing, all-knowing technology will mete out justice, will it not? I mean, if not, right, there's two choices here.

Jon Krohn: 00:31:57
I guess it has to. It has to.

Greg Epstein: 00:31:59
There's two choices here: either the thing has to be completely capricious and really nasty, or it has to be positive and healthy. And so I guess the point that I want to make here is that people have always wanted and needed a system of thinking about and interacting with the world so as to cope with the fact that the world is so uncertain. We don't know what is going to happen in the long-term future. We can tell ourselves we see trend lines, and we might, but we don't have any proof of what the world is going to look like in 10 or 20 years. And so to a certain extent, if we want to say that we do, that's a faith proposition. I believe that the world is going to look this way, based on evidence, but it's still a belief.

Jon Krohn: 00:33:33
Yeah, I guess, am I tech agnostic if I am not confident about what tech will bring about? And also, I guess, my instinct would be that, just like tech today ends up being positive and helpful to some people and unhelpful and negative to other people, it will probably continue to do that in the future. It's not like we've reached some definitive utopia or dystopia. It's just that, kind of like the world today, there are some people benefiting a bit in some ways and benefiting less in other ways from change. Does that make me tech agnostic?

Greg Epstein: 00:34:15
Well, I mean, certainly I am advocating for people to be more tech agnostic, and I think some of what you're saying is in that category, this idea that we're not sure how things are going to go. But what I really want to emphasize about why I think tech agnosticism is an important concept in this day and age is this: I was describing earlier this kind of confidence or this strong faith that certain kinds of leaders, like a Trump, will have in this day and age. He's a great example of this because he's a felon convicted on 34 counts whose own former chief of staff, John Kelly, who by the way was a general in the Marine Corps, compared him to fascism and fascists, and yet we voted for him and elected him to the highest office in the world, in large part because he comes across as really confident. And in a very, very fundamentally uncertain world, that confidence has a lot of value to some people.

00:35:47
And so what I'm advocating for is an alternative model where we recognize that there's a beauty and a dignity and an honorableness in not knowing, in not being certain about what the future holds, and not being certain that we have to invest $7 trillion immediately, as Sam Altman is begging us to do. He says we've got to do all we can to bring about the tech and AI rapture, the coming of the AI god, the miraculous, abundant world. And I'm quoting there: words like "miraculous" and "abundance" are biblical-sounding terms that he himself uses.

00:36:41
I think he needs the $7 trillion because OpenAI is absolutely burning cash right now, and if he doesn't get that kind of investment, his baby, his company, might legitimately go belly up because it has no way whatsoever to make a profit right now. So just push that timeline infinitely into the future with this investment that will make it too big to fail, right? That's why the SoftBank folks actually say that we need $9 trillion in AI investment, don't they? But anyway. Or is it the Bitcoin people that say the $9 trillion? I lose track, somebody's calling for $9 trillion. Anyway, so in that world where Eric Schmidt is saying that we're falling desperately behind the Chinese in the AI arms race, and yet AI robots are running around ... How do we say this? The AI robots are running around ... What are they doing, the AI robots? The Chinese AI robots are now rebelling against their ... One robot actually convinced the other robots to run away from their rulers.

00:38:26
This is an actual story out of Chinese AI of late. And so, if that is what you get by moving quickly, maybe there's some real benefit to moving slowly. Maybe there's some real benefit to losing the arms race, and gaining ourselves in the process.

00:38:51
Because maybe the ultimate goal of being human is not victory, and conquering, and colonizing the stars. Maybe it's looking at one another, and at ourselves, and appreciating humanness, appreciating the small, everyday, slow process of loving one another, caring about one another, forming a more compassionate society, so that when we eventually project our digital consciousness into the far corners of the universe, trillions of us, AI robots out in space powered by unseen stars, and what have you, that when we eventually do that, maybe what we don't want to project is the jerks that we are now to one another, all too often, out there.

Jon Krohn: 00:39:51
All right, that's it for today's In Case You Missed It episode. To be sure not to miss any of our exciting upcoming episodes, I hope you'll subscribe to this podcast if you're not already. But most importantly, I hope you'll just keep on listening. Until next time, keep on rocking it out there, and I'm looking forward to enjoying another round of the SuperDataScience podcast with you very soon.
