
85 minutes

Life Philosophy, Data Science, Artificial Intelligence

SDS 855: Exponential Views on AI and Humanity’s Greatest Challenges, with Azeem Azhar

Podcast Guest: Azeem Azhar

Tuesday Jan 21, 2025

Subscribe on Website, Apple Podcasts, Spotify, Stitcher Radio or TuneIn


How can we use AI to solve global problems like the environmental crisis, and how will future AI start to manage increasingly complex workflows? Famed futurist Azeem Azhar talks to Jon Krohn about the future of AI as a force for good, how we can stay mindful of an evolving job market, and Azeem’s favorite tools for automating his workflows.


Thanks to our Sponsors:



Interested in sponsoring a Super Data Science Podcast episode? Email natalie@superdatascience.com for sponsorship information.

About Azeem
Azeem Azhar is considered one of the world’s most influential thinkers on the impact of technology on humanity. The founder of the 'Exponential View' newsletter and podcast, he is also a researcher, investor and entrepreneur. His invaluable insight and original perspective mean that he regularly advises politicians, business leaders and the public, notably on the challenges and opportunities presented by AI and other emerging technologies. Azeem’s first book, "Exponential: Order and Chaos in an Age of Accelerating Technology", was met with widespread acclaim. He is a Visiting Fellow at the Oxford Martin School and an Executive Fellow at Harvard Business School. He is the Co-Chair of the Global Future Council on Complex Risks at the World Economic Forum.

Overview

“We need data scientists because there is so much data,” says Azeem. As we use more smart devices and generate more data, we will need exponential technologies that help us find the patterns and pathways for managing and analyzing information. He and Jon note that these technologies will continue to get cheaper, and that processing costs must stay on that downward trajectory even as engineers contend with quantum effects at the smallest chip scales, the next hurdle for ingenuity to clear. Azeem says people need to make sense of the world to prepare for these new technological horizons, which means thinking beyond a linear path and closer to a logistic S-curve.

As an investor in more than 50 tech startups, Jon was eager to ask Azeem for his views on how companies can keep pace and stay relevant. To keep up with AI-first companies, organizations can focus on prioritizing their learning and practice. AI-first companies emphasize experimentation with tools that might be a little rough around the edges, and that give you the space to play and develop your ideas without influencing you too much. Azeem is keen for us to get used to the idea of expanding our thinking beyond disciplinary silos because AI will soon act as a bridge between disciplines and industries to help us in myriad ways beyond our comprehension.

Listen to the episode to hear how Azeem optimizes his time with GenAI and Azeem’s vision for how AI might realistically tackle environmental breakdown.

In this episode you will learn:
  • (05:43) Azeem Azhar’s vision for AI’s future 
  • (14:16) How to prepare for technological shifts 
  • (20:35) How to be more like an AI-first company 
  • (38:46) The tools Azeem Azhar uses regularly 
  • (50:09) The benefits and risks of transitioning to renewable energy 
  • (1:09:28) Opportunities in the future workplace 

Items mentioned in this podcast:

Follow Azeem:


Episode Transcript:
Jon Krohn: 00:00:00
This is episode number 855 with the famed futurist, Azeem Azhar. Today’s episode is brought to you by ODSC, the Open Data Science Conference.

00:00:16
Welcome to the SuperDataScience podcast, the most listened to podcast in the data science industry. Each week, we bring you fun and inspiring people and ideas, exploring the cutting edge of machine learning, AI, and related technologies that are transforming our world for the better. I'm your host, Jon Krohn. Thanks for joining me today. And now, let's make the complex simple.

00:00:49
Welcome back to the SuperDataScience podcast. Today, I'm over the moon to have the famed futurist, Azeem Azhar, joining me on the show. Azeem is creator of the invaluable Exponential View newsletter, which has over 100,000 subscribers. He hosts the Exponential View podcast, which has had amazing guests, including people like Tony Blair and Andrew Ng. He hosted the Bloomberg TV show, Exponentially, where his guests included the likes of Sam Altman.

00:01:14
He holds fellowships at Stanford University and Harvard Business School. He was founder and CEO of PeerIndex, a venture capital-backed machine learning startup that was acquired in 2014. He holds an MA in PPE, which is politics, philosophy, and economics from the University of Oxford.

00:01:30
He also wrote the bestselling book, The Exponential Age. I will personally ship five physical copies of Azeem Azhar's Exponential Age book to people who comment or reshare the LinkedIn post that I publish about Azeem's episode from my personal LinkedIn account today. Simply mention in your comment or reshare that you'd like the book. I'll hold a draw to select the five book winners next week, so you have until Sunday, January 26th, to get involved with this book contest.

00:01:58
Today's episode should appeal to absolutely any listener. In today's episode, Azeem details the exponential forces that will overhaul society in the coming decades, why AI is essential for solving humanity's biggest challenges. He talks about his own cutting edge personal use of AI agents, LLMs, and automation, and he fills us in on why there's no solid ground in the future of work but how we can nevertheless adapt to the coming changes.

00:02:24
All right. You ready for this exponential episode? Let's go.

00:02:34
Azeem, welcome to the SuperDataScience podcast. It's surreal to have you on the show because I've been a huge fan of yours for nearly a decade now. I was a subscriber to your Exponential View newsletter nine years ago and now it's got over 105,000 subscribers. Azeem, how are you doing today?

Azeem Azhar: 00:02:51
I'm doing super well and excited to have one of the OGs with me. Thank you so much, Jon. Appreciate it.

Jon Krohn: 00:02:57
Nice. Yeah. Where are you calling in from today, Azeem?

Azeem Azhar: 00:02:59
I'm up in Hampstead in North London, which is where I live when I'm not on a plane visiting the US.

Jon Krohn: 00:03:06
Nice. Yeah.

Azeem Azhar: 00:03:07
Yeah.

Jon Krohn: 00:03:08
Well, in addition to the newsletter that you have, the Exponential View there, you also have a bestselling book called The Exponential Age and you've built a whole brand out of the word exponential such that you could even identify not just as a futurist but as an exponentialist. Could you define for us what this exponentialism is and elaborate on how this perspective shapes your analysis of technological trends and societal changes?

Azeem Azhar: 00:03:38
Yeah, absolutely. I mean, exponentialism is why we have data scientists. Exponential technologies are ones which get much, much better and much cheaper every year. The most important of those, of course, has been silicon chips and with silicon chips, hard drives, and data storage and bandwidth cheaper and cheaper every year. Because they get cheaper, they get more widely used in our economies and in our everyday lives.

00:04:07
The notion of an exponential technology didn't really make sense until the mid to late-seventies, but now we're at this strange period where lots of technologies have that characteristic where they get better and better and better and faster and faster and faster.

00:04:22
The reason I say exponentialism is why we have data scientists is because we need data scientists because there is so much data. On my home network, I send a terabyte of data around every couple of weeks because there are so many devices just talking to each other and that comes as a result of exponential technologies.

Jon Krohn: 00:04:45
Yeah, it is wild. It's something that I talk about on the show a lot. Things like dramatically cheaper compute, dramatically cheaper data storage, and exponentially more data being available across all those kinds of devices that you're describing, self-driving cars, in-home sensors, industrial sensors. For people like listeners to this show, this provides a wind at our back in terms of what the out-of-the-box foundation AI models we'll be able to use are, as well as the ones that we'll be able to fine tune for our specific purposes.

00:05:19
Yeah, it is a really exciting trend and something that I have... If people have been listening to the show for years, they would've noticed that when I first started hosting the show four years ago, a question that I saved for my really big guests that I thought would have the most mind-blowing answers, I'd say to them, "We have this wind at our back. We have all these exponential factors. It seems like the world will be dramatically different in the coming decades. What kind of vision do you have for the world?" The crazy thing was, Azeem, these people, some of the best known names in data science and AI, they often had no answer at all.

00:06:01
It was kind of, they were...it seems like, and this makes sense given their role... If they're an academic, they are thinking one to two years in the future in terms of their next grant application. What do they need for that grant application? It's exciting for me to have you on the show because that is the kind of timeline that you are thinking about all the time. It's 10 to 30 years in the future. So where do you think we're headed? Is this going to be a horrible dystopia? Are we going to have a utopia for maybe many people on the planet or maybe even theoretically the whole planet in our lifetime? Where do you see these exponential trends taking us in the coming decades?

Azeem Azhar: 00:06:49
I think it's really hard to answer the question that you asked, the academic data scientists, because it's just really hard to look out into that future and to extend the curve that we extend. I think, back to my last startup, which was a machine learning company, we looked at lots of data. We were acquired in 2014.

00:07:16
When I hired my first data scientist in early 2010, that job didn't really exist. There were a handful of people with that job title. We weren't sure what to call it. Do you call it computational statistics? Do you call it machine learning? Do you call it data cleaning? And if listeners go to Google Trends, Google News trends, which I'd encourage them to do, and set it to worldwide in 2004 to present and type in the word data science, you will see that in July 2010 when I hired my first data scientist, it says less than one. It's basically zero searches and today it's 94. Read into that what you will.

00:07:58
It's really difficult, I think, to look forward that 10 or 15 years to say these things will be true and the world will be that different. The way that I do this is I try to take people back to things that they know and recognize. I mean, do you remember the world when you could hold an entire set of customer records in a hundred megabytes of RAM? The answer is, depending on how old you are, you might. Over my shoulder, for people who are watching on the video, I've got my second computer, which has 32 kilobytes of RAM on it.

00:08:38
When you ask me the question, "Where are things going over 10 to 30 years," I think that there are certain technology processes that we can have some confidence over. Those processes being that we will continue to develop, deliver more compute for lower cost, which will mean the amount of storage and data we generate will increase really, really significantly, and that we can draw those conclusions.

00:09:06
I did a calculation looking at the total number of, I guess you could call them FLOPS, computer instructions per second in the world from 1972, which is generally where I start because it was about the time the Intel 4004 was released. The F-14 Tomcat had the first computer, sort of CPU, integrated CPU, and it's also the year I was born. The amount of FLOPS in the world has grown by about 65% a year compounded for 52 or 53 years. The question to ask yourself is does that continue or does it stop? Which is a more radical assumption? I think it's more radical to assume it stops.

00:09:53
When you ask me where are we going to be in 10 to 30 years, I have a very, very simple... We can call it a heuristic, which is let's just draw the line up for a bit longer and use that as our baseline. What is 10 years? It's a 10 to the 6 increase in compute. It is a vast increase in bandwidth. It's a vast increase in the amount of data that we're generating. Where are we going to generate it from? Well, ask yourself the question today. If you're someone sitting, listening to this podcast, and you've got any number of Kafka queues and you've got Datadog running and there's billions of events coming through every hour, go and ask yourself: well, 15 years ago, what number were you dealing with per hour? It may have been 100.

00:10:40
You've already lived through it. Let's extend that out and recognize that that's where we will likely go. We can then add some criticality to the question afterwards, but I think that's a good place to start.
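Azeem's back-of-the-envelope can be sketched in a few lines of Python. The 65% annual growth rate is the figure he cites for worldwide compute since 1972; everything else here is purely illustrative:

```python
# Compound growth in worldwide compute, assuming the ~65%/year rate
# Azeem cites for 1972 onward (illustrative, not a forecast).
rate = 0.65

def growth_factor(years, rate=rate):
    """Total multiplier after compounding at `rate` for `years` years."""
    return (1 + rate) ** years

for years in (10, 30, 52):
    print(f"{years} years -> ~{growth_factor(years):.2e}x")
```

Worth noting: at 65% a year, the multiplier after a single decade works out to roughly 150x, while a multiplier on the order of 10 to the 6 takes closer to 30 years of compounding.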

Jon Krohn: 00:10:50
I think it's great. There's no question for me. You framed that well, actually. We had a pre-call before doing this interview and you made the same point there, which is: what is more likely, that this curve that's been going on for decades is going to continue, or that it's going to end? And of course, it seems more obvious that it's going to continue even though it is that dramatic, the 65%, like you said, increase year over year in FLOPS. People will say things like, "Oh, well, Moore's law is coming to an end because electrons will start jumping from circuit to circuit if we try to make the gates any smaller on chips." But of course that doesn't matter because the processes that are creating the chips will continue to get cheaper, and we'll be able to have more and more operations happening on larger chips, or you can have more and more GPUs running in parallel and come up with ways of having information flow quickly in parallel. This trend won't end just because we can't continue to shrink transistors.

Azeem Azhar: 00:11:57
I completely agree with you. Now, there is a criticism, a challenge, which is it's 2025. The turkeys are going to feel the same way all the way up to the 26th of November 2025, which, as you know, is the day before Thanksgiving, and then the trend of being loved and looked after and fattened ends very abruptly. Of course, maybe that does exist, but I think you put your absolute finger on it there, which is that this isn't magic. This is a series of underlying processes, so we need to deliver processing more cheaply, and we're struggling with quantum effects. Instead, we scale out and we parallelize.

00:12:40
And then we have issues with interconnects between GPUs, so we build InfiniBand or whatever it is to increase the speed with which we move things across. The question is what drives that? What drives it is human ingenuity and financial incentive and growing markets. Soon, it will also be AI support to help us solve those problems more and more.

Jon Krohn: 00:13:06
Excited to announce, my friends, that the 10th annual ODSC East (Open Data Science Conference East), the one conference you don't want to miss in 2025, is returning to Boston from May 13th to 15th! And I'll be there leading a hands-on workshop on Agentic AI! Plus, you can kickstart your learning tomorrow! Your ODSC East pass includes the AI Builders Summit, running from January 15th to February 6th, where you can dive into LLMs, RAG, and AI Agents, no need to wait until May! No matter your skill level, ODSC East will help you gain the AI expertise to take your career to the next level. Don't miss out: the early-bird discount ends soon! Learn more at odsc.com/boston.

00:13:50
Following on from this idea of exponential growth, humans, and maybe turkeys as well, seem to be poor at imagining that they're on an exponential curve, and so Ray Kurzweil, for example, another famous futurist, said that our intuition about the future is linear, but the reality of IT, as we've already been discussing in this episode, Azeem, is exponential.

00:14:21
You similarly in your book, you talked about, in chapter three, how, for example, the COVID pandemic, when that was unfurling in 2020 around the world, it was experiencing exponential growth. I experienced that in real time: I was refreshing, many times every day, probably a hundred times a day, to see how many more infections there were in New York state.

00:14:49
It was very difficult for me even as somebody with a lot of statistical background, been a data scientist for a decade. Even for me, it was difficult to process how this exponential change was happening. Given the difficulties that even experts face in predicting exponential growth or being able to have intuitions about exponential growth, how can businesses, policymakers, our listeners better prepare for future technological shifts?

Azeem Azhar: 00:15:20
I agree, it's really difficult to normalize and rationalize in your head the speed of that change. I do think that it's quite commonplace. A very simple exponential process is compound interest. Virtually, all of us start saving for our pensions or 401(k)s or whatever it happens to be too late. The right time to start is when you are 23 and you just put 10 bucks a month away knowing it's going to compound. I think many of us are guilty of that. I am as well.
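Azeem's pension example can be sketched numerically. The 7% annual return and the $10 monthly contribution below are illustrative assumptions, not figures from the episode:

```python
def future_value(monthly, annual_rate, years):
    """Future value of a fixed monthly contribution with monthly compounding."""
    r = annual_rate / 12   # monthly rate
    n = years * 12         # number of contributions
    return monthly * (((1 + r) ** n - 1) / r)

# $10/month from age 23 to 65, versus starting ten years later,
# assuming a hypothetical 7% annual return.
early = future_value(10, 0.07, 42)
late = future_value(10, 0.07, 32)
print(f"start at 23: ${early:,.0f}; start at 33: ${late:,.0f}")
```

Under these assumptions the early starter ends up with more than double the late starter's pot, despite contributing only about 30% more, which is the compounding effect Azeem is pointing at.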

00:15:55
I think there are companies who have internalized this possibility and I think the technology industry as it comes out of the Bay Area has very much done that. They have relied on understanding that Moore's law keeps driving prices down and that you aren't really going to systemically run out of capacity or compute. You may have crunch periods where you can't onboard the machines or the hard drives or the storage fast enough, but in general you won't do that.

00:16:32
I mean, I think that one of the ways that you have to understand this is to understand the processes and understand that these processes absolutely exist. I think it's really unhelpful, when you're trying to make sense of this world, for people to think in linear terms. I still see it, and I'm sure you may see it when you're helping clients or people at work: you see their business plan and it shows a fixed increment of growth, and nothing grows that way. Everything follows a phase of a logistic S-curve, where you have an exponential phase that tails off. Nothing is linear except for our birthdays, one to two to three to four.
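The logistic S-curve Azeem contrasts with linear growth can be written down directly; a minimal sketch with illustrative parameters:

```python
import math

def logistic(t, carrying_capacity=1.0, growth_rate=1.0, midpoint=0.0):
    """Logistic S-curve: near-exponential early on, flattening toward the ceiling."""
    return carrying_capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

# Early on the curve looks exponential; later it saturates.
for t in (-4, 0, 4):
    print(f"t={t:+d}: {logistic(t):.3f}")
```

The early portion of this curve is what feels like runaway exponential growth; the tail-off is the saturation phase that linear business plans miss in the other direction.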

00:17:15
I think a lot of the tools are all to hand but it is very difficult. And what you need to do at these moments is perhaps go back to first principles thinking and perhaps say, "Look, the heuristics we've used were just that they were really helpful in a world that doesn't move as quickly, but in a world that moves this quickly we have to go back to heuristics." Sorry, pardon me, first principle thinking.

00:17:40
The thing that's so funny, Jon, is that most people who are listening to this podcast will have... Beyond their experience with COVID, they will have lived through exponential technologies because they will have lived through upgrading their iPhone or their Android phone every two years and getting twice as much compute for the dollar they spend. They will have lived through, if they're data scientists, their data array or their data lake going from a gigabyte to 100 gigabytes to 10 terabytes to a petabyte and beyond. They've literally witnessed it and yet it still becomes quite difficult. I think going back to first principles is a really helpful way of doing that.

Jon Krohn: 00:18:23
Yeah, yeah, yeah. And so... In terms of something that people could be doing, this idea of first principles in this instance here, that's literally thinking about sketching for yourself those kinds of changes and thinking about how you adapt it to those changes and making projections based on that.

Azeem Azhar: 00:18:42
Yeah, I think that's a really good way of doing it. I mean, when I do my own planning and build models of where the business might go or where usage might go, and I've done this for more than 20 years, I've never put in linear increases like, "Oh, it'll go up by 20. It'll go up by 20." I've always gone in and put in a dynamic percentage because a percentage compounds. One of the things that drives the exponentials is feedback loops.

00:19:17
The reason something accelerates... I mean, let's think about silicon chips. Why did chips during the '80s and the '90s and the 2000s get better and faster? It was because there was a feedback loop. When Intel came out with a new chip, it allowed Microsoft to deliver better tooling on Windows, which gave people an incentive to upgrade their computers, which put money in the system, which allowed Intel to develop a new chip, which allowed Microsoft to push out more features. That feedback loop accelerates.

00:19:50
Sometimes when I do my planning, I will also try to put those types of feedback loops in, because an outcome of a feedback loop will often be a curve that ultimately has that quality of taking off. In a lot of places, you end up with these linear forecasts and if you're sitting there and you're thinking, "Listen, I need to put in my budget request for next year for storage on S3 and I also need to give some indication of what's going to happen the year after and the year after that and the year after that." If it's growing linearly, I think you're making incredibly extreme assumptions based on what evidence has shown us. So you have to go back and start to say, "How do I put in more realistic assumptions even if it's going to freak the CFO out?" Because that's what history has shown us.
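Azeem's point about budget forecasts can be made concrete in a few lines. The storage figures and the 40% growth rate below are hypothetical, chosen only to show how quickly a fixed increment and a compounding percentage diverge:

```python
# Hypothetical S3 storage forecast: fixed linear increment vs.
# a compounding percentage (all numbers illustrative).
base_tb = 100      # starting storage, in terabytes
linear_step = 20   # fixed +20 TB per year
growth = 0.40      # 40% per year, compounding

for year in range(1, 5):
    linear = base_tb + linear_step * year
    compounded = base_tb * (1 + growth) ** year
    print(f"year {year}: linear {linear} TB vs compounded {compounded:.0f} TB")
```

By year four the compounding forecast is roughly double the linear one, which is exactly the kind of gap that surprises a CFO who only ever sees fixed-increment plans.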

Jon Krohn: 00:20:42
One thing that's interesting that you mentioned there was corporations. Organizations like Intel and Microsoft in the '80s and '90s, they were unquestionably at the forefront of hardware and software. In recent years, something that you and I talked about in our pre-call was how AI-adopting companies have had much faster growth, maybe even exponential growth in recent years relative to companies that haven't adopted AI. I'd love to hear more about this exponential gap and the organizations that have been doing it right and everyone else who's being left behind and what we can do to close that gap. What kind of strategies we could employ as an organization, our listeners' organizations can employ to try to catch up to or eventually get on the same trajectory as AI-first companies?

Azeem Azhar: 00:21:34
Sure. Well, absolutely. I mean, I think the exponential gap is a really important concept. We'll talk about it conceptually and then I'll try to get practical. Conceptually, it's just that the technology races away faster than the norms and the rules and the processes that we have can handle it. I think the easiest example I can give is what are the rules about using a smartphone at dinner. Because when I grew up, there were no rules because there were no smartphones. And then 2007, smartphones arrived, and by 2014, parents are screaming at their children and we're trying to make up the rules as we go along.

00:22:14
That's the exponential gap in a really prosaic way that we all understand either as parents or as kids who've lived through it. What's happening with AI, I think, is really an interesting example. The data point you mentioned about AI companies growing faster is that if you look at fast-growing software as a service or SaaS companies and you compare them to fast-growing AI companies, SaaS companies took about 60 months to get to $30 million annualized revenues, whereas AI companies are taking about 20 months to get there. They're going much, much faster.

00:22:55
You think about a company like Anthropic, which is not even the biggest AI foundation model company. It got to a billion dollars in sales last year in 2024, which I think is its third or fourth year of operations, which is pretty remarkable. I think that there is a real opportunity when you apply AI to absolutely drive and change your business, but let's be clear. These companies are... On the one hand, they're either making the tools and so they're growing really quickly because everyone needs the tools. That's a case of OpenAI or Anthropic. Or on the other hand, they're brand new companies operating in CRM or data cleaning or sales automation that have been built from scratch using AI and are just able to deliver better products because they have a better technology.

00:23:53
I think that kind of lays out the ground, but I'm not sure if I directly tackled the question you asked me, which was about the gap. Do you want to put it back to me and I'll come back and try to answer that?

Jon Krohn: 00:24:04
The idea is, is there anything we can do? As a lot of our listeners will come from... Some of them probably do come from Anthropic, for example, but others will come from firms where you see that kind of exponential growth that these AI firms are having. I wonder if there's strategies that we can employ in our own organizations that would allow us to reap some of that exponential growth and catch up to Anthropic in some ways on trajectory.

Azeem Azhar: 00:24:35
Yeah. Well, I wish we could all. At least if your business and mine can catch up with Anthropic, you and I can meet in the Maldives next Christmas.

Jon Krohn: 00:24:43
Yeah. I think three years to a billion dollars in revenue

[inaudible 00:24:48]

Azeem Azhar: 00:24:48
Yeah, that would be fun. There's a lot we don't know about AI as a technology in the same way there was a lot that we didn't know about the internet back in the mid-90s, '93, '94, '95 when I started to work with it. The companies and the people who did well were the people who, by and large, started to get into it early and we learned what good looked like and what good could be like.

00:25:17
One of the things that we know about AI is that unlike, say, the Metaverse four years ago, it's a real, real thing and there's tons of evidence now that it is real and it's not going to disappear. But the other thing that's true is that it's changing really rapidly. I'm sure that the listeners have been thinking, "Should I be using Claude Sonnet to support my coding or ChatGPT or the new o1 model or DeepSeek or where do I go?" It's changing so rapidly, but the opportunity you have actually is to build the muscle of working with AI.

00:25:56
And the muscle of working with AI is both what is it to have these cognitive assistants that are getting better and better at what they do, but it's also how do you build systems that adapt to a world where the underlying tools change so significantly. And it's also about how do you learn about where they're going to be effective and where they're not. Finally, because they can be used in so many places, how do you prioritize?

00:26:26
The thing about that set of four questions is that it's not yet in an O'Reilly book. It may be one day but it isn't yet, and so you have to learn that yourselves. The way that you as an individual or as a team leader start to close that gap is to start to learn and experiment and practice.

00:26:51
I think that there's a right space to how close you get to the bare technology where this matters. I think that if you always live up at the level of the finished package product so you get workflows out of Salesforce, the amount of learning you do will be limited. Actually, Salesforce will do all the learning.

00:27:14
I don't think you need to get down as deep as building your own foundation models, because unless you've got 20 amazing scientists and $10 billion, you ain't going to get very far. Obviously, there are some exceptions to that, but the right spot is enough tools that are semi-finished or maybe are APIs that really allow you to build, play around, learn, develop your practice, and continually invest in that while you are delivering.

Jon Krohn: 00:27:46
I love that answer and we can... Later on in the episode, we're going to talk about the future of work and how our listeners can prepare for the future of work. We'll dig into that a bit more, this kind of idea of how the O'Reilly book doesn't exist yet for how we need to run our organizations in this AI world. Yeah, so we'll get to that later on.

Azeem Azhar: 00:28:05
But there are a few really great O'Reilly books about working with gen AI and large language models, which I should just say that I've glanced at one of them because they're all available online quite often and it was pretty impressive. So I don't want people to get the impression they shouldn't go to O'Reilly and play with these. There are some good

[inaudible 00:28:25]

Jon Krohn: 00:28:24
Oh, no, for sure. Oh, yeah. I don't know the names of them offhand. I create lots of content for the O'Reilly platform and host conferences there, do trainings there, and so also I've come across lots of these books. There's tons of books designed across the spectrum of users, whether you're a hands-on practitioner, like a software engineer or an AI engineer. There's books for you, of course, and also from other publishers and all the way through to click and point guides. I'm sure there are whole books.

00:29:00
I don't know specifically that this is from O'Reilly, but there are certainly books out there on prompt engineering, which also, interestingly... This is just a really quick aside; we don't need to spend much time talking about it, but something that seems obvious to me is that when we were using GPT-3.5 in the original ChatGPT, now more than two years ago, we had this idea of prompt engineering, and maybe even that supposedly $400,000-a-year job of prompt engineer that's perfect for a PhD in literature. We don't hear as much about prompt engineering now, and it's because, with reinforcement learning from human feedback, companies like Anthropic and, of course, OpenAI as well have been so good at assimilating data and creating new data that these algorithms just do what you want without you needing to engineer the prompt.

Azeem Azhar: 00:29:51
You're absolutely right. I mean, it's so fascinating to see how the better quality models that you see with Claude Sonnet 3.5 and the various ChatGPT examples go so much further even with minimal prompting, although I would still say that you can do quite well if your prompts do get better. But I just think that you get your 90% now without having to prompt engineer. If you want 95, you have to do a little bit of that. I do wonder what happened to that literature PhD who was [inaudible 00:30:31]. Hopefully they didn't take a mortgage out on that salary because that could be uncomfortable.

Jon Krohn: 00:30:36
Yeah, yeah, yeah. It's interesting to think. I was just trying to think exponentially there for a moment about how, if the things like RLHF and the underlying LLMs assuming exponential improvements over the next 10 years, and just how much the models' outputs given a prompt have improved over the last two years is interesting to think. You could wake up in the morning and say to your whatever... It's probably just in your home everywhere listening to you and you say, "I'd like to have a great day today."

Azeem Azhar: 00:31:10
Right. Well, okay, I'm going to let you in on a secret because I already do that. I use a load of these models and we can talk about why I use, which one, when, but I use Claude on my phone quite regularly. I will drop my daughters to school, and on the way back, Claude has an audio mode and it'll take up to 10 minutes of audio. I will hit the audio button and I'll drive off and I will download everything in my brain. There's not much in my brain, so I can do it in 10 minutes.

00:31:44
I will say, "Okay, Claude, I have to think about a speech I'm giving. Here are the ideas that I've got in my mind. Here's the audience." That's all I've got to say about it for two minutes. And then I might say, "Okay, Claude, also I've got to renew my car insurance. I've got to pay that parking ticket. I've got to do this. I've got to do that." I'll keep talking and it'll be health stuff, home stuff, cognitively demanding stuff like presentations and speeches, operational stuff like, "Remind me to figure out what's happening with the Stripe refund, the issue that we've had."

00:32:26
And then at the end I'll say, "All right. Claude, I've given you this grab bag of things. Organize it sensibly for me," and it will go off and do that. And then when I get to my desk 15 minutes later, I'll just open the Claude app on my computer and I've got my to-do list that's been done. For a lot of those tasks, it will have written the letter to appeal the parking fine, it will have ordered my thoughts for the presentation or the speech and explained where the gaps are. I sort of do that, and Claude 3.5 Sonnet (new) is good enough to make a big difference to my day and hopefully now to everyone who's listening to the show today.

Jon Krohn: 00:33:06
So I get that idea of dictating, but how do you then have it flow forward into these kinds of things like reminders or individual tasks? Where does that surface?

Azeem Azhar: 00:33:17
Yeah. I mean, that's the gap at the moment. Right now, it's copy and paste, and I'll just copy these things and paste them across. There hasn't been a great example that I know of that can take the context of something as amorphous and confused as a speech through to a bunch of to-dos and make sense of how to break them apart. I mean, what Claude will do is it'll say... Okay, then you have a list of to-dos, and in Markdown it'll put the double hashes so the heading will be big and bold, and then there'll be a list of to-dos as a bullet list. It generally gets that right, but, yeah, we're missing that step to action, which is what all the AI companies are promising this year.

Jon Krohn: 00:34:06
Yeah. And I'm sure it's something a company like Apple is working on. They've been relatively slow, and that is their MO. They haven't been at the forefront of LLMs, but you can anticipate that they have people working on integrating all of their applications and allowing those kinds of flows to start to happen, to be that middle layer connecting applications to allow us to have, "Okay, the emails are drafted. Just press send when you've read them, and your to-do list is in your Reminders app on your desktop and on your iPhone and so on."

Azeem Azhar: 00:34:46
Well, I think there's something that I wrote in my newsletter about the way in which this ecosystem might expand. It was about the fact that there would be lots of AI agents supporting us. The idea of an AI agent is that it's an AI system that's more than one-shot. It has this data and it can potentially do something useful at the end of it. I use a few of these. I imagine we're going to have hundreds if not thousands of agents circling around us, working on our behalf, in the same way that we have hundreds of apps on our phone and, within each app, lots of functions. But then I think we will need a supervisor agent, like a chief of staff that we can just blurt at, because one of the things that I have always been terrible at, because I'm super impatient, is switching from app to app to app.

00:35:43
In the end, I still just keep everything in Notebook. I know there's Notion and I know there's Trello and I know there's Jira and I know there's all these things. I've designed my life to never have to actually be disciplined to use those because I find it frustrating. I think that the thing that I would hope will end up happening is that we have our own personal supervisory AI system where we can be a bit unstructured and then it figures out which agents to send the task to, because I don't want to switch context to say, "Now I'm doing a to-do list and previously I was brainstorming." I just want to move seamlessly and I'm not sure that Apple will do that because it's never really been their MO. I'm expecting that somebody else might try and do that.

Jon Krohn: 00:36:34
Yeah, you could be right. Of course, Google, with their Android phones and Chrome browser, could be well positioned. Maybe iPhone users in a few years will say, "Oh, man, I'm going to have to get into the Android ecosystem because everything is just so well interconnected and I have my AI agent chief of staff that can just order, [inaudible 00:36:56] to use the American term, all the aspects."

Azeem Azhar: 00:36:59
Are you using anything that you might call an AI agent?

Jon Krohn: 00:37:04
Yes. The main thing that I've been using for that is You.com, Y-O-U.

Azeem Azhar: 00:37:13
Yeah, Richard Socher's company. Yeah.

Jon Krohn: 00:37:15
Richard Socher's company, exactly. We had Richard's co-founder and CTO, Brian McCann, on the show a couple of months ago, episode 835.

Azeem Azhar: 00:37:25
Wow. You've got a good memory.

Jon Krohn: 00:37:28
I've got the spreadsheet open in front of me. It happens to be just recent enough that I didn't even have to scroll or search within it. You.com has some pretty cool functionality for allowing you to spin off research tasks. It has a research mode that can run for 10, 15 minutes for a typical task, and it does a great job. It sounds like we're similar: Claude 3.5 Sonnet is my preferred LLM at the time of recording, and I use it as my default go-to. Initially, when ChatGPT first came out, that was my go-to.

00:38:14
And then with Anthropic, there's just something... Not only is the user experience friendlier and warmer to me, but the outputs... So often I provide so little context and I'm blown away by how, with just this ugly pair of words that I throw in, it somehow knew exactly what I wanted and gives me the response. It reduces a lot of effort for me in terms of prompting. But the disadvantage of a model like Claude 3.5 Sonnet is that you're dependent on the model weights as trained. You can't do real-time lookups of things.

00:38:52
With You.com, that's been where I've been going to be able to kick off real-time research tasks where it pulls in information from lots of different resources in real time. It has a central agent that's figuring out how to break down the task into lots of small subtasks and then it spins up lots of sub-agents to go off and do the individual pieces of research.
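For readers curious what that planner/sub-agent pattern looks like, here is a minimal, hypothetical sketch. You.com's actual research mode is proprietary, so `fake_llm` and the fixed three-way decomposition below are stand-ins for real model calls and a real LLM-driven planner.

```python
# Sketch of the central-agent / sub-agent research pattern described
# above. A real system would ask an LLM to plan the decomposition and
# would run sub-agents (with web search) in parallel.

def fake_llm(prompt: str) -> str:
    """Placeholder for a real LLM or search-agent call."""
    return f"summary of: {prompt}"

def plan(task: str) -> list[str]:
    """Central agent: break one research task into subtasks.
    Hard-coded here; a real planner would generate this list."""
    return [f"{task}: background",
            f"{task}: recent developments",
            f"{task}: open questions"]

def run_research(task: str) -> str:
    subtasks = plan(task)
    # Farm each subtask out to a sub-agent.
    findings = [fake_llm(s) for s in subtasks]
    # Central agent merges the findings into one report.
    return "\n".join(findings)

report = run_research("solar cost declines")
```

The key design point is that the central agent only plans and merges; all the actual research happens in the sub-agent calls, which is why these runs can take several minutes and burn tokens.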

Azeem Azhar: 00:39:12
That's super interesting, in the spirit of how fast this world is changing at this exponential rate. You.com did not offer that when I last checked it a few months ago and, of course, now it does. Yeah, that's super fascinating. I'll go and give it a look. Yeah.

Jon Krohn: 00:39:30
Yeah. Yeah, it's brand new. So what other tools... You mentioned that you could tell us what other tools you use other than Claude and I'd love to know. What other kind of day-to-day tools are you using?

Azeem Azhar: 00:39:41
I mean, I use the research capability in Google's Gemini Advanced. I use NotebookLM for certain classes of wide-scale research, when I've got lots of academic papers that I need to go through. I also use a tool called Fyxer, which is F-Y-X-E-R. I'm an investor in this company. It looks at my Gmail and does its best to pre-can my responses, so that when I go through my Gmail, it's often given me two responses: "Yes, I can attend your party," and, "No, I'd love to. I'm really bummed that I can't, but I can't," and then I just choose the one I want. That saves me quite a bit of time.

00:40:27
But recently I've been playing around with workflows that involve a few agents. In this case, I might want some support for an idea that I'm thinking about. Say, for example, I'm thinking about parallels between the growth of capacity in compute and data storage and the growth of capacity in the railway industry in the 19th century, where there was overcapacity. In the old days, I might've gone just to Claude. And by the old days, I mean November of 2024. I would've gone to Claude and I would've said, "Claude, look at this from the perspective of a historian of technology." And then I'd say, "Now, Claude, look at it from the perspective of an investor and give me a view." I would manually go through this.

00:41:22
Now, I have this workflow where I can define three experts and one might be a historian of technology, one might be an investor, and the third might just be a cynic. And then I will put the question to this network of agents, which are all sitting on top of different large language models. A fourth agent will be an orchestrator. I will ask them to argue between themselves until they've got to a point where they either fundamentally agree or disagree and then it gives me my final result. So then I run that process and it takes about five or six minutes.

00:42:02
Like you said, You.com will take 5 to 10 minutes, and it does cost something, so I'm burning through tokens. It'll cost me 20 cents, but after five or six minutes, about half the time, I've got a really, really good critical overview of my issue, which I can then go off and do more research on. Half the time, it fails. It just gives you complete pabulum, rubbish. The way I look at that is, when I've worked with teams of humans and given them these open-ended research tasks, about half the time they don't give you anything good. That's the nature of it, and about half the time they do.

00:42:43
My team uses these tools, I use these tools, but that's one of my favorites. It's one of the key [inaudible 00:42:49] agentic workflows that I now use. We designed and built it in one of these agent workflow platforms that are sprouting up everywhere.

Jon Krohn: 00:43:04
What API are you calling in there? You mentioned burning through tokens. Is that Claude or...

Azeem Azhar: 00:43:11
You can choose any one that you want. I mix and match. I tend to have o1 Pro... Sorry, o1 in there, which is the OpenAI reasoning model that's meant to be much, much more structured. I always have at least two Claude 3.5s because they're so good. But, to my point from earlier, when you're learning, you want to get closer to the metal rather than be abstracted too much by the product layer.

00:43:45
What you want to be able to do is configure the models a little bit more, like play with their temperature. How straight-laced or wild is the model going to be? Temperature 0.1 is like your pastor at church and temperature 1.5 is like Van Wilder, party liaison. You want a mix of those, but you want your evaluation model to be quite sensible yet still creative, so a temperature of one.

00:44:17
You can only access those parts of the API if you are talking to the API directly rather than having it mediated by some third-party tool. Increasingly, we try to get closer to that so we can learn through experimentation what works for each different context.

Jon Krohn: 00:44:35
Nice. Yeah, that makes a lot of sense. I don't think... Did you happen to mention the tool that you're using for orchestrating this? I don't know if you mentioned that.

Azeem Azhar: 00:44:42
Oh, I use one of two different tools. I use one called Wordware, which I'm an investor in, and another called Lindy.ai. They're sort of similar but different. I mean, again, these are all products that are finding their product market fit. I mean, Wordware, of course, I recommend people try first. Disclosure, I'm an investor.

Jon Krohn: 00:45:03
Nice. Yeah. Lindy.ai is L-I-N-D-Y.

Azeem Azhar: 00:45:06
D-Y. Yeah, that's right.

Jon Krohn: 00:45:08
Awesome. All right. That was a great example of one of the unstructured asides that can be [inaudible 00:45:15] we can go off on.

Azeem Azhar: 00:45:16
Yeah, sorry about this. Yeah. Yeah.

Jon Krohn: 00:45:16
No, I love it. But to quickly wrap up our conversation around this exponentiality and projecting forward decades into the future, I have two specific topic areas that I'd love your thoughts on. This first one can probably be quite short.

00:45:39
At the time of recording, I've just released an episode today on quantum machine learning, episode 851. In chapter one of your book, you talk about how, as we near the physical limits of Moore's law, quantum computing could play a bigger role. I'd love your thoughts quickly on that. It seems like you don't see quantum computing as just this tacked-on, cute thing that's helpful in a relatively small number of scenarios, but potentially something more transformative.

Azeem Azhar: 00:46:12
I think that quantum is quite hard to get a grip on. When I wrote the book, Google had just announced that breakthrough with their Sycamore quantum chip, I think it was Sycamore, where they had done that sort of toy test, a lab test: generating random numbers in a way that would take a normal computer quadrillions of years, far longer than the age of the universe, and takes a quantum computer a minute. And then a few months ago, they announced the same kind of breakthrough again with big fanfare, but the people sitting around them had more certainty.

00:46:51
I think the thing that we have to hold in our heads, Jon, is that the people building quantum computers come from science. Scientists always see the world as it could be rather than as it has been. Their framing of timeframes, I think, is different from yours or mine or that of most of our listeners, who probably have products to deliver.

00:47:20
That's a roundabout way of saying I don't know when quantum computing will show up and be genuinely useful. If you have these quantum computers with the tens of thousands, or a million, coherent logical qubits that you need to do real quantum computing, I think really interesting and amazing things will happen. But there was a while, and I'm sure you're familiar with this period, where for a couple of years people were saying, "Look, we're getting so many insights from quantum that we can start to simulate quantum on GPUs. That is giving us algorithmic tools to do things we couldn't previously do, and we would never have discovered that without quantum."

00:48:18
I mean, that might be the case, but the other thing you've got is that the world is being eaten up by the transformer architecture, whether it's protein discovery, materials discovery, robotics. This is the way to do everything. It might just be that the intersection between quantum and machine learning is not as big as we thought it was, and we have to just wait for quantum computers to be developed and delivered at the scale where it really makes a difference.

00:48:50
In the meantime, we can do really, really well applying this transformer architecture and all the amazing chips that Jensen Huang and NVIDIA are producing, and that will knock down the doors. It's a really tough one. If I had to summarize it in a sentence: continue to build quantum computers would be my view, but there's so much you can do with this AI wave that maybe there's good reason to spend time there.

Jon Krohn: 00:49:25
Yeah, very nice. Great answer. If people want to dig into over an hour of discussion on quantum machine learning, what's possible today, what could be possible in the future, episode 851 is a great one to refer back to.

00:49:40
My final exponential-view question for you, before we get into some more data-science-specific stuff, is about one of the biggest challenges of our time, climate change. I'm optimistic that AI can play a role in the transition away from carbon-based energy toward renewable energy. This could include fusion energy. We now have commercial labs with private investors expecting a return on not-crazy-long time horizons, and there are a dozen different labs trying these commercial fusion approaches.

00:50:20
But even without fusion power, there's solar panel efficiency and the crazy exponential growth that solar panel installation has had in recent decades... If that kind of trend continues, I am hopeful, optimistic, but maybe it's just because I'm an optimistic person, that we will be able to tackle some of the worst effects of climate change and, in our lifetime, we may even be able to start reversing them. If you have abundant energy through fusion, for example, you can pump carbon back into the ground, store it, and reverse some of the effects that we've had.

00:50:58
Broadly speaking, I'd love to hear what you think the potential risks and benefits of transitioning from an economy dominated by oil to one driven by AI and renewables are.

Azeem Azhar: 00:51:07
Well, yes, to your last statement, I really agree with that. Let's distill it reasonably quickly. We know that AI requires an energy build-out; I've written about this a couple of times in The New York Times and in the Financial Times in the UK, as well as elsewhere. I actually think that the build-out demands will catalyze greater discipline in how we build the energy system.

00:51:39
The question then is what's really happening with the energy system, and what is happening is that energy is going from being a commodity, where it's all about the oil price or the gas price. And by the way, over the 200 years that we've had coal and the 100 or so years that we've had oil and natural gas, the cost of energy from those three systems has not gotten cheaper.

00:52:00
It's gotten, in some cases, more expensive, because it's dependent on the local dictator or autocrat and on physical extraction. Whereas the cost of solar panels has dropped by... I forget the exact numbers, but I'd be safe saying 99% over the last 20 years. That's why you see exponential growth in the amount of solar being installed worldwide. That pattern of exponential cost declines and growth also happens with batteries, because it's obvious to most people: the sun doesn't always shine and the wind doesn't always blow, but that's actually a completely solvable design problem.
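As a quick sanity check on the arithmetic behind a figure like that: a 99% drop over 20 years implies a constant decline of roughly 20% per year. The exact percentage is Azeem's rough recollection, but the compounding works out like this:

```python
# If costs fall 99% over 20 years, what constant annual decline does
# that imply? Exponential-decline arithmetic.

annual_factor = 0.01 ** (1 / 20)   # fraction of cost remaining each year
annual_decline = 1 - annual_factor  # roughly a 20.6% drop per year

# Compounding the annual factor for 20 years recovers the 99% drop.
cost = 1.0
for _ in range(20):
    cost *= annual_factor
```

This is the sense in which a steady-looking percentage decline each year produces the dramatic exponential curve he describes.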

00:52:39
The way to understand what's going on is that energy becomes a technology, and technologies are things that get cheaper. They get cheaper through learning rates but also through modularization and miniaturization. I think the best analogy, one that listeners will understand, especially those who are over 47, I guess, is the shift from the telecoms network to the internet.

00:53:04
The telecoms network was like the old fossil system: incredibly reliable, controlled by a few companies because it was really expensive to get into. You needed like a billion dollars, which is what you need to build a coal or gas plant. The internet comes along with these technologies, fiber optics, fiber-optic switches, optical networking, chips, RAM, routers, and prices decline. What you see is a dramatic decentralization. Lots of internet service providers show up. Anybody can now run a call-waiting or voicemail service hooked onto the end of an IP system.

00:53:41
The internet today is much more reliable, much cheaper, and better than the phone network ever was. I think that same parallel will happen with our transition, fundamentally to solar plus batteries, though wind, traditional nuclear, and maybe one day fusion will also play a role. It will be a better, cheaper, more dynamic, and adaptable energy system where energy costs will be much, much lower and we'll have much more energy. And to your point about reversing some of the worst impacts of climate change: when we say these things are expensive, what we actually mean is that they take a lot of energy and energy is expensive.

00:54:23
Well, they'll still take a lot of energy, but energy will be super cheap, so they will now be cheap. That would be carbon capture and desalination and other types of things that we could do, but we're talking about different timeframes. I think that latter part is decades away. I think the former part, the fundamental transformation of the energy system, is probably measured in a couple of decades rather than five or six. That's where we are, and I think that is a reason to be somewhat optimistic.

Jon Krohn: 00:54:56
Great. It's nice for my selection bias to allow me to get more optimism.

Azeem Azhar: 00:55:06
I can give the reverse argument and I'll just give the reverse argument for a second, right?

Jon Krohn: 00:55:10
Sure.

Azeem Azhar: 00:55:10
The reverse argument is very hard to make because it's not grounded in empiricism. The reverse argument is, well, there's too much material required for solar panels and too much required for lithium. The truth is there's a lot of science, and Nature has had peer-reviewed papers on this, showing that the material requirements of solar panels are much lower than those of the fossil system. The truth is there's tons of lithium. We just haven't got round to extracting it because we didn't need to, and now we need to.

00:55:44
The declining cost of batteries and the fact that we've only just started to invest in battery research mean prices will come down dramatically. There was a paper by Oxford University researchers about two years ago which said that a fully renewable energy system would be cheaper to run than a fossil one. The faster we went towards a fully renewable one, the quicker we would start to save money. And the number they had was, I don't know, some trillions of dollars, more than you and I were planning to make in the next three years, certainly.

Jon Krohn: 00:56:21
Oh, man. Yeah, I love how you-

Azeem Azhar: 00:56:24
Sorry to disappoint you.

Jon Krohn: 00:56:27
Yeah. Well, it's not too far. If we're able to get that exponential growth to a billion dollars of revenue in three years, I mean, how far are we off [inaudible 00:56:34]

Azeem Azhar: 00:56:33
Maybe six or seven. That's fine. I'll see you on Mars.

Jon Krohn: 00:56:39
I love how, in your most recent answer, you started off by saying, "I can make the counterpoint," but then you very quickly started getting into the tenets that were the [inaudible 00:56:48]

Azeem Azhar: 00:56:48
Well, because the foundations are so shaky. That's the problem. They're just not grounded in empiricism. What they're grounded in is... I'll share a couple of things, being a guy who's been around for a long time. I remember the CEO of one of the biggest mobile phone companies in the UK telling me he would never allow their customers to pay their bills over the internet. This was back in '99. It was true. He was fired the next year. But a lot of these challenges are based on holy cows, sacred tenets that you can't challenge. And if you go back to the first-principles thinking that you brought up earlier in this discussion, you will realize that that's what you're contending with.

Jon Krohn: 00:57:38
Very nice. Moving on to another topic here, so switching gears a bit. We've done our discussion of looking far into the future, leveraging your exponential expertise. I'd like to now dig into some topics that are maybe more specific to the kinds of listeners that we tend to have on the show, technical listeners, data scientists, software engineers.

00:58:04
In your Exponential View newsletter on Substack, which has 105,000 subscribers, you have a recent article called Why Humanity Needs AI. It advocates for leveraging AI to address complex problems that require knowledge beyond human constraints. What are these constraints that humans have on knowledge, and how can AI overcome them?

Azeem Azhar: 00:58:28
It's a great question. That was a fantastic essay. Thank you for drawing attention to it. AI is a tool for us. If we think about any work that we do, our tools help us. If you're a data scientist, you have probably used sed or awk. I mean, I think that's fair to say. They've enabled you to do things you otherwise couldn't have done. You've got 15 million lines of data and you've got to tidy it up, so you go off and use those tools.

00:59:09
We've always used tools to improve our capabilities, but as humans we've also always solved more and more complex problems. We've done that since we emerged from the African continent 100,000 years ago. But the challenge that we now face is that... And this was the point of the argument, it's a hundred-year perspective, is that we know that the human population will peak in about 60 or 70 years.

00:59:42
When it peaks in 60 or 70 years, even if we are much better skilled and have better tools, the number of people who could actually get involved in problem solving is going to decline, and it will continue to decline. For the first time in the history of our species, we won't be creating more knowledge, more science, more problem solving the next year than we did in the previous year. The only way to get around that, and to continue that attribute of what it has been to be human for a couple of million years, is to have tools that can magnify our capability significantly. That's where AI comes in. AI becomes absolutely critical for continuing the knowledge production of humanity.

01:00:31
Now, let's boil that back down to something more practical for a data scientist today. If you are looking at a stream of transaction data, there's no way that you personally could identify anomalies. You probably couldn't identify anomalies with the tools that I had in the early '90s, which were basically grep and regex. You need something more sophisticated. Today, we call it machine learning, but 10 years ago we may have called it AI.

01:01:09
We're already at a point where we can see patterns, we can see relationships that exist in dimensionalities that are not obvious upfront, because we can bring these tools to bear. I do think there's something really essential about how they relate to how we make sense of a more complex world, which we ourselves are also building.
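To make the transaction example concrete, here is a toy, pure-Python illustration of flagging anomalous amounts with a simple z-score: a statistical rule that grep and regex fundamentally cannot express. Real fraud-detection systems use learned models over many features; the threshold and data here are made up for illustration.

```python
# Flag transactions whose amount sits far outside the typical range.
# Even this statistical baseline is beyond pattern matching on text.

from statistics import mean, stdev

def anomalies(amounts: list[float], threshold: float = 2.5) -> list[float]:
    """Return amounts more than `threshold` standard deviations
    from the mean of the batch."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Nine ordinary purchases and one wildly out-of-range transaction.
txns = [12.5, 9.9, 11.2, 10.7, 13.1, 10.0, 11.8, 950.0, 12.2, 9.4]
flagged = anomalies(txns)  # only the 950.0 transaction is flagged
```

Note the threshold is 2.5 rather than the textbook 3.0: with only ten samples, a single outlier inflates the standard deviation enough that no point can reach a z-score of 3, one of the small-sample subtleties that pushes real systems toward proper ML.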

Jon Krohn: 01:01:34
Nice. Yeah. Great answer there. Something that this makes me excited about, and that I've talked about on the air before... In fact, I did an episode on this idea of an AI scientist back in episode 812. It was about a Japanese company, Sakana, founded by a lot of Google DeepMind people, if I remember correctly.

Azeem Azhar: 01:01:57
Yeah, yeah, yeah. That's right.

Jon Krohn: 01:01:58
Yeah. This AI scientist paper, which was machine learning-specific, because that was a neat area for them to develop this AI scientist that proposes research ideas and executes them. Because with an AI scientist that's an ML researcher, you can run experiments in silico, you can run them on computers, and you can provide a budget for those experiments. But the same kind of thinking could be applied to biology, chemistry, materials science, where I know that there are teams working on this.

01:02:30
I don't know how far advanced it is, but where you have AI systems controlling pipettes and ingredients and actually running experiments, that could allow us to have biological, chemical, and materials breakthroughs, in whatever scientific field. This is a really exciting idea. Again, actually, the You.com co-founder, Bryan McCann, back in his episode 835, talked at length about how this will transform science, and it is really exciting.

01:03:11
As we start to have AI systems creating knowledge and coming up with ideas... As Bryan McCann pointed out, it seems very likely that these AI systems will have insights that we could never dream of having, because they'll have well-trained neural network weights across all human knowledge, across all academic papers, all textbooks, which is something a human could never hope to absorb even a small fraction of. Yeah, so it's really exciting. But simultaneously, it will feel like a different era, because it ties into artificial general intelligence kinds of concepts, and where does that leave humans if our knowledge... Yeah. Yeah.

Azeem Azhar: 01:04:03
But let's come back to this question of the AI scientist and these systems. On the one hand, they can help us as tools, and where we are today is that they help us as tools. The Sakana paper I thought was particularly interesting. It, as you said, went through essentially the traditional way that a scientist does their work. It would generate some research ideas that might be novel. It might conduct some literature searches. It was able to write and execute code. It could run experiments, and then ultimately it could write a research paper. And then you say, "Well, let's do this in chemistry." This is lab automation, wet lab automation, that people are working on.

01:04:51
There's an AI model called ChemCrow, C-H-E-M-C-R-O-W, that did a sort of Sakana version of research around chemistry a while back. But I think it's also the case that, for a lot of scientific breakthroughs, most humans don't understand them.

01:05:12
When Rosalind Franklin was looking at those X-ray crystallography shots of what then became DNA, she's the one who spotted it: "This has to be a double-helix pattern that's being cast, effectively." There were 2 or 3 billion people at the time. Most of us would not have been able to see that. When mathematicians... When Terence Tao comes up with a new conjecture in maths, it takes people a decade to unpick it and make sense of it, and only five mathematicians can figure out what's going on.

01:05:40
That's true for all mathematical research, so I think this idea that we can't understand it is also something that's already true. To some extent, I think what becomes quite interesting is when new methods of research start to emerge... I don't think we've seen what AI systems could do in terms of new ways of actually conducting science that aren't just a faster version of a human conducting science. I think that that's quite an interesting question to ask.

01:06:25
I mean, what I would say is that the other thing they can do right away is bridge the silos of science. One thing that's happened in science over the last 50 years... And it's really connected to the funding train that's required, right? In order to get funding, you have to tell a grant committee that you're going to do X or Y, so as the PI, the principal investigator of a lab, you tend to narrow your focus. People are not as freewheeling. PhD students just kind of clunk through version 3.6 or 3.7 or 3.8 of their PI's thesis, things get very narrow, and there are very few polymaths who sit across domains.

01:07:11
Actually, scientists today can be polymaths because they can go to an LLM, they could go to a science-specific tool like Elicit or Consensus, and they could say, "Find analogous concepts across all of these areas to help me make sense of the world, to see if there's insights elsewhere that allow me to form a hypothesis." I think that that is also really, really powerful. In this kind of enormous toolkit that we're being given, there's another tool that changes the way science gets done.

Jon Krohn: 01:07:43
What a fabulous answer. I love that. You provided a number of specific tools there that I hadn't heard of before but that listeners can dig into after the episode. That's fantastic. Thank you, Azeem.

01:07:55
As my last kind of topic area to discuss, and it follows on nicely from this idea of the AI scientist, and what it means to be human: I loved the example you made there of Rosalind Franklin, or a math guru, being able to see things that only a handful of people on the planet, out of billions, would be able to see anyway. There is something actually kind of reassuring in that: okay, this is just another set of brains that will be doing things that we can't understand.

01:08:27
That is cool, but it might make people feel uncomfortable if those brains can be much cheaper and much more effective than what we're used to day to day. People listening to this podcast are probably around the 99th percentile of people who are adapting to new shifts on the planet. We're trying to stay abreast of all these technological changes: how can I be using LLMs, like you just described, as a scientist coming up with new ideas? Of course, you mentioned already earlier in the episode tools like Claude or GitHub Copilot that we can be using to augment ourselves as software composers.

01:09:10
Those kinds of tools are out there, but it doesn't seem like we're too far off from a lot of what software engineers, data scientists, and AI engineers do being replaced completely by machines. Part of what makes us so susceptible in our careers is that a lot of what we do could be done remotely. In fact, since the pandemic, a lot of people have worked remotely, at least some days of the week. If you work in a situation where keyboard and microphone inputs and outputs are all you need to be doing your job, that is an easier kind of role to disintermediate with an AI system than if you are washing bed sores off of comatose patients in hospital. That's something you can't do remotely.

01:10:03
So you have been an investor in more than 50 tech startups since 1999. You, no doubt, have interesting insights into where technology is heading but also where work is heading. With the automation of many jobs on the horizon due to technological advancements, what are your thoughts on the future of work and maybe where is the solid ground that our listeners can find in terms of skills they could be developing or career shifts they can be making to hopefully continue to have a job in the decades to come?

Azeem Azhar: 01:10:36
I don't think there's any solid ground. I also understand why people feel apprehensive. You should feel apprehensive. You should go off if you're not. If you haven't had your "holy shit, oh shit" moment with LLMs where you sit and spend two or three days and throw the hardest problems you can at them and you get really impressed with what they can do, you need to go off and do that and go through your mini existential crisis and come out the other side.

01:11:04
We already live in a world where jobs change really frequently. 53% of Gen Zs want to be influencers. When I launched that company that had hired a data scientist back in 2010, we were the first influencer-ranking platform in the world (sorry about this), and on just Twitter at the time. Now, 53% of people want to be influencers. It's only been 14 years.

01:11:29
An economist called David Autor looked at about 80 years of US employment data, and he points out that 60% of today's jobs didn't exist in 1940 and they require expertise that didn't exist at all during that time. In my first job, I worked at The Guardian newspaper, but I was also The Guardian's first load balancer. A lot of you will use something called ELB, Elastic Load Balancing, on Amazon. In 1996, The Guardian used my right hand. We had three servers and two ethernet connections, and one would get its memory leak and crash out. I would pull the ethernet cable out and plug it into the one that had just rebooted, and then 10 minutes later I would do it for the next one, round-robin, during the peak times of the day.
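For listeners curious what that manual failover amounts to in code, here is a minimal, hypothetical sketch of round-robin load balancing with health marking. The server names and class are purely illustrative, not anything The Guardian or AWS actually ran:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Rotate requests across servers, skipping ones marked unhealthy."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)
        self._ring = cycle(self.servers)

    def mark_down(self, server):
        # Like pulling the ethernet cable out of a crashed box.
        self.healthy.discard(server)

    def mark_up(self, server):
        # Like plugging the cable back into the rebooted box.
        self.healthy.add(server)

    def next_server(self):
        # Advance the ring until we reach a healthy server.
        for _ in range(len(self.servers)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy servers")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
print([lb.next_server() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
lb.mark_down("web2")
print([lb.next_server() for _ in range(3)])  # skips web2 from here on
```

In 1996 the "ring" and the health check were a person's memory and a pair of ethernet cables; services like AWS Elastic Load Balancing automate exactly this loop.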

01:12:17
All sorts of new jobs get created during this time, and what we can do is look at what's happened over the last 30 or 40 years. There's been a huge amount of middle-class job growth, jobs concentrated in the types of things you were talking about: we work remotely, we work behind a desk. And at the same time, a lot of lower-paying service-sector jobs have been created, and that creates a lot of tension, so we should bear that in mind.

01:12:46
I think the second thing we need to bear in mind is that people don't really know where this ends. It's very hard to predict the third, fourth, fifth bounce of the ball. You can look back at history and look at people's emotional responses, and that may give you some insight. My sense right now is that the way my work has changed, and the way that my team's work has changed, is that everybody is more boss-like. Everybody has to set goals and objectives for teams that are made up of one or more AI systems, and you have to be better at having those boss-like skills. My domain knowledge and my experience and judgment give me some advantages in that.

01:13:30
In a sense, look, there's no solid ground. We really don't know how fast this technology will develop. We really don't know what will exactly happen to the job market. We can expect a large number, maybe the vast majority, of jobs to change. We can expect new jobs to be created, and we can expect some jobs to go away. The way to prepare for that is ultimately to skill up. By skilling up, you put yourself in a position where you can somewhat define your own future.


01:14:08
I say that just in the sense of being really frank and honest about where we are. I mean, people don't know. There's increasingly good research coming out, but it will still be limited, and maybe in 20 years we will know, but even then perhaps we'll still be asking the question.

Jon Krohn: 01:14:25
Very nice. Well, as we start to reach the end of this episode, I really appreciate your thoughts on these things, even if there isn't that magic solid ground I was kind of hoping you'd be able to tell me and our listeners about.

01:14:44
If people want to hear more of your thoughts after this podcast episode, obviously they have your newsletter, Exponential View. We'll have a link to that in the show notes. There's your book, The Exponential Age, from which we discussed a number of topics in today's episode. You also have the Exponential View podcast, which podcast listeners might enjoy. Kai-Fu Lee, one of the biggest AI authors out there, was on the show just last week at the time of recording. You've also had Andrew Ng, Fei-Fei Li, and a lot of other amazing guests on that show, so that's a podcast for people to check out.

01:15:26
I pointed out last week on social media that you would be coming on, I highlighted those biographical points about you, and I asked if people had any questions for you. We had a great question come from Elizabeth Wadsworth, who is an IT applications manager in Ohio. Elizabeth asks, "In the future, do you think the same systems and theories will apply?" Some of the futurist work that she has studied suggests systems and theories are a good structure to rely on when the future of tech is unknown due to rapid advancement. Do you think that we're in the middle of such a rapid transformation that these systems and structures will no longer suffice?

01:16:07
I think that's a really interesting question. It also relates to a point that you and I discussed before recording today's episode, which is that the kinds of corporate structures that existed in the workplace are still, for the most part, what companies are adhering to, when in this world where we now have LLMs, we can be taking on those boss-like roles as individual contributors, and that fundamentally changes things. I think it kind of relates here, so I'll leave it to you.

Azeem Azhar: 01:16:39
Yeah. I mean, it's a great question from Elizabeth. Thank you for asking such a good question. In general, history says they don't stay the same. They change dramatically. If you go back to factories before electricity, mechanical power was provided by steam engines. They blew up very regularly, and they couldn't distribute power very far, because you did it through a set of friction pulleys and belts, and the friction would bleed off heat and energy, and so you couldn't get as much work done.

01:17:17
Factories were small, they were vertical, and you were never too far away from the central drive shaft. That was the design principle and process. The kind of work you did and the kind of work you asked people to do was determined by that and by the fact that, actually, your workers couldn't miniaturize the power to a small hand drill. It all had to be big clunky things. I'm not a mechanical engineer, so I don't know what they're called, but just assume something that's really big that just goes thud, thud, thud a lot.

01:17:50
When you turn up with electricity, the first thing that people do is they stick a light bulb in the room, and then the second thing they do is try to get the electricity system to replace the steam engine, but it just won't have the physical power at that point. And of course, factories don't look like that anymore, because you can efficiently distribute electricity across a wide horizontal area. You can partition it, so you can put massive amounts into an arc furnace that's being used to produce steel, and really small amounts into an LED light over a desk station where I'm sitting with a tiny little electric brush, cleaning a small component. You can do all of those things. The processes by which you organize the work, the skills people need, the way you maintain things, the way you measure things: they all change fundamentally.

01:18:41
I would say, with AI, back to what are extreme assumptions and what are not, it's more of an extreme assumption to say nothing will change when we implement AI than it is to say loads of things will change. Just go back and look at what the electricity-versus-steam change gave us 120 years ago. I think that comes back to the point that you and I have made, Jon, throughout this discussion, which is that we've just got to get using these things, because that's how you put yourself in a position to co-design them and to make them work in a way that you think is a good way of doing it.

Jon Krohn: 01:19:19
Nice. Great answer there. Yeah. Thank you for answering that listener question. I am conscious that we are out of scheduled time here.

Azeem Azhar: 01:19:27
[inaudible 01:19:27]

Jon Krohn: 01:19:28
But really quickly, in addition to your Exponential View newsletter, The Exponential Age book, and the Exponential View podcast, there's also... Actually, you can watch this online anywhere in the world: there's a Bloomberg TV show that you hosted called Exponentially, and that included guests like... How do you pronounce it? Sam Altman?

Azeem Azhar: 01:19:53
Altman. Yeah, that's right. He does some AI stuff, I think.

Jon Krohn: 01:19:58
Yeah. Sam Altman, Niall Ferguson, Dario Amodei are amongst the guests that you had on that show. That's another piece of content that people can be checking out from you. How should people be following you or is there anything that I've missed?

Azeem Azhar: 01:20:14
Oh, you've been so generous anyway, giving me this fantastic conversation. I think the best way is if you just go to Exponential View, put it into Google or Bing or whatever search engine you use, and sign up to the newsletter. That's the most straightforward. Social does not work as well as it once did, but the newsletter is the best place, and I would be delighted to see you all.

Jon Krohn: 01:20:38
Fantastic. Yeah. You have a vast library behind you. If you're watching the video version of this, you'll be able to see this. Before I let my guests go, I typically ask them if they have a book recommendation for us other than their own books. I imagine in your case it would be tricky to get it down to just one.

Azeem Azhar: 01:20:59
I am... Do you know what? I have got a lot of books there that I really, really like, and I'm going to recommend one that is really long. It's about 800 pages, and it's called The Prize by Daniel Yergin, Y-E-R-G-I-N. It's the story of the oil boom from the 1860s on, as it started in Pennsylvania and in Baku, in Azerbaijan, and over in Indonesia. What I found really interesting when I read that book last summer was that the politics, the machinations, the attempts to influence policy, the backstabbing, the way capital lent itself to the market reminded me so much of what's going on with the big tech companies around AI and the personalities involved. And I thought, "Wow, this is so insightful. We've run this tape once before." Certainly, the first four or five chapters are really interesting as a way of shining a light on where we are today. It's actually incredibly well written, and it's worth persisting all the way through the book as well. So, Daniel Yergin's The Prize.

Jon Krohn: 01:22:13
Fantastic. Thank you so much, Azeem. Thank you for being so generous with your time. It's been, as I said at the outset of this episode, unreal for me personally to have you on the show. I'm sure our listeners enjoyed this episode as well. Thank you so much for taking the time.

Azeem Azhar: 01:22:26
It's really been my pleasure. Thank you.

Jon Krohn: 01:22:33
Boom. Such a treat to have Azeem Azhar on the show. In today's episode, Azeem covered how the world is experiencing exponential technological growth, with computing power (FLOPS) increasing over 60% annually for over 50 years. He talked about how AI will be essential for continued knowledge production and problem solving as the global population peaks and begins to decline in 60 or 70 years. He filled us in on how the energy sector is transitioning from a commodity-based system to a technology-based system, with solar and battery costs dropping exponentially like computer chip costs.
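As a quick back-of-the-envelope check (my arithmetic, not a figure quoted in the episode), 60% annual growth compounds into an enormous multiplier over 50 years:

```python
import math

# 60% growth per year, compounded over 50 years
growth = 1.60 ** 50
print(f"total growth factor: {growth:.2e}")  # about 1.6e10, roughly ten-billion-fold

# Doubling time at 60%/year: ln(2) / ln(1.6)
doubling_years = math.log(2) / math.log(1.6)
print(f"doubling time: {doubling_years:.2f} years")  # about 1.47 years
```

That is the sense in which "exponential" is meant: the growth rate stays steady while the absolute capability explodes.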

01:23:05
He talked about modern AI workflows and how these often involve multiple specialized agents working together orchestrated by a supervision agent. Azeem himself uses tools like Claude, Gemini, NotebookLM, Wordware, and Lindy.ai to automate his life and make things easier in his own workflows. He also talked about how the future job market will see massive changes with most roles evolving or being replaced in the coming decades. Success then requires developing boss-like skills to direct AI systems.
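The supervisor-plus-specialists pattern mentioned above can be sketched minimally. Here, plain Python functions stand in for LLM-backed agents, and all the names are illustrative rather than any particular framework's API:

```python
# A supervisor routes each step of a workflow to a specialized "agent".
# Plain functions stand in for LLM-backed agents in this sketch.

def research_agent(task: str) -> str:
    return f"notes on: {task}"

def writing_agent(task: str) -> str:
    return f"draft about: {task}"

AGENTS = {"research": research_agent, "write": writing_agent}

def supervisor(workflow):
    """Dispatch each (role, task) step to the matching specialist agent."""
    results = []
    for role, task in workflow:
        results.append(AGENTS[role](task))
    return results

steps = [("research", "solar cost curves"), ("write", "summary of solar cost curves")]
print(supervisor(steps))
# → ['notes on: solar cost curves', 'draft about: summary of solar cost curves']
```

Real orchestration tools add retries, shared memory, and model calls, but the control flow is the same: one coordinator decomposing work and delegating to specialists.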

01:23:38
As always, you can get all the show notes including the transcript for this episode, the video recording, any materials mentioned on the show, the URLs for Azeem's social media profiles, as well as my own at superdatascience.com/855. And if you'd like to connect in real life as opposed to online, I'll be giving the opening keynote at the rvatech/Data + AI Summit in Richmond, Virginia on March 19th. Tickets are quite reasonably priced and there's a ton of great speakers, so this could be a great conference to check out, especially if you live anywhere in the Richmond area. It'd be awesome to meet you there.

01:24:13
Thanks, of course, to everyone on the SuperDataScience podcast team, our podcast manager, Sonja Brajovic, our media editor, Mario Pombo, our partnerships manager, Natalie Ziajski, our researcher, Serg Masis, our writers, Dr. Zara Karschay and Sylvia Ogweng, and of course our founder, Kirill Eremenko.

01:24:28
Thanks to all of them for producing another exponential episode for us today. For enabling that super team to create this free podcast for you, we're deeply grateful to our sponsors. You, yes you, can support this show by checking out our sponsors' links, which you can find in the show notes. And if you'd ever like to sponsor an episode of the podcast yourself, you can get the details on how to do that by making your way to jonkrohn.com/podcast. Otherwise, share the show with people who might like this episode, review the show on your favorite podcasting app or on YouTube, subscribe, obviously, if you're not already a subscriber, and feel free to edit our videos into shorts to your heart's content; just refer to us when you do. But most importantly, I just hope you'll keep on tuning in. I'm so grateful to have you listening, and I hope I can continue to make episodes you love for years and years to come. Until next time, keep on rocking it out there, and I'm looking forward to enjoying another round of the SuperDataScience podcast with you very soon.
