SDS 647: Is Data Science Still Sexy?

Podcast Guest: Tom Davenport

January 24, 2023

This episode, Tom Davenport and Jon Krohn forecast the future of work and AI’s critical role in it. They discuss the importance of (re)training employees to use and apply data, developing organizational knowledge structures, and why the C-suite must understand how each of these systems works.

Thanks to our Sponsors: 
Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
About Tom Davenport
Tom Davenport is the President’s Distinguished Professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, a Fellow of the MIT Initiative on the Digital Economy, and a Senior Advisor to Deloitte Analytics. He has written or edited twenty books and over 250 print or digital articles for Harvard Business Review (HBR), Sloan Management Review, the Financial Times, and many other publications. He earned his Ph.D. from Harvard University and has taught at the Harvard Business School, the University of Chicago, the Tuck School of Business, Boston University, and the University of Texas at Austin.
Overview
Coauthored with Nitin Mittal, Tom Davenport’s latest book, All-in On AI: How Smart Companies Win Big with Artificial Intelligence, explores the strategies and best practices that companies can use to fuel their business with the power of AI. In this episode, Jon asks Tom about the pressing need for companies that integrate AI into their operations to take an interest in how such adoption will impact their human workers. To be effective, AI must not only be understood but also trusted, and so the task for the executives implementing such an approach will not only be to know the skills their workers will need but also to encourage a shift in mindset. To highlight the importance of this shift, Tom gives the example of AI radiology image analysis: Such analysis tools are now as good as, if not better than, the average human radiologist. Nevertheless, evidence shows that humans will be more skeptical of a machine if it has made an error than a human who makes the same error.
Therefore, a degree of trust-building in the workplace will be necessary to adopt AI fully. This is especially crucial when so much attention has been given to the potential automation of jobs. This threat, Tom explains, remains hypothetical. Sharing his experiences of researching and co-writing All-in on AI, he explains how the fear of humans being replaced by AI is not the reality of 21st-century work, asserting that most of the predictions are wrong. Tom clarifies that it is not necessarily entire jobs that will be automated so much as their individual tasks. AI helps carry out repetitive tasks and structured activities that require little variation, and so Jon and Tom note that AI is more likely to augment rather than replace human activity, taking on repetitive processes and freeing up human workers to spend time thinking, imagining and creating.
Lastly, Tom turns his attention to the C-suite. CEOs must be able to understand and sometimes even be able to work with AI and algorithms for the future workplace. Tom even labels not engaging senior leaders in discussions about AI as akin to organizational malpractice, considering that many essential strategic decisions will be (and indeed are already being) made by consulting data. Knowledge gaps pervade nonetheless: Tom acknowledges the short lifespan of Chief Data Officers, explaining the difficulty that data management roles have in showing value. “Value” is often defined as “rapid improvement”, which makes data governance a tough role that is usually there to serve a short-term purpose. He believes that knowledge management will continue to be a crucial component in maintaining the health of any data-led company.
Listen in to hear Tom share his thoughts on what organizations need to do to keep hold of their data. And Jon pulls back the curtain on how he always knows which relevant SDS episode to recommend during his interviews!
In this episode you will learn:
  • Cognitive bias in understanding AI [14:13]
  • How AI will augment rather than replace human workers [24:27]
  • OpenAI and regulatory action [35:13]
  • Jobs that might be at risk of being automated [39:57]
  • The potential of citizen science in accumulating and analyzing data [1:02:18]
  • How AI will change the game for the C-suite [1:15:17] 

Podcast Transcript

Jon Krohn: 00:00:00

This is episode number 647 with Tom Davenport, many-time bestselling author and professor of IT and Management at Babson College. Today’s episode is brought to you by Kolena, the testing platform for machine learning. 
 
00:00:17
Welcome to the SuperDataScience Podcast, the most listened-to podcast in the data science industry. Each week, we bring you inspiring people and ideas to help you build a successful career in data science. I’m your host, Jon Krohn. Thanks for joining me today. And now let’s make the complex simple. 
 
00:00:48
Welcome back to the SuperDataScience Podcast. Today I am feeling unbelievably lucky to be joined by the iconic professor Tom Davenport. Tom has published over 20 books, such as the bestselling Competing on Analytics, The AI Advantage and Analytics at Work. He’s penned over 300 articles in publications like the Harvard Business Review, and writes regular columns for Forbes and the Wall Street Journal. He’s the President’s Distinguished Professor of IT and Management at Babson College. He’s a visiting professor at the Saïd Business School at the University of Oxford. He’s a senior advisor to the AI practice of the global professional services giant Deloitte. And with nearly 300,000 followers, he’s recognized as a LinkedIn Top Voice. Today’s episode is well suited to technical and non-technical listeners alike. Every part of today’s episode should be appealing to anyone who’s keen to hear about the leading edge of commercial applications of AI. 
 
00:01:41
In this episode, Professor Davenport details the discrete AI maturity levels of organizations, how organizations become AI-fueled, which jobs are susceptible to replacement by AI, which jobs are ripe for augmenting with AI, what roles other than data scientists are required to deploy effective machine learning models, what the future of data science will look like, and, having dubbed data science the sexiest job of the 21st century a decade ago, whether he still thinks it is today. All right, you ready for this terrific episode? Let’s go. 
 
00:02:25
Wow. Tom Davenport, welcome to the SuperDataScience Podcast. I’m delighted to have you here. I know tons of our audience members are delighted to have you here. We had a huge amount of engagement on my social posts indicating that you’d be coming on the show, and we’ve got great questions from guests coming up near the end of the episode. Anyway, welcome to the show, Tom. Where in the world are you calling in from? 
Tom Davenport: 00:02:48
Thanks, Jon. Very happy to be here with you. I am in Santa Barbara, California. I am bicoastal. I just came out here from Boston. 
Jon Krohn: 00:03:02
Nice. So you flee Boston in the winter. Yeah, exactly. 
 
00:03:06
That makes a lot of sense to me. So Tom, your Twitter bio says that you are an analytics and AI person, a Red Sox fan, and an academic who gets out into the world as much as possible, and from my perspective, that bio is quite an understatement because you’re a sought-after speaker and advisor, a distinguished professor and, last but not least, a prolific author. You’ve written, co-authored or edited over 20 books, including many bestsellers. And we’ve got a link in the show notes to all of your books. The latest blockbuster is called All-in On AI. Can you explain to us what this new book is about? 
Tom Davenport: 00:03:47
Sure, happy to. So this, for me at least, I co-authored it with the head of AI at Deloitte, and he’d probably have a different answer, but for me, it’s an AI-centric version of a previous topic I had written on, called Competing on Analytics. So this is about companies that compete on their AI capabilities. They’re legacy companies, so to speak, not digital natives, that decided AI is really important to their future success, and so they devote a lot of energy and money and resources to using AI to transform their businesses. They’re the most aggressive adopters of AI in their industries generally, and they use it to do something new, not just refine their existing strategy or business model or the way they relate to customers. They try to do something transformative with it. 
Jon Krohn: 00:04:57
Nice. So one of the things that you talk about a lot in that book is that there are challenges with the human side of AI. So what are those challenges? Why are those preventing organizations from being able to adopt AI or do the new things with AI that you just described? 
Tom Davenport: 00:05:21
Sure. Yeah. Well, I’m always interested in those because I’m a sociologist by academic background, not a computer scientist or, God forbid, a data scientist. That didn’t exist when I was in graduate school. But so I’m always interested in how, organizationally and in terms of human behavior and culture and so on, do companies get this thing going, and usually it’s a cliché or kind of cliché, I hate to say this, but it has a lot to do with CEOs in many cases, and they get passionate about how AI can have an impact on their business, and they make everybody else go along with that. So that particular type of human is really quite important. And then also, I’m really interested in the humans who are doing the work with AI on a day-to-day basis. And most companies don’t really involve them enough in thinking about how their work is going to change and what skills they’re going to need and so on.
 
00:06:38
And particularly, the thing about AI is it tends to support knowledge work. Previous technologies we had were more administratively oriented. And so I’ve recently written another book about healthcare. I’m really interested in AI in healthcare, and I’ve observed some settings in which AI, particularly in radiology now, is being rolled out to identify potentially cancerous lesions in radiology images and so on, and a lot of doctors who have that capability just blow it off if it disagrees with their interpretation because they don’t understand the algorithms. Nobody understands the algorithms. These are all deep learning models that nobody can really interpret. So they just distrust it and don’t end up changing their diagnosis as a result. And I think it really requires a lot of effort and more technological sophistication than we have today to get this collaboration between somebody like a well-educated doctor and an AI system. So I think there are a lot of things that we have to work out if we’re going to be successful with AI at a large scale. 
Jon Krohn: 00:08:06
Yeah. Do you think that part of that in healthcare is related to people feeling like they don’t want to give up the importance of their own cognition? I think, especially when you’re in a role like that, that is so prestigious, you’re a radiologist, you’ve worked very hard to get into that position and so it’s your identity to believe that what you think is important and if a machine comes along and maybe you are a radiologist that’s been working for decades and you had never had computer recommendations of diagnoses before, and now computers are coming along, they’re black boxes that as you say, we can’t very often get a good interpretation, a good explainability of what’s going on inside of a deep learning model. And deep learning is what we use for probably all of these diagnostic imaging cases. And so it comes along, it outputs something that is different from what the doctor thinks. And so I can see how that’s intimidating for the doctor and they’d want to ignore it. 
Tom Davenport: 00:09:19
I think that’s probably true in a lot of cases. And even among the most intellectually secure, I guess, doctors or lawyers or whatever the highly educated professional might be. If a system is overruling, overriding your perception of the situation, how do you know whether you should give it up or not? I mean, maybe someday AI systems will be so much better than human cognition that we would not dare to question them. But I think now most of the research studies on, for example, radiology imaging with AI suggest that it’s about as good as or maybe a little better than the average human radiologist, so why wouldn’t you question it or override it, particularly if you can’t understand what’s going on? So it’s going to be challenging, I think, until if and when AI becomes so incredibly superior to us humans that we will bow down and defer to its proclamations. 
Jon Krohn: 00:10:40
And I’m certainly not an expert in this radiological machine vision space, but my understanding is that the machines are very good at commonplace situations. So where we have a lot of historical training data, common tumor types in common tissues, the machine vision algorithms are going to be better, but then the radiologist will probably be better than the machines at handling edge cases. Is that roughly what you’ve heard as well? 
Tom Davenport: 00:11:15
That certainly makes sense. I’m not an expert at it either, but I know… I had a relative recently who came down with a rare form of cancer, and we were told that there weren’t drugs designed for this cancer because it’s rare. There weren’t immunotherapies. There weren’t a lot of treatment approaches, and I’m guessing that also means there’s not a lot of training data for those kinds of things. I haven’t seen that in the literature, but it certainly makes sense. 
Jon Krohn: 00:11:57
Yeah, it’s a big limitation for today’s deep learning architectures, where without a very large number of positive and negative cases, the algorithms can’t cope the way a physician would be able to. A physician might have encountered this rare disorder just once, a decade ago, but they took great notes, they looked into it in detail, and they can recall that, or they can dig back into the literature and make decisions in a situation where the deep learning algorithm is helpless. I don’t know exactly how these medical tools work, but hopefully they have some confidence score associated with them so that it’s not just some binary classifier, where if the output is above 0.5, it’s like, “Oh, there’s a tumor,” and below 0.5 it’s, “Oh, there’s not.” Hopefully there’s some… I’m not really sure in this situation. This isn’t something I’ve seen much before. 
Tom Davenport: 00:12:52
Yeah, I think that was going to be the case with the ill-fated IBM Watson healthcare solution. I don’t know if that’s true in the deep learning image recognition models or not. I haven’t seen many that close up. 
Jon Krohn: 00:13:12
Yeah, no, I haven’t worked with them either. They certainly could. They could have that information brought up to the physician to say, “This is a situation where I’m very confident.” 
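[Editor’s note: for technical listeners, the distinction Jon is reaching for here, a hard 0.5 threshold versus surfacing the model’s confidence and flagging uncertain cases for human review, can be sketched in a few lines. The function names and threshold values below are purely illustrative assumptions, not how any real radiology product works.]

```python
# Illustrative sketch: a binary "tumor / no tumor" classifier that reports
# only the thresholded label throws away information a physician could use.
# Surfacing the raw probability, and explicitly flagging a low-confidence
# middle band for human review, keeps the physician sensibly in the loop.

def hard_label(p_tumor: float, threshold: float = 0.5) -> str:
    """Binary decision: all nuance above or below the threshold is lost."""
    return "tumor" if p_tumor >= threshold else "no tumor"

def report_with_confidence(p_tumor: float,
                           low: float = 0.3, high: float = 0.7) -> str:
    """Pass the model's confidence through, and route the uncertain
    middle band to a human instead of guessing."""
    if p_tumor >= high:
        return f"tumor likely (p={p_tumor:.2f})"
    if p_tumor <= low:
        return f"tumor unlikely (p={p_tumor:.2f})"
    return f"uncertain (p={p_tumor:.2f}): refer to radiologist"

# Compare the two reporting styles across confident and borderline cases.
for p in (0.95, 0.55, 0.10):
    print(hard_label(p), "|", report_with_confidence(p))
```

Note that at p = 0.55 the hard-threshold version asserts “tumor” with the same apparent certainty as at p = 0.95, which is exactly the behavior Jon hopes real tools avoid.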
 
00:13:22
Are you unit testing your machine learning models? You certainly should be. If you’re not, you should check out Kolena. Kolena is an ML testing platform for your computer vision models. It’s the only tool that allows you to run unit and regression tests at the subclass level on your model after every single model update, allowing you to understand the failure modes of your model much faster. And that’s not all. Kolena also automates and standardizes your model’s testing workflows, saving over 40% of your team’s valuable time. Head over to Kolena’s website now to learn more. It’s www.kolena.io. That’s Kolena.io. 
 
00:14:03
Anyway, yeah, we’re getting deep into this. Have you come across a study from a couple of years ago that showed a particular cognitive bias? It had a name. It’s not coming to me off the top of my head, but it seems to me like you might be the person who remembers it. Anyway, I can describe the effect, which is that humans are much more skeptical of a machine after it’s made just one error, whereas humans are much more forgiving of humans making the same errors. So I can’t remember who did the study or what they named the effect, but there’s this cognitive bias where, with the same mistakes, if it’s a physician, or let’s say you’re a radiologist and a radiologist colleague of yours makes a mistake every once in a while, you think, “Oh, everybody makes mistakes, it’s no big deal.” But then when a machine does it, you’re like, “Oh, I knew it. Can’t trust this machine. I’m done with this.” 
Tom Davenport: 00:15:06
Yeah, I don’t know what it’s called either and we shouldn’t feel bad about that because if you look in Wikipedia under cognitive biases, there’s a lot of biases, there are like 180 of them so remembering them all is really quite difficult. But I think that that inability to forgive machine error is a big issue for us in terms of, for example, the acceptance of autonomous vehicles. We know humans crash into each other all the time but in autonomous vehicles, I mean, there are a lot of issues around them. But if one person gets hit by a car in Phoenix and gets killed, everybody says, “Forget it. I’m not going to drive or ride around in those things or even walk around with those things around.” And we don’t seem to have any sense of relative probabilities of human error versus machine error. So I think it’s a problem for the commercial acceptance of a lot of AI. 
Jon Krohn: 00:16:20
For sure. Autonomous vehicles could be 10 times safer per mile driven, causing a tenth of the fatalities or accidents that human drivers do. And yeah, because it makes headlines, it could still lead to legislators saying, “Nope, no autonomous vehicles.” Yeah, it is certainly a problem. 
Tom Davenport: 00:16:45
And, I think, also what we don’t realize is that the capabilities of autonomous vehicles and of AI in general are very situational. As you were suggesting before about rare cancers, if it’s a rare event, black ice on the road, for example, which those of us who live in the Northeast have encountered now and then, there are going to be very few images that you can train your autonomous vehicle on. And it can’t even be seen by a human anyway, much of the time; that’s why it’s called black ice. So I think it’s all got to be very granular, when autonomous vehicles work and when they don’t. That’s why they’re geofenced to areas within Phoenix, for example, where you don’t have much black ice. And I think humans may have a hard time with that as well, knowing what the circumstances are when you should trust them and when you shouldn’t. 
Jon Krohn: 00:17:56
Yeah. Yeah. I think in addition to the black ice thing, a big problem for autonomous vehicles has been glare. So I know there was a famous case of, I think it was a Tesla, where there was a decapitation. 
Tom Davenport: 00:18:09
Yeah, the glare of that white truck or whatever. 
Jon Krohn: 00:18:11
Yeah, exactly. 
Tom Davenport: 00:18:13
It was mistaken for the sky, I think, and changing a few pixels, maybe there’s dirt on the camera or something, that could be a factor. And you see that in radiology as well, because different machines have different resolution levels, some more effective than others in their lenses and so on, and they may not be the ones that the algorithm was trained upon. 
Jon Krohn: 00:18:45
Right. Yeah. That could be a big issue. I could see that for sure. All right, so we’ve talked about, in your book All-in On AI, how some of these challenges present from the human side, where CEOs could be the problem, or people who are resistant to change could be part of the problem. So let’s flip that around to the positive. What can an organization be doing to become AI-fueled and see the success that these AI early adopters, whom you chronicle in the book, have enjoyed? 
Tom Davenport: 00:19:28
Yeah. Well, it’s some degree of vision, an ability to connect what AI can do with the business problems that the organization is facing. It means that typically these senior decision makers are going to need to know a fair amount about AI and how it might progress. And in the book, I talk about one company. Most of the companies are really big; this one’s middle sized. It’s, I don’t know, 800 million in revenues or so. It’s called CCC Intelligent Solutions. It’s been around for a while. CCC used to be a company that provided collateral values for cars that got crunched in crashes, and now they do image recognition. So if you have a car accident, you can take a few photos of your car, and with some companies like USAA, assuming that there’s no possibility of underlying frame damage, they’ll give you an estimate, basically, within seconds. 
 
00:20:43
And for that to happen, the CEO, who’s a technically oriented guy, Githesh Ramamurthy is his name, had to be aware of the possibility that this image recognition was going to mature, aware that smartphone cameras were going to increase a lot in their ability to take high-quality, high-resolution photos, and start to develop that capability long before it was really fully ready. And I think they were working on it for a decade or so before they did a deal with USAA. So that takes guts, and it takes the ability to look into the future and to have some knowledge, some pretty substantial knowledge, of the progression of the technological capability. 
Jon Krohn: 00:21:40
Right. So not only an understanding of the potential of AI in general, but also the underlying technologies, consumer technologies perhaps, or industrial technologies, and where those are going. So you can say, in about five years, consumer phones will have cameras that are good enough that we could have this capability, so we need to start working on the infrastructure and the legal frameworks internally to be able to handle processing images and making decisions automatically in about five years. 
Tom Davenport: 00:22:20
And this same company, by the way, I think by the time autonomous vehicles come out, will be ready with some ability to help insurance companies decide which vehicle was at fault. And vehicles are throwing off massive amounts of data even now, so they can start to gather and model that data in order to help make decisions about it. 
Jon Krohn: 00:22:48
And you just gave me a light bulb around how we could end up going in the direction of autonomous vehicles in the future, regardless of what your average person thinks. You’re nodding your head already. You already know, you’ve already thought of this, which is that insurance companies could just say your premiums will be a tenth of what they would otherwise be if you just let your machine drive all the time as opposed to yourself. You’ll have to pay 10 times as much every month if you want to handle the steering wheel. 
Tom Davenport: 00:23:17
Well, that would make sense, and it would be relatively easy for them to do because a lot of these companies, starting with Progressive, who we talk about in the book, are doing continuous monitoring of human driving anyway. Maybe they would say, “You’re not a very safe driver. You should turn over more to the machine, to the autonomous vehicle.” On the other hand, “This other person, you’re a very safe driver. We’re not going to give you as much of a discount if you make that conversion.” So again, it could be a more granular calculation than saying we’ll reward everybody for moving to this technology. 
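[Editor’s note: Tom’s “granular calculation” can be sketched for technical listeners. Everything below is a purely hypothetical illustration, not any insurer’s actual pricing model; the function name and risk numbers are invented for the example.]

```python
# Hypothetical sketch of a granular autonomy discount: instead of a flat
# discount for letting the machine drive, the discount depends on how the
# individual's monitored human-driving risk compares with the estimated
# risk of the autonomous system.

def autonomy_discount(human_risk: float, av_risk: float,
                      max_discount: float = 0.9) -> float:
    """Fraction knocked off the premium for switching to autonomous mode.

    human_risk, av_risk: expected claims cost per mile, in the same units.
    A risky driver gains a lot by switching; a very safe driver gains little.
    """
    if human_risk <= av_risk:
        return 0.0  # this human is already at least as safe as the machine
    saved_fraction = 1.0 - av_risk / human_risk
    return min(max_discount, saved_fraction)

# Risky driver: the machine is 10x safer, so the discount hits the 90% cap.
risky = autonomy_discount(human_risk=0.010, av_risk=0.001)
# Safe driver: the machine is only slightly safer, so the discount is small.
safe = autonomy_discount(human_risk=0.0012, av_risk=0.001)
print(risky, safe)
```

This matches the episode’s intuition: Jon’s hypothetical 10x-safer machine yields a large discount, while Tom’s “very safe driver” sees little incentive to convert.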
Jon Krohn: 00:24:05
Cool. All right. So that covers what you’ve got in your latest bestseller, All-In On AI. Another theme in your writing about AI, from other books as well as the various other publications that you write for, is that you advocate for putting humans in the loop. In other words, you believe that AI can augment human capabilities more effectively than replacing humans altogether. So would you mind elaborating for our listeners on this viewpoint? 
Tom Davenport: 00:24:38
Sure. Yeah. I’ve written two books on this, and I was a convert. I started out with the idea that I would write a book about how AI was going to take away our jobs, and I even had a title in mind of No Humans Need Apply. And then, partially it was doing the research, partly my co-author on this book, Julia Kirby, convinced me, “Well, it should really be called Only Humans Need Apply because augmentation, smart humans and smart machines working alongside each other, is both a far more effective approach and, at least thus far, a far more likely approach in organizations.” I talked to a ton of companies about this and not a single one yet has said, “Yeah, we’ve laid off a lot of people because of AI.” 
 
00:25:39
In industrial manufacturing, robots, according to some economists, have led to moderately substantial job losses, about three jobs lost for every robot deployed on average. But in other areas of business and organizations, there’s been hardly any job loss at all. And so I wrote another book that came out recently called Working with AI, and it’s 29 case studies of people who work with AI already on a day-to-day basis, and then a few chapters about the lessons learned from those case studies. Again, nobody seems to be worried. The people who are doing these jobs aren’t worried that they’re going to be losing their employment anytime soon. And the combination seems to work pretty well. So maybe at some point we’ll have the singularity occur, where AI is better than us at everything we do, but for now, I think we’re pretty safe. 
Jon Krohn: 00:26:56
Yeah, we have a whole episode dedicated to this topic, episode number 443 with Jeff Wald. Actually, when I am talking to somebody at a party and they find out that I have a podcast and they’re like, “Oh, well I probably couldn’t listen to it,” I’m like, “Well, we do have some episodes that you don’t need to be a technical expert to listen to.” And I typically point them in the direction of this episode with Jeff Wald, which is all about this conversation. He blows my mind on air; he turns me around. So the same experience that you had while you were writing this book, intending No Humans Need Apply and then it becoming Only Humans Need Apply, is the same one I go through as that episode is being filmed. 
 
00:27:41
So Jeff does a great job of summarizing statistics from respected organizations like the WHO and academic papers to convince me, and hopefully some other listeners, that by and large, as you say, there are some cases in particular industries where machines do replace workers. But on a society-wide scale, just like every other industrial revolution beforehand, more jobs are created than are destroyed by these new technologies. 
Tom Davenport: 00:28:19
Yeah, I mean there are tons of predictions, which you have probably seen, saying that a lot of jobs will be lost. There was this Oxford prediction, I think 47% of US jobs are automatable. There’ve been several predictions about how many jobs would be lost and how many would be added. I think you’re probably right, but we don’t really have good data on anything except that all the predictions are wrong. Certainly the negative ones are wrong. And we don’t have a good way to even know how many jobs have been added because of AI. It takes the Bureau of Labor Statistics and all these government organizations a long time to sort of figure out there’s a new category of job. And people are saying now, “Oh, with things like ChatGPT and generative technologies, prompt engineer is going to be a big new job.” But it’s probably going to be five years before the Bureau of Labor Statistics recognizes something called prompt engineer, assuming it does really turn out to exist. 
Jon Krohn: 00:29:36
Right. Yeah. 
 
00:29:38
Mathematics forms the core of data science and machine learning. And now, with my Mathematical Foundations of Machine Learning course, you can get a firm grasp of that math, particularly the essential linear algebra and calculus. You can get all the lectures for free on my YouTube channel. But if you don’t mind paying a typically small amount for the Udemy version, you get everything from YouTube plus fully worked solutions to exercises and an official course completion certificate. As countless guests on the show have emphasized, to be the best data scientist you can be, you’ve got to know the underlying math. So check out the links to my Mathematical Foundations of Machine Learning course in the show notes or at jonkrohn.com/udemy. That’s jonkrohn.com/U-D-E-M-Y. 
 
00:30:22
Yeah. So that big Oxford paper seems to be almost always the one that’s cited on whether our jobs are susceptible to computerization. The paper is called The Future of Employment: How Susceptible Are Jobs to Computerisation? It’s by Carl Benedikt Frey and Michael Osborne. Yeah, it’s interesting how everyone always comes back to that [inaudible 00:30:41]. 
Tom Davenport: 00:30:40
Yeah, I’m a visiting professor now at Oxford. And I have met with Carl and he’s sticking to his guns. I think he probably will admit that it’s taking a little longer than he thought. They covered themselves by saying automatable, not that they would actually be automated. But definitely if it’s happening, it’s happening very slowly. 
Jon Krohn: 00:31:07
Yeah, I’ve been trying to get the other author, Michael, on the show because I met him at a wedding before the pandemic. 
Tom Davenport: 00:31:13
Oh? 
Jon Krohn: 00:31:14
Yeah, through a friend whom I met at Oxford when I was studying there. And Michael, very kindly, in my one book that I’ve written, he did a… What’s it called when they write [inaudible 00:31:33]? 
Tom Davenport: 00:31:34
Foreword? 
Jon Krohn: 00:31:34
Like a… 
Tom Davenport: 00:31:34
The foreword? Yeah. 
Jon Krohn: 00:31:34
No, not the foreword. Just like a positive- 
Tom Davenport: 00:31:35
Oh, a blurb. A blurb. An endorsement technically but written within a blurb. 
Jon Krohn: 00:31:36
An endorsement, yes. Yes. Yes. He very kindly wrote an endorsement. And yeah, I would love to get him on the show someday. But yeah, really fascinating work. And it’s interesting how that became the gold standard citation for all this work, even though now it’s 10 years old. It’s 2023. 
Tom Davenport: 00:31:59
Yeah. Yeah. And there are predictions of what would happen in 2022. I think the World Economic Forum said… I forget the exact numbers, but said that hundreds of millions of jobs would be lost and somewhat more would be gained by 2022. And I think now we can officially declare, “Eenk! Wrong. Neither happened.” 
Jon Krohn: 00:32:22
Right. Something that we touched on in the preceding episode of this show, episode number 646 with Zack Weinberg, was ChatGPT. We were talking about ways that you can derive commercial value from ChatGPT right away. And you happened to mention ChatGPT a couple minutes ago, and this idea of a prompt engineer. I want to tie that, ChatGPT, to another term that you gestured at a few minutes before that, which is artificial general intelligence, an algorithm that would have all of the learning capabilities of a human. Given your expertise, I think you might have a really interesting idea here. So Zack, in that episode, he’s a layperson. And that’s why I deliberately brought him on, because I wanted to show listeners, whether they’re technical or not, that they can be driving commercial value right now, generating marketing copy, generating blog posts, generating copy for digital ads. You can do these trivially easily now with ChatGPT. 
 
00:33:25
And so Zack was concerned about AGI. One of the things that we got talking about is how we don’t need to have an algorithm that has the same kind of thinking as us, that can learn in the same way as us for it to be incredibly powerful and incredibly dangerous. And ChatGPT meets those criteria. 
Tom Davenport: 00:33:53
Right. We’re not statistical predictors of the next word that needs to come out or the next image or whatever, yet it still manages to impress. 
Jon Krohn: 00:34:05
It still manages to impress. So there are ways that we can learn that it can't, but it's also capable of doing things we can't. The specific example that we end up talking about in the show is that you can have a script written in the style of Larry David, where Snoop Dogg is an actor on the program and raps about the global recession or the COVID pandemic or whatever you want. That kind of capacity, to imitate the style of a rapper inside a comedy script, and to do it unbelievably well, is something very few humans would be able to pull off. Or if a human could do it, it'd be quite laborious: it would take hours of research on the styles of the rapper and the writer to figure out how to tie them together into the content. And you get results instantaneously. 
 
00:35:06
And so even if we never have AGI in our lifetimes, things like ChatGPT coming out a few months ago show that there are going to be incredible generative AI capabilities that far exceed anything a human can do. And OpenAI's done an incredible job of putting guardrails around what ChatGPT outputs, but it's inevitable that in six months somebody is going to open source how to do the same kind of thing. And then you can get around those guardrails. It could be generating propaganda targeted at a sociodemographic group. The same way that Zack, in the last episode, can generate marketing copy for his digital ads, dictators can generate propaganda against minority groups in their country. 
Tom Davenport: 00:36:03
Yeah, no, I agree. And you’re talking mostly about text, but obviously a lot of potential for image manipulation and deepfakes. There are already some open source capabilities for that. OpenAI does not allow you to use public figures as inputs into your prompt, but other companies don’t mind. And you already are seeing some of that. Sadly, I don’t see any sort of regulatory action happening quickly in the United States at all at least. So I think these capabilities are going to get way ahead of what we ought to be doing with them. 
Jon Krohn: 00:36:58
I can't remember if I ended up discussing this on air or if it was after we'd stopped recording with Zack for the preceding episode, but my go-to, boilerplate answer when people say to me, "Oh, isn't AI going to cause terrible damage in the coming decades? These kinds of capabilities are out of control," is that humans have been creating structures beyond any individual's control for millennia. Governments, armies. Armies have done horrible things that nobody wanted when those structures were created. And hopefully the same kind of havoc that armies wreaked in the 20th century isn't wreaked by AI in the 21st century. Mistakes are inevitable, but hopefully we catch on to those quickly and they don't become pervasive. 
Tom Davenport: 00:38:01
Yeah. And I think that humans are also very good at knowing what is real or human-based. I mean, we've all been receiving, for decades now, marketing content that is intended to be personalized. But I think we can almost always recognize that it's generated by some database marketing engine and that they don't really know me, despite the fact that they say, "Dear Tom…" Maybe the envelope even has what looks like a handwritten address, but we start to be able to figure that out. It takes a while. I think that will be true with things like ChatGPT and so on. I think we'll come to recognize the amazingly good, but still relatively low, quality of much of the text that it produces. And maybe it'll make us appreciate even more the truly human and creative stuff that comes out. 
Jon Krohn: 00:39:18
Yeah, I think that idea of when you see fake handwriting on an envelope that comes in, you’re like, “Oh man, this is going to be salesy for sure.” 
Tom Davenport: 00:39:29
Yeah. That’s where you don’t even open it anymore, right? 
Jon Krohn: 00:39:31
Yeah. I guess similarly if you are on a website and you’re getting attention right away and somebody’s willing to serve you and spend time with you, instantly you’ll be like, “Okay, this is obviously a bot,” no matter how good the quality of conversation is. 
Tom Davenport: 00:39:45
Yeah. No human would ever give me immediate service. 
Jon Krohn: 00:39:48
Exactly. Yeah. All right, so we’ve talked a bit about the Frey and Osborne paper, Tom, maybe our listeners would be interested in hearing what your thoughts are on what kinds of jobs will be wholly automated with AI in the coming decades and what kinds of workers will be spared. 
Tom Davenport: 00:40:08
Well, AI is really good at tasks, but not entire jobs. So when we have a job that consists of only one task, I think that’s the most at risk. 
Jon Krohn: 00:40:22
Right. 
Tom Davenport: 00:40:24
I've looked at this for radiology and radiologists. A lot of people think all they do is read images, but they actually do a variety of other things. People who know more about radiology than I do have said that reading images is one of 13 things that a radiologist might do in a day. So yeah, if that one task gets taken over, it will mean a change in the job, but it won't mean that the job goes away. We've seen it in law with e-discovery, this idea of reading through massive amounts of documents to see whether somebody said anything inappropriate or relevant to a case in an email. For a while, regular lawyers did that. Then it went to lower-paid contract document review lawyers, and now it's done by machines. But lawyers are still around in large numbers, maybe numbers that are too large for the health of our society. And for the most part, they just find other things to specialize in and let the machine do that particular task. 
 
00:41:43
I think if a machine is going to take over an entire job, it has to be a pretty structured activity that doesn’t require much variation. People used to say truck drivers were going to be the first thing that got taken over, but I’m not sure we still believe that. A lot of the autonomous truck companies have gone out of business now and who knows when that will be the case. 
Jon Krohn: 00:42:15
With these persistent rumors over the past decade that truck driving would be automated in the very near future, there's now an enormous shortage of truck drivers, certainly in the United States, because people thought, "Oh, why would I go into that career?" So it turned out to be quite a lucrative thing to get into, because they've had to pay so much to convince people to come into it.
 
Tom Davenport: 00:42:39
Yeah. And people have said the same thing about radiology: it used to be a really sought-after specialty among medical students, and some were scared away. We have a big shortage of radiologists, whatever the reason, and it's not going away anytime soon. And in fact, with all of this stuff, we as humans don't really want human jobs to go away. But on the other hand, we're spending a lot of money on AI and related technologies. And if it's not going to give us some productivity gains, we're not getting a very good return on our investment at all.
 
Jon Krohn: 00:43:20
Right. 
Tom Davenport: 00:43:20
So we need to think about both perspectives. I do think that when we can automate a structured task, we ought to try to do it, but we ought to give people some warning and help them try to figure out some other things that they might do within our organizations. 
Jon Krohn: 00:43:40
Mm-hmm. Mm-hmm. Let’s talk about that. Let’s talk about getting value, getting a great return on investment from AI. So I gather from what we’ve just been talking about that if you are a leader in an organization or a data scientist in an organization and you’re looking for opportunities for something to automate, it sounds clear that something that has a lot of repetition, some task that’s highly repetitive is ideal for automating. What kind of task is ideal for augmenting? 
Tom Davenport: 00:44:08
I think anything involving innovation and adaptation and so on. These smart machines are very capable, but they're really only good at doing things that have been done a lot before. That's where we get the data to train them. And so if the environment is changing or has changed and you need an entirely new approach, humans are going to help a lot in that regard. And I think most of us humans find that more interesting work anyway. We just can't fall into the trap of doing the job the same way over and over again, or we're likely to be replaced by a machine. 
Jon Krohn: 00:44:58
Yeah. And that ties into that idea of being innovative. So if you are in a role that requires some innovation and you are stuck on what to do next, well, now you can open up ChatGPT in your browser and augment yourself with ideas. 
Tom Davenport: 00:45:13
There you go.
 
Jon Krohn: 00:45:14
You can describe your situation: "Hey, ChatGPT, for the last two quarters, revenue has been stagnant in my organization. Give me a few ideas of things I could be doing." And then you can narrow down; you can say, "Okay, that first idea is a pretty good one. Let's make it more specific to this particular industry that I'm in," or whatever. And, as you say, it's not going to be able to come up with the complete answer. It's not going to be able to tie everything together, but we can augment with this idea generation. 
Tom Davenport: 00:45:46
I agree. I think that we should all lose our jobs if we don't start using tools like ChatGPT to figure out whether somebody has already thought of something that we might not have considered. In a related vein, and I haven't seen too much of it yet, I recently wrote an article in Harvard Business Review. It came out right before ChatGPT did, which is a little unfortunate, but I talked a lot about GPT-3 and so on. One company I interviewed, Morgan Stanley, is trying to use these generative tools to manage their internal knowledge, not just the knowledge in the world at large that happened to be on the internet and was used to train the model, but their own internal knowledge. And through fine-tuning of the model, they could find, say, the best recommendation for an investment in a particular situation. And I think organizations will be delinquent if they don't develop that kind of capability. It's a new approach to knowledge management, which was one of my previous enthusiasms years ago. 
Jon Krohn: 00:47:07
Yeah, we had a cool guest two years ago now. In episode number 455, we had Horace Wu, who's the CEO of a legal AI startup called Syntheia, and he's doing the kind of thing that you're describing. You were already talking earlier in this episode about legal firms being able to search over documents. What Horace's company, Syntheia, allows you to do is generate clauses based on your internal database of existing clauses. These big law firms have millions and millions of documents, and you can ingest those. And then he has an add-on for Microsoft Word so that you can highlight a clause and say, "I'm looking for something like this, but this isn't exactly right." It'll convert that language into a vector, then look for similar vectors in the existing company database and return some suggestions of ways you could adapt the clause. 
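The clause-retrieval idea described above can be sketched in a few lines. This is a toy illustration only, not Syntheia's actual implementation: real products use learned language-model embeddings, while this sketch uses simple bag-of-words vectors and cosine similarity, and the clause database is made up.

```python
import math
import string
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return Counter(cleaned.split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(query: str, clauses: list[str], k: int = 2) -> list[str]:
    """Return the k clauses in the database closest to the query clause."""
    q = embed(query)
    ranked = sorted(clauses, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Hypothetical internal clause database
clauses = [
    "The tenant shall not sublet the premises without written consent.",
    "Either party may terminate this agreement with thirty days notice.",
    "All disputes shall be resolved by binding arbitration.",
]
print(most_similar("terminate the agreement with notice", clauses, k=1))
```

Swapping `embed` for a real sentence-embedding model would turn this skeleton into the kind of semantic clause search described, without changing the ranking logic.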
Tom Davenport: 00:48:09
I must say you have an encyclopedic knowledge of your previous podcast. 
Jon Krohn: 00:48:15
Well, I have a spreadsheet. So when I have an idea of how I can tie something to a previous episode, I furiously flip over to my spreadsheet of past guests and search their name quickly so I can cite it. I haven't memorized all the old episode numbers. 
Tom Davenport: 00:48:32
Well, before long, I think you’ll have to… If you haven’t done this already, you’ll have to turn all your previous episodes into text and have that as a training database for ChatGPT or whatever. And then you don’t need guests. You can just say, “Generate me something on this combination of topics.” 
Jon Krohn: 00:48:53
Well, as it happens, Tom, we've already done all of the foundational human labor required to facilitate that AI capability, because for every episode, certainly since I've been hosting the show over the past two-plus years, we have a transcript that has been meticulously cleaned by our podcast manager, Ivana. So people can go to www.superdatascience.com/podcast, pick whatever episode they like, and actually just read the entire episode instead of listening to it or viewing it. 
Tom Davenport: 00:49:23
Using Otter.ai or something like that? You transcribe it? 
Jon Krohn: 00:49:29
We use augmented humans. 
Tom Davenport: 00:49:30
Oh, okay. 
Jon Krohn: 00:49:30
So instead of a standalone service like Otter, we've decided to augment instead of automate. We use a service with humans who, surely, must use a tool like Otter as a first pass; it'd be insane if they didn't. Then they clean it up and pass it to us. And then our podcast manager, Ivana, reads through the entire thing and fixes all of the proper nouns and the kinds of things that the transcript creator doesn't know. 
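Part of the cleanup pass described here, fixing proper nouns a speech-to-text tool gets wrong, can itself be partially automated with a glossary of known corrections. The glossary entries and transcript line below are hypothetical examples; a human reviewer still catches far more than a fixed list ever can.

```python
import re

# Hypothetical glossary of proper nouns a speech-to-text tool tends to
# mistranscribe, applied as whole-word, case-insensitive replacements.
GLOSSARY = {
    "john crone": "Jon Krohn",
    "otter ai": "Otter.ai",
    "chat gpt": "ChatGPT",
}

def fix_proper_nouns(text: str) -> str:
    """Replace each known mistranscription with its corrected form."""
    for wrong, right in GLOSSARY.items():
        text = re.sub(rf"\b{re.escape(wrong)}\b", right, text, flags=re.IGNORECASE)
    return text

raw = "Welcome back, this is john crone. We tried otter ai and chat gpt."
print(fix_proper_nouns(raw))
```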
Tom Davenport: 00:50:06
Yeah. And by the way, I've found the same approach is being used in translation of important business documents, localization of marketing content, and so on. Something like Google Translate wouldn't be able to handle it at all. It's computer-aided translation that suggests something, and then the human can say, "Nah, don't like that, got to change that," and move along. It greatly improves their productivity, of course. 
Jon Krohn: 00:50:35
For sure, exactly. Cool. So yeah, it's been a very interesting discussion so far, and it has drawn on several of your books now. In addition to your books, you've also authored some 250 groundbreaking articles for prominent business publications, like the Harvard Business Review, the MIT Sloan Management Review, the California Management Review, the Financial Times, and so on; the list goes on. So some of our listeners, the hands-on data practitioners, may not be particularly familiar with your writing in these business publications. We probably have other listeners who very much are; they're in senior management, and that is where they get a lot of their AI content. But those technical data scientists have almost certainly heard of your work through your iconic piece, Data Scientist: The Sexiest Job of the 21st Century, an article that is now almost 11 years old. And you followed it up in July of 2022 with another article, Is Data Scientist Still the Sexiest Job of the 21st Century? So what's happened, Tom, over that intervening decade? How have the role's demands, requirements, and challenges changed? 
Tom Davenport: 00:52:03
Yeah. And I should say that both of those were co-authored with DJ Patil, and I made a good prediction in seeking a co-author. After we wrote that first article, he became the first Chief Data Scientist of the United States of America in the White House. So that worked out well. 
Jon Krohn: 00:52:22
I’ve been busy in my spreadsheet over here. He’s our guest on episode number 355. 
Tom Davenport: 00:52:25
Okay. I'm going to try to mention somebody who has not done your show. Anyway, one of the big changes, I think, is that we didn't really know enough yet about large-scale production of AI, when we first wrote that article, to realize that data scientists were probably not well suited to doing everything necessary to create an effective data product. They're really good at fitting models to data, tinkering with algorithms, feature engineering, and all that sort of thing. But most data scientists, I think, are not really good at many of the other things you have to do if you're going to successfully deploy a model into production and manage it over time: things like building the trust of senior executives so they adopt the model for their business in the first place, retraining the people who do the day-to-day work, redesigning the business process, integrating the model into your existing IT architecture, and scaling it so that it can handle lots and lots of concurrent users. 
 
00:53:51
So, now, I think we've realized we still need this model-building capability, for the most part. We also have automated machine learning, which can handle the more mundane aspects of that. But we also need machine learning engineers, data engineers, machine learning operations people, translators, data product managers, and so on, if we're going to be successful with the deployment of data products. And that means data scientists can no longer pretend to be unicorns, if they ever did. They need to be members of a team that works together. I think good data scientists need to worry about whether their models are getting deployed or not. If they aren't, I think they're not going to be successfully employed, by a company anyway. But everybody can have their specialties. 
Jon Krohn: 00:55:06
Oh, I just had an idea for a great article title. So in academia, we have, of course, publish or perish. And so, now, you could have deploy or be unemployed. It’s close, deploy to be employed. I don’t know, there’s something there. 
Tom Davenport: 00:55:23
Yeah. Yeah, well, it's interesting. I didn't really realize this. I wrote an article once for a relatively new journal coming out of Harvard, my graduate school alma mater. And you had Xiao-Li on one of your episodes. 
Jon Krohn: 00:55:42
Yes, Xiao-Li Meng. 
Tom Davenport: 00:55:42
Yeah, he's the founding editor and creator of the Harvard Data Science Review. And I loved him. He was great. He was dean of the graduate school after I left, and did wonders for their statistics department. But I wrote this piece, and it was supposed to be an opinion piece, a column. But Xiao-Li believes in getting everything peer-reviewed. So I wrote, "Data scientists need to worry about deployment." And two of the three peer reviewers said, "No, they don't." And I was shocked. I thought this was self-evident: deploy or be unemployed. So I had to water down the message somewhat in that article. And since then, I've focused a fair amount on that issue, because there's a lot of data suggesting that the majority of data science models don't get deployed at all, and they don't produce economic value as a result. 
Jon Krohn: 00:56:45
Yep, that’s right. It’s a small fraction. It’s 10% or 20% of data models [inaudible 00:56:50]. 
Tom Davenport: 00:56:49
It shouldn't be 100%, obviously, but it should be higher than 13%, which is one of the figures cited. 
Jon Krohn: 00:56:58
And so for those of you who really want the episode numbers, Xiao-Li Meng’s episode number 581. And yeah, actually, Xiao-Li’s episode is one of my favorites ever. It’s jam-packed with really interesting conversation. 
Tom Davenport: 00:57:12
He's such a fun guy. And he brought to the statistics courses at Harvard things like dating, wine, chocolate, and so on, things that had never been discussed in a statistics classroom before. 
Jon Krohn: 00:57:26
There's a lot of those specific topics in the episode, actually. And on the point that you're making about data scientists potentially being worried about their employment: I think they should be concerned. Or rather, the way I'd phrase it is that these skills, using scikit-learn to create a machine learning model, that in and of itself is not enough in this field anymore. A very common question that I ask our guests on air is: what do you look for in people that you hire? Or what roles do you have open? 
 
00:58:15
And because we have quite a few guests who are CEOs of fast-growing startups that have just raised $100 million, I ask what kinds of roles they're hiring for. They are not always hiring data scientists. Sometimes they are, but typically there are just a few such positions, and they can typically find really great people to fill those roles. However, all of these people, without fail, have evergreen job descriptions out there for software developers: machine learning engineers, data engineers, these exact roles that you've described that are adjacent to data science. They're either building the plumbing, in the case of the data engineer who flows the data into the models for the data scientist, or, in the case of the machine learning engineer, bringing the model into a production system. So yeah, I'm frequently encouraging listeners to pick up software developer skills where they can, even if they are a "pure data scientist". 
Tom Davenport: 00:59:16
Yeah. And I think the job that is increasingly going to coordinate all of those different roles, and make sure everybody's collaborating to create a good outcome, is the data product manager. That role is not terribly well recognized in a lot of businesses yet, but I think it will join prompt engineer in a future list of fast-growing jobs in the world of data science. 
Jon Krohn: 00:59:49
Yep, agreed. And somebody recently joined my machine learning company, Nebula: Alice, who came from France. Alice is how it's spelled to the anglophones. Her title isn't data product manager, but that is effectively what she is: she's a product manager for a data product, so she might as well have that job title. And it's been a game-changer. She's been working with the data scientists on our team a lot. Having somebody in that role who's really good at thinking about how users will interact with outputs from models, or data distributions, any kind of data that's being fed into or out of a model, has been a game-changer for us. So I think it's great that you're highlighting that particular role. It's not one that we've talked about on air before, to my knowledge. 
Tom Davenport: 01:00:45
Yeah, well, there's a guy named Brian O'Neill, who also has a podcast; maybe you can appear on each other's podcasts or something. But he's quite oriented to the design and user interface aspects of data products. And then there are other aspects too. I think the whole idea of MLOps, which you've recently had a podcast on, and I'm sure you'll come up with the number in a second.
 
Jon Krohn: 01:01:12
There’s too many. 
Tom Davenport: 01:01:13
Yeah. But the ongoing management of models is something that data product managers have to worry about too. Are customers using it? Has the world changed such that the model needs to be retrained? The machine can tell you some of these things, but it still requires some human oversight. Product managers in other areas still perform their jobs after products are released into the world, and I think they need to here too. 
Jon Krohn: 01:01:42
Mm-hmm, mm-hmm. So for people who want an MLOps episode, my apologies to all of the other MLOps guests that we've had on recently, but the two big ones: in episode 595 we had Joe Reis and Matt Housley, who recently wrote a very popular O'Reilly book on data engineering. And then Mikiko Bazeley in episode 599 is a great one as well. All right, so very cool. We've talked now about how data science has changed over the last 10 years. Do you have any insight into how the role may change over the coming 10 years? 
Tom Davenport: 01:02:23
Yeah. Well, I'm a big fan of democratization of these tools. You see it in other areas with low-code and no-code; you see it in automation, with users taking over citizen automation, if you will. So I'm a believer, and I know there aren't too many big believers in this among data science professionals, but the citizen data science movement, I think, is going to be very important. And that means professional data scientists will evolve toward only doing the really hardcore, new algorithm development work, and checking the efforts of the citizens. That will be, I think, a major change for them. But if you believe data science is important, we can't restrict its use to a relatively small number of highly trained PhD types. No offense to you, a PhD data scientist. We have to engage other people in this. There's just too much important work to be done. There's too much data out there. And let's face it, most of the work being done in data science is not breakthrough algorithm development. It's pretty mundane stuff. 
 
01:03:56
And, also, another change I think we'll see… It's widely stated, and I don't know if you have a podcast about this or not, that 80% of data science work is munging around with data. A lot of that's going to be done by AI. It increasingly is now, for things like integrating data across separate databases that turn out to describe the same people, or the same products, or whatever, through probabilistic matching technology. And I think when that's done by AI, it will free up the data scientists: the ones that are really great with algorithm work can do that, and the others can supervise the citizens, train them, make sure they're effective, help them pick out the right tools, and so on. 
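The probabilistic matching Tom mentions can be illustrated with fuzzy string similarity from the Python standard library. This is a minimal sketch: real entity-resolution systems combine weighted comparisons across many fields (names, addresses, dates), and the records and threshold below are invented for the example.

```python
from difflib import SequenceMatcher

# Toy illustration of probabilistic record matching across two databases:
# score name similarity and call a pair a match above a chosen threshold.
def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_records(db_a, db_b, threshold=0.8):
    """Return (record_a, record_b, score) for every pair scoring above threshold."""
    matches = []
    for name_a in db_a:
        for name_b in db_b:
            score = similarity(name_a, name_b)
            if score >= threshold:
                matches.append((name_a, name_b, round(score, 2)))
    return matches

# Hypothetical records of the same people in two systems
crm = ["Thomas Davenport", "D.J. Patil"]
billing = ["Tom Davenport", "DJ Patil", "Jane Doe"]
print(match_records(crm, billing))
```

Here, `match_records` links "Thomas Davenport" to "Tom Davenport" despite the spelling difference, while leaving "Jane Doe" unmatched.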
Jon Krohn: 01:04:54
We haven't done an episode on automated data munging. I like that; it's a good topic idea. And I think this ties in nicely to the conversation we had, a long time ago now in this episode, about radiologists, where you were saying that one of the 13 things a radiologist does in their day is now being automated to some extent. And these kinds of tools like AutoML, which we already mentioned… I should have already brought up an episode number for that; I'll have it in a second. But with AutoML tools, while we might think of a data scientist's job as fundamentally being creating models, deciding what the best model is, I don't know if that's one in 13, one-thirteenth of their day, but it isn't actually the majority of their day, even though it is the most prominent thing we think of when we define the data scientist role. 
Tom Davenport: 01:05:47
It may be the thing they like the most. 
Jon Krohn: 01:05:52
Yeah, exactly. And so I think data munging could be where a data scientist today spends a huge amount of their time. But I think it’ll be great to have more and more of that automated too, because people hate that part of it anyway. So then data scientists can be adding a huge amount of value, as you’ve said, supervising citizen data scientists. That’s a great idea. But then, also, just communicating what these things mean, and interpreting them, and thinking of commercial applications. 
Tom Davenport: 01:06:23
Yeah. Yeah, so I mentioned in that book Working with AI, I did one chapter on 84.51°. I don't know if you've come across them. They're the captive data science subsidiary of Kroger, based in Cincinnati, and 84.51° is the longitude of Cincinnati. They say, "We work with longitudinal data, so we chose 84.51°." But they have done a great job, I think, of adopting AutoML, engaging a new class of people to do model work that they call insight specialists, and having the data scientists supervise and train them. 
Jon Krohn: 01:07:05
Cool. That's a really great specific example. It's great that, through all the case studies you've done, you have all these specific examples that you can reel off. I love that. Nice. So another thing that you've come up with, as a result of all these case studies over the years, is the idea of different eras for analytics: Analytics 1.0, 2.0, and 3.0. So what is the difference between these levels? And I mistakenly blended together two different concepts of yours when we were talking before we started recording: what you call eras, these Analytics 1.0, 2.0, 3.0, I was calling maturity levels, and I think that's something separate that you have. So maybe you could fill us in on the eras and the maturity levels, and maybe how they're related? 
Tom Davenport: 01:08:03
Yeah, so I wrote an article called Analytics 3.0. Since then, I've added another era. But 1.0 was what I call Artisanal Analytics. It was the era of decision support: not a lot of prediction, mostly descriptive analytics. This was what I grew up on, maybe with some regression analysis for prediction. Then 2.0 was the big data era, where companies, particularly digital-native companies, had this vast amount of new data. This is when data science was born as a profession; the advent of working on data products, as opposed to internal support of executive decision-makers, came along at this time. And then, around 2012 or 2013, I started seeing big companies say, "We want to do that too." I call that 3.0, data economy analytics, where everybody can get in on big data plus small data decision support, but more industrialized and involving large-scale decisions, and with big companies developing data products too. And that was the beginning, I think, of widespread use of machine learning. 
 
01:09:37
But Analytics 4.0, if you will, is the AI era, of course, and we've talked about that a lot, so I don't need to go into detail about it. As for the maturity models, I did them for analytics. I dabbled, in the latest book, All-in On AI, with a maturity model for AI, but I didn't have a lot of data there; I have had it for analytics. So, at level one, you're not doing much of anything. Level two is very siloed, and the organization doesn't recognize analytics as an important business resource. Level three is where executives start to have a glimmer that, "Hey, this might be important to us," and they start to invest and build up, but it's still early days. 
 
01:10:32
And 4.0… I should just say level four, not 4.0. At level four, you're good at analytics but you're not obsessed by it; you don't really think of it as a competitive weapon. So a number of banks… Obviously, banks have to make good decisions about whether to extend credit or not, and a lot of them were level four. And level five is these analytical competitors, where you think that it's the key to the success of your strategy, and you build all sorts of capabilities around it. Capital One was one of my favorite examples in that category, Progressive Insurance, Caesars, which has now fallen back to, I think, probably level three now since the- 
Jon Krohn: 01:11:19
Caesars the casino company, or the pizza company?
Tom Davenport: 01:11:23
Casino, yeah, the casino company. 
Jon Krohn: 01:11:24
Oh, that’s Little Caesars. 
Tom Davenport: 01:11:26
Little Caesars has gotten better. A friend of mine is the chief analytics officer there, and they've started to use a lot more analytics and AI. But Caesars was headed by a PhD economist from MIT, a friend of mine named Gary Loveman, and he brought all sorts of analytical discipline. But he left, and they've evolved back to the past, sadly. So I think it's hard for companies to embed this so deeply that they don't slide back when a charismatic leader who believes in data and analytics and AI ends up leaving. Maybe Capital One won't be that way. They're still run by the founder, but I think it's pretty deep in that company now. 
Jon Krohn: 01:12:13
Mm-hmm. Is lagging behind, in a lower maturity level, a death sentence for an organization, relative to their competitors? 
Tom Davenport: 01:12:23
I think, over time, it probably is. We've seen this in other generations of technology. We had a lot of small retailers, and they pretty much all fell by the wayside when Walmart came along and did these great supply chain systems, built a big satellite network for moving data around, and so on. And we've certainly seen that Amazon, with the use of those technologies and a huge amount of analytics and AI, has put a lot of small retailers out of business. So yeah, I think that if it's a data-intensive industry, or can be one, that's ultimately going to win out. And it's hard in some of these areas to be a fast follower, just because with AI you have to accumulate a lot of data and a lot of skills, and it's hard to do that overnight. So I think it's very dangerous to say, "We're not quite ready yet. We're going to wait and see what happens." 
Jon Krohn: 01:13:34
Mm-hmm, mm-hmm. So how can organizations level up as quickly as possible? Do you have frameworks for that? 
Tom Davenport: 01:13:40
I do. In that analytics work, I call it the DELTA model. Now, there’s a DELTA Plus and so on. A company that I co-founded, the International Institute for Analytics, has developed all these assessment tools and models. But DELTA was D for data; E for enterprise orientation; L for leadership; T for targets, which was where you really focus your efforts; and A for analysts, or data scientists, if you will, now. But I think technology has come to play a much bigger role than it did in the early days of analytics, when all everybody needed was a Teradata data warehouse and SAS or SPSS or something like that. Now, technology’s a big factor. The methods that you’re using are a big factor. And obviously, analysts have become data scientists. But I think that whole cadre of people that we were talking about earlier, the machine learning engineers, data engineers, have to be factored in too. 
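For readers who like to make frameworks concrete, the DELTA dimensions Tom lists could be sketched as a simple self-assessment rubric in Python. The five dimension names come from the conversation; the 1–5 scoring scheme and the use of the weakest dimension as the overall level are illustrative assumptions for this sketch, not an official International Institute for Analytics instrument.

```python
# Hypothetical self-assessment rubric loosely based on the DELTA model
# (Data, Enterprise orientation, Leadership, Targets, Analysts).
# The scoring scheme below is an illustrative assumption, not an
# official International Institute for Analytics tool.

DELTA_DIMENSIONS = ["data", "enterprise", "leadership", "targets", "analysts"]

def maturity_level(scores: dict) -> int:
    """Map per-dimension scores (1-5) to an overall maturity level.

    Uses the minimum dimension, on the theory that the weakest
    capability constrains the whole organization (an assumption
    made for illustration).
    """
    missing = [d for d in DELTA_DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return min(scores[d] for d in DELTA_DIMENSIONS)

# A bank that is "good at analytics but not obsessed by it" (level four):
bank = {"data": 5, "enterprise": 4, "leadership": 4, "targets": 5, "analysts": 4}

# An "analytical competitor" in the Capital One mold (level five):
competitor = {d: 5 for d in DELTA_DIMENSIONS}
```

In this sketch, an organization only reaches level five when every dimension is strong, echoing Tom's point that charismatic leadership alone is not enough to keep a company at the top level.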
Jon Krohn: 01:14:46
Mm-hmm, mm-hmm. Cool. All right. Yeah, DELTA Plus. All right, so earlier in the episode, I had you gazing into your crystal ball as to how data science might change in the coming decades. Well, you’ve been a distinguished professor at Babson College for nearly 20 years, and you teach artificial intelligence for business there. In that course, students have to study material about how AI works. So do you think that future business leaders, not just data scientists, need to be better versed in technology, and AI in particular, than they’ve had to be in the past? 
Tom Davenport: 01:15:28
Yeah, no doubt about it. As I was saying earlier, I think you need to understand this stuff pretty well if you’re going to make big strategic decisions about it, and not very many senior leaders do. So I think it’s almost organizational malpractice not to engage senior leaders in what this stuff is all about. It’s the most powerful kind of general purpose business tool we have these days, like electricity almost. And I do this some myself. I was just involved in an executive program at MIT for the US military. Obviously they’re going to be spending a lot of money on AI and they need to understand a lot of different aspects of it. We haven’t really talked about the ethical aspects of this. Clearly that’s important on the military side, but I think it’s important for every company. So yeah, you absolutely need to train senior leaders in this sort of stuff. 
Jon Krohn: 01:16:34
Cool. And yeah, earlier in the episode, we talked about the kinds of technical roles that have evolved or are evolving as a result of AI. So things like data engineers and machine learning engineers have already become really, really important. Data product managers, and maybe things like prompt engineers, will become more and more important. And so what about in executive leadership? How will AI transform the kinds of specialized C-suite roles that are around? 
Tom Davenport: 01:17:05
Yeah. Well, I think the first chief data officer was named at Capital One in 2002. Now, many of those chief data officers, I think quite fortuitously, have evolved into chief data and analytics and AI officers. Usually they don’t add the AI, it’s just CDAO. And that’s good, because it’s hard to show value from data management alone. It’s analytics and AI that really provide the visible value. So I’ve written that those people have had low tenures in their jobs. Fortunately, there’s a lot of demand for them, so they can find other ones. But with analytics and AI included in the role, I think it’ll make it much longer lasting. And those tend to be sort of business oriented. In many cases, they report even to CEOs. It’s kind of surprising to me, but I’ve just done a survey with AWS, and some other surveys, showing a fair number of them report to CEOs, some report to chief operating officers, and not very many anymore report to chief information officers, who are kind of left to think about infrastructure stuff. 
 
01:18:20
And they define their success in business terms, not so much technical terms, and many of them have business backgrounds. So I think it’s a very important role and you’re seeing pretty widespread growth of it, although there’s still, I think, a lack of understanding in many companies about what it’s supposed to do. 
Jon Krohn: 01:18:40
Cool. Yeah, that’s a great insight there. You mentioned chief data officers have short tenures. Do you have insight into that? Why do chief data officers have short tenures? 
Tom Davenport: 01:18:51
Just because it’s hard to show value if you’re just doing data management. It’s kind of an abstraction to most peer executives. It’s hard to show rapid improvement. It’s just data governance, and nobody wants to be governed anyway. It’s just a tough job unless you add analytics and AI to it; that makes it a lot more appealing. As one of those people said to me, “CDO, without analytics, that’s a two-year job.” People figure whatever crisis you were brought in to solve is more or less fixed, so we don’t need you anymore. 
Jon Krohn: 01:19:36
Right. Right. Cool. Well, yeah, so chief analytics or chief AI officers, we’ll see more of them in the future, and maybe they’ll have more job stability. Cool to hear that they report directly to CEOs, and unsurprising to hear that they’re focused on commercial success as opposed to technical success. All right. So we’ve had you projecting into the future a lot in this episode; let’s now look deep into your past. Your undergraduate and graduate degrees are in sociology, but you’ve been writing articles about business and analytics for nearly four decades. So what career events got you involved at the intersection of business and analytics? 
Tom Davenport: 01:20:26
I think I was still an undergraduate and I got hired by a medical sociologist to do some computer analysis of some data that she had. So then when I got to graduate school, there was a job in the computing center for social sciences to help people do their statistical analysis work. So I did that for four years or so, and then I worked for a consulting firm as their data analyst. And over time, I got less and less interested in sociology and more and more interested in computer stuff. So now I guess you could say I’m a sociologist of business information and technology, but I don’t think that’s a recognized field in the sociology profession. But yeah, that’s kind of how I got there. And I have kind of bounced back and forth between consulting firms that focus on information technology and then business schools, where I’ve been mostly for the last 20 years or so. 
Jon Krohn: 01:21:32
Cool. Yeah. Was there anything in particular about that initial opportunity that appealed to you? Was it just the opportunity coming up, or something about analytics that you were curious about? 
Tom Davenport: 01:21:56
I don’t know. I taught statistics to some degree, and I always found it hard to teach. I wasn’t like Shelly Ming, I wasn’t creative enough to come up with all those cool examples. But I really thought it was amazing how computers could chew through data, and I really enjoyed sitting down with my customers and saying, “Okay, what are you trying to prove? Let’s see what model might work for it.” In those days, it was sort of regression analysis or analysis of variance or cross tabs. There were no random forests to be discussed at that point. But I found it really fun. It’s like puzzle solving, I guess, for those people. 
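For anyone curious what those workhorse techniques look like in code, here is a minimal plain-Python sketch of a cross tab and a one-variable least-squares regression, the kinds of analyses Tom describes running for his customers. The data and variable names are invented purely for illustration.

```python
# A tiny sketch of two classic analyses: a cross tab (via Counter)
# and a simple least-squares regression. Data are invented.
from collections import Counter

# Cross tab: counts of (segment, churned) pairs from hypothetical records.
rows = [("retail", True), ("retail", False), ("retail", False),
        ("business", True), ("business", True), ("business", False)]
crosstab = Counter(rows)

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Fit y ~ x on four hypothetical observations; slope comes out near 2.
slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.1, 8.0])
```

Modern Python even ships this in the standard library (`statistics.linear_regression` since 3.10), which underlines Tom's point about how far the tooling has come since the cross-tabs era.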
Jon Krohn: 01:22:49
Nice. All right. Well, we’ve gotten through all of my questions, and as I mentioned at the onset of the show, we had a lot of audience engagement, so let’s dig up some of the most popular questions that came up on those social media feeds. The first one here is from Mike Nash. Mike had a number of questions for you, but we’ve actually addressed most of them already in the conversation we’ve been having. One that we haven’t talked about at all is that it’s possible here in 2023, with interest rates going up around the world, that we might be entering choppy economic times. Do you think, Tom, that this will impact the growth of AI for the better or for the worse? 
Tom Davenport: 01:23:39
I think it’s a great question. I think we’re beyond the AI winter stage. There’s just so much going on in the AI area that I don’t see a major retreat, but I do think that a number of companies will sort of reexamine their activities and make sure they’re yielding value. So it’s even more important to get that deployment, and productivity, and economic value that we were discussing earlier. Maybe there will be fewer startups because there won’t be as much venture capital money around. But in the latter days of 2022, we obviously saw a lot of really exciting things happen with generative AI, and that’s sort of just beginning. 
Jon Krohn: 01:24:36
Yep. I couldn’t agree more. I don’t think that a recession is going to hold back the flood of AI applications that are becoming more and more widely available. Another question here comes from Nigel Thompson. He’s a strategy consultant, and he, again, has a number of questions for you. I don’t know if I’ve ever had so many people come up with so many different questions; usually people just have one or two. But Nigel has four questions, and they’re all pretty big. Many of them we already covered a little bit in the discussion that we’ve had in the episode, so I’ll skip right to his fourth one, which is, “Would you, Tom, use intelligent wearables if you had no idea how the data were being used at the backend?” So I guess this is kind of a question about things like your heart data, or other health data, or private data that you might be sharing about yourself. Do you have concerns personally when you’re not sure how those data are being used on the servers of the company that’s collecting them? 
Tom Davenport: 01:25:51
Yeah. Well, I guess I do, because I have my trusty iPhone, and my Apple Watch, and so on, and I certainly don’t know how all the data from the apps and so on is being used. I have minor concerns about it. I think it was the CEO or chairman of Google, Mr. Schmidt, who at one point said, “If you don’t want people finding out bad things in your data, don’t do anything bad.” So I don’t think I’d do anything bad enough to worry about it. 
01:26:29
And two, I think I’ve been hearing for decades about how surveillance capitalism, to use Shoshana Zuboff’s term, was going to ruin all of our lives. And frankly, I rarely ever get any sort of personalized offer that knows enough about me for me to be impressed at all. Once I got an offer, I think it was a Groupon for a restaurant that I really wanted to go to. It’s almost like a tear came to my eye. It was such a rare event, I couldn’t believe it: a personalized offer that I really want. So I guess I think some of these concerns are a little overblown, but in the future, I think we should all be concerned about it. My friend John Thompson is writing a book about data now where he sort of says, “We’re all going to be damned and in hell if we don’t start paying more attention to what happens to our data.” So I probably grudgingly agree. 
Jon Krohn: 01:27:39
Yeah. Yeah. I think I’m in the same camp as you, in that I’m somewhat suspicious about where my data goes. And so I feel like, of the options available to me with my operating system choices and my phone choices, I end up going with Apple; privacy has become their big thing. 
Tom Davenport: 01:27:55
Yeah, I went with them before it was their big thing. But I think you always have to think about, well, what am I getting in exchange for it? So one of the email clients I use is Gmail, and I think they said they don’t share the content of messages anymore; they did at one point. I’m not saying anything interesting enough to worry about. So if I were committing crimes or whatever, then I would probably be using Signal or something highly encrypted, but I’m not worried. 
Jon Krohn: 01:28:28
So you’re going to love this. Some of our biggest past guests on the show, when they found out that you were going to be on, wrote in with comments. Sadie St. Lawrence, who has been on the show a number of times in recent years, most recently in episode 641, the first episode of the year, where we did a data science trends prediction for 2023, said that your book Competing on Analytics was the first book that she ever read on analytics. And now she’s become this huge AI, analytics, and strategy leader. So I think you’ve really inspired somebody there. Similarly, Ben Taylor, who’s been on probably 10 or 12 episodes over the years and might hold the record as our most frequently recurring guest, said that he loves your content. And Ann K. Emery, who was on recently in episode number 637, asked the question we already covered, which was, does Tom still think that data science is the sexiest job? So we got that one down for you, Ann. And then finally, Christina Stathopoulos, who was in episode number 603. Christina is popular on social media for running a book club, and many of the books are data books. She has already queued up the question that I always ask our guests at the end of the show anyway. She says, “I know Tom has written books and I’m familiar with them, but what books does he recommend others read, other than his own? I would love to hear the data and non-data related recommendations he has.” 
Tom Davenport: 01:30:25
Yeah. So on the data side, I liked that book The Master Algorithm a while ago. In terms of actual use of data in business, I like this book called The Man Who Solved the Market, about Jim Simons, the Renaissance Technologies hedge fund guy; I have it on my bookshelf. I sort of find that you can either write a lot of books or read a lot of books, and I tend to write them more than I read them. I look at books when I need to learn more about something, but I really like fiction more than I like non-fiction. 
 
01:31:15
And so I read a really nice book a couple of weeks ago called Tomorrow, and Tomorrow, and Tomorrow, about a bunch of gamers, and really interesting about sort of male-female friendships, not romance. And I’m reading now this pretty interesting book by Robert Harris. It’s a historical fiction book called Act of Oblivion, about the people who chased the killers of King Charles I, who were hiding among the Puritans around New England. I think this is largely true. And one of my ancestors, John Davenport, the founder of New Haven Colony, is in it. So I found it quite interesting for that reason, but I didn’t know that much about the Puritan era, and Oliver Cromwell, and so on. So it tells you a lot about that in a fairly painless way. 
Jon Krohn: 01:32:19
Nice. Yeah, I love fiction for bringing you into some historical context and so you can learn about history in a really undry way. I don’t think the word we use there is wet. 
Tom Davenport: 01:32:34
Yeah. Painless, maybe. Right. 
Jon Krohn: 01:32:38
Great. Well, I’m sure Christina and a lot of our listeners will appreciate those great data and non-data book recommendations. All right. And then the final question that we always have is, how should people follow your work after this episode? 
Tom Davenport: 01:32:54
It’s sort of all the usual channels. I have a webpage, tomdavenport.com. I put almost everything I write, in some form, on LinkedIn, so you can connect with me or follow me, either one. I think I can still accept a few connections. And I write for Forbes, Harvard Business Review, and MIT Sloan Management Review primarily, so if you like those kinds of places, I appear fairly often there. 
Jon Krohn: 01:33:29
Nice. Well, thanks a lot, Tom. I really enjoyed this episode, and I hope our audience members have as well. I feel very confident they have; frankly, this has been brilliant. Thank you so much for coming on the show, and yeah, maybe in a few years we can catch up with you again. 
Tom Davenport: 01:33:45
Thanks, Jon. It’s a pleasure. Thanks for the great questions. 
Jon Krohn: 01:33:54
My goodness, what an honor to have a world-class speaker and AI communicator like Professor Davenport on the show. I relished every second of that fabulously engaging experience. I hope you loved the episode too. In it, Tom filled us in on how organizations become AI-fueled by having senior decision-makers who know a lot about AI and have confidence in where the requisite consumer technology is going. We covered how machines are generally more effective at augmenting humans in the workplace than replacing them: jobs with lots of repetition are susceptible to automation, while those with lots of change or innovation are ideal for augmentation with AI. We also talked about how roles like ML engineer, data engineer, and data product manager are essential to effectively deploying the models that data scientists develop, and how the future of data science will be characterized by democratization, with low-code and no-code tools and AutoML streamlining model development, and data munging happening in an increasingly automated fashion. 
 
01:34:48
As always, you can get all the show notes including the transcript for this episode, the video recording, any materials mentioned on the show, the URLs for Professor Davenport’s social media profiles, as well as my own social media profiles at www.superdatascience.com/647, that’s www.superdatascience.com/647. If you too would like to ask questions of future guests of the show, like several audience members did during today’s episode, then consider following me on LinkedIn or Twitter as that’s where I post who upcoming guests are and ask you to provide your inquiries for them. Another way we can interact is coming up on March 1st. I’ll be hosting a virtual conference on natural language processing with large language models like BERT and the GPT series architectures. It’ll be interactive, practical, and it’ll feature some of the most influential scientists and instructors in the large natural language model space as speakers. It’ll be live in the O’Reilly platform, which many employers and universities provide access to. Otherwise, you can grab a free 30-day trial of O’Reilly using our special code SDSPOD23. We’ve got a link to that code ready for you in the show notes. 
 
01:35:52
All right. Thanks to my colleagues at Nebula for supporting me while I create content like this SuperDataScience episode for you. And thanks of course to Ivana, Mario, Natalie, Serg, Sylvia, Zara, and Kirill on the SuperDataScience team for producing another terrific episode for us today. For enabling that super team to create this free podcast for you, we are deeply grateful to our sponsors whom I’ve hand selected as partners because I expect their products to be genuinely of interest to you. Please consider supporting this free show by checking out our sponsors links, which you can find in the show notes. And if you yourself are interested in sponsoring an episode, you can get the details on how by making your way to jonkrohn.com/podcast. Last but not least, thanks to you for listening. We wouldn’t be here at all without you. So until next time, my friend, keep on rocking it out there and I’m looking forward to enjoying another round of the SuperDataScience Podcast with you very soon. 