SDS 667: Harnessing GPT-4 for your Commercial Advantage

Podcast Guest: Vin Vashishta

April 4, 2023

In this episode, Jon Krohn asks Vin about the commercial viability of GPT-4 automating and augmenting human tasks through AI, and how GPT-4 can help drive value and overtake competitors.

Thanks to our Sponsors:
Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
About Vin Vashishta
3X Founder, C-level Technical Strategy Advisor, Applied Data Scientist, and Educator. Founder of V Squared, one of the world’s first and most trusted data science consulting firms. Current and former clients include many of the Fortune 100 and dozens of SMEs. Founder of HROI Data Science, providing advanced courses to over 1000 data professionals on Data and AI Strategy, Product Management, and Leadership. Defining and monetizing data science for businesses since 2012. A quarter century in technology. 200K followers across social media.
Overview
GPT-4 is OpenAI’s latest large language model, and its abilities are undeniable, with reports that it can outperform 9 in 10 test takers on the US bar exam. As Vin explains, GPT-4’s algorithms are now multimodal, and are able to process images as well as text. This means that GPT-4 can do much more than its predecessors, from achieving near-perfect scores on standardized examinations like the International Biology Olympiad (which requires recognition of visualized anatomical parts) right down to helping its users whip up a delicious meal by simply processing images of those vegetables left neglected in the fridge crisper.
Outlining the multiple steps forward for Generative Pre-Trained Transformers, Vin says the technology previously used a functional requirement to respond to a task; anything that could not be understood through the parameters of the information required the technology to “hallucinate”, a term used to describe the reliance on GPT’s own biases or assumptions in the absence of relevant training data. This reduced reliability in its responses. Now, as Vin explains, GPT-4 has two types of requirements: (1) what it needs to do (the functional requirement) and (2) how reliably it needs to carry out this function. How much autonomy we can afford to give GPT-4 will depend on the type of task it is carrying out.
Vin considers how GPT-4 can give companies a competitive advantage by moving beyond the boundaries of the company’s database. This, he explains, can assist customers and also give valuable insights and ideas for product development. By typing in a term, GPT-4 can look through your product catalog, cross-reference it with user manuals, scrutinize customer reviews, and pull in any number of other data points that a company plugs into the system, to offer well-researched responses that help customers and employees. Jon’s guest also believes that using GPT-4 to screen for new jobs, and even to carry out coding tasks during the interview process, could be a smart way to achieve your career goals. He justifies this perspective by noting that the onus should be on the examining body to write questions that cannot be answered by a model. In his opinion, if GPT-4 can ace your test, it is an indication that the test is not rigorous enough. Vin suggests that GPT-4 could eventually be used as a screening tool for both sides, with bots in place of PAs that could speak to each other and decide whether or not a face-to-face conversation with the people “behind the screens” would be fruitful for both parties.
Listen to the episode to hear Vin and Jon discuss how technology has become an invaluable teammate and the ways that companies can weather the vast developments that GPT-4 is bringing to the workplace.  
In this episode you will learn:
  • Using GPT-4 to screen for jobs [06:26]
  • A framework for improving systems with GPT [13:32]
  • Teaming, tooling and collaborating with GPT-4 [29:58]
  • How to accelerate data science with generative A.I. [45:36]
  • How to prepare for opportunities with GPT-4 [52:09]

Podcast Transcript

Jon Krohn: 00:00:00

This is episode number 667 with Vin Vashishta, founder of V Squared. Today’s episode is brought to you by Pathway, the reactive data processing framework, by Posit, the open-source data science company, and by epic LinkedIn Learning instructor Keith McCormick. 
00:00:19
Welcome to the SuperDataScience Podcast, the most listened-to podcast in the data science industry. Each week we bring you inspiring people and ideas to help you build a successful career in data science. I’m your host, Jon Krohn. Thanks for joining me today. And now let’s make the complex simple. 
00:00:50
Welcome back to the SuperDataScience Podcast. In our most recent episode number 666, I introduced the new and truly game-changing GPT-4 model. In today’s episode, we build on that by having an episode dedicated to monetizing GPT-4. To do that, I’m joined by Vin Vashishta, who may be the best person on the planet for covering this AI commercialization topic. Vin is founder of V Squared, a consultancy that specializes in monetizing machine learning by helping Fortune 100 companies with AI strategy. He’s the creator of a four-hour course on GPT Monetization Strategy, which teaches how to build new AI products, startups, and business models with GPT models like ChatGPT and GPT-4. And he’s the author of the forthcoming book From Data To Profit: How Businesses Leverage Data to Grow Their Top and Bottom Lines, which will be published by Wiley. So clearly this is the guy to be speaking to for this topic. 
00:01:41
Today’s episode will be broadly appealing to anyone who’d like to drive commercial value with the powerful GPT-4 model that is taking the world by storm. In this episode, Vin details what makes GPT-4 so much more commercially useful than any previous AI model, the levels of AI capability that have been unleashed by GPT-4 and how we can automate or augment specific types of human tasks with these new capabilities, and the characteristics that enable individuals and organizations to best take advantage of foundation models like GPT-4, enabling them to overtake their competitors commercially. You ready for this revolutionary episode? Let’s go. 
00:02:23
Vin, welcome back to the SuperDataScience Podcast. I’m delighted to have you back. Your previous episode number 489 was outstanding, and as the audience will see, we brought you back for a very specific reason today. But first, tell us where you’re calling in from today. 
Vin Vashishta: 00:02:41
Reno, Nevada, suddenly snowy, and it’s the first day of spring. Why, why do they have to do this to us? 
Jon Krohn: 00:02:48
So whoever they are that are doing this to us, they are, for you and me and many of our listeners, unleashing not just snow, but also incredible opportunities vis-a-vis incredible new natural language generation capabilities. So at the time of recording, GPT-4 has just been announced, and by the time this episode is actually live, it’s a couple of weeks later cuz of that production delay. But the guidance that we have today will no doubt be just as valuable two weeks in the future as it is now. So GPT-4’s incredible attributes were announced as a part of this. It’s markedly better at reasoning than any of its predecessors. So it’s consistent over long stretches, which allows it to be way better on quantifiable assessments. Like in the GPT-4 paper, there’s a couple dozen different professional exams that they put the algorithm through.
00:03:53
And so for example, on the Uniform Bar Exam, a US-wide legal exam to be admitted into the legal profession, GPT-3.5, the previous algorithm behind ChatGPT, performed in the 10th percentile. So nine out of 10 people who were taking this bar exam did better than ChatGPT. But now GPT-4, the new algorithm, is available if you pay for the ChatGPT Plus subscription, which I do; you get access to GPT-4 through that. And so that algorithm, GPT-4 under the hood now in ChatGPT, performs at the 90th percentile on this Uniform Bar Exam. So completely flipped. Previously, nine out of 10 bar exam takers were above this algorithm and now nine out of 10 are below it in performance. So that’s- 
Vin Vashishta: 00:04:50
It’s gotta hurt. 
Jon Krohn: 00:04:51
Wild, yeah. 
Vin Vashishta: 00:04:53
It’s just gotta hurt a little bit.
Jon Krohn: 00:04:56
Yeah, and it isn’t just this natural language capability, it’s also now that for the first time the OpenAI GPT algorithms are multimodal, so you can feed images into it, which allows it to perform very well in another kind of standardized exam called the Biology Olympiad. So the Biology Olympiad has visual imagery. The previous algorithm couldn’t do very well because it needed to be able to recognize parts, visual parts of biology. So previously it performed at the 31st percentile and now it performs at the 99th with those visual capabilities. And those visual capabilities also obviously give you a huge host of capabilities. Like you can take a photo of whatever food you happen to have in your fridge right now, or your pantry, and you can say, what can I make right now?
00:05:45
That’s it. A photo and that prompt, and it’ll give you countless suggestions on things you could be making with what’s in the fridge. The new algorithm is also safer, so it’s 82% less likely to provide disallowed content. It has fewer hallucinations, it’s much, much less likely to just make stuff up. So 40% more likely to have a factual response. And the thing that initially caused me to reach out to you Vin and ask you to do this episode was a post that you made on how much more context, how many more words you can input into the algorithm. So it handles 25,000 words or about 50 pages of content. So the cool example that you gave in a recent tweet is that you could take your own biography and use the GPT-4 API as a chatbot to do screens for all kinds of jobs. If you are going for like a data scientist job or a software developer job, you could also provide it with some content from your own GitHub profile. And then it can also be doing code interviews, which is wild and it could probably do them very well. So yeah, it was that tweet that brought you to me. This is a really exciting thing. What do you think, I mean, if you do that, if you use a chatbot that’s based on your biography and your coding, is that cheating or is that just smart? 
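The job-screen bot Jon describes boils down to a chat request whose system message carries your biography and GitHub summary. A minimal sketch, assuming the standard chat-completions payload shape; the helper function and model identifier here are illustrative, and a real implementation would send this payload to the API:

```python
def build_screen_bot_request(biography: str, github_summary: str, question: str) -> dict:
    """Assemble a chat-completions payload that answers recruiter
    screening questions on the candidate's behalf, grounded in the
    candidate's own material."""
    system_prompt = (
        "You are a screening assistant for a job candidate. "
        "Answer recruiter questions using only the material below.\n\n"
        f"BIOGRAPHY:\n{biography}\n\n"
        f"GITHUB PROFILE:\n{github_summary}"
    )
    return {
        "model": "gpt-4",  # assumed model identifier
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    }

# Placeholder inputs for illustration
payload = build_screen_bot_request(
    biography="Data scientist, 10 years in NLP.",
    github_summary="Maintainer of a transformer fine-tuning library.",
    question="Walk me through a recent project.",
)
```

The 25,000-word context window is what makes this workable: the whole biography and a GitHub summary fit inside the system prompt rather than needing to be chunked.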
Vin Vashishta: 00:07:11
I think it’s just smart. If they’re going to ask questions, and this is the interesting perspective on the Bar Exam and all of these other exams that GPT-4 can now pass and ace in some cases, is that a reflection on the test takers or the test makers? I think that’s the relevant piece of job interviews and phone screens. Most of them are so rudimentary that there’s almost, I mean if GPT can do it, isn’t it time to revamp? Because that is really our new baseline of functionality now; baseline understanding by a large model, even as impressive as GPT-4 is, still should be far below where people are. That’s the general consensus. This isn’t ready to replace people en masse, but if it can take and ace your test, your test isn’t very good. That’s why I think these are the new use cases: to handle phone screens, handle those early recruiter responses. Because let’s face it, the recruiters are doing it too. They are now going to be automating it, so why not have one bot talk to the other one? And once the bots have figured out if we should actually talk together as people, it’s like that old Hollywood, you know, line, I’ll have my people call your people, and then maybe at some point we’ll actually talk to each other face to face. 
Jon Krohn: 00:08:32
Right. My bot’ll talk to your [inaudible 00:08:34]. 
Vin Vashishta: 00:08:34
Yeah, there you go. 
Jon Krohn: 00:08:34
At that stage right now, there’s no reason why that wouldn’t be what’s happening. So despite all of these huge advances, there are of course still issues. These algorithms aren’t perfect. And so for example, GPT-4 will still hallucinate. So I said that it hallucinates less, it’s 40% more likely than its predecessor to have a factual response, but it will still sometimes make things up and it will do that very confidently. So that’s something to look out for. If you are making a critical life decision or critical business decision, you should not take at face value some information that the algorithm gives you; you need to vet it with some other sources. And another big issue is it still will have some unwanted bias. So I was digging through the GPT-4 paper, which is 99 pages long, including its appendices, and I got caught up reading, in the appendices, really graphic detail on examples of prompts that they gave it that are completely inappropriate. And prior to them doing six months of safety research, it would output things like how, on $1, you can kill as many people as possible. 
00:09:53
Or yeah, like how to commit suicide with the things that you have available in your room, things like that. And sometimes it was still really in its characteristic voice, probably most listeners have interacted with ChatGPT, so it has this real, like, positivity. So it’s like, “Oh, I don’t really recommend that you kill yourself, but if you do need to, here’s some tips”. 
Vin Vashishta: 00:10:22
Yeah, let me help you. It’s almost like Clippy where you could say, “I’m writing a resignation note” and it goes “Awesome. Let me help.” 
Jon Krohn: 00:10:32
Yep. Yeah. And so we’ll talk more about this AI policy stuff in a forthcoming episode with Jeremie Harris. In your episode today, Vin, we are going to be focusing on commercial opportunities with this algorithm. So you are The person on the planet to be talking about how to commercialize GPT-4 because you have been doing ROI work, return on investment work in AI for decades. That’s your niche, that’s your specialization. 
Vin Vashishta: 00:11:09
It is, yes. And it also proves how old I am. So one way or the other. 
Jon Krohn: 00:11:15
You do lots of working out though. I think you’re still, you’re still up on that? Last time we were chatting two years ago, you’d been working out a lot. And you look youthful. 
Vin Vashishta: 00:11:24
Trying to, trying to dance in between injuries. I think that’s the biggest, the biggest change at, at my age is the every six month injury. Trying to avoid it and make them less frequent, but also trying to keep up with it. If we could get a predictive model to help me figure out just before I get injured like they have at the NFL and some of the other sports levels, if somebody could release that application I would really appreciate it.
Jon Krohn: 00:11:51
Well, I don’t want to get dragged down this route for too long, but how much time do you spend on mobility every day? 
Vin Vashishta: 00:11:58
About 10 minutes. Not enough. 
Jon Krohn: 00:12:01
Yeah, not enough. So I love this app Pliability, which gives you 20 minutes a day of stretches that are designed for, like, functional fitness movements. And once a week they have 40 minutes. And that, for me, I went from the situation you’re describing, where for years, like about once a quarter, I’d have this really bad back pain or neck pain or something that would keep me out of training for a few weeks. It’s been three years now. When the pandemic started, I became really rigid about doing Pliability every day. And since then, I haven’t missed a week of training except for the week that I got Covid. 
Vin Vashishta: 00:12:39
That’s a good recommendation. I appreciate it. 
Jon Krohn: 00:12:41
And Pliability. So there you go. This was not a sponsored message, just a genuine recommendation of how much I love it. But anyway, back to our focus in this episode. So yes, your injuries and your age are a function of how long you have been working in this AI ROI space. And you have a course that’s available on your website, datascience.vin, which is a four-hour course called GPT Monetization Strategy. And coming out soon, in the Northern Hemisphere summer of 2023, Wiley is publishing your book From Data To Profit: How Businesses Leverage Data to Grow Top and Bottom Lines. So you’re the person in the world to be speaking to about GPT-4 and commercializing it. I understand that you have a framework for classifying different kinds of tasks into how they can be improved or automated with GPT technology. 
Vin Vashishta: 00:13:45
We’ve got two different types of requirements now. Digital technology in the past has had one type: functional requirements. Here’s what it needs to do. With models, we’ve got something new, and we’ve already sort of touched on this. GPT-4 hallucinates, and it hallucinates around areas where it doesn’t have enough data to really have mapped out the connection between words that it should have mapped out. So that space is pretty light and as a result, it’ll hallucinate and lose reliability. And that’s the second type of requirement that we now need to elaborate. Anytime we’re talking through use cases, we have on one side, this is what it needs to do. And on the other side, it’s now, this is how reliably it needs to do that. So when you talk about hallucinations all the way out to inappropriate behavior, we’re talking about levels of reliability. 
00:14:43
Nothing’s a hundred percent. We forget this so often, and I think Tesla’s recent publication of the numbers comparing crashes per hundred miles on autopilot versus crashes per hundred miles off autopilot is a great example of comparing the reliability of automation that’s supported by these more advanced AI features to how humans perform themselves in some of these more complex tasks. So we have reliability requirements, we need to publish them on one side, but we also need a framework to figure out how to elaborate and articulate our requirements for how well this needs to work. And so I have human-machine paradigms. First one’s human-machine tooling. It’s like a hammer: I use it, I’m in complete control of it. Or it’s like a nail gun. I’ve got it, it’s in my hand. It makes me so much faster than pounding on a nail with a rock, but I am 100% in control. 
00:15:41
Then you go to human-machine teaming, which is something that advanced machine learning makes possible, where we are using it almost like a robot scenario, where the application that’s built on top of GPT-4 is something that we are in firm control over. However, it’s taken over part of the workflow. And so what we do has changed. And then finally, where it performs the most reliably, we have human-machine collaboration, where users are not only turning over parts of their workflow, but also turning over some of their own autonomy and letting the application, letting the model, take it on. So they are giving over almost full control. But you never take people out of the loop. We often hear human in the loop, but I think it’s better to say AI in the loop. That makes way more sense. 
00:16:35
We are introducing AI into human workflows, but that doesn’t mean that humans are going away. In many cases, we’re more important than ever, but we get to do less of the lame stuff and more of the high-value work; we get to use our brains more. And work should become more interesting. We should have more interesting products. We should have to do less boring, terrible things in the GPT future. It shouldn’t be that we’re all replaced. That seems like such a dark way of taking this. And I don’t think that’s where we’re going. There are just some things people shouldn’t have been doing in the first place.
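Vin’s two-requirement framing can be sketched as a toy decision rule: compare a task’s required reliability against the model’s measured reliability on that task, and pick which of his three paradigms the model can safely occupy. The thresholds and the function itself are illustrative assumptions, not numbers from the episode:

```python
def choose_paradigm(required_reliability: float, model_reliability: float) -> str:
    """Map the functional-vs-reliability framing onto the three
    human-machine paradigms. Thresholds are illustrative only."""
    if model_reliability >= required_reliability:
        # Model meets the bar: it can take on some autonomy,
        # though people always stay in the loop.
        return "collaboration"
    if model_reliability >= 0.8 * required_reliability:
        # Close to the bar: the model takes over part of the
        # workflow, but a person keeps firm control.
        return "teaming"
    # Well short of the bar: the model is only a tool the
    # user fully controls.
    return "tooling"

# A forgiving task (search suggestions) vs. a demanding one (ordering for you)
forgiving = choose_paradigm(required_reliability=0.90, model_reliability=0.95)
demanding = choose_paradigm(required_reliability=0.999, model_reliability=0.90)
```

The point of the sketch is that the same model lands in different paradigms depending on the task: functionally it may work everywhere, but reliability requirements decide how much autonomy it can be given.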
Jon Krohn: 00:17:17
Are you moving from batch to real-time? Pathway makes real-time machine learning and data processing simple. Run your pipeline in Python or SQL in the same manner as you would for batch processing. With Pathway, it will work as is in streaming mode. Pathway will handle all of the data updates for you automatically. The free and source available solution based on a powerful Rust engine ensures consistency at all times. Pathway makes it simple to enrich your data, create and process machine learning features, and draw conclusions quickly. All developers can access the enterprise proven technology for free at pathway.com. Check it out. 
00:17:53
Yeah, exactly. I have said this a number of times, including in an hour-long keynote talk I gave in London in February of this year; the whole talk is available on my YouTube channel. It’s called Getting Value from AI. And one of the points that I make in this talk is that it’s the least interesting parts of work that are being automated by machines. So two centuries ago, the vast majority of people on the planet, like everybody except for a few lucky percent, maybe a lucky 1%, everybody else was toiling in the fields, just trying to have enough food to support themselves or maybe some other parts of society. And that kind of work is not fun. I can’t imagine wanting to be like, yeah, what I really want to be doing all day is spending 15 hours a day tilling fields and moving, like, manure from goats in to fertilize food, and being, like, literally knee-deep in poop. 
00:18:59
And so, yeah, I think mechanical automation came first, allowing us to make factory work and agricultural work more mechanized so that humans don’t need to be as involved. And now we’re seeing, in the last few decades, and accelerating in the last few years, this capacity to be automating white-collar work. And it automates the least interesting parts, the most routine things that you probably don’t want to be doing anyway, so you can focus on the more creative work. A mutual friend of ours, Ben Taylor, who more recently has been going by Jepson Taylor, which is something he started doing because, do you know this, Vin, the reason why he started doing it? It’s because ChatGPT told him to. So he asked-
Vin Vashishta: 00:19:51
There is a little bit more behind that story, but I’m going to let that one run. There might have been a backstory, but I can’t, I can’t tell the whole story. 
Jon Krohn: 00:19:59
Okay. What the- 
Vin Vashishta: 00:20:00
You wouldn’t approve. You’re going to have to invite him on and have him tell the story. 
Jon Krohn: 00:20:04
I’ll try to do that. 
Vin Vashishta: 00:20:05
Pull the whole thing out of him. 
Jon Krohn: 00:20:06
The public part, or the part that I have seen, is he asked ChatGPT who Ben Taylor was, and it didn’t provide info on him specifically, or there was a little bit of info on him, but of course there are lots of other Ben Taylors out there as well. And so he was kinda disappointed. So he said, “What can I be doing as a content creator, as a prominent figure in data science to be distinguishing myself more?” and ChatGPT recommended to him, “Well, do you have a middle name that you could be using instead?” And I guess his middle name is Jepson. We’ll have to get him on to learn more, but that’s my understanding of the story. But anyway, so a mutual friend of ours, Ben Taylor, Jepson Taylor, he recently wrote a blog post on his Substack, which showed how he’s become really interested in creating art with Midjourney. And now with the new GPT-4 release, he uses GPT-4 to create these extremely intricate descriptions for the Midjourney image-generating algorithm to create these beautiful, beautiful, super realistic, nuanced artistic things, so that the GPT-4-generated instructions will be like several paragraphs, and it’ll be like, the eyes are like amethysts and there’s a shimmer in the eye that is reminiscent of the sun coming up in the autumn in Georgia. Or like, I’m going, I’m making some of that stuff up- 
Vin Vashishta: 00:21:45
Just went into pirate mode right there. That was great. 
Jon Krohn: 00:21:47
Don’t get me talking about R… Hey Vin, what’s a Pirate’s favorite programming language? 
Vin Vashishta: 00:22:01
R? 
Jon Krohn: 00:22:03
Nah, it’d be the C. 
Vin Vashishta: 00:22:05
Oh boy. Yep. 
Jon Krohn: 00:22:09
Just [crosstalk 00:22:09] you up there. Anyway, so the point is Ben or Jepson is able to create really sensational-looking artwork by honing his capability to prompt engineer really great text prompts out of GPT-4, and then feed those into Midjourney. And so I think this is an example of how we can be automating the relatively boring stuff, like actually doing all the paint strokes. 
Vin Vashishta: 00:22:43
Yep. 
Jon Krohn: 00:22:44
And instead, you can focus on the creative part of art. 
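The chain Jon describes is just two models composed, with the human supplying the creative brief. A sketch of the first link, building the meta-prompt a user might send to GPT-4 to have it write the elaborate Midjourney prompt; the wording and the helper function are hypothetical:

```python
def midjourney_brief(subject: str, style_notes: list[str]) -> str:
    """Build the meta-prompt a user would send to GPT-4, asking it to
    expand a short creative brief into a detailed image prompt."""
    notes = "; ".join(style_notes)
    return (
        "Write a richly detailed, multi-paragraph prompt for an image "
        f"generation model. Subject: {subject}. Style notes: {notes}. "
        "Describe lighting, texture, and mood in concrete visual terms."
    )

brief = midjourney_brief(
    subject="a portrait with amethyst eyes",
    style_notes=["super realistic", "autumn sunrise light"],
)
# GPT-4's multi-paragraph response to `brief` would then be pasted
# into Midjourney as the actual image prompt.
```

The human's work shifts from writing the elaborate prompt to directing it: the creative decisions stay with the person, the verbose description gets automated.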
Vin Vashishta: 00:22:47
Well I think it’s the paradigm that’s really important to understand. We have two paradigms now that we have to sort of separate out. We have digital: digital handles logical tasks, digital uses data as sort of an afterthought. The data is there, it’s being gathered, it’s being utilized, but data’s not the main value generator. In the data paradigm, if you want to monetize data, data has to become the main currency. And data is a completely different asset class than digital or anything really that’s come before it. Because if you think about data, you know, they call it the new oil. It’s not. If I use a gallon of oil, it’s gone. 
Jon Krohn: 00:23:29
Exactly. 
Vin Vashishta: 00:23:30
If I use that data set- 
Jon Krohn: 00:23:31
It’s almost the opposite. 
Vin Vashishta: 00:23:32
It’s still there. Yeah, exactly. Still there. I use it to train another model, still there. I use it to run a report, still there. And so there are multiple monetization opportunities with every single data set that’s built correctly. Most of our data sets are built with this digital thought process. And that is, I’m just gathering data in the application that gathered it, to be used in the application. And all I’m going to use it for is the limited scope of this application. But that’s not where this goes anymore. If we’re monetizing it multiple different times, then we have to start thinking about it in a completely different way. And that’s where we need to start thinking differently also with models; models are also completely different. Software handles a vertical, a use case, a workflow, but a model like GPT-4, you can use it again and again for different types of activities.
00:24:34
And there are really short-term gains to be had, which is this digital paradigm where we insert the model into a digital application and provide functionality in new ways, or improve how we provide functionality today. Like if you added it into any search at all: if your customers are trying to figure out “what’s the best product for my needs”, that used to be really hard, but now you can just type in, “this is exactly what I need”, and GPT-4-powered functionality can look through your product catalog for anything that has the same types of characteristics. It doesn’t even have to look at the product descriptions alone. It can look at, like, manuals and stuff. It can look at online customer reviews, it can pull in all sorts of different data that you’ve retrained it with, and now it is giving your customers a more natural way to search. 
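What Vin describes is essentially retrieval plus generation: score every document a company holds, catalog entries, manuals, reviews, against the customer’s free-text need, and hand the top matches to the model as context. A deliberately naive keyword-overlap sketch with made-up product data (a production system would use embeddings, then pass the retrieved documents into the GPT-4 prompt):

```python
def score(query: str, document: str) -> int:
    """Count overlapping words between the query and a document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve_context(query: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the names of the top_k most relevant documents, drawn
    from catalog entries, manuals, and reviews alike."""
    ranked = sorted(corpus, key=lambda name: score(query, corpus[name]), reverse=True)
    return ranked[:top_k]

# Hypothetical mixed corpus: descriptions, manuals, and reviews together
corpus = {
    "catalog/drill-x1": "cordless drill with brushless motor and 2 batteries",
    "manual/drill-x1": "charging the battery takes one hour; torque settings 1-20",
    "review/drill-x1": "battery lasts all day, great torque for deck screws",
    "catalog/saw-s9": "circular saw with laser guide",
}
hits = retrieve_context("which drill has the best battery life", corpus)
```

The retrieved documents become the context a GPT-4 prompt is grounded in, which is how the search can answer from manuals and reviews rather than product descriptions alone.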
00:25:26
That’s a totally different paradigm for people to get their heads around. But more than that, there are longer-term opportunities in front of us. And by longer term, I mean, you know, 6 to 12 months. When I say short term, Microsoft’s showing off how you can build applications in a couple of days and get them to production in two weeks. They did that with one of the early customer call center use cases. So it’s incredibly fast to deploy. You can leapfrog competitors by implementing it and just figuring out where it fits into your current application, because deploying it is just so simple. But there’s also the opportunity to retrain. If you have a data set about a niche where GPT-4 doesn’t perform very well today, if you find one of those areas where it hallucinates and doesn’t do a great job, you can retrain it and you have a product. 
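The retraining path Vin mentions starts with assembling a niche dataset. A sketch of turning domain Q&A pairs into the prompt/completion JSONL layout that fine-tuning endpoints accepted at the time; the Q&A pairs are invented, and the separator convention and field names should be treated as assumptions to check against current documentation:

```python
import json

def to_finetune_jsonl(pairs: list[tuple[str, str]]) -> str:
    """Serialize domain Q&A pairs into prompt/completion JSONL,
    one JSON record per line."""
    lines = [
        json.dumps({
            "prompt": q.strip() + "\n\n###\n\n",  # separator marks end of prompt
            "completion": " " + a.strip(),         # leading space is a common convention
        })
        for q, a in pairs
    ]
    return "\n".join(lines)

# Hypothetical niche data the base model handles poorly
data = to_finetune_jsonl([
    ("What torque setting for deck screws?", "Setting 14 on the X1."),
    ("How long does the battery take to charge?", "About one hour."),
])
record = json.loads(data.splitlines()[0])
```

The defensible part is not the serialization, it is owning a data-generating process that keeps producing pairs like these in a niche where the base model hallucinates.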
00:26:22
One of the biggest problems that startups that are really rushing into this wave are going to have is there’s no defensible moat; there’s no differentiation if all you do is build, you know, a front end on top of GPT-4, and that front end directs functionality in a different way. I mean, that’s cool, but anyone can do that. And so there’s no product there, it’s just a feature. But your competitive advantage can be your data. And that’s a huge shift when companies realize that all they have to do is understand which data sets they have and which data-generating processes they have access to. And those will dictate which opportunities they have for a differentiated product. Those will be internal competitive advantages, helping to build automation that internal users can leverage to deliver customer value less expensively than their competitors do. But some of these companies will realize what Amazon did with AWS; when they were building Prime, they realized, “Oh, we just built the cloud. Oh, we can just… Wait, so what we just did for ourselves is a business model and everyone else needs this and if we just sell it we can… Oh”, and they spent a few years pivoting and delivered this amazing product that’s basically been the lifeline for the company. Prime’s margins have been dropping, but AWS is just a- 
Jon Krohn: 00:27:54
Skyrocketing. Yeah. In terms of- 
Vin Vashishta: 00:27:57
[crosstalk 00:27:58] 
Jon Krohn: 00:27:58
Yeah. Margins 
Vin Vashishta: 00:27:59
It’s the same [crosstalk 00:28:00] 
Jon Krohn: 00:28:00
Top line. Yeah. 
Vin Vashishta: 00:28:01
Yep. Yeah. And that’s the huge opportunity that businesses, I don’t think they’ve woken up to this yet. And there’s also a ton of business models that this enables. And so just like Ben, I’m sorry Ben, I can’t call you Jepson, but just like Ben has really latched onto a new use case because he has sort of this AI-first paradigm, and so he is able to go beyond digital and come up with new opportunities for monetization, new products, new functionality that don’t fit the digital bucket. And so this is an entirely different product category. So that’s where, from a business standpoint, from a monetization standpoint, companies and individuals need to be thinking about this differently. 
Jon Krohn: 00:28:50
Yeah. Every company wants to become more data-driven, especially with languages like R and Python. Unfortunately, traditional data science training is broken. The material is generic. You’re learning in isolation. You never end up applying anything you’ve learned. Posit Academy fixes this with collaborative, expert-led training that’s actually relevant to your job. Do you work in finance? Learn R and Python within the context of investment analysis. Are you a biostatistician? Then learn while working through clinical analysis projects. Posit Academy is the ultimate learning experience for professional teams in any industry that want to learn R and Python for data science. 94% of learners are still coding 6 months later. Learn more at Posit.co/Academy. 
00:29:34
Fantastic. You’re absolutely right on all of those points: bringing apps to production quickly, using your proprietary data to create leverage internally for internal users, or to create a new kind of product that your competitors can’t get at. Those are all great examples. I want to go back a little bit to when you were talking about your three categories of AI capabilities. So you talked about tools, teaming and collaboration. Could we dig into specific concrete examples of each of those three, particularly so that I can distinguish teaming from collaboration better? Cuz in my mind those still seem a little bit blurred. 
Vin Vashishta: 00:30:14
Well, it’s really the difference between tooling and teaming. Tooling, you’re more efficient at your current workflow. Teaming, you’re leveraging data and more advanced models, and they are taking parts of the workflow over. So that means your workflow’s different. Now you’re using this data tool, this data product, or this model-supported product, and it changes your workflow. So that’s really the critical component of this. The McDonald’s drive-through is an absolutely wonderful example of just the basics of human-machine teaming. When a customer comes through the drive-through now, they’ve got these video screens instead of the regular plastic menus that they used to have. They didn’t change it so that they could throw data at you as a customer. But once they did, they realized, “Oh, you know what, we don’t have to just put a menu on this. We can start recommending stuff to customers.” 
00:31:11
And now this is really a human-machine teaming paradigm cuz I’m changing my customer’s workflow. Instead of them having to scan through the menu to see what else they might want, I’m introducing data into their workflow. And now the model itself is a teammate. It’s helping them order. If the model has loyalty data about them, it may know what they’re most likely to order and just show them that off to the corner of the screen. If they forgot to order a coffee and they normally do, it can recommend it. And this has benefits for both sides. It’s easier as a customer to make your order. And it’s easier for McDonald’s in this case to maximize order totals and in some cases even increase margins. So I’ve introduced it as a tool, but I’m not taking any autonomy away from the customer. The customer is still in complete control of that workflow.
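Vin’s loyalty-data example can be sketched as a simple habit-based recommender: suggest items a returning customer usually orders but has left off the current order. A minimal sketch; the menu items, order history, and frequency threshold below are invented for illustration, and a production system would draw on real loyalty data and a trained model:

```python
from collections import Counter

def suggest_missing_items(order_history, current_order, min_frequency=0.5):
    """Recommend items a loyalty customer orders on at least min_frequency
    of past visits but has left off the current order."""
    visits = len(order_history)
    counts = Counter(item for order in order_history for item in order)
    habitual = {item for item, n in counts.items() if n / visits >= min_frequency}
    return sorted(habitual - set(current_order))

# A customer who orders coffee on every visit but forgot it today:
history = [["burger", "coffee"], ["fries", "coffee"], ["burger", "fries", "coffee"]]
print(suggest_missing_items(history, ["burger", "fries"]))  # ['coffee']
```

Note the design matches Vin’s point about autonomy: the output is only a suggestion shown on the screen; the customer stays in full control of the order.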
00:32:11
Whereas if, instead of ordering and going through everything themselves, they were to turn that over to the application, and they just showed up and whatever the application ordered for them is what they picked up, no one would do that. Why? Because the reliability requirements are so much higher and you couldn’t have a model in that case take over autonomy. Our models aren’t accurate enough. We can’t predict people that well yet. So we can meet the functional requirement, but we don’t make that next step to it becoming a collaborator, where it takes over some of the autonomy. And so that would be the difference, through an application, cuz now we do have the McDonald’s app. You know, in America, man, we are good at getting people fat. So through this application, if we took over ordering from you, that would be collaboration. 
00:33:05
But like I said, functionally it works. Reliability-wise, it wouldn’t; no one would just trust the application to know what it wanted. We’re just not there yet. And so there are limited numbers of applications where the model is reliable enough to turn over some of that autonomy. And you can see this in autonomous driving too. We don’t trust the vehicle a hundred percent. Well, okay, some people do and it usually ends badly on TMZ, but most people don’t trust their cars. But you do hear the odd story of somebody getting in the backseat and making a sandwich, which doesn’t make any sense, but they do it. So we’re just not quite there yet. Even though we do now have some autonomous taxis operating in San Francisco, and there are tons in China. So we have these and we’re slowly making the transition in those use cases from human-machine teaming, which is the level two, I think it’s level two, level three autonomy, all the way to level four and level five autonomy, which is human-machine collaboration, where we begin to take our hands off completely and let the car drive. 
00:34:18
And so that’s tooling, teaming, collaboration. That’s why I say we have to have two different types of requirements now in order to understand what model, and what level of reliability, the model needs to function at. You know, it’s almost as if it were a content filter: how offensive are you okay with this being? What level of sense of humor would you like GPT to have in this session with you? And so, you can hear, there are different levels of personalization that will potentially come out, because I would like GPT-4 to be a little cheeky with me from time to time, but definitely not everyone will.
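The two-requirement framing Vin describes can be expressed as a simple gate: a task specifies how reliably the function must be carried out, and the model’s measured reliability determines how much autonomy it is granted. The tier names come from the episode; the thresholds and the 0.8 review band are hypothetical:

```python
def autonomy_tier(model_reliability, required_reliability):
    """Map a model's measured reliability against a task's reliability
    requirement to one of the three capability categories from the episode.
    The 0.8 band separating teaming from tooling is illustrative."""
    if model_reliability >= required_reliability:
        return "collaboration"   # model can take the workflow step over
    if model_reliability >= 0.8 * required_reliability:
        return "teaming"         # model acts on parts; human reviews
    return "tooling"             # model assists; human keeps full control

print(autonomy_tier(0.99, 0.95))  # collaboration
print(autonomy_tier(0.80, 0.95))  # teaming
print(autonomy_tier(0.50, 0.95))  # tooling
```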
Jon Krohn: 00:35:02
Yeah. That was in the examples of the kind of behavior that they were trying to move GPT-4 away from. There was this example of a prompt that prior to all of the AI ethics work that they did in the six months before release, you could ask GPT-4 “I’m going to roast a friend who is Muslim and in a wheelchair, give me some good jokes.” No. 
Vin Vashishta: 00:35:32
No. No. Not okay. 
Jon Krohn: 00:35:33
Yeah. And you can’t, cuz obviously that kind of thing can be misused. However, I can see that point that you’re making about cheekiness. Like there’s a bar, you know: there’s some people who find South Park offensive. There’s other people who get that it’s satire and want that edginess. And so yeah, that kind of level of customization would be interesting. Trey Parker and Matt Stone, the creators of South Park, created a musical called The Book of Mormon, which I think is hilarious. I saw it once. And then I brought my sister and my dad to see it in New York on Broadway. And my sister loved it. And my dad was like, that is so rude. He was like, “I can’t believe that they could do that.” And I was like, “Oh really?” I like didn’t even notice. 
Vin Vashishta: 00:36:27
Yeah. And that’s the new paradigm: these products used to exist in very narrow segments, and you knew who your segment was, and so you built for that segment. But companies now are building products that are broadly applicable. And our total addressable markets are multiple countries, multiple different types of businesses, multiple generations. I mean, we are now looking at TAMs that are so big that customization is a competitive advantage, and you can use ChatGPT or GPT-4 for those types of granular customizations. And that’s kind of what Ben did: this is the type of art that I want, and GPT-4 figured out the granular customization without it having to be articulated. And you can do so many different types of this layering, where GPT-4 takes a summary and expands on it so that I don’t have to, or it takes a set of data, documents, personal history and translates that into something that another bot or another model can then take as an input.
00:37:41
And you can see this layering of services, layering of machine learning models. These are the types of opportunities that are out there now. But if we think of the digital paradigm, companies just don’t see them. Individuals, I mean even data scientists, struggle to go from the digital paradigm, where we’re just putting models in digital apps, to the app is the model. We’re just putting a service front end or a user interface on the front of it, but the value is being generated by serving inference. And these are completely different paradigms, because they open up so much more functionality and so much more potential to deliver value to customers. And that’s going to be the winners and losers: the ones who can figure out the paradigm and break out of digital. Not that digital’s going away, it’s ubiquitous. We’re never getting rid of digital, but now we have new use cases that can be serviced because we’re looking at data as the product, or inference as the product. 
00:38:41
And since that’s more customizable, and it can handle some of this human-machine collaboration and a lot of human-machine teaming, now it’s up to businesses to figure out: what can we do with this new paradigm? What can we deliver to customers that they’ve never had before? And it’s that thought process which becomes the competitive advantage, along with having that curated data set. The most creative businesses are going to be the most successful. It’s the ones who can rapidly iterate, come up with a great idea and then figure out how to execute, get it to market rapidly. You’ll be able to gain a lot of market share. You’ll be able to go into new industries and just disrupt. This is going to be an extremely interesting, probably, next six months. This isn’t going to happen slowly; this will happen faster than I think any other adoption phase has ever happened before. Because as soon as you start seeing the money pile up, people are already rushing into this. When they start seeing tens and hundreds of millions come in from these two-week, three-week applications, that’s when the rush starts. It’s when companies start reporting quarterly results and saying, “Yeah, we released this in June, we’re reporting our Q4 for the year. And yeah, it made us 130 million.” It’s those kinds of numbers that’ll get people’s attention. That’s when the rush starts.
Jon Krohn: 00:40:12
Wow. Yeah. Really exciting times. So clearly having some proprietary data that you can leverage either internally or externally is key to this. The other big part of your competitive advantage is people having creativity and rapid deployment around these AI concepts. 
Vin Vashishta: 00:40:31
Yes. 
Jon Krohn: 00:40:32
And even for that creativity, we can be getting hints from these tools themselves. 
Vin Vashishta: 00:40:39
Yep. 
Jon Krohn: 00:40:39
So in that Getting Value from AI talk, I provide ideas for processes that can be automated or augmented, products you could potentially develop, but also just how you as an individual can be augmenting your capabilities. A lot of people in that room were executives at fast-growing tech startups, and when I gave the talk, and now more than ever with GPT-4 out, you can provide a huge amount of context as to your particular business situation. Here is the product suite that we have now, here are all the features that we have in our product. What are some ideas for what I could be building next? These are the different kinds of proprietary data that I have. How can I be leveraging all of this proprietary data alongside this list of features that I already have? What could I be doing next? 
Vin Vashishta: 00:41:27
You know, and Microsoft just shipped this, which is, I think, more disruptive than people understand because they haven’t used it yet. You can ask your data questions. They’ve put this into Excel, they’re going to put it into their Power BI suite more and more. They’re going to roll it out across the board. So if I have 25,000 customer comments that I’ve mined, if I have 45,000 social media posts about our product, I can throw that at ChatGPT or GPT-4 and just ask: what are some new product ideas? What are some feature ideas that are common themes across the board? What are some of the most frequent and common customer complaints? I don’t need a data science team for this. I don’t need an analyst for this. This for businesses is huge, but it’s even bigger for data teams, because think about it: what is the biggest gripe data teams have? We’re stuck doing reporting. We’re stuck doing basic data pulls. You’re not anymore; just teach the business to do this, because this is where the reliability is there to support this use case.
00:42:39
People asking questions about their data, and again, this is where you begin to realize the focus on data: gathering these high-quality curated data sets, using them as competitive advantages, data that only you and your business has access to. It’s really turning data engineers from pipeline developers into data curators, almost data librarians, where they are gathering this from every place that the business has easy access to, curating these high-quality unique data sets, and then letting people ask questions, letting people analyze it and dice it in their own way. And it’s that reliability scale. When the reliability requirements are low, let people do it themselves. 
00:43:26
But as soon as it starts getting into a higher reliability paradigm, those are the high-value use cases. Those are the reasons to call in your data team. Now your data team is doing high-ROI work. Users have access to self-service tools so that your data scientists, these mid-six-figure-earning people, are not report jockeys. They should not be, really. You don’t want them spending a ton of time running low-value reports. That’s just not it. They shouldn’t be doing that type of work. And so companies have to quickly figure out how they can leverage these for competitive advantages. Cuz there’s always a competitor thinking about how to do this. There’s somebody else out there, and even in a slow-moving industry, there’s going to be someone who realizes, with two people, I can found a company and ship a product that could disrupt your entire industry, and you are just too slow to respond, and I can come in and make a very quick buck at your expense. So this needs to happen fast. 
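The comment-mining use case Vin describes would in practice mean sending the raw comments to GPT-4 with a prompt like “what complaints recur in this feedback?”. As a deterministic stand-in that shows the shape of the pipeline, here is a toy keyword tally; the stopword list and sample comments are invented for illustration:

```python
from collections import Counter
import re

def common_themes(comments, top_n=3):
    """Tally word frequencies across customer comments as a crude,
    deterministic stand-in for asking GPT-4 what themes recur."""
    stopwords = {"the", "a", "is", "it", "and", "to", "my", "was", "on", "but"}
    words = []
    for comment in comments:
        words += [w for w in re.findall(r"[a-z']+", comment.lower())
                  if w not in stopwords]
    return Counter(words).most_common(top_n)

comments = [
    "The app keeps crashing on checkout",
    "Checkout crashed again, lost my order",
    "Love the food but the app is slow",
]
print(common_themes(comments))
```

The LLM version replaces the tally with a single prompt over the whole corpus, which is exactly why no analyst is needed for the first pass.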
Jon Krohn: 00:44:32
Yep. Recently, in Episode #655, Keith McCormick and I discussed that all data scientists should consider no-code options because you might find that they enable you to prototype more rapidly. Keith has enjoyed good fortune with the no-code tool KNIME, spelled K-N-I-M-E, and more than 15,000 people have taken his “Intro to Machine Learning with KNIME” LinkedIn Learning course. KNIME is open-source so it’s free to try, and with the link that Keith is providing to SuperDataScience listeners, his KNIME course is free too! All you have to do is follow Keith McCormick on LinkedIn and follow the special hashtag #SDSKeith. The link gives you temporary course access but with plenty of time to finish it. KNIME can save you time, so check out the hashtag #SDSKeith on LinkedIn to get started right away.
00:45:21
Yep. You’re spot on. So this idea of being able to analyze large amounts of natural language data is super powerful, especially as it can free up data analyst time and allow people across the organization to be asking questions instead of needing reports to be generated. How far off do you think we are from having these GPT-like systems perform the same kind of analysis as they can today on natural language data, but with tabular data? 
Vin Vashishta: 00:45:45
Months, maybe less. I don’t think it’s that far off. It’s not. You know, tabular data is not as hard as people make it out to be. It’s structured terribly. And the reason why tabular data in a lot of cases is so bad is because it’s missing the business context behind it. It’s a digital holdover. We stored data that way because it was easy for digital applications to access it and leverage it, because that’s really the simplest format possible. But what we do when we keep data in that format is we lose the business context. So many companies now are having to reassemble it. One of the entire jobs of deep learning is to reassemble, from these massive data sets, the business and domain expertise that’s hidden in them. So as soon as we switch paradigms, we realize that data gathering is different. If we’re gathering data for an application to use, that’s one paradigm, and yeah, totally, data is amazing.
00:46:52
But we’re not doing that anymore. We’re gathering data to train models. That’s different. And we can use different types of data. Formatting doesn’t have to be as rigorous. The model itself can manage that in many cases. And GPT-4 makes you even more powerful at doing that. If your model needs summarized data, if it needs labeled data, GPT-4 can label data for you. I don’t think people realize this: GPT-4 can label data for you. GPT-4 has capabilities to build some of your data pipelines automatically. The disruptive nature of this is unrealized because most people haven’t made that switch yet. For some companies, it will only happen after they see competitors do it. And by then it may be too late. They may have already lost a significant number of customers. Cuz you’re watching what’s happening to Google. Google has best-in-class data capabilities; they are one of the best on earth at this. DeepMind’s amazing. And that’s where this stuff came from. The roots are in Google. If anyone should have gotten this to market first, it should have been Google, but Microsoft found an amazing partner in OpenAI and they beat them to it. And now, even though Google has similar capabilities, it’s like they’re playing from behind. 
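The data-labeling pattern Vin mentions is essentially a model call wrapped in a loop over unlabeled rows. In the sketch below, `label_with_llm` is a keyword-rule stub standing in for a real GPT-4 classification call; the labels, cue words, and sample texts are all hypothetical:

```python
def label_with_llm(text):
    """Stand-in for a GPT-4 classification call. A real pipeline would send
    a prompt like 'Label this review positive or negative' to the API;
    this stub uses a keyword rule so the sketch runs deterministically."""
    negative_cues = {"crash", "slow", "broken", "refund"}
    return "negative" if any(cue in text.lower() for cue in negative_cues) else "positive"

def label_dataset(rows):
    """Attach a model-generated label to each unlabeled row."""
    return [{"text": t, "label": label_with_llm(t)} for t in rows]

unlabeled = ["App crashed twice today", "Great update, very fast"]
for row in label_dataset(unlabeled):
    print(row["label"], "-", row["text"])
```

Swapping the stub for an actual LLM call turns this into the auto-labeling pipeline the episode describes, with the model-generated labels then feeding downstream training.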
Jon Krohn: 00:48:22
There is also potentially this strategic blunder of Google having comparable kinds of technology and not releasing it because they were worried about the backlash. 
Vin Vashishta: 00:48:33
It wasn’t the backlash, it was the monetization; the backlash is their figurehead. It threatens the monetization model, which is the other component that’s going to catch a lot of companies flat-footed. Google depends on ad revenue, and they have increasingly monetized space in their result set. And so there are more ads than there used to be. The reason for that is because they’ve had to squeeze more and more revenue and profit out of search as other parts of the business have been less profitable. So in order to pay for things like GCP and ramping up that side of the business, they’ve had to become more profitable in other places. And so they opened the door to challengers by losing trust. They put too many ads in place. And so that’s the door being opened. The reason why they didn’t put Bard out sooner is because Bard takes up ad space. They haven’t figured out a way to monetize that type of search result. I don’t have to go to several websites to find what I want. It’s one answer and I don’t have to go anywhere. And so you don’t get to serve me ads, because I’m not going to click through to anything else. It threatens their business model. 
Jon Krohn: 00:49:51
Yeah. And the most obvious ways of monetizing tools like Bard or GPT-4 would be having the weights skew in some way. So like, oh, you ask a fast food question and it’s more likely to serve a McDonald’s answer because McDonald’s has been sponsoring the results, in a way. 
Vin Vashishta: 00:50:13
Or just organically insert ads. Content creators have been doing this forever. Marketers have been doing this forever, finding organic ways to insert advertisements without having to mess with the weights, without having to do anything. And it’s far more organic. People enjoy and respond to those ads better. But this isn’t Google’s business model. It is almost the anti-Google business model. 
Jon Krohn: 00:50:38
I should reach out to Pliability and see if I can get other guests to come on the show and complain about injuries. 
Vin Vashishta: 00:50:44
See what I mean? It’s these natural openings where products could be. And this is another GPT-4 use case. As off the cuff as this is, GPT-4 can analyze a podcast before it’s sent out to be live and find places to insert ads organically. And since it’s multimodal, it’s got enough video and audio track record of you that you can just insert the ad. You don’t even have to retape it; it’ll just simulate you. We’re getting to that point where it is so easy to do marketing in a more organic way. People are listening to your podcast, people are reading my content. People are everywhere. They have problems and they would like a solution to be presented to them, and it isn’t always presented to them in that format. So they have to go someplace else, and that takes them out of the workflow. Why not, if they’re looking for a solution, at least give them a place to start where that impulse originates? And something like GPT-4 could easily do that just by analyzing a podcast and deciding, based on the content, these are the best ads and here are the best places to put them. 
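Locating organic ad slots in a transcript, as Vin suggests, could be sketched as matching segment topics against sponsor keywords; a real system would use GPT-4’s semantic understanding rather than substring matches. The segments and sponsors below are invented for illustration:

```python
def find_ad_slots(segments, sponsor_keywords):
    """Return (segment index, sponsor) pairs where a transcript segment's
    topic overlaps a sponsor's keywords: candidate points for an organic
    ad read."""
    slots = []
    for i, segment in enumerate(segments):
        text = segment.lower()
        for sponsor, keywords in sponsor_keywords.items():
            if any(kw in text for kw in keywords):
                slots.append((i, sponsor))
    return slots

segments = [
    "We talked about training data and labeling pipelines",
    "Then we moved on to mobility and recovering from injuries",
]
sponsors = {"data-platform": ["labeling", "pipelines"],
            "recovery-app": ["mobility", "injuries"]}
print(find_ad_slots(segments, sponsors))  # [(0, 'data-platform'), (1, 'recovery-app')]
```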
Jon Krohn: 00:52:04
Super cool concept. Vin, thinking beyond just the commercial opportunities in the coming months, do you ever think about what could be happening in the coming years or the coming decades? Like, I realize it’s very difficult to predict. I was caught by surprise by how incredible ChatGPT was when it was released in November 2022. These technologies move so quickly that you can end up having remarkable capabilities that are beyond what we could have dreamed. But I feel like if anyone might be able to answer this question, gaze into the crystal ball and have a good guess, it’s you. To borrow the overused phrase from Wayne Gretzky, how can people skate to where the puck is going to be? How can our listeners be preparing for the opportunities that are coming in the years and the decades to come?
Vin Vashishta: 00:53:04
I think what everyone has to realize is these models have been around for years. A lot of the products that are going to be created and come out publicly as new and novel have been inside businesses for between three and five years already. So there are people who have already seen this movie, and it’s the one that’s about to play out publicly. But these tools have been available internally and businesses have been using them, just not talking about it, because it was a massive competitive advantage. One of the big things you’re going to see is this multimodal construct is so much more powerful than people understand. The ability to create content is going to be ubiquitous. Anyone will be able to create a movie just by writing a script. Anyone will be able to write a script quickly by coming up with a high-quality construct, and their level of focus will shift away from, you know, the nitty-gritty details and toward developing the characters and the story and the background and the scenery and the complexity. Those things that are really high-quality and get people so engaged and so wrapped into it, you’ll be able to do that faster, less expensively and with fewer people. 
00:54:21
The reason why so much of what we produce takes so long to get out to the public is because so many different hands have to be involved in it. We don’t have enough people on earth to satisfy the use cases that we have, which means the unit economics of so many different use cases can’t work. With these models, the unit economics work out. So pull up everything you’ve ever done before where you said the unit economics don’t work and look at it again and say: with this model, is there a way I could make the unit economics work? Because you’re going to continually find that’s where the fruit really is, in that one statement. So if you want to skate to the puck, realize unit economics have changed fundamentally. Use cases that just didn’t make sense before now do. And if you have this new mindset, and if you understand the new paradigm, all of this just starts to click in a way it probably never has before. So there is really a green field of potential applications out there. From a career standpoint, take a step back and think about everything that GPT can do. Stop doing that stuff. Start learning how to use GPT effectively as part of your workflows to do those types of things, and learn everything else that GPT will not be trusted to do. And think about those three paradigms. We’ll trust it for tooling, we’ll trust it for a lot of teaming. We won’t trust it for anything in collaboration for at least a little while. And as it gets better, we’ll start handing over some of the low-end collaboration tasks.
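Vin’s unit-economics point can be made concrete with a back-of-the-envelope comparison. All the figures here are hypothetical; the only claim is structural: a use case whose per-unit margin is negative with human labor can flip positive when a model produces the unit:

```python
def unit_margin(price, human_cost_per_unit, model_cost_per_unit=None):
    """Per-unit margin with human-only production, or with model-assisted
    production when a model cost is supplied. All figures hypothetical."""
    if model_cost_per_unit is None:
        return price - human_cost_per_unit
    return price - model_cost_per_unit

# A use case that loses money with human labor but works with a model:
price = 5.00
print(unit_margin(price, human_cost_per_unit=8.00))   # negative: doesn't work
print(unit_margin(price, human_cost_per_unit=8.00,
                  model_cost_per_unit=0.40))          # positive: now it does
```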
00:56:10
But think about what those are: what are the things we will not turn over? Things like leadership. We’re not turning over people leadership to GPT anytime soon. Stuff that a product manager does every day, none of that’s getting turned over, and that has a couple of different dimensions to it. On the one side we’re talking about high-stakes decisions. We don’t trust GPT-4 to make cash decisions for us. Even though there are people who are saying, here’s a hundred bucks, how much money can you make me with a hundred bucks? Yeah, you notice it’s not a million. There’s a reason for that. GPT-4 won’t be getting that million-dollar check anytime soon. So those are other areas where we as people will have competitive advantages for the foreseeable future. So it’s strategy, it’s leadership. And you’ll notice with LeetCode, it doesn’t do so hot on those really hard LeetCode problems. Why? Because those require you to synthesize knowledge to novel scenarios. That’s where you really have to pull together.
00:57:11
And GPT-4 is far more likely to hallucinate than it is to come up with an accurate answer. So that’s your opportunity, again, to provide value. But let the model handle all of the easy stuff. Learn how to turn over as much of your life to it as possible, so that you’re doing the things that, number one, you enjoy more, but number two, are more challenging intellectually. So it’s one of those things: if you’ve been sort of skating by doing low mental intensity tasks, that’s going to change soon. 
Jon Krohn: 00:57:44
Wow. Beautifully said, Vin. That was an amazing cap on an extremely practical episode on monetizing GPT, with so many crystal clear points just in your last soliloquy. So that’s worth going back, rewinding, and listening to again. I can’t do it justice summarizing it; I’m always trying to summarize points, but you had about a dozen perfect points for how people can be taking advantage of these algorithms today, and where you can still be providing value today. If I had to give one summary point on everything, it’s this idea, going back to near the beginning of the episode, where you talked about tools, teaming and collaboration as the three categories. And anything that you’re doing that’s in the tools category, you should be passing that off to these algorithms. Teaming, a lot of that should be getting passed off. And collaboration is where you can find a lot of the value that you can still contribute, particularly where it’s highly consequential, where you need to be sure that an algorithm isn’t hallucinating, cuz you don’t want to risk that million dollars on a potential hallucination. So strategy, leadership, I love it, Vin. 
00:58:51
So normally at the end of episodes, I’m asking for a book recommendation, but I think we already know the perfect one from you, Vin: it’s going to be From Data to Profit: How Businesses Leverage Data to Grow Their Top and Bottom Lines, coming out in a few months’ time. You can pre-order it now. Vin, thank you so much for coming on the program in this timely manner and getting an episode out on commercializing GPT-4 as soon as possible after the announcement was made and the algorithm was released.
Vin Vashishta: 00:59:22
Thanks for having me. I really appreciate it. Been kind of waiting 11 years for this, so thank you. 
Jon Krohn: 00:59:28
I’m so glad. So clearly there is an enormous amount of value in listening to your thoughts, a high ROI on listening to your content. Vin, we know about your GPT Monetization Strategy course, we know about your book. How else can people be following you? I know you have over 150,000 followers on LinkedIn. 
Vin Vashishta: 00:59:57
That’s just a ridiculous number. 
Jon Krohn: 00:59:59
And the reason why you have so many is because you, at a regular cadence, pump out high-value posts that are original and insightful. I frequently find a huge amount of value in them myself, whereas I find a lot of other content creators on social media are kinda recycling things. So I have a feeling that there’s a collaboration-level process happening on your side that’s going into the originality behind this. That at least the concepts behind these posts aren’t yet being generated by an algorithm. So- 
Vin Vashishta: 01:00:34
Not yet. 
Jon Krohn: 01:00:35
Other than LinkedIn, where should people be following you? Twitter? 
Vin Vashishta: 01:00:38
Yep. Still definitely active on Twitter, but best content’s going to be on LinkedIn. I also have a Substack and you can find me. You can find everything about me at datascience.vin. Thank you to the French wine community for naming a domain after me. I 100% appreciate that. 
Jon Krohn: 01:00:55
Nice. All right. Yeah, so LinkedIn, Twitter, and Substack and yeah, of course, your website. We’ll be sure to include all of those links in the show notes. Vin, thank you so much for coming back on the show and doing this awesome episode with me. We’ll have to check in on you again soon so that you can be providing more great tips for our listeners. Thank you. 
Vin Vashishta: 01:01:15
Yep, thank you. 
Jon Krohn: 01:01:21
Hope you enjoyed today’s extremely practical and opportunity-accelerating episode. In it, Vin filled us in on the tools, teaming and collaboration categories of tasks that foundation models like GPT-4 can automate or augment. He talked about how you can maximize your commercial value by identifying opportunities to leverage AI in the tools and teaming categories while expanding where you add your unique human capabilities in the collaboration category. He talked about how proprietary data paired with a generative process like GPT can create powerful internal levers for your colleagues to pull, and how an organizational culture of creative, rapid deployment on top of proprietary data creates defensible moats that give you a strong competitive advantage. He talked about how GPT-4 can accelerate data science workflows in particular by, for example, labeling data and coding up data pipelines, and he talked about how multimodal models look particularly promising for providing commercial opportunities in the coming years.
01:02:21
As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, the URLs for Vin’s social media profiles, as well as my own social media profiles at www.superdatascience.com/667. That’s www.superdatascience.com/667. I encourage you to let me know your thoughts on this episode directly by tagging me in public posts or comments on LinkedIn, Twitter, or YouTube. Your feedback is invaluable for helping us shape future episodes of the show. And if you’d like to engage with me in person as opposed to just through social media, I’d love to meet you in real life at the Open Data Science Conference, ODSC East, which will be in Boston from May 9th to 11th. I’ll be doing two half-day tutorials. One will introduce deep learning with hands-on demos in PyTorch and TensorFlow. And the other, which is brand new, will be on fine-tuning, deploying, and commercializing with large language models including GPT-4. So building on some of the kind of content that you heard about in today’s episode. If you like today’s episode, I am sure that a hands-on workshop like that will appeal to you too. In addition to those two formal events, I’ll also just be hanging around and grabbing beers and chatting with folks. It’ll be so fun to see you there.
01:03:33
Thanks to my colleagues at Nebula for supporting me while I create content like this SuperDataScience episode for you. And thanks of course to Ivana, Mario, Natalie, Serg, Sylvia, Zara and Kirill on the SuperDataScience team for producing another revolutionary episode for us today. For enabling that super team to create this free podcast for you, we are deeply grateful to our sponsors whom I’ve hand selected as partners because I expect their products to be genuinely of interest to you. Please consider supporting this free show by checking out our sponsors’ links, which you can find in the show notes. And if you yourself are interested in sponsoring an episode, you can get the details on how by making your way to jonkrohn.com/podcast. All right, and then thanks of course to you for listening. It’s because you listen that I am here. Until next time, my friend, keep on rocking it out there and I’m looking forward to enjoying another round of the SuperDataScience Podcast with you very soon. 