SDS 698: How Firms Can Actually Adopt A.I., with Rehgan Avon

Podcast Guest: Rehgan Avon

July 21, 2023

For Rehgan Avon, the difference in AI adoption over the last six years has been like night and day. She speaks with Jon Krohn about why this has happened and offers best practices for implementing AI at a company that might have cold feet about moving to an augmented AI-human work environment.

About Rehgan Avon

Rehgan Avon is the co-founder & CEO of AlignAI, an AI Adoption Platform that is changing the way companies build, adopt, and govern policies and procedures for data & AI. For the last decade, she has architected solutions and products that operationalize machine learning models at scale within large enterprises. Rehgan’s experience has been fueled by a passion for early-stage startups, AI, and product development. She also founded Women in Analytics in 2016 to give more visibility to diverse individuals making an impact in this space. The community has over 7,000 members and hosts a global conference that has put over 200 women on stage.
 

Overview

CEO of AlignAI Rehgan Avon notes that, while many industries like banking and finance have been using machine learning models for decades, the explosion of generative tools on the market, coupled with products’ improved interfaces, has caused AI uptake to skyrocket in the last year.
AI’s new capabilities, lower upfront costs, and ease of access have made it much easier for companies to understand how such technology will provide a return on investment. It is this latter improvement – ease of access – where Rehgan sees the most impact for companies. As Rehgan explains, the greater the number of people who can use a system to facilitate their work, the more likely a company will integrate it into its workflows. The days of needing large teams of data scientists are gone. Now, anyone can leverage company data to experiment, test, and prototype in an intuitive environment.
If improved UX is the fulcrum of AI adoption, education will be the lever. Rehgan identifies a psychological barrier to taking up new systems and explains that moving from a known system that has worked for decades to an unknown system that will take time to learn may be enough to deter the C-suite. The key lies in educating everyone, from the C-suite to entry-level employees, on the basics of digital workflows, the capabilities of generative AI, and the benefits of an AI-augmented workforce.
Finally, Rehgan urges caution in how to explain the benefits to company leaders. She says it is most important to emphasize the use cases that solve mundane yet fundamental issues in the company (e.g., improving production cycles and cross-departmental communication), rather than the bombastic capabilities that might typically tantalize a CEO.
Listen to hear more of Rehgan’s thoughts on AI adoption, why it’s so important to speak to everyone in the company – and our scoop on the world’s first Krohn-themed beer (remember, you heard it here first).

Podcast Transcript

Jon Krohn: 00:06

This is episode number 698 with Rehgan Avon, CEO of AlignAI. 
00:27
Welcome back to the SuperDataScience Podcast. Today I’m joined by the outstanding communicator and AI entrepreneur Rehgan Avon. Rehgan is Co-Founder and CEO of AlignAI, a firm that specializes in helping organizations successfully adopt AI. For more than a decade, she has architected products that operationalize machine learning models at scale within large enterprises, particularly financial institutions. She also founded Women in Analytics, a global community with more than 7,000 members. She holds a degree in Industrial and Systems Engineering from Ohio State.
00:59 In this episode, Rehgan details how AI perception has evolved in the enterprise over the past decade, and she provides practical guidance on what organizations can do to actually adopt AI successfully. All right, let’s jump right into our conversation. 
01:13
Rehgan, welcome to the SuperDataScience podcast. I’m delighted to have you here. Where in the world are you calling in from? 
Rehgan Avon: 01:18
Good old Columbus, Ohio. 
Jon Krohn: 01:20
Nice. And a place I’ve got to visit, actually. Hopefully he’s listening right now: we’ve got a listener who’s founding a brewery. I don’t know if I can go into all the details yet, so for now I’ll leave him, as well as the name of the brewery, nameless. But later this year, like fall or winter, there’s going to be a launch of a SuperDataScience or a Jon Krohn-themed beer in Columbus, Ohio. So that’s something that we’ve started to work on. And so I- 
Rehgan Avon: 02:00
You have to tell me when that happens, cuz I’ll definitely show up and try it. 
Jon Krohn: 02:04
Yeah, I think it’ll be, yeah, I’m really excited. We’re going to be using data science and AI to drive what components go into the beer. So it’s going to be a data-driven beer, and hopefully it tastes good too. It should. That’s the idea. And luckily it was the brewer’s idea. He’s been a listener for a while, so it’s not like I had this zany idea myself. This is a brewer who was like, I know this will work, we’ll use a data-driven approach to guide exactly what ingredients we put in and how much. But he’s like, if it comes up with something ridiculous, we’re not going to make it. 
Rehgan Avon: 02:50
Yeah, I’ll give you reinforcement learning with human feedback if you need it. 
Jon Krohn: 02:56
Awesome. Yes. Oh yeah, we’ll need lots of RLHF on this beer. So yeah, I’ve never been to Columbus, but I guess I’m going to be there later this year and I’m looking forward to it. Anyway, we have met in person, despite me not having been to Columbus. We met in person at the Open Data Science Conference East, ODSC East, in Boston this northern hemisphere spring. And yeah, it was wonderful to meet you there. I know we’ve been corresponding online for a while and I’m delighted that we now have this opportunity to film an episode with you. 
Rehgan Avon: 03:29
Definitely. Yeah, I’m super excited. I’m excited for my hat, so I can rock the hat. 
Jon Krohn: 03:34
Yeah, that’s already on its way to you. 
Rehgan Avon: 03:36
Good. 
Jon Krohn: 03:38
We really should have arranged for the SuperDataScience hat to be there. 
Rehgan Avon: 03:41
I know. 
Jon Krohn: 03:42
Before we filmed the episode. But what are you going to do? 
Rehgan Avon: 03:44
I would’ve worn it. 
Jon Krohn: 03:46
We’ll have to do another one. So you’re an expert at helping organizations change with AI, to become data-driven, to adopt AI. You’ve been deploying models at banks since 2017. So over that period of time, now six years later, how has the perception of AI changed in the enterprise? And particularly, I imagine there’s been a big shift since ChatGPT’s release late last year. 
Rehgan Avon: 04:18
Definitely. It is night and day from 2017 to now in what companies are thinking about AI and how they’re thinking about it. Back in 2017, 2018, obviously some industries like insurance and banking had been leveraging machine learning models for a long time. And thinking about the enterprise component of that, they’re also regulated, so they have to think about compliance quite significantly. So they were using pretty simplistic techniques in most cases, just for auditability and explainability purposes. And then you had this boom of generative AI that hit the scene last year, and companies were a lot more interested in exploring what AI was and how they could leverage it. And not just the simplistic version, you know, a simple classification model or a clustering model, but now these large language models and what they can do. 
05:22
And I think the promise on the ROI for companies is a lot clearer now than it ever was. And because of that, they’re willing to make the investment. The investment’s also much lower than it ever was, and I say that because there are a lot of tools now where you can build and fine-tune models without being super technical, which is really cool. Whereas before, you had to hire tons of data scientists and tons of data engineers and architecture folks, people who really understood the deep nuances of building these models. And it’s getting easier, which is great. So the barrier to entry is lower, the path to ROI is clearer, and everybody loves a good hype cycle. So all of the board members and C-level individuals want to understand how they can leverage this to become more competitive in the market. 
Jon Krohn: 06:20
Crystal clear, Rehgan, that makes perfect sense to me. It’s not surprising to hear that it’s night and day over the last five years with respect to people’s perceptions of AI, especially given your experience in the finance sector in particular, where things need to be auditable and explainable, or at least they needed to be historically. But now we’re at this point where people want the coolest functionality, this great ROI that is now eminently clear to anyone who’s used ChatGPT. As a business owner, you probably have lots of ideas of things you’d want to automate at your bank or your organization. And the downside to that is, because these models are so enormous, even if you could possibly understand what’s happening in each of the billions of artificial neurons, it still wouldn’t make sense in totality. 
07:15
You wouldn’t be able to say, oh, you know, this X goes in, therefore Y comes out. It’s way too complex. So yeah, people have got to take that leap. But as you say, it’s also great that the barriers to entry are lower. We have talked a lot on this show over the last few months about open-source LLMs and really inexpensive ways of training them, like Parameter-Efficient Fine-Tuning with Low-Rank Adaptation, LoRA. But it’s also interesting to hear, and this isn’t something I’ve talked about much on air, so if you have more to tell us about this I’d be super interested: it sounds like somebody would be able to take advantage of these technologies without even writing code.
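The LoRA technique Jon mentions here can be sketched in a few lines of NumPy. This toy is illustrative only, not from the episode; the matrix dimensions, rank, and scaling factor are arbitrary choices for the example. The idea is that instead of updating a full pretrained weight matrix during fine-tuning, you train two much smaller low-rank factors:

```python
import numpy as np

# LoRA in miniature: rather than updating every entry of a frozen
# pretrained weight matrix W (d_out x d_in), train two small factors
# B (d_out x r) and A (r x d_in), with rank r much smaller than d.
d_out, d_in, r, alpha = 768, 768, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                 # B starts at zero, so the update starts at zero

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained.
    return x @ (W + (alpha / r) * (B @ A)).T

full_params = d_out * d_in          # 589,824 trainable params for full fine-tuning
lora_params = r * (d_in + d_out)    # 12,288 trainable params with LoRA, about 2%
print(full_params, lora_params)
```

Because B is initialized to zero, the model’s behavior is unchanged at the start of fine-tuning, and only the roughly 2% of parameters in A and B need gradients, which is why the training cost drops so sharply.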
Rehgan Avon: 08:00
Yeah. I, I have talked to a couple of folks in the enterprise who are starting to toy around with it. And one of the most interesting conversations I’ve been having is this idea of experimentation, cheap experimentation, efficient experimentation internally with these LLMs. So providing low-code, no-code options for innovation teams, for example, that can just get a sense of how they function and think through some of the opportunities and possibilities with models like this. This is really turning into a UX challenge. Now, there’s a lot of technical challenges with these large models as well, and those are always super fascinating to dive into. But when we’re thinking about applicability, we’re really thinking about the user experience and we’re thinking about functionality and what these models can do and can’t do and what type of data they need in order to be super performant. 
09:00
And so if we can provide these environments that enable somebody who’s not super technical to at least understand the core concepts and paradigms and layers of abstraction that they need to really toy around with it and think about business implications, then we don’t really have to worry about performance too much, because that’s when they can start to talk with some of the more technical folks on their team to build something more robust. But getting to an MVP and playing around with the technology that we have today, I think we’ve now created an environment where people who aren’t as technical can start to toy around with it. 
Jon Krohn: 09:38
Very cool. I love that. And I can see why experimentation is such a critical item, because of these complexities with user experience. As we have been building generative AI modeling capabilities as a data science team at my company, Nebula, a lot of the complexity comes with, well, how are we going to make this so that the user understands what they’re doing in an easy way? So that it’s signposted clearly, and so that these generative AI elements interact effectively with other parts of the platform. It’s a nice thing to think, oh, I’d like to have a kind of chatbot-style functionality that explains things in my platform. That’s an easy application that you might think of, but then having that chat interact thoughtfully with all the aspects of your platform is really non-trivial. There are lots of user experience challenges as well as engineering challenges to make sure that, okay, if the conversational agent is providing guidance to you, is it changing things about the platform? It probably should, if you want this to be a good user experience. But then you’re talking about potentially having this completely new LLM element making changes to parts of your platform that might be a decade old. 
Rehgan Avon: 10:57
Absolutely. I think the interesting piece here is really connecting what’s technically feasible, so those things that you just outlined, with the empathetic experience of that end user: how they interpret results, how they make decisions based off those results, and how they decide what their next action is going to be. This is why I keep harping on the UX paradigm here, because you’ve got people who don’t want to be exposed to the guts of what’s happening. So that UX has to provide a barrier to what’s happening under the hood, but it also needs to take in new actions. Again, this feedback loop of: this is wrong, this is right, I’m navigating this way because I don’t know what’s happening over here. That still needs to be thought through, and we don’t want people to deviate too far from what they’re doing today. So we think about operations folks, customer service, people supporting supply chains, nurses. These are the types of conversations that we’re having with companies: how do we build products for folks like that, who are on the front line running the operational components of the company, and build things that are useful and helpful for them, not confusing, and not this new kind of adoption barrier?
Jon Krohn: 12:19
Adoption being the key word for my very next question for you. So we talked about lower barriers to entry for LLMs, but still a lot of organizations aren’t able to effectively adopt AI. There still are some barriers, maybe psychological barriers. So what are the barriers preventing organizations of any size from adopting AI? We see the huge tech companies like Microsoft and Google, they are very much AI-forward. They’ve been trying to be data-driven as much as they can for years, or it’s been in their entire existence. But those big orgs are getting on top of AI relatively easily, whereas smaller organizations are going to need to as well if we’re going to see the effects of AI ripple through all aspects of the economy. 
Rehgan Avon: 13:13
Yeah, I always compare it to this: you’ve got a company, maybe a bank, that’s been around for a long time. They’ve been running as a bank without any issues for 50, 60 years, and then you kind of slap this AI capability on the side of it. It’s not a core part of the membrane of the company. If all their models go down, there’s going to be some issues, but the bank will continue to run. And I think once you make that shift as an organization where you are dependent on these models to run, like a lot of these tech companies are, where they’re AI-native, they started with this premise and it’s in the fabric of their products and their operations. That shift is really hard, cuz you’re basically standing on something sturdy and you’re telling them to jump onto this wobbly foundation. 
14:10
And that’s scary for a couple of reasons. Number one, what are the business impacts of moving over to being dependent on this new decision system, and what if it goes wrong? And two, what sort of operational changes will it have on the organization? Are we going to let go of 20% of our operations workforce because it’s automating so many elements of what happens day to day? These are real fears and, even if they’re subconscious, real barriers to adoption that companies have. And I think what’s challenging for these organizations is that there’s no clear path to doing this incrementally. It feels all or nothing. It feels like, okay, there’s this huge opportunity to, you know, save 15% of OpEx for customer service. What does that look like? Where do we start? How do we get our workforce to participate, help with the design, help with the feedback, help build these things as a collaboration effort? 
15:20
And that is a top-down cultural movement. And when you have a leadership team who’s not educated on what that looks like or what that path forward is, you run into these areas of resistance, you run into these areas of failure. People are always looking for these modes of failure so they can point at them and say, see, this doesn’t work, we should keep doing it the way that we’re doing it. And there are all the political elements to that too. So one of the major things that I noticed deploying models in banks for a long time was that it was never a technology problem. It was always a people challenge, it was a process challenge. People don’t like to change, and they are scared of things they don’t understand, and we don’t spend nearly enough time getting people ready for that change before we start any sort of project with AI. 
Jon Krohn: 16:19
So is that the key? Is the key just to talk through people’s fears, to kind of unearth what those really are? When you’re trying to effect change in an organization, do you have a roadmap for doing that? 
Rehgan Avon: 16:34
Yeah, so there are a couple of core elements that we include. One is education. Again, people are scared of what they don’t know, so can we get them to understand even the basics, and can we get them excited about it? One of our customers literally sent somebody from their operations team to Germany to hang out with all the data scientists and see what they were working on, and they came back super excited. They were the biggest champion for the project that they were trying to get up and running. And those types of things seem nuanced and seem like they’re not going to be super helpful, but they’re immensely helpful. So I have a systems engineering background. One of the things they always tell you is go to the line, and I put that in air quotes for people listening, but go to the line means put on your steel-toe boots, get on the line, and talk to people who are working on the line. 
17:23
Because you’re some engineer sitting in an office building, and you’re going to come down with a solution that theoretically is super efficient and great on paper, but people weren’t a part of the design process. They don’t understand what you’re doing or why you’re there, and they don’t know how it’s going to impact them. So education is number one: meeting people where they are, getting them involved really early. And this should come from the senior leaders at the company: this is an initiative, we want you to be involved, here’s our plan. And I don’t see enough leaders stepping up confidently in that regard. And I think that’s why they end up failing, because they’re doing these small experiments and they can’t get mass adoption. We all know that if they can’t get the model into production and they can’t get people using it, it’ll never see an ROI. And that’s usually the failure. 
Jon Krohn: 18:18
That all makes perfect sense. The core elements of change being education and getting people excited. Going back to something you said a little bit earlier, you mentioned how people tend to have this all-or-nothing mindset about adoption of these programs. So how can we allow adoption to be incremental? What does that look like? 
Rehgan Avon: 18:41
Yeah, I think people get really, really excited and they want to either dive in completely, or they’re not willing to put in enough investment. It’s like a step function almost at first, because there’s kind of a bare minimum: you’re either going to hit this amount of investment or you’re going to see nothing. And so if you can get people over the step function of getting started and give them the fundamentals, you know, they can get somewhere, they can start to see some tangible results. But I think focus on the boring use cases. The boring use cases always end up being the ones that are like, wow, I can’t believe that we were able to get that much revenue or that much cost savings from this model. But it’s where you’ve got a decent amount of data that’s high quality and available, you’ve got a fairly simplistic decision process that the model can be super beneficial towards, and you have people who are excited about it.
19:43
Those use cases can start to demonstrate to people what the power of AI can do and then they’ll start to get creative once they have that example and they can start to think of other kind of tangential use cases for AI and where AI can be very helpful inside of that part of the organization. But you really have to be very strategic about that first use case or the first couple of use cases that you’re going to explore and don’t get super hung up on the most interesting thing the CEO thinks we should start with. 
Jon Krohn: 20:18
Right. So having like conversational agents everywhere in the business all the time. 
Rehgan Avon: 20:22
Right, exactly. 
Jon Krohn: 20:24
Awesome. Your first core element of change was that education is key. So what is the AI education gap, and how do we close it? 
Rehgan Avon: 20:38
Yeah, number one, I usually break it down into this pyramid: at the bottom is concepts, on top of that is frameworks, on top of that is tools, and then you’ve got your roles and responsibilities. So when you think about it from that perspective, your concepts are very general, right? This is what AI is, here’s the definition of, you know, a large language model, here’s the definition of Reinforcement Learning with Human Feedback. These are just core general concepts; you get people to understand the same terms, you speak the same language, you can start to have productive conversations. That is what a lot of people are referring to as literacy or fluency, either in data or AI, which I think is great. That’s the baseline. 
21:29
The second piece is, how does this affect me? What changes in my day-to-day? And this could be anywhere from a data scientist to a bank teller, right? What am I doing differently because this thing exists? You know, if we’re modernizing our MLOps platform, what am I doing differently when I test models or deploy models? What frameworks are we referring to? What are the steps that I go through? If I’m trying to do a data quality improvement initiative with my bank tellers when they’re entering data into the system, what implication does this have? Why is this important? What am I doing differently? What extra tasks do I have to do? That level of process is super important. We don’t usually get to that level, because it’s really hard to specialize that with materials you find online. Cuz materials you find online hit concepts and they hit general frameworks, but they don’t hit it specifically for the company.
22:27
And then the roles and responsibilities. I think people really mess up here too because, as I’m sure you know, being in this field for a long time, everybody calls things a little bit different. These different roles that are defined in the industry, if you go to different companies, they have completely different roles and responsibilities defined. And if we don’t have these handoff points or points of collaboration between the business and technical teams and data science teams, then we start to see all of these inefficiencies and quality issues with the solutions that get built. So if we don’t know what that ecosystem looks like, like, I’m doing these processes differently and here’s how I connect or collaborate with this other group, then it starts to break down as well. And then lastly, tools, which people love to focus on because tooling training is easy. You can tell people to click a series of buttons to do a certain feature or function inside of a platform, but if you just focus on concepts and tooling, it falls down pretty much every time. Most companies we’ve talked to are like, we have the platform licenses for, you name it, Pluralsight, DataCamp, LinkedIn Learning, Coursera. Those are great, they’re needed. We have all the certification programs for Snowflake and Azure and whatever else. Those are needed, but they’re missing the two pieces in the middle, which is who does what, and who talks to who and at what point, and what am I doing differently inside of my company. And that’s the gap that we’ve seen over and over again, because it’s very hard to build that stuff.
Jon Krohn: 24:04
Very well said. You can tell that you’re a pro at this; that was amazingly and concisely delivered, Rehgan. That’s fantastic. So I think our listeners should now have a clear sense of what they need in order to actually adopt AI in their organizations. And one of those key things is education, having everything in there that you mentioned: the concepts, the frameworks, the roles and responsibilities, the tools. That is going to be essential. Did I miss anything? Those are- 
Rehgan Avon: 24:35
No, those are it. Yeah. 
Jon Krohn: 24:37
So fantastic. Thank you so much, Rehgan. Before I let you go, do you have a book recommendation for us? 
Rehgan Avon: 24:43
Yeah, it’s probably going to be redundant or repetitive, but honestly I just love it so much. It’s Designing Machine Learning Systems by Chip Huyen. I know you’ve probably promoted that before, but it is honestly such an incredible book. I mean, I’ve been in the MLOps space for the last six or seven years, and just the way she articulates specific concepts inside of that book is fabulous. So for people who have not read it or gone through it, it is really, really good.
Jon Krohn: 25:13
Yeah, so Chip was in episode number 661 of this show, so our listeners can check that out if you want to get a taste of Chip’s style of communication. And we actually focused on her book in that episode, so it’ll give you a good sense of whether it’s something that you need. But if you’re trying to get ML adopted in your organization, it’s probably a book that you need. Chip was a Stanford instructor specifically on engineering ML systems, so she really is the expert. She put together a really great book on this topic and it’s been flying off the shelves. So yeah, thank you for that recommendation, Rehgan. If people want to follow your thoughts after this episode, how can they do that? 
Rehgan Avon: 25:59
Yeah, I’m on LinkedIn a lot. Just started getting back in the Twitter game, so if people have tips or tricks on Twitter, let me know. But you can find me on LinkedIn, Rehgan Avon; my name is spelled R E H G A N. And that is my handle for Twitter as well, so you can follow me there. I post random stuff on Twitter, so-
Jon Krohn: 26:24
Nice. We’ll be sure to include all of those links in the show notes. Rehgan, thank you so much for your insights and for being on the show. We’ll have to catch up with you again in the future and see how enterprises’ AI adoption journeys are coming along. 
Rehgan Avon: 26:37
Yeah, it’s fascinating. Thanks so much for having me. This is incredible.
Jon Krohn: 26:42
Nice. What a practical conversation that was. I hope you took a lot away from it. In today’s episode, Rehgan covered how it’s night and day for AI adoption in the enterprise, where suddenly, thanks to LLMs, the path to ROI from AI is clearer than ever while the barrier to entry is lower than ever. She also talked about how jumping from the sturdy foundations of their current tech infrastructure to wobbly AI models makes enterprise leaders apprehensive about taking the plunge into AI, but incrementality, education, excitement, and starting with projects that have high-quality data and a big revenue impact put organizations in a place where they can successfully adopt AI. 
27:20
All right, that’s it for today’s episode. If you enjoyed it, subscribe to ensure you don’t miss any of our exceptional upcoming episodes. And until next time, keep on rocking it out there folks. And I’m looking forward to enjoying another round of the SuperDataScience podcast with you very soon. 