
70 minutes

Data Science, Artificial Intelligence

SDS 809: Agentic AI, with Shingai Manjengwa

Podcast Guest: Shingai Manjengwa

Tuesday Aug 13, 2024

Subscribe on Website, Apple Podcasts, Spotify, Stitcher Radio or TuneIn


Agentic AI is revolutionizing the tech landscape, and Shingai Manjengwa from ChainML is here to tell us why. Discover how AI agents are becoming an integral part of our lives, automating tasks like travel bookings and daily inspiration. Shingai explains the power of multi-agent systems, where AI agents collaborate to solve complex challenges, and highlights how blockchain technology is enhancing AI transparency and trust. Plus, get an inside look at ChainML’s innovative Theoriq protocol and the groundbreaking Council Analytics tool. 


Thanks to our Sponsors:

Interested in sponsoring a Super Data Science Podcast episode? Email natalie@superdatascience.com for sponsorship information.

About Shingai Manjengwa
Shingai Manjengwa is the Head of AI Education and Solutions Engineering at ChainML Labs, the makers of Theoriq AI. A seasoned data scientist, Shingai previously directed Technical Education at the Vector Institute for Artificial Intelligence, where she transformed complex AI research into practical educational programs. These initiatives promoted AI adoption and spurred innovation across industries and government. She is also the founder of Fireside Analytics Inc., dedicated to educating diverse audiences about AI literacy, data science, and issues of bias and fairness in AI. Her online courses on platforms like IBM’s CognitiveClass.ai and Coursera have engaged over 500,000 learners. Shingai authored The Computer and the Cancelled Music Lessons, a book that introduces data science concepts to children aged 5 to 12. An acclaimed leader in the AI field, Shingai has been recognized with numerous awards and is a Public Policy Forum Fellow.

Overview
Agentic AI is the buzzword you need to know. Shingai Manjengwa, Head of AI Education at ChainML, shares how AI agents are changing the game. Shingai breaks down what these agents are and why they're the hottest trend in AI right now. Imagine AI agents booking your flights or sending you motivational quotes every morning—all autonomously! She explains how large language models (LLMs) play a crucial role in shaping these smart agents, making them more intuitive and responsive to human needs.

Shingai also goes into the nitty-gritty of multi-agent systems, explaining how they enable AI agents to team up and handle complicated tasks. These systems allow agents to specialize, collaborate, and tackle problems together that a lone agent couldn't manage. The synergy between agentic AI and LLMs is transforming the AI landscape, with blockchain technology enhancing the trust and transparency of AI interactions.

ChainML is at the forefront of these developments with the Theoriq protocol, which uses blockchain to track AI agent activities, ensuring transparency and accountability. Shingai also introduces Council Analytics, a game-changing tool that lets you chat with your data in plain language, making data analysis more accessible than ever. For data scientists, AI enthusiasts, and anyone interested in the future of technology, this episode is packed with insights on the innovative paths AI is carving. Tune in to Episode 809 with Shingai Manjengwa today.

In this episode you will learn:
  • What AI agents are [10:51]
  • How blockchain technology helps humans trust AI agents [18:27]
  • The Theoriq protocol developed by ChainML [34:05]
  • How Council Analytics lets you “speak” to your dataset with natural language [39:00]
  • A future of multi-agent systems [50:42]
  • Challenges and risks associated with agentic AI [1:04:17] 

Items mentioned in this podcast:

Follow Shingai:
Jon: 00:00:00
This is episode number 809 with Shingai Manjengwa, Head of AI education at ChainML. Today's episode is brought to you by Gurobi, the decision intelligence leader, and by ODSC, the Open Data Science Conference.

00:00:18
Welcome to the Super Data Science podcast, the most listened to podcast in the data science industry. Each week, we bring you inspiring people and ideas to help you build a successful career in data science. I'm your host, Jon Krohn. Thanks for joining me today. And now, let's make the complex simple.

00:00:49
Welcome back to the Super Data Science Podcast. Today we've got the astoundingly intelligent and articulate Shingai Manjengwa on the show. Shingai is Head of AI education at ChainML, a prestigious startup focused on developing tools for a future powered by AI agents. She's also founder and former CEO of Fireside Analytics, which developed online data science courses that have been undertaken by over half a million unique students. She was previously director of technical education at the prominent global AI research center, the Vector Institute in Toronto. She holds an MSc in business analytics from New York University.

00:01:23
Today's episode should be equally appealing to hands-on practitioners like data scientists, as well as to folks who generally yearn to stay abreast of the most cutting-edge AI techniques. In today's episode, Shingai details what AI agents are, why agents are the most exciting, fastest growing AI application today, how LLMs relate to agentic AI, why multi-agent systems are particularly powerful, and how blockchain technology enables humans to better understand and trust AI agents. All right. You ready for this stellar episode? Let's go. 

00:01:58
Shingai, welcome to the Super Data Science Podcast. I've wanted to have you on the show for years, and finally the day has come. Welcome. Where are you calling in from today? 

Shingai: 00:02:08
I am calling in from Toronto. We've just had a massive thunderstorm and lights were out, but we're back, and thanks for having me. 

Jon: 00:02:17
Yeah, I experienced that thunderstorm as well. I'm not far away in Waterloo, Ontario, today. Just an hour drive from where you are there in Toronto, and it was a heck of a storm. Luckily my power stayed up, and I'm glad yours is back in time for us to record this episode. And so, yeah, I've known you for years. I've been aware of particularly you were the host of O'Reilly's Superstream. So O'Reilly used to be probably the biggest AI conference provider, physical in-person conferences before the pandemic, and I finally, right before the pandemic happened, I had just been accepted to give my first O'Reilly talk. And I'd been applying for years, and so I was so excited.

00:02:59
And then the pandemic happened and O'Reilly canceled all in-person events. So sad news for me, but at least there was some upside for you because you became the host of their online conferences. So they call them Superstreams and these are half-day or full-day conferences that they host typically tons of amazing guests, the same kind of amazing caliber that they would've had in a pre-pandemic world in an O'Reilly conference, and, hey, you were the host. And so I was aware of you from hosting those, and at that time you were working at the Vector Institute in Toronto, which is also a super cool place to work. Geoff Hinton is there. And I don't know how many people... There are hundreds of AI researchers doing cutting-edge research.

Shingai: 00:03:45
600 AI-

Jon: 00:03:46
How many? 

Shingai: 00:03:46
... researchers at the Vector Institute. 

Jon: 00:03:46
600. 

Shingai: 00:03:46
Yeah. 

Jon: 00:03:47
There you go. So amazing place to work. So I was like, "Let's get Shingai on air." And it didn't happen then, but now that there are some conferences coming back, it's actually, it's interesting. So the O'Reilly conference that I would've been speaking at if there hadn't been a pandemic, would've been in the Javits Center in New York, and the first place that I ever saw you IRL, in real life, it doesn't really save any time if I spell it out, still three syllables, but the first time that I saw you in real life was at the Data Universe conference, which was at the Javits Center a few months ago in New York. So there's kind of an interesting little circle- 

Shingai: 00:04:25
But we didn't meet there. 

Jon: 00:04:26
... thing happening there. No, we didn't talk. I was too nervous to talk to you. 

Shingai: 00:04:33
Oh, that's so kind. [inaudible 00:04:34]- 

Jon: 00:04:33
You're so intimidating. 

Shingai: 00:04:34
... when we did meet at Collision in Toronto, I had been talking to your producer and we were talking about this podcast, and then she said, "Oh, you should meet Jon." And obviously I know about your podcast, but then when I meet you, you start rattling off information about things I've done, which was exciting. It's nice. I always say I'm a celebrity among a very niche group of people. 

Jon: 00:05:02
You sure are. At Data Universe, I just clammed up. I got so sweaty and I couldn't even talk to you. 

Shingai: 00:05:09
No way. Come on. 

Jon: 00:05:10
It was just overwhelming. No, I think at Data Universe... I can't remember. I remember I saw you, but, I don't know, you were in conversation and then other... I don't know. I don't remember exactly what happened. I didn't see you again. But yeah, at the Collision conference in Toronto, I think that was probably Natalie you were speaking to who leads operations for the show.

Shingai: 00:05:28
Right. 

Jon: 00:05:31
And so we got talking because her and I both know Cal Al-Dhubaib pretty well, and we'll have to have him on the show someday. He's an amazing... He's just the smiliest person in the world. At every conference. He seems to have limitless energy and enthusiasm and, yeah, he's just got to be about the kindest guy in the world. Anyway. You know Cal, we know Cal. Cal brought us together. 

Shingai: 00:05:55
I met Cal at Collision and we were fast friends, like immediate. 

Jon: 00:06:00
Yeah. There you go. It's easy. You had a really cool outfit. You were unmissable at Collision because you had, I guess, your colors at your company. Is it ChainML- 

Shingai: 00:06:13
Yeah, Theoriq. 

Jon: 00:06:14
... or Theoriq that's green? Is green.

Shingai: 00:06:15
So it's ChainML Labs and ChainML Labs are the makers of Theoriq. So Theoriq is the product and the brand. And yeah, we have green and yellow and these rich colors. And why not? It actually goes really well with my skin tone, so I was leveraging the corporate colors. 

Jon: 00:06:32
Yeah, you had bright green everywhere, and so you were extremely... Anywhere you were at the conference... It was huge. 40,000 people there, but anytime I needed to find Shingai, I could see you in a second. So yeah, so ChainML Labs is a company that is creating multiple different products. We're going to talk about a few of them. You mentioned one there, Theoriq, and so let's talk about ChainML and your role there more generally before we get into the product. 

00:06:56
So you're Head of AI education at ChainML and ChainML is an AI agent research and development company that offers a decentralized execution and utilization layer for agents. So a lot going on in that sentence. Some of them are relatively easy to understand. Okay, Head of AI education. That kind of makes sense to me. That's pretty clear. R&D company. That's not so hard. But AI agent, tell us about AI agents because I think that's going to be the most important topic in this whole episode. We're going to talk about agents a lot. So tell us about AI agents and then maybe get into what it means to have a decentralized execution and utilization layer for these agents.

Shingai: 00:07:39
Yes. Let's talk about agents. So Head of AI education, I inherited that title probably from my first startup when I started developing data science educational content, but we may get into that later. And so from having done that startup, that's also how I joined the Vector Institute. I met Garth Gibson at a conference, the former CEO, and he said they were setting up an education function at Vector Institute, and that's how I ended up at the Vector Institute. 

00:08:12
And so AI education, or before it was data science education, and AI, as you'll probably agree, is a category of models that is part of a broader... There are many definitions, but let's go with that, that neural networks as a category of models is just part of data science. So I evolved from data science education to AI education, now doing that at the Vector Institute. And then a couple of people from the Vector Institute founded ChainML Labs, so Ron Bodkin, our founder and CEO, he's also ex-Google CTO's office. He was the CTO at Vector Institute and he started ChainML together with Ethan Jackson, who's our lead researcher, and then Arnaud Flament and David Muller, who are also the co-founders.

00:09:04
I went on mat leave and they started the company. And then when I came back from mat leave, from Vector Institute, I joined the company. So that's the timeline. And so the story about the job title was we were sort of saying, so... I believe Ron's words were, "We want to unleash you onto the world. What would you like to do?" And it really made sense through talking it out that if we want to have a viable artificial intelligence startup, we also need this function where people know and understand what we're doing. And so AI education became a core component of that. And it's now more popular. I've started seeing the roles popping up a bit more, but it really wasn't there before. 

00:09:43
And it might go by different names. You might say solutions engineering, DevRel. There's different buckets that kind of speak to it. But at the end of the day, it's so that our stakeholders and our communities really understand what our products are and what our business is and what the technology is, and that's why you have a Head of AI education.

Jon: 00:10:02
It makes a huge amount of sense because even for people like listeners to this podcast who are well-informed about AI, most of the world isn't like that. Most of the world isn't well-informed about AI, but even for our listeners who probably are, there's lots of things that you guys are doing at ChainML that are so cutting-edge. I mean, that's why we're focused on that in this episode. You explaining to us about AI agents and then these decentralized stuff. I mean, it's like I still don't... I don't have my head wrapped around it. I can't wait to learn about it from you in this episode. The AI agents part, I feel like I get, but yeah, the whole decentralized part, integrating blockchain with AI, there's a lot to educate on there. So- 

Shingai: 00:10:46
Do you want to get into it? 

Jon: 00:10:47
Yeah, let's go. 

Shingai: 00:10:48
Let's go. Let's do it. So just for everybody who's listening, an AI agent, let's think of it as software that leverages AI. And when we say AI for the moment, the agents that are the hottest right now are the ones that are using large language models. So think of a chatbot. If you build software around a chatbot so that it behaves sort of in an autonomous kind of way, it's able to do some planning and then execute on a task, let's call that an agent. There are multiple variations. We have computer vision, and if you wrap your software on computer vision, you could call that an agent. If you put that agent in hardware, we could start talking about robotics. But the simplest way to think about an agent is really software that we wrap around artificial intelligence and specifically these language models. 

00:11:37
So the simplest implementation of an agent is where you enter a prompt and that prompt executes on something. So I would like a quote every day that inspires me to start my day, so I write that prompt into my large language model, and then I scaffold that with some other instructions, software instructions for how I get that message delivered to me, whether that's in an app or maybe I just open up one of the products, the GPTs, that's what we've been calling them. And so that's how you might start interacting with an agent. 
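The "prompt plus scaffolding" pattern Shingai describes can be sketched in a few lines. This is a hypothetical illustration, not ChainML's or Theoriq's actual API: `SimpleAgent`, `fake_llm`, and the delivery hook are all stand-in names, and a real deployment would swap in an actual LLM call and a daily scheduler.

```python
# Minimal sketch of the "daily inspiring quote" agent: a prompt that defines
# the agent, an LLM to execute it, and scaffolding that handles delivery.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SimpleAgent:
    prompt: str                      # the instruction that defines the agent
    llm: Callable[[str], str]        # any text-in/text-out language model
    deliver: Callable[[str], None]   # scaffolding: email, SMS, app push, etc.

    def run_once(self) -> str:
        """One autonomous cycle: execute the prompt, then deliver the result."""
        output = self.llm(self.prompt)
        self.deliver(output)
        return output

# Stub LLM and delivery channel so the sketch is self-contained.
def fake_llm(prompt: str) -> str:
    return f"Quote of the day (for: {prompt!r})"

inbox: list[str] = []
agent = SimpleAgent(
    prompt="Give me a quote that inspires me to start my day.",
    llm=fake_llm,
    deliver=inbox.append,
)
agent.run_once()  # in practice a daily scheduler would trigger this
```

The point of the wrapper is exactly what Shingai says next: the human writes the prompt once, and the scaffolding, not the human, re-runs and delivers it.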

00:12:11
Now, the reason we're talking about agents now and why they're hot is because agents specialize, right? So if you're working with an agent that does something very specific, that agent can get very, very good at doing something specific. So my favorite example is something like airline travel. You would have an agent that helps you plan an itinerary. Now, that's a task that does require specialization because if you're going for a bachelorette party or if you're traveling with your elderly parents or with your kids, all of those are quite different itineraries. So you want to have some sort of specialization on what are the things that will make that a successful trip. So that's one agent.

00:12:53
Now, you also now want to make some bookings. You need to book some accommodation and you need to book some travel, like flights. Well, those platforms right now use dynamic algorithms. So wouldn't it be amazing if you had an agent that could support you in that process that would be good at interacting with those different websites, getting all the right information, and then learning the timing for when to do the right bookings so it maximizes whatever your objectives are, let's say, to get you the lowest price, and that's a specialized agent. 

00:13:24
And then finally, you might have a specialized agent that's really good at managing your wallet. So you might have different credit cards, different types of accounts, even different types of currencies, right? So I'll throw out digital currencies because that's an option. You might have an agent that specializes in dealing with your wallet.

00:13:41
Now, the art of booking a trip would involve you typically in a SaaS model, going from one task with one bit of software to another task with another bit of software. But the promise of AI is that we can knit together these different activities so that this happens in a seamless way. And these three specialist agents, in our framework, we call them a collective, this collective would be your travel collective, and it would be able to knit together those different tasks and those different specialized agents in a way that makes a really seamless experience for the user that still, as the user, you would interact with in a conversational way. 
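The travel collective Shingai walks through can be pictured as a coordinator that runs the three specialists in turn so the user sees one seamless task. The agent functions below are illustrative stand-ins for the itinerary, booking, and wallet specialists, not the Theoriq implementation.

```python
# Sketch of a "collective": three specialist agents whose work is knitted
# together by a coordinator, hiding the SaaS-style app-hopping from the user.

def itinerary_agent(request: dict) -> dict:
    # Specializes in planning: tailors the plan to the traveller profile.
    request["itinerary"] = f"3-day plan for a {request['traveller']} trip"
    return request

def booking_agent(request: dict) -> dict:
    # Specializes in flights/hotels: in practice it would watch dynamic
    # pricing and time the bookings to minimize cost.
    request["bookings"] = ["flight at lowest fare", "hotel near itinerary stops"]
    return request

def wallet_agent(request: dict) -> dict:
    # Specializes in payment: picks among cards, accounts, and currencies.
    request["payment"] = "paid with preferred card"
    return request

def travel_collective(request: dict) -> dict:
    """Run the specialists in sequence so the user interacts with one agent."""
    for agent in (itinerary_agent, booking_agent, wallet_agent):
        request = agent(request)
    return request

result = travel_collective({"traveller": "family-with-kids"})
```

Each specialist only touches its own slice of the task, which is what lets it get "very, very good at doing something specific" while the collective presents a single conversational surface.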

Jon: 00:14:24
Many decisions businesses face are massive, complex, and heavily constrained. In these scenarios, mathematical optimization is often the best tool for the job, and Gurobi, which is trusted by many of the world's leading enterprises, is the go-to provider for fast at-scale optimization. While filming episode number 723 last year, I had my mind blown about the wide range of scenarios where optimization is the right solution for the job. Check out episode number 723, as well as the introductory resources for data scientists at gurobi.com to get up to speed on when optimization is ideal and add this uniquely powerful tool to your data science toolkit. Then, coming up in August, we'll have a second episode on optimization with even more tips and tricks from Gurobi guru, Jerry Yurchisin. Hope to catch you then. 

00:15:13
Very cool. So yeah, so agents act autonomously, and so that's what makes them a little bit different from the continuous kind of conversation that we have with an LLM. So often LLMs are used, as you said, to power some of the capabilities of an AI agent, and that's what makes it so easy to now suddenly interface with them. And I think that's why they're so hot right now because we can use an LLM as a natural language interface and then the agent can use information from the LLM to go out on, say, the web and come up with an itinerary for you. It could use its LLM weights, I guess, maybe to come up with the itinerary, and then subsequently, like you said, the word scaffolding there. I like that. So I'm guessing that's the kind of thing that ChainML helps with is providing things like that scaffolding to allow you to have this agent be working autonomously, not just while you're in that conversation with it.

00:16:10
So if you ask the agent to every day provide you with some inspiring quote, then that's what makes it different. So if you go into ChatGPT or Claude from Anthropic and you go in manually every day and you ask for an inspirational quote, that isn't an agentic AI situation because it's just reacting to you, it's just having that conversation with you at that instant. But if you can say it in natural language to an agent, say, "I want to have an inspiring quote every day," and then it is pushing to use, say, via email or SMS or whatever you prefer, pushing that to you automatically, then it is an agent. 

Shingai: 00:16:56
Right. And maybe one layer before that level of automation, if you think about the GPT Store where it's really just, it's a function. You pull down an agent to perform a function. So it's not yet automated. It's not being automatically delivered to you via an app. At that point, those are agents too, but you have to do the work of going to get it from the store and then incorporating it into your workflow. 

00:17:24
Anthropic also now has a store, and I believe they call them projects at this point, but we're all sort of leaning towards this idea of having a marketplace where you have multiple itinerary agents. So anybody can do that now, especially with the no-code solutions, anyone can go and build an itinerary agent. And if you're particularly good at building one for a family with small kids, you would put up your agent in the store.

00:17:51
Now, with Theoriq, the product that ChainML Labs has built, where Theoriq really shines is how would you make a decision about which agent to use? So you've got access to this GPT Store or the projects or Hugging Face or a platform that has multiple different agents. How do you make a decision about which is the right agent for me to use in this collective? And what we've built is a protocol that has some mechanisms for that to happen in an assisted way. It's an automated way, but let me for now call it an assisted way. 

00:18:26
So the idea that each agent has metadata written onto a distributed ledger is quite cool because it gives us many things. Let's start with we know who made the agent, where it came from, what it was designed to do, and what its function is. And then on top of that, you also get some clues about how the agent works. So the agent gets to keep its secret sauce, but then we also have a way of sort of tracking that. If, Jon, you said it's an itinerary agent and then I run it, but it's doing something else that it wasn't intended for, then finally, as a user, I have some form of recourse. I can go back to the store and say, "Hey, look, this is what's happened." So that's sort of the first thing.
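The idea of agent metadata on a distributed ledger can be illustrated with a toy append-only, hash-linked log: each entry records who made the agent and what it claims to do, chained to the previous entry so past records can't be quietly altered. This mimics only the shape of the mechanism; it is not Theoriq's actual on-chain format, and the field names are invented for the example.

```python
# Toy "ledger" of agent metadata: append-only entries, each hashed together
# with the previous entry's hash, so tampering with history is detectable.
import hashlib
import json

ledger: list[dict] = []

def register_agent(metadata: dict) -> str:
    """Append an agent's metadata, chained to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"meta": metadata, "prev": prev_hash}, sort_keys=True)
    entry_hash = hashlib.sha256(payload.encode()).hexdigest()
    ledger.append({"meta": metadata, "prev": prev_hash, "hash": entry_hash})
    return entry_hash

def verify_ledger() -> bool:
    """Recompute every hash; any edit to a past entry breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        payload = json.dumps({"meta": entry["meta"], "prev": prev}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

register_agent({"name": "itinerary-agent", "author": "jon",
                "purpose": "plan travel itineraries"})
```

That tamper-evidence is what gives the user recourse: if an agent billed as an itinerary planner starts doing something else, the registered claim is still there for everyone to check.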

00:19:14
And if you think about the way we interact with language models now, if a language model does something odd, which they've been known to do... So I post usually on LinkedIn something called... It's not Hack Fridays, it's Jailbreak Fridays, and if I encounter something during the course of my week, I post about what the language model has done. And recently I had one where I said, "Give me an image from space and an arrow pointing to where Zimbabwe is." I'm originally from Zimbabwe. So when that happened, it turns out Madagascar is a great time to visit this time of year. It had the right flag, but it was pointing to the wrong place. Now, language models will improve. Fundamentally, I believe that. But what might happen after you listen to this podcast, you'll say, "No way, Shingai." And then you type it in and you try it for yourself, and voila, the arrow is now pointing at Zimbabwe.

00:20:08
It makes it very difficult for me as a user of an agent to go back to any kind of service provider and say, "Look, this is what the agent said to me." So we really like the idea of being able to capture some of the information about how an agent arrives at a conclusion and writing that to a smart contract, which we'll talk a bit more about smart contracts, but that's really diving into the details of why using blockchain technology is interesting for AI. We say it solves AI's trust problem or addresses AI's trust problems because immediately you're into the space where, okay, it's written onto a distributed ledger, it's clear for everyone to see what this agent said it was going to do and all the benefits of having that kind of mechanism, a public, transparent mechanism that helps with the trust issues of language models and of AI today. So I'll pause there. There's a bunch of different components, but I just want to... What do you think so far, Jon? 

Jon: 00:21:08
Yeah, so it sounds cool. So it sounds like at ChainML you are interested in using the blockchain for actual functional purposes, which is nice. So it's not just a crypto coin or something. It's something that does work. And the idea here, it reminds me a bit of this idea that's evolving where because simulated data, deepfakes are becoming so effective, we're in a scenario now where people doubt what happens in the video footage that they watch of maybe a presidential candidate having their ear clipped by a bullet. And so somebody might wonder, is this video footage real or fake? 

00:21:58
And for me, a big way of me today telling whether something's real or fake is I get it from a trusted news source. I pay for The Economist so that I'm getting high-quality information, I think, all the time. BBC News is another place that I really trust their journalism. And so that's when, at this time of recording, it was a few days ago that that assassination attempt happened, and so I went right to BBC News because I knew that I would be getting high-quality information from them. It was unlikely to be fake news.

00:22:35
But so many people get their information from free sources, from social media sources, and there are some benefits to that. There's benefits like maybe speed. You can be getting information maybe more in real time than it takes... It was several hours before The Economist had an article I could read about what was going on. Whereas you could be on the platform formerly known as Twitter and be just getting a live feed of, oh, this and that. So it's a developing thing now that the blockchain is used to be able to allow you to check out the origins of some information. And so even if it shows up on, say, Twitter, you are able to trace that back via the blockchain to the original source. And you could say, "Okay, that's a source that I trust or not trust." 

00:23:32
And so the blockchain is being used more and more for that kind of thing. And so this reminds me of that where in this case, at ChainML, you're using the blockchain to track metadata, like you said, about an agent. So where did this agent come from? And then also to be able to track, it sounds like, if I'm understanding this correctly, also tracking literally the inputs and outputs potentially, so that you can show that, Shingai, you didn't just manufacture for your Jailbreak Friday post last week, you didn't just manufacture an image of a Zimbabwe flag being pointed at Madagascar to say, "Oh, this LLM, it sucks." You'd be able to track that back to actually being an LLM output.

Shingai: 00:24:23
Right. So just to clarify, it's aspects of the output, because blockchain is not a database, and if we used it as one, it would be quite expensive. Perhaps there's a future where we might be able to do that, but there are aspects that we believe are useful for the tracking that will be written on chain. But I just wanted to make that point of clarification.

00:24:47
And I also want to jump in on just the hint, you didn't say it outright, but just the hint of something that I've encountered in my AI education role now that I'm having to speak more and more about blockchain technology is this inherent... I think you said it's not just another coin. That's kind of what you hinted at, which is that Web3, which is the community that has brought us blockchain technology, it's based on these values where the people that have believed in it for quite a long time have a belief in decentralization and the idea that you could have a digital currency that's not centralized, it's not tied to a particular government or entity, would be a useful thing for society and rethinking the internet. 

00:25:35
That group that... Let's call that Web3, trying to engage on those ideas with a Web2 audience, and I'm going to bucket you with Web2 whether you like it or not, has been challenging because, really, AI has trust problems for sure, but the blockchain community and the Web3 community also have things to answer for.

00:25:59
So really, the narrative that I've been working on crafting is something called convergence, is how do you take the best of these worlds and bring them together to actually solve a problem while not defending any of the bad things that have happened with digital currencies and continue to. There's some bad actors that really leverage that technology for nefarious causes, and really trying to solve a trust issue that AI has. So I just wanted to nip that in the bud because it comes up every single time. 

00:26:30
I was at Startup Fest last week, and as I got up on stage, a couple of people were trying to decide what kind of talk is this, is this a coin one? Is this a blockchain? What is she going to talk about? And then towards the end, people came and said, "Hi. Thanks so much. That was interesting." But they had that trepidation and that hesitation, so let's nip it in the bud, let's talk about it. And really, the idea of a distributed ledger and the blockchain technology is useful, especially for AI. 

Jon: 00:27:00
Okay. Nice. So if we can't track the inputs and the outputs because that's too expensive computationally, or, I guess... Is it a computational expense or like a memory expense that you can't... Because you're right. I'm a Web2 baby. I'm not a Web3 person, really. I don't really understand that much about blockchain. So when you say that we can't today be storing all the inputs and outputs from an LLM, is that a computational constraint or a memory constraint?

Shingai: 00:27:30
I would say it's both. But for us, where I've seen it personally is down to the reading and the writing. So writing onto a chain and then reading off of that chain is expensive. Every time you make a call, that would be associated with an expense. 

00:27:46
But I also, I have to say I'm a Web2 native, so I'm also learning about blockchain technology and we have included a lot of the details in our light paper and also in our website. So I would also recommend that those who are really interested in getting deeper on the subject to go and check out the verified information there. But yes, 100%, it's down to the reading and writing, and fundamentally, it's not a database, so we have to be selective about what we're putting on chain. But it has to be useful enough that it helps us verify what an agent has done and it allows for that process of recourse that I described earlier. 

Jon: 00:28:28
How would you do that? What kind of information or what kind of metadata do you then store in order to be able to, say, to verify that that image that you shared of the jailbreak was genuinely created by an LLM? 

Shingai: 00:28:42
Yeah, so right now what we're working with is the planning. So one of the things that agents do really well is that they're able to give you a detailed plan before they give you an output. And so we're working with the planning as being the mechanism to be able to have that recourse. 

00:29:01
Now, an agent might have gone to the level of detail of saying and then pointed to Zimbabwe, and then the images of Madagascar, that we're still working out the kinks. It's still very early stages, we're still building the protocol, but that's what we've been working with and it's got the most promising results in terms of what we're writing and what we'd be able to use to prove some kind of interaction. 

00:29:26
But maybe let me dive into a little bit of what we call the proof of contribution. So the proof of contribution is really about how one individual contributed to the collective. So there's a selection process among many itinerary agents, and being able to say, "Okay, this is the one that I want to use because this one deals with my particular use case," and being able to automate that in a way. And then once that agent has been selected, having a mechanism for noting how much that agent contributed to the collective, because the travel process is a collective activity. So there's a selection process and then there's a contribution process. And then after that contribution process, there's a collaboration process, which is how well did that agent work with other agents?
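As a rough illustration of how contribution and collaboration might be blended into a ranking, here is a minimal sketch. The weights and field names are invented purely for illustration; the actual proof-of-contribution mechanism is defined in the Theoriq light paper:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    contribution: float   # share of the collective's output (0..1)
    collaboration: float  # how well it worked with other agents (0..1)

def rank_agents(records):
    """Rank agents by a blended contribution/collaboration score.

    The 0.6/0.4 weights are purely illustrative.
    """
    score = lambda r: 0.6 * r.contribution + 0.4 * r.collaboration
    return sorted(records, key=score, reverse=True)

collective = [
    AgentRecord("itinerary-agent", contribution=0.5, collaboration=0.9),
    AgentRecord("flight-agent", contribution=0.3, collaboration=0.7),
    AgentRecord("hotel-agent", contribution=0.2, collaboration=0.8),
]
best = rank_agents(collective)[0]  # itinerary-agent scores highest
```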

00:30:16
And so all of those, so we have a proof of contribution and a proof of collaboration, and I won't go into the details now, but those are all mechanisms within the protocol that let you dig deeper and really figure out how an agent worked, and therefore, so you can have recourse. But it's not just about recourse, it's also about rewarding the developer. So someone built this agent, and in the GPT Store, or on Hugging Face, which is open source, there are mechanisms to recognize the developer of an agent. But if we look at how popular agents are and how much time people are spending building them... We actually built one before called Council Analytics, which allows you to talk to your data set in Snowflake or AWS, wherever your data set is, even blockchain data sets. Council Analytics is able to speak to those different data sets in a more scaled way than if you just upload a spreadsheet to ChatGPT.

00:31:13
Now, we know how complex it can be to build an agent and spend time really trying to make it better and make it accurate and reduce the amount of hallucination. And so if a developer has gone to the trouble of really investing in a high-quality agent, they should be able to get paid for that work or to be recognized for that work. So that's another way that we end up after having established the proof of collaboration and contribution, it's also a way that we're able to recognize the developer. And because the blockchain world is associated with digital currencies, it's also a mechanism by which to pay the developer for the agents that they've submitted onto the marketplace. It's not the only way to pay the agent, but certainly that infrastructure is built and we're using the blockchain for these other use cases. 

Jon: 00:32:06
I’ve had the pleasure of speaking at ODSC West many times over the years, and without question ODSC West one of my favorite conferences. Always held in San Francisco, this year it’ll be taking place from October 29th to 31st. As the leading technical AI conference, ODSC West brings together hundreds of world-class AI experts, speakers, and instructors. This year’s offering will feature hands-on sessions with cutting-edge techniques from Machine Learning, AI Agents, AI Engineering, LLM training and fine-tuning, RAG, and more. Whatever your skill level, beginner or experienced practitioner, you’ll leave ODSC West with new, in-demand AI skills. Register now at odsc.com/california and use our special code PODCAST for an additional 10% off on your pass! 

00:32:54
Gotcha. So a prominent example out there of being able to go to a single platform and have multiple different agents from multiple different providers tackle your request is You.com. So Y-O-U.com, co-founded by Richard Socher, who's a really well-known researcher, former Stanford lecturer, and then led what was called the Einstein team at Salesforce. So Einstein was like this natural language application that Salesforce built. I can't remember if it was an LLM at that time because it was a little bit earlier.

00:33:31
But anyway, so You.com is this way that you can go to this website and you can put in a natural language request and then have many different agents, ChatGPT and Claude in terms of proprietary models, but also open source ones like Llama 3. And so all of these different models can be addressing your query for you.

00:33:56
And so it sounds like what you're allowing at ChainML... And we'll get into Council Analytics in a second, but so is there a specific product at ChainML that is allowing for this tracking on the blockchain of this kind of information of which agents responded to a request, which agent ended up being the one whose response was actually used by the end user? So it sounds like that's the kind of capability that ChainML thinks about. Is there a product that we can go and use at this time that gives us- 

Shingai: 00:34:34
Yeah. So- 

Jon: 00:34:35
... that functionality? 

Shingai: 00:34:36
Well, Theoriq is the protocol. That's our product. 

Jon: 00:34:39
I see. 

Shingai: 00:34:40
That's our flagship product. 

Jon: 00:34:40
I see. 

Shingai: 00:34:41
So even as we say ChainML, our marketing team will be kicking me under the table. We should be referring to Theoriq, because Theoriq is sort of Alphabet's Google. So if you want to really think about what's enabling it, it's the protocol. It's the protocol that lets developers bring their agents and get recognized and potentially remunerated for bringing and developing those agents. It is also the mechanism by which, once the agent is in the marketplace, it can get selected and it can then collaborate with other agents. And then moreover, once the agent is in the marketplace, there are other mechanisms like staking. So if I use the agent and I think it's fantastic, I have the opportunity to stake, to make a stake to tell the community that this is an agent that I've used, it did what it said it was going to do, I thought it worked really well, and therefore, there you go.
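The staking idea, users putting value behind agents they have used and trust, can be sketched as a simple ledger. This is a toy illustration only; the real protocol's staking mechanics are not shown here:

```python
# Toy staking ledger: users stake on agents they trust, and the total
# stake behind an agent acts as a public signal of its quality.
stakes = {}

def stake(user, agent, amount):
    book = stakes.setdefault(agent, {})
    book[user] = book.get(user, 0) + amount

def total_stake(agent):
    return sum(stakes.get(agent, {}).values())

stake("alice", "itinerary-agent", 50)
stake("bob", "itinerary-agent", 30)
stake("alice", "flight-agent", 10)

assert total_stake("itinerary-agent") == 80  # strongest community signal
```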

00:35:36
Now, when you started a moment ago, you were talking about the models. We have to make the distinction between the agent and its capabilities and the models, which is separate. So you can imagine a world in which there is an agent with a pre-existing system prompt, and you have a dropdown that lets you select, oh, I'm going to use Anthropic today, or I'm going to use GPT-4o, et cetera, right? Or, I'm out of credits. Let me use 2.5. 

00:36:06
So in a world where the language model is important but is actually an option for the user, to select which model they want to use with the agent, then in that world, the agent is most important. And then there might be a preference towards a particular model. So up until recently, GPT-4o was the best one for analysis, but then the user has the option to try different models to work with a particular agent. So I just want to make that distinction between the agent and which model it uses.
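The distinction can be made concrete with a small sketch: the agent is the system prompt plus scaffolding, and the model is a swappable dependency. The class and the stand-in model clients here are hypothetical:

```python
class Agent:
    """The agent is the system prompt plus scaffolding; the LLM is swappable."""

    def __init__(self, system_prompt, model_client):
        self.system_prompt = system_prompt
        self.model_client = model_client  # any callable: (system, user) -> text

    def run(self, user_request):
        return self.model_client(self.system_prompt, user_request)

# Stand-in "model clients"; in practice these would wrap different vendor APIs.
def model_a(system, user):
    return f"[model-a] {user}"

def model_b(system, user):
    return f"[model-b] {user}"

travel_agent = Agent("You are a travel-itinerary planner.", model_a)
answer_1 = travel_agent.run("Plan 3 days in Harare")

# Swapping the model leaves the agent's identity (prompt, scaffolding) intact.
travel_agent.model_client = model_b
answer_2 = travel_agent.run("Plan 3 days in Harare")
```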

Jon: 00:36:40
That is a really critical distinction. It's great that we have an expert like you on the show as our guest so that you can get me in line and correct me on these kinds of things because, yeah, I don't have... The blockchain stuff is really new territory to me and almost anything that I know about blockchain, pretty much everything I know from guests that I've had on this show, and we've only done a couple of blockchain episodes. 

Shingai: 00:37:08
But The Economist has actually done a really fair job of covering advances in blockchain technology. And there are a number of big Web2 companies, Walmart, Visa, that use blockchain quite significantly as part of their operations. So there's, I'm going to call it a stigma, but the underlying technology is really the potential here, and maybe that's the place to focus. And then these other tools or mechanisms like the coins, et cetera, that let you raise funds, that let you get paid, et cetera, those are also aspects of it, but really try to focus on the technology side and the cryptography side. There are so many different things that are important about it.

Jon: 00:37:49
All right. So now I understand that when, from our research I was talking about ChainML, I should really have been saying Theoriq, that that's the brand that is related to any of these kinds of things that we've been talking about in the episode about allowing the blockchain to track metadata around agents, that's Theoriq. And so that's T-H-E-O-R-I-Q. Like theory, but it ends in I-Q. 

Shingai: 00:38:15
That's right. Yeah. 

Jon: 00:38:18
And so you can go to theoriq.ai to learn more, and you mentioned earlier in this episode, you talked about the Theoriq protocol light paper that was just released and that people can download. That was something I learned from you, also a Web3 thing apparently, to call white papers light papers, especially, I guess, if they're relatively light, there's not a huge amount of content there. Because a white paper could be hundreds of pages. I guess the idea with a light paper is that it's shorter. Yeah. So there's a cool new term to add into my vocabulary.

00:38:54
Now, really quickly, however, Theoriq is not the only ChainML product or brand because you also did mention there Council Analytics, which I think is an older product, and you talked about that being a way to have... Like speaking to data, so using natural language to be able to, I guess, understand data that you've already collected. Is that kind of the idea with Council Analytics? 

Shingai: 00:39:18
Yeah. So we say analytics-ready data, and you're able to connect Council Analytics to your database system, whatever that system looks like. And I mentioned the typical ones, BigQuery, AWS, Snowflake, and even blockchain data. We've got a partner called Space and Time that collects blockchain data and we're able to talk to that data using a chatbot that we built for them that has Council Analytics underlying that chatbot. And that AI agent is called Houston, which I think is a great name for an agent, for a chatbot.

00:39:55
And so yes, Council Analytics lets you talk to data. And when we started thinking about, well, what is ChainML going to be, what kind of startup is it going to be, looking around at the team, it's over 100 years of data science, AI, engineering, and analytics experience in the room with Ron and Ethan and Arnaud and Guillaume, our chief architect. So we've got this incredible team of people that have been working in the data space for many, many years, and so we really said, well, let's build something and see.

00:40:31
Our initial product was Council, our open source framework. So we were looking at, well, how will people use AI agents? And the initial ideas of the Theoriq protocol actually started with Council, the open source platform, and then that moved into Council Analytics where we said, "Well, let's specialize in an area." And many of us, having worked in the data and analytics space, fully understood that data analytics and getting insights from data was a problem before AI, and it's going to continue to be a problem, so let's pick a hard problem that is not going to get zapped with the next OpenAI announcement and is actually really tough for companies to do, and if we could solve it, that could be helpful. So that's why we started working on Council Analytics.

00:41:20
And there are aspects of Council Analytics that really speak to the DNA of the company. So for example, we set it up with a robust testing suite that we run to see how the responses are reacting, right? And anytime you make a change to the system prompts, anytime you make a change to the model, et cetera, we have this testing suite that allows us to do really robust benchmarking, which is totally a feature, and even now, speaking to clients, they really value that, because many of the other service providers just started tinkering, and many of them don't have a testing framework that they've set up.

00:41:59
And if I put my educator hat on, I would say everybody needs to have a testing suite and you can do it today. And setting up robust tests for your agents is responsible. It's part of responsible AI and responsible AI governance. And what you want to be testing for is for maybe a predefined set of questions, how does your agent respond, and have those few-shot examples of, okay, well, these are really good answers, and then run your tests constantly to see as you make changes and updates that your agent is behaving appropriately.
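A minimal version of that kind of testing suite might look like the following, with a fixed set of questions and reference checks re-run on every change to the system prompt or the model. The case format and the stand-in agent are invented for illustration:

```python
# Hypothetical regression suite: fixed questions with reference checks,
# re-run on every change to the system prompt or the underlying model.
GOLDEN_CASES = [
    {"question": "What was Q1 revenue?", "must_contain": ["q1", "revenue"]},
    {"question": "List the top 3 products", "must_contain": ["top 3"]},
]

def echo_agent(question):
    """Stand-in agent; a real one would call an LLM over your data."""
    return f"Answering: {question}"

def run_suite(agent, cases):
    failures = []
    for case in cases:
        answer = agent(case["question"]).lower()
        for needle in case["must_contain"]:
            if needle not in answer:
                failures.append((case["question"], needle))
    return failures

failures = run_suite(echo_agent, GOLDEN_CASES)  # empty list means all checks pass
```

In practice the checks would be richer than substring matching (for example, few-shot reference answers graded by a second model), but the shape is the same: a fixed suite, run constantly.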

Jon: 00:42:32
Nice, nice, nice. That was a great intro to Council Analytics. So yeah, so it sounds like this is a separate framework that is... I guess it's open source entirely. Is that right?

Shingai: 00:42:45
Well, Council is an open source framework, and out of Council we started building Council Analytics, which was a product. And so it's been a progression. Now, the way to think about it is, okay, so we're saying we believe in multi-agent systems. We believe the future is agentic, which, the corporate challenge of the week is try and use the word agentic in every meeting. It's going to be great. So the future is agentic. Many prominent figures have written about it. Andrew Ng, Bill Gates, many people are starting to talk about an agentic future and how these multi-agent systems are what AI might be. That's what the promise of AI might be.

00:43:23
In order for us to have a viable protocol, we would need to have some agents on hand to be able to test that, to be able to make sure that our protocol does what it says it does. And so Council Analytics, having a really sophisticated agent that does sophisticated things and has this testing suite associated with it and a logging process that it's associated with is really a good foundation for us to make sure that the Theoriq protocol is working well. 

00:43:54
So all of it is a progression and it's a story that's brought us to this point, and really excited to get feedback on it because even just for me looking in my feed, I'm starting to hear people talking about multi-agent systems, and I'm like, "Hey, we've been working on that." So it's quite cool that this is now becoming far more prominent and many people are interested in it. 

Jon: 00:44:18
Did you know that the number one thing hiring managers look at are the projects you've completed? That's why building a strong portfolio in machine learning and AI is crucial to your success. At Super Data Science, you'll learn how to start your portfolio on platforms like Hugging Face and GitHub, filling it with diverse projects. In expert-led live labs, you'll complete an exciting new project every week. Plus, through community-driven projects, you'll tackle real world multi-week assignments while working in a team. Get hands-on experience with projects like retail demand forecasting, building an AI model from scratch, deploying your own LLM in the cloud, and many more. Start your 14-day free trial today and build your portfolio with superdatascience.com

00:44:59
Mm-hmm. It's really cool. And this agentic AI thing, I think, yes, it is certainly buzzy, with people like Andrew Ng and Bill Gates talking about it. Lots of corporations, like you say, trying to have that in as a word of the week or word of the month to fit into every conversation that they can, but it is a powerful thing. It's taking the LLMs, this generative AI thing that was the big story of 2023, and allowing it to become ever more useful.

00:45:28
You quite rightly slapped my hand when I started blending this idea of LLMs and agents together. And so something that I think would be interesting for me and the audience would be if you could distinguish them a bit more. So LLMs, probably a lot of our listeners have a good sense of how most LLMs work: they're taking this idea of deep learning, so lots of layers of artificial neurons, which can learn more and more abstract representations, and a specific deep learning architecture is this idea of a transformer, and what makes the transformer so cool is that it can attend to relevant information over long stretches. So if you have a long document, now we're getting to a point where, with the way these things are implemented, at the time of recording the state of the art in terms of proprietary models, I think it's Gemini 1.5 Pro, can look over a two-million-token context window. So that's about one and a half million words you can look over with this transformer and attend to the most important parts of that conversation. It works even better over shorter stretches.

00:46:41
And so large language models take that deep learning transformer architecture and scale it up to many layers, lots of attention heads, billions or maybe even trillions of parameters in some cases, training on data sets that have trillions of tokens. And so all of that scale makes these transformers very powerful inside of these large language models, and that's how we get to things like Gemini that I just mentioned, ChatGPT that a lot of people are familiar with, specifically something like GPT-4o in there, or for me personally, Claude 3.5 Sonnet is my go-to. Yes. Yes.
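The attention mechanism Jon describes can be sketched in a few lines: each query scores every key, the scores become weights through a softmax, and the output is a weighted mix of the values. This is a pure-Python toy for intuition, not a production implementation:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # Output is a weighted mix of the values: the "attended-to" information.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([1.0, 0.0], keys, values)  # leans toward values of matching keys
```

A real transformer runs this for every token against every other token, across many heads and layers in parallel, which is where the scale Jon mentions comes in.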

Shingai: 00:47:22
Yeah, I'm interrupting you, but you've hit on so many important points. So everything that you described, and I love the summary, you're really sort of just picking up on all the steps, what it takes for us to have a large language model. You really start to see that it's a competitive advantage for the big companies who have access to the compute, who are able to build the large language models, but the majority of smaller startups and smaller companies are not going to be able to compete on that. 

00:47:55
So that's a very interesting idea where, once again, the Web3 community has been talking about decentralization for a long time. To me, you're starting to hit the nail on the head because you're talking about, well, is it the language model that makes an agent or is it the underlying, almost, prompt engineering that makes the agent? And we can have that debate. I don't have a definitive answer, but my instinct is I would like it to sit more on the prompt engineering side and less on the model side because I think that's more democratized. I think there are more business opportunities for more people to participate than if we say the most important thing about the agent is the service provider for the large language model.

00:48:44
And also if you think about from a control perspective is what control do we have? We don't even have the best answers right now about what data was used to train many of these large language models. So from a user perspective, just a rebalance of power might be if we are able to leverage the existing ones, which right now it seems like only the big companies are able to generate, then we can at least have some sort of market, some sort of thriving economy that includes more people if the agent becomes the place of differentiation.

00:49:20
And there are papers on this, and you've probably seen them, where people are saying some of the large language models are starting to converge and starting to look similar. I don't know. The jury for me is out on that. Or certainly I haven't seen enough evidence to that effect, but the balance of power is kind of where I lean, or where I jump in to say my instinct is: let's make the secret sauce what the developers bring to the agent versus the large language models that we have very little influence over.

Jon: 00:49:52
Yeah, I agree with you that the LLMs are converging on capabilities, and more things end up being thrown on. Somebody introduces a new modality, or with the Claude 3.5 Sonnet introducing the Artifacts UI. So you get these things added on. But in terms of general, really broad strokes over long periods of time, in terms of the text in, text out capabilities, we're converging on similar capabilities across all of the big proprietary foundation model, frontier model LLMs that you have out there, like I mentioned, Gemini, Claude, and ChatGPT or GPT-4o as the kinds of big examples there. 

00:50:35
So yes, so I'm starting to get a better sense here of where... So with these large language models becoming more and more of a commodity, that also is driving down their margins. And so despite the huge investment, hundreds of millions of dollars in the current frontier models being invested to create them, I think you're absolutely right that more of the value, more of the margin can be in the creation of the agent where it's about the prompt engineering, it's about the scaffolding, like you described, that allows the agent to be able to be effective in the real world in a lot of different scenarios. So the LLM kind of becomes a commodity working in the background, and probably over time it matters less and less which LLM API you're calling. Does that sound like a fair assessment- 

Shingai: 00:51:33
That's- 

Jon: 00:51:34
... of where we are and where we're going? 

Shingai: 00:51:36
Yeah. That's certainly intuitive to me. But if I had spent billions of dollars trying to get a model up and running over a very long time, I would be looking for ways to monetize that. And we see that the service providers now have stores, so maybe they're seeing the wind blow in that direction as well. What good is this big large language model if no one can use it? So there will be some shifts in power. And the thing that I like to say generally is that the future of AI is not written. So there are many things still to change and evolve. And maybe if we can have some sort of influence or some peace in trying to decide on that direction, then agents is one place we can do that.

00:52:25
And also thinking ahead as well, okay, let's say we fast-forward into a future with multi-agent systems. What values do we want those multi-agent systems to have? So if we say all these models have objective functions in some way, shape, or form, like something they're trying to maximize or minimize or optimize. So if we have a multi-agent system that maximizes profits for a developer, you're going to get very different behaviors among the agents and among the agent collectives than you would if one is maximizing collaboration, for example, or user satisfaction, or staking. So really trying to think through those incentives that we provide for the models and for the collectives to make sure that they behave in responsible ways is going to be critical.
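The point about objective functions can be shown with a toy example: the same pool of agents gets selected very differently depending on which objective the collective optimizes. The agents and scores here are made up:

```python
# Toy example: the same agent pool, selected under different objectives.
agents = [
    {"name": "aggressive", "profit": 0.9, "collaboration": 0.2, "satisfaction": 0.3},
    {"name": "cooperative", "profit": 0.4, "collaboration": 0.9, "satisfaction": 0.8},
]

def pick(pool, objective):
    """Select the agent that maximizes the chosen objective."""
    return max(pool, key=lambda a: a[objective])["name"]

# Different incentives produce different collectives, and so different behavior.
assert pick(agents, "profit") == "aggressive"
assert pick(agents, "collaboration") == "cooperative"
```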

00:53:13
And there are multiple people, as I said earlier, that are thinking about multi-agent systems and building them and marketplaces, et cetera, but we genuinely believe in a responsible AI and a future that has those agents working together in responsible ways. And I'm not just saying that because it's a fashionable thing to say. If you look at all my work historically, the rest of the team at Theoriq, you will see there's been a thread about responsible AI throughout everything that we've done, and we think about these things, we care about them. 

00:53:43
So whichever service provider you use for your multi-agent system, really do think through what are the implications of some of the systems that they build in to how the agents are going to interact. 

Jon: 00:53:55
Yeah. And we can touch on this really quickly here, this multi-agent system idea. And so I did an episode, a short episode myself, just a solo episode dedicated to multi-agent systems a couple of months ago. So that was episode number 788. And in that episode, what I described broadly and that you've been talking about now, Shingai, in the last couple of minutes, is I think what's going to be big now after agentic AI.

00:54:21
So last year, 2023 was big for generative AI, agentic AI is big currently as a way to allow those LLMs from last year, that idea of generative AI to be really powerful and useful in a broad range of situations without us needing to necessarily prompt directly in real time to get some utility from that large language model. And it seems like multi-agent systems, maybe that's going to be second half of 2024 or maybe the kind of 2025 thing where it's multiple agents collaborating together on some goal, and then you're getting the best of... You could have multiple different styles of agents, multiple different LLMs that are being called maybe by each of those agents and them collaborating together to solve some problem for you. 

00:55:12
And so there are examples that I went into in that episode number 788, where I talk about specific multi-agent systems where based on the problem that you're solving, you say, okay... There was one, there was an example from one of the CIA or something, one of the big US intelligence agencies, and they were talking about they had built a multi-agent system for defusing bombs. And so there was this alpha agent, beta agent, gamma agent, and they were organically able to develop their own specializations through training through solving for that objective function that you mentioned there, which in this case would be safely defusing a bomb. And by having these different LLMs work together, they were able to develop their own specializations. 

00:56:02
And so in that scenario, you're coming into the multi-agent system with one style of agent and allowing them to develop their own specializations, but there were also circumstances where you could say, "Okay, I have some understanding as the human developer here about the problem that's being solved. And so I'm actually going to deliberately pick agents, or maybe ask an agent to pick what might be appropriate agents based on what's available on the web or in some kind of agent store." But in either one of those cases, you're coming into the multi-agent system with agents that have different specializations. So maybe you have one agent that specializes in code generation and you have another agent that specializes in interacting with a human user. And those can work together to form a more powerful system than just a single text-to-code LLM might be on its own.
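That second pattern, explicitly composing agents with different specializations, can be sketched as a simple router. The keyword routing and agent names here are hypothetical; a real system might use an LLM itself as the router:

```python
# Hypothetical router dispatching requests to specialist agents.
SPECIALISTS = {
    "code": lambda req: f"[code-agent] generating code for: {req}",
    "chat": lambda req: f"[chat-agent] replying to: {req}",
}

def route(request):
    """Crude keyword routing; a real system might use an LLM as the router."""
    code_words = ("function", "script", "bug", "code")
    kind = "code" if any(w in request.lower() for w in code_words) else "chat"
    return SPECIALISTS[kind](request)

reply_1 = route("Write a function to parse CSV")  # goes to the code specialist
reply_2 = route("Summarize my trip options")      # goes to the chat specialist
```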

00:56:59
Well, so this has been a fascinating episode, Shingai. We got to learn a ton about multi-agent systems, about agentic AI, about how the blockchain can allow us to have better multi-agent systems. And so it's great that Theoriq are on the forefront of that and that we were able to get time with someone like you who can go into so much detail about it. I really appreciate it.

00:57:24
We know, or I know from other things that you've done that you're a huge fan of Star Trek. So before I let you go, I've got a question for you. So this is related to Star Trek's ability to kind of predict where we are now to some extent reasonably well. So five decades ago, the original Star Trek series was already predicting technologies like touchscreens, Bluetooth headphones, 3D printing, virtual reality, CT scans. And the series was also a trailblazer in portraying a diverse workforce that in the present in most places, has not materialized in the way that Star Trek might have portrayed.

00:58:10
So you've spoken before about AI and the future of work in many places. What do you think is in store with AI that's really exciting for you, and what do you think we can do to also continue to move in the direction of a workforce more and more like what Star Trek envisioned? 

Shingai: 00:58:33
So I loved what you were saying and I got really excited because I do a lot of work in the future of work and trying to imagine what the future might look like. So just to sort of close the chapter on this agentic future, there are so many industries that are very much in their infancy and some haven't even touched AI at all. And so my dream for how the workforce will evolve in this agentic future is that we should have a period of adoption and participation among industries that haven't really had the opportunity to get involved yet.

00:59:14
So if you work in the trades, if you work in construction, if you work in design, if you work in hard industry, then there are many, many opportunities for an agentic future because those are processes that people know very well and there's still the opportunity to map some of those processes to make them more efficient or to make them easier to interact with, easier to learn, easier to train. Many different ways that people in those communities can start to interact with agents or build agents, et cetera. 

00:59:54
If you think about the SaaS models of before, where if you were an education institution, you'd have to have a separate system for your recruitment, then your enrollment, then your staff, then your actual content delivery, then your content generation. All of those are multiple different software providers. And in an agentic future, consider what that might look like if it happens in a more seamless way.

01:00:22
Now, obviously, all the caveats around that, that we obviously have to be careful how we're using the artificial intelligence, and just because we can use it for certain functions doesn't mean we should. But I would say certainly before we have this apocalyptic AI is going to take all our jobs and it's going to be terrible, there's absolutely a period where we have the opportunity for participation, and the no-code options that are provided by AI make it possible to have this kind of participation in ways that we haven't before. 

01:00:57
And so tying that into a diverse workforce like Star Trek envisioned, well, from an access perspective, that means you have people sitting in Zimbabwe, where I'm originally from, that are now able to start working with agents and build agents around things they care about. So it's an agro-based economy. Lots of minerals and mining going on there. If you now have people who can start to become specialists in how AI can advance those industries, including around food security and the genuine problems of day-to-day life, healthcare, then there's a promise there, there's a potential there that we need to explore. The technology exists today, so there's no reason why we shouldn't be trying to advance some of those sectors and improve the livelihoods of some of those people.

01:01:47
And then just more concretely, diversity in the more, let's call it the Western sense or the more classic sense, of having female engineers and female heads of operations and female captains of starships, and then alien heads of departments and everything in between, where whatever kind of entity, being, person that you are and whatever you bring to the table, if you have the right skill set to participate as part of this team, then it's possible for you. And so that is the dream and the promise of Star Trek. And maybe for now, we're not there yet for sure, but maybe for now, if we can just see it in science fiction, that does something. Because I grew up seeing female engineers and captains of enterprises, and so for me, that's not unreasonable. It's possible.

01:02:43
So as the world works through our issues, as we figure out how to treat each other, like we all come on normal distributions, whatever groups we belong to, then the future state is that if we have an example of what it would look like if everybody was included and everybody could participate and bring their talent towards a goal, a community goal like running a ship, then if we can see it, maybe it's possible.

Jon: 01:03:12
I love it. Great perspective there and really well delivered, as we'd expect from someone with your fluency, your amazing ability to explain complex concepts in colorful and creative ways. And yeah, cool to see that vision of the future and how AI could play a role in helping things out. 

01:03:34
And so before I let you go, Shingai, I did have one audience member, Adriana Salcedo, she wrote in to say that she's really looking forward to this episode with you, and she had a couple of questions. I think we've actually addressed most of it. So her first question was around what technologies and algorithms are behind agentic AI? So I think we've covered that. It relies a lot on LLMs, but then also prompt engineering and the scaffolding required to bring that to the real world. Companies like Theoriq are making it easier to track what's going on and which LLMs are being used. So I think that that question is answered. 

01:04:15
Her other question is what kinds of challenges and risks could be associated with the use of agentic AI? And we've talked about a lot of the positives, but I thought maybe I'd open the floor to you to talk a bit more about what challenges or risks there are associated with agentic AI now or in the future.

Shingai: 01:04:33
Yeah, so great question, and absolutely there are existential threats due to artificial intelligence, and I don't want to overpromise, but protocols like Theoriq are designed to be governance mechanisms for how agents working together will collaborate, how they will interact, and more importantly, whether there is alignment in their behavior. So that's where I'll leave it: there are big issues around responsible use and artificial intelligence, but there are also ways, through governance, to make sure that the systems that we build and the agents that we build are acting responsibly. And I highly encourage you to go read the Theoriq light paper. We do cover much of this in there, and it is truly how an agentic future can be a responsible AI future as well.

Jon: 01:05:24
Very cool. Thank you, Shingai. So really quickly, hopefully we can squeeze this in. We've had internet connectivity issues with the power outage that you had there. And so we're doing this in a whole bunch of choppy recordings, but we'll try to squeeze one last thing in, which is if you have a book recommendation for us, we always ask our guests for those. 

Shingai: 01:05:41
Yeah. So I had mentioned I, Robot. I, Robot, Asimov, the original one. Don't watch the movie, get the book, the audio or the physical book. I highly recommend you read it now. And if you've read it before, go back and read it again because it's really fascinating to see it through today's eyes. And then I always recommend The Art of Possibility, Benjamin Zander, Rosamund Zander. It still speaks to our values and how we treat each other, and I think that's needed now more than ever, and it's part of our humanity. So as we work with artificial intelligence and agents, let's make sure that we're treating each other well and that we know how to behave. 

Jon: 01:06:25
Nicely said. And then how should people follow you after this episode? Clearly a ton to be learned from you. What are the best ways? LinkedIn? Twitter? 

Shingai: 01:06:34
Yes, absolutely. Reach out on LinkedIn. Tell me where we connected. So if you came from this podcast, it would be great to know that. And then that way I'll just say, yes. And then also on X, formerly Twitter, my handle is Tjido, Tango, Juliet, India, Delta, Oscar, and also follow Theoriq.ai. And you can also find Theoriq on LinkedIn, and we have a Discord channel. So come learn more about building this agentic future with our exciting protocol called Theoriq. 

Jon: 01:07:07
Awesome, Shingai. Thank you so much. Thank you for persevering through all these crazy power outages and continuing to press on, using a Wi-Fi hotspot to make it happen. Really appreciate it. Unbelievable effort. And I am expecting that despite all the choppiness for you and me, with the platform we have, the amazing editing we have, I think that this episode will have turned out fantastically for our listeners, so thank you for persevering through all of that. And yeah, hopefully we can catch up with you again on the show in the future. We'd love to hear your insights on whatever happens after agentic AI.

Shingai: 01:07:41
Thanks, Jon. Really appreciate your time. And we didn't get to talk about a bunch of topics, but I just want to mention the last time that we spoke, we were talking about aliens landing in Zimbabwe, so let's make that an episode for next time.

Jon: 01:07:54
Oh yeah. For sure. Yeah, that's where it happened. All right. Thanks so much, Shingai, and yeah, catch you again soon. 

Shingai: 01:08:05
Thanks, Jon. 

Jon: 01:08:12
Outstanding episode with Shingai today. Particularly appreciate her persevering through power outages to press on recording with us. In today's episode, Shingai filled us in on how unlike LLMs on their own, AI agents act autonomously, for example, to serve us with a daily inspirational quote or make travel bookings for us. She also talked about how multi-agent systems allow agents to specialize and collaborate together, making them especially capable of tackling complex tasks. She talked about the Theoriq protocol developed by ChainML that allows blockchain technology to track metadata about AI agents and their inputs and outputs, providing a decentralized system through which humans can track, trust, and improve interactions with AI agents. And she talked about how Council Analytics allows humans to speak, quote, unquote, to their dataset with natural language. 

01:09:00
As always, you can get all the show notes including the transcript for this episode, video recording, and materials mentioned on the show, the URLs for Shingai's social media profiles, as well as my own at superdatascience.com/809. Thanks to everyone on the Super Data Science podcast team, our podcast manager, Ivana Zibert, media editor Mario Pombo, operations manager Natalie Ziajski, researcher Serg Masis, writers Dr. Zara Karschay and Silvia Ogweng, and founder Kirill Eremenko. Thanks to all of them for producing another stellar episode for us today. 

01:09:31
For enabling that super team to create this free podcast for you, I'm so grateful to our sponsors. You can support this show by checking out our sponsors' links, which are in the show notes. And if you, yourself, are ever interested in sponsoring, you can find out how to do that by making your way to jonkrohn.com/podcast. Otherwise, please share the show with folks you think might like to have it shared with them. Review the show on your favorite podcasting app or on YouTube. Subscribe if you're not a subscriber. But most importantly, just keep on listening. I'm so grateful to have you listening and hope I can continue to make episodes you love for years and years to come. Till next time, keep on rocking it out there, and I'm looking forward to enjoying another round of the Super Data Science podcast with you very soon. 
