This week, Krishna Gade, Co-Founder and CEO of Fiddler.AI, explores the challenges faced by Large Language Models (LLMs) in Generative AI, including inaccurate statements, biases, and privacy risks within an enterprise environment.
About Krishna Gade
Krishna Gade is the co-founder and CEO of Fiddler.AI, a Series-B enterprise company building a platform to address problems of bias, fairness, and transparency in AI. An entrepreneur and engineering leader with deep technical experience building scalable platforms and delightful consumer products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. Fiddler is an emerging standard for Enterprise AI Observability, spanning statistical ML to Generative AI applications. It is a vital infrastructure component, providing the visibility, or “transparency,” layer for the entire enterprise AI application stack. Through multiple iterations of its product architecture and by supporting many high-scale, paying enterprise customers, the Fiddler team has demonstrated the operational excellence expected of a high-performance observability system. Fiddler's founding team came from Facebook AI Infra, where they worked on Explainable AI for News Feed, which was at the center of AI/ML at Facebook.
Overview
In this thought-provoking podcast episode, we delve into the challenges faced by Large Language Models (LLMs) in the realm of Generative AI. Fiddler co-founder and CEO Krishna Gade explores how these powerful models are prone to generating inaccurate statements, exhibiting biases, and inadvertently exposing private data. While these issues may not be catastrophic in, say, a consumer chatbot, in an enterprise setting model outputs must be accurate, unbiased, and secure against prompt-injection attacks that could compromise private client data.
Krishna emphasizes that building trust in AI begins with monitoring. One of Fiddler's standout features is its explainability algorithms, which enable in-depth analysis to identify the root causes of model prediction errors, whether they stem from the training dataset or from feature engineering. Moreover, Fiddler offers a range of pre-built tools that detect biases across various model types, making it a comprehensive solution for addressing bias-related concerns.
Items mentioned in this podcast: