Camille Eddy has worked on robotics and hardware nearly her whole life. Now she's turning her gaze to AI and machine learning. In this episode she gets Scott up to speed on how AI/ML work and how cultural bias can teach computers to think...wrong. What can we do to prevent bias from creeping into our algorithms?