Machine learning models have become ubiquitous tools that are central to our everyday lives. Their importance cannot be overstated: they both inform our decision-making and hint at how those around us think. Yet these seemingly objective “machines” are more human than they may appear. Their output is frequently shaped by consequential baked-in biases, leading them to produce results that fuel discrimination and inequality by disproportionately favoring some populations over others. In this talk, Dr. Osborne considers how organizations – and the people in them – can unintentionally introduce bias into these models. He then draws on insights from psychology to discuss how they can prevent this from occurring.
Élida M. Bautista, PhD, Chief Diversity, Equity, and Inclusion Officer at Berkeley Haas, will introduce this session and share reflections on how Haas has advanced, and continues to advance, its commitment to diversity, equity, and inclusion.
This session will be recorded. 📹
Where
N100, Level 1, Chou Hall