Princeton Bias-in-AI Reading Group

We talk about bias and fairness issues that emerge in artificial intelligence.

Meets every other Friday, 12:30-1:30pm EST. Sign up for the mailing list here (Princeton affiliates only, please).

Fall 2021

9/17/21: Angelina Wang - It's COMPASlicated: The Messy Relationship Between RAI Datasets and Algorithmic Fairness Benchmarks
10/01/21: Sunnie S. Y. Kim - The Values Encoded in Machine Learning Research
10/15/21: Vikram Ramaswamy
10/29/21: Dora Zhao

Spring 2021

2/5/21: Olga Russakovsky - What’s Unique About Fairness Research in Computer Vision?
2/19/21: Eli Lucherini - Simulation to Study Algorithmic Bias
3/5/21: Kenny Peng - The Lives of Three Datasets and the Ethical Implications
3/19/21: Sunnie S. Y. Kim - Costs and Risks of Large Language Models
4/2/21: Mihir Kshirsagar - Discussion of Benjamin Eidelson's Respect, Individualism, and Colorblindness in the Yale Law Journal
4/16/21: Kaiyu Yang - Challenges in Reliably Measuring Algorithmic Fairness

Fall 2020

9/18/20: Angelina Wang - REVISE: A Tool for Measuring and Mitigating Biases in Visual Datasets
10/2/20: Felix Yu - How to Integrate FATE/CDS Into Data Science Curriculum
10/15/20: Vikram Ramaswamy - Using Visual Grounding for Visual Question Answering
10/30/20: Robin Lee - Testing Color-Blindness in Image Annotation Tasks
11/13/20: Matthew Sun - Tyranny of the Majority? Exploring the Effects of Recommender Systems on Minority Populations through Agent-Based Modeling
12/4/20: Dora Zhao - Understanding and Mitigating Racial Biases in Image Captioning Techniques


Founded by Arvind Narayanan and Olga Russakovsky in Fall 2017

AY 2020-2021: Angelina Wang

AY 2019-2020: Felix Yu

AY 2018-2019: Haochen Li