EthixAI

EthixAI is an initiative based at MIT and Harvard that combats algorithmic bias to fight systemic racism and sexism in technology.

Our Story

During my graduate school career, I served as MIT Senate's head of Diversity, Equity, and Inclusion, ran the MIT AI Ethics Reading Group, and was an active member of Harvard's Ethical Tech Groups. Throughout my time in those three roles, I struggled to find a comprehensive guide to DEI on tech teams or a primer on algorithmic bias. I then set out on a yearlong project with a team of researchers to unpack these topics. The research presented here is the user-friendly result of that work.

Through EthixAI we are working to combat the hardcoding of existing social biases into software in ways that are undetectable to the everyday user. This is not something confined to a dystopian future; these biases are around us already. For example, when you search Google for the word "family," what types of families come up? Are they of a particular race? Is the couple always a man and a woman? If the photos look homogeneous, that is an example of algorithmic bias: the system is "biased" in that it overrepresents a certain type of image.
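As a rough illustration (not EthixAI's own methodology), the kind of overrepresentation described above can be quantified by comparing each group's share of returned results against a reference share. The group labels and numbers below are entirely hypothetical:

```python
from collections import Counter

def representation_skew(result_labels, reference_share):
    """For each group, return its share of the results minus a
    reference share (e.g., its share of the population).
    Positive values indicate overrepresentation."""
    counts = Counter(result_labels)
    total = len(result_labels)
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference_share.items()
    }

# Hypothetical labels for the top 100 image results of a query,
# compared against a hypothetical 60/40 reference distribution.
labels = ["group_a"] * 85 + ["group_b"] * 15
skew = representation_skew(labels, {"group_a": 0.6, "group_b": 0.4})
# skew["group_a"] is +0.25: group_a is overrepresented by 25 points.
```

Real audits are more involved (labels must come from somewhere, and choosing the reference distribution is itself a normative decision), but the basic comparison is this simple.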

In particular, EthixAI is for those working in tech and on tech teams who want to learn more about algorithmic bias, how to bring the conversation to their teams, and how to debias their products.

Why does this matter?

In 2009, HP released face-detection software that "did not detect Black people as having a face": the software could not recognize Black faces in everyday lighting conditions.
Source: When the Algorithm Itself Is a Racist

Google's search results often reinforce existing stereotypes and prejudices. For example, searching "CEO" returns images of mostly men, and "personal assistant" of mostly women. In 2015, Google Images auto-suggested "ape" in its "related searches" section for the keywords "Michelle Obama."
Source: Algorithms of Oppression

FICO scores, facial recognition software, and other algorithmic methods consistently give Black and Latino people lower scores than other racial groups with similar credit histories, and predict them to be "more likely" to commit a particular crime.
Source: How I am fighting bias in algorithms TED talk

What can you do about this?

A Guide to Debiasing Your Product

The EthixAI team shares a guide to help you debias your tech product.

A Guide to DEI on your Tech Team

The EthixAI team shares a guide to help you start a conversation about DEI on your team.

Algorithmic Bias

To help guide your exploration, the EthixAI team is sharing a few things that have helped us learn about algorithmic bias.

Invite me to Speak

Invite me to speak at your event here. My talks focus on algorithmic bias; diversity, equity, and inclusion in tech; entrepreneurship; and overcoming adversity.

Team

Riana Shah, CEO

 
Natasha Markov-Riss, Research Analyst

Christie Little, Research Analyst

 

Advisors

Matt Rhodes-Kropf, MIT

Samuel Rothstein, Research Analyst

Maria Fernanda Sampaio Ferreira, Research Analyst

 
Kathy Pham, Harvard and The White House

John Akula, MIT

Eishna Ranganathan, Research Analyst

Nhat Nguyen, Research Associate

 
Ben Mitchell, Swarthmore College