Teaching Fairness to Machines

By: Sofia T. (Age 16, Chicago, IL)

When I first began studying artificial intelligence, I thought bias in algorithms was a problem that data scientists could simply “fix.” But the more I learned, the more I realized that bias isn’t just a glitch in code; it’s a reflection of us.

For my research project, I analyzed an open-source image-classification dataset often used to train AI models. I expected to spend most of my time debugging or running models. Instead, I found myself asking uncomfortable questions about how we label the world.

When the algorithm tried to identify people in white coats, it labeled men as “scientists” and women as “nurses” far more often. The bias didn’t come from the algorithm being misogynistic; it came from history. Thousands of human-labeled images had taught the model that science looks one way and caregiving another.
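A skew like this can be spotted before any model is trained, just by tallying the human labels themselves. Here is a minimal sketch with made-up annotation data (the gender/label pairs are hypothetical, standing in for the labeled "white coat" images):

```python
from collections import Counter

# Hypothetical (perceived_gender, assigned_label) pairs from human annotators.
annotations = [
    ("man", "scientist"), ("man", "scientist"), ("man", "scientist"),
    ("man", "nurse"),
    ("woman", "nurse"), ("woman", "nurse"), ("woman", "nurse"),
    ("woman", "scientist"),
]

counts = Counter(annotations)
for gender in ("man", "woman"):
    total = sum(n for (g, _), n in counts.items() if g == gender)
    for label in ("scientist", "nurse"):
        share = counts[(gender, label)] / total
        print(f"{gender} -> {label}: {share:.0%}")
```

Even this toy tally shows the pattern: the same coat gets a different label depending on who is wearing it.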

At first, I thought the solution was to rebalance the dataset: add more photos of women scientists, rename labels, or retrain the model. But that only solved part of the problem. If our definition of “scientist” still depends on the images we’ve seen, and the assumptions we’ve made, then the bias isn’t gone. It’s just hidden in new ways.

This project changed how I see fairness. Fairness isn’t about making the machine neutral; it’s about making ourselves more aware. It’s about asking who gets to create the data, who gets represented, and whose stories are missing.

Working with my mentor, I explored different strategies for improving the model’s fairness metrics, from reweighting samples to introducing “counterfactual” examples: synthetic images that show diverse people performing the same task. These methods helped a little, but they also reminded me that technology alone can’t solve social bias.
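Reweighting, the first of those strategies, can be sketched in a few lines: each sample is weighted inversely to how common its group is, so underrepresented groups count more during training. This is only an illustration under assumed data; the group labels below are hypothetical, and a real pipeline would pass these weights to a model’s training loss.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Return one weight per sample, inversely proportional to the
    frequency of that sample's group, normalized so weights sum to n."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical group labels for an imbalanced training set.
groups = ["man"] * 6 + ["woman"] * 2
weights = inverse_frequency_weights(groups)
# Samples from the smaller group receive larger weights,
# so both groups contribute equally to the weighted total.
```

The design choice here is the normalization: keeping the weights summing to the dataset size means the overall scale of the training loss stays the same, and only the balance between groups changes.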

“I realized AI reflects what it learns from us, which means fairness starts with people.”

I used to think teaching machines was about precision and control. Now I think it’s about empathy. The more we expect from AI, the more we must expect from ourselves as creators, as users, and as citizens.

If AI learns from us, then it’s our responsibility to give it something worth learning.
