AI in Education: The Bias Dilemma

The age of AI has dawned, and it’s a lot to take in. eSpark’s “AI in Education” series exists to help you get up to speed, one issue at a time. The question of AI bias is next up. 


AI is biased. It's not really a debate at this point, just a fact that those who build and use AI-powered technology have to live with and account for. But what does it mean for our classrooms, and what does a solution even look like?

The issue of biased algorithms has been a point of contention and frustration far longer than AI has been in the mainstream consciousness. As far back as the mid-1980s, a British medical school was found guilty of discrimination after building a computer program to screen candidates. The program closely matched human decisions, but showed a persistent bias against women and applicants with non-European names. Amazon tried building a similar program to improve its hiring process and almost immediately realized the system was penalizing women because the dataset it was trained on reflected the male-dominated tech culture of the time.

OpenAI, the organization behind ChatGPT and the DALL-E image generator, has come under particular scrutiny since the public release of ChatGPT in November 2022. The company has had to address bias issues on the fly as users surface unforeseen issues in the model's outputs, ranging from political ideologies to ethnic stereotypes. In February 2023, OpenAI released a public statement on ChatGPT's behavior, offering a welcome, transparent look at how the model works and where it's going.

 

[Image: a Google search showing just a small sampling of recent news articles about bias in AI]

 

What are the underlying causes of bias in AI models?

1. The datasets the systems are trained on

A human’s perception of the world is shaped by their upbringing, the people around them, the social and cultural norms they’re exposed to, and everything else that makes up the “nurture” half of nature vs. nurture (there is certainly some element of nature to this as well, but it’s less applicable to the AI comparison). For AI models, everything that comprises the human experience is replaced with cold, hard data.

Generative AI tools like ChatGPT are trained on a combination of crawled web content, curated datasets, and other ingested media. As anybody who has ever spent time on the internet knows, it's not exactly a bias-free zone. The same cultural issues that result in underrepresentation, stereotyping, and polarization across society are intrinsically present in the very makeup of an AI model's "education." It's important to note that AI is not inherently more biased than other sources; it simply reflects (and potentially amplifies) the bias that already exists in the world.

Using the aforementioned Amazon hiring example as a microcosm of the larger issue, we can see that it's really hard to eliminate historical bias from a dataset. If your most "successful" job candidates have been men because you've mostly hired men, you'll have to add a counterweight somewhere to rebalance what the AI model treats as reliable evidence for its conclusions. When the dataset is disproportionately skewed in favor of one group, one region, or one ideology, the outputs will follow the same pattern. There is, unfortunately, no such thing as an unbiased dataset for any of these programs to start from.
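To make the idea of a "counterweight" concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The toy data and the screening model are invented for illustration; they are not Amazon's system or anyone's real hiring data.

```python
# Hypothetical illustration of historical bias in a screening model.
# All data below is synthetic; nothing here reflects any real hiring process.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

experience = rng.normal(5, 2, n)      # feature: years of experience
group = rng.integers(0, 2, n)         # feature: group membership (0 or 1)
X = np.column_stack([experience, group])

# Historical labels: qualified candidates from group 1 were usually rejected,
# so the "ground truth" the model learns from is already skewed.
qualified = experience > 5
historically_rejected = (group == 1) & (rng.random(n) > 0.3)
hired = (qualified & ~historically_rejected).astype(int)

biased_model = LogisticRegression(max_iter=1000).fit(X, hired)
print("weight on group membership:", biased_model.coef_[0][1])
# clearly negative: group membership itself is being used against candidates

# One crude counterweight: upweight the underrepresented successful examples
# so group membership stops looking like a useful signal.
weights = np.ones(n)
weights[(group == 1) & (hired == 1)] = 5.0
rebalanced_model = LogisticRegression(max_iter=1000).fit(X, hired, sample_weight=weights)
print("weight after rebalancing:", rebalanced_model.coef_[0][1])
# the penalty on group membership shrinks, though it does not vanish
```

Even in this toy example the fix is crude: upweighting one slice of the data can shift bias onto correlated features, which is part of why there is no truly unbiased dataset to start from.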

 

2. The humans who fine-tune the system

The initial dataset is only the first step in ChatGPT's training process. The data is supplemented by a group of human "reviewers" who rate responses, or choose between two possible responses to a given prompt, thousands of times over in order to align outputs with societal norms and values. This is known as "reinforcement learning from human feedback," or RLHF.
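To show what those thousands of comparisons look like as training data, here is a minimal, hypothetical sketch in Python. The field names, example text, and toy scoring function are assumptions about a typical RLHF-style pipeline, not OpenAI's actual implementation.

```python
# Hypothetical sketch of RLHF-style preference data and reward scoring.
# Field names and the toy loss are illustrative assumptions only.
import math
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # the response the human reviewer preferred
    rejected: str  # the response the reviewer ranked lower

# Reviewers produce thousands of comparisons like this one.
comparisons = [
    PreferencePair(
        prompt="Summarize the debate over school cell phone bans.",
        chosen="Supporters cite fewer distractions; critics raise safety and equity concerns...",
        rejected="One side of this debate is obviously right and the other is not worth hearing...",
    ),
]

def reward_model_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise (Bradley-Terry-style) objective: a separate reward model is
    trained so the chosen response scores higher than the rejected one, and
    the language model is then fine-tuned to produce high-scoring outputs."""
    return -math.log(1.0 / (1.0 + math.exp(score_rejected - score_chosen)))

print(round(reward_model_loss(2.0, -1.0), 3))  # small loss: reward model agrees with the reviewer
print(round(reward_model_loss(-1.0, 2.0), 3))  # large loss: reward model disagrees with the reviewer
```

The key point for the bias discussion: whatever preferences the reviewers share, even unintentionally, become part of the reward signal the model is optimized against.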

RLHF works almost like a filter, training the AI model to respond more like a person while also instilling a kind of moral fiber and forced centrism into the model; e.g., refusing to help with anything illegal or declining to take a strong position on controversial topics.

The problem with RLHF is obvious: human beings are inherently biased. A response that one reviewer marks as preferred or acceptable might rub another reviewer the wrong way. The companies responsible for these models, like OpenAI, need to be cognizant of the demographic and regional makeup of their reviewer pool; e.g., if everyone reviewing the outputs is from the Silicon Valley area, the model will end up biased in favor of Silicon Valley politics, priorities, and demographics. Even if it were possible to devise a perfectly balanced group of reviewers, it's hard to imagine how the model could effectively reconcile such conflicting feedback.

 

3. Our inability to agree on what “bias” means

The issue of bias might be easier to address if it weren't so subjective to begin with. It's easy for most people to explain what bias means to them and how it hurts those who are discriminated against. It's much, much harder to agree on whose values, morals, and belief systems we should align the models with. Conflict is a core part of what it means to be human, and it doesn't feel like that's going to change any time soon.

Once you start filtering out every possible outcome that might offend any possible group of people, or every dataset that could be perceived as perpetuating harmful stereotypes, or every stance on either side of controversial issues, you're left with a model that can't say much of anything at all. With every new safeguard and moderation layer developers put in place, the models seem to become a little more boring, dry, and neutral.

 

What comes next?

OpenAI CEO Sam Altman has spoken at length about the issue of bias, what OpenAI is doing to address it, and where we go from here. To quote directly from his now-famous interview with Lex Fridman:

“There’s a little bit of sleight of hand, sometimes, when people talk about aligning an AI to human preferences and values. There’s like a hidden asterisk, which is the values and preferences that I approve of. Right? And navigating that tension of who gets to decide what the real limits are. How do we build a technology that is going to have a huge impact, be super powerful, and get the right balance between letting people have the AI they want…but still draw the lines that we all agree have to be drawn somewhere.”

Will the future of generative AI feature more customizable models that align with the goals and preferences of a given user? Is it even possible to eliminate bias without creating new echo chambers? How do we address this in the context of how AI will be used in schools? We can only hope to be a little closer to answering these questions by the end of this year.

 

What teachers can do to help

It’s impossible to predict how an AI model will respond to the infinite number of inputs humans might dream up, but responsible developers are already building in moderation capabilities to enable rapid response to unintended outcomes. Most of the tools we expect to see this school year will be highly dependent on teacher and student feedback to grow and evolve as we get a better grasp on the potential and the risks of this technology.

With the release of eSpark’s Choice Texts, for example, we are asking teachers to monitor the stories their students are creating and use the built-in feedback mechanism to report any problematic content that might be getting through the content filters, any biases that might be rising to the surface, or anything that might fall outside the bounds of age-appropriateness. Our goal is for this tool to provide a sense of ownership, agency, and wonder to even the most reluctant young readers, but we know we can’t do that at the expense of student safety and teacher confidence.

Media literacy has never been more important. When discussing AI with your students, make sure they understand that the models aren’t perfect, responses should not be treated as truth, and primary sources are still the best sources. Remember that kids under the age of 13 should never use generative AI tools like ChatGPT directly. And until we arrive at a place where this technology is better regulated, the default approach to AI should be “trust, but verify.” Change is coming whether we like it or not, but that doesn’t mean we can’t all do our due diligence along the way.

 

Additional resources

Want to stay in the loop on the topic of AI in schools? Subscribe to EdTech Evolved today for monthly newsletter updates and breaking news.
