In the last few months, AI-powered technologies like ChatGPT and Bing AI have received a lot of attention for their potential to transform many aspects of our lives. The extent to which that potential will be realized remains to be seen.

But what seems to be missing from the conversation is how technologies — especially those powered by AI and machine learning — can worsen racial inequality, if we’re not careful.

In education, Black and Hispanic students face inequities in schools every day, whether through disciplinary actions, course placement or culturally irrelevant content. Thoughtless expansion of tech tools into the classroom can exacerbate the discrimination Black and Hispanic students already face, experts warn.

In other fields, the risks of racially biased tech tools are becoming relatively well known. Take facial recognition technology. Research has shown that facial analysis algorithms and datasets perform poorly when examining the faces of women, Black and Brown people, the elderly and children. When used by police for surveillance purposes, the technology can lead to wrongful arrests or even deadly violence. In the housing industry, mortgage lenders vet borrowers by relying on algorithms that sometimes unfairly charge Black and Latino applicants higher interest rates.

Experts say these technologies can be racially biased in part because they reflect the biases and blind spots of the people who design them. Even when developers don’t intend it, bias can be coded into a product, whether through flawed algorithms, historically biased datasets or the developers’ own assumptions.

In 2020, Nidhi Hebbar, a former education lead at Apple who later studied racial bias in ed tech at the Aspen Tech Policy Hub, co-founded the EdTech Equity Project. Its goal is to not only provide schools the resources they need to pick equitable ed tech products, but also to hold ed tech companies accountable for tools that could negatively affect historically underrepresented students.

“Oftentimes tech companies didn’t really seem to understand the experience of Black and Brown students in the classroom,” Hebbar said. When tech companies build products for schools, they either partner with schools that are in affluent, predominantly white suburban areas or lean on the educational experience of their employees, she said.

The rush to adopt tech during the pandemic, Hebbar said, was problematic because school procurement officers didn’t always have time to properly vet tech tools or have rigorous conversations with tech companies.

Hebbar said she’s seen racial biases in some of the personalized learning software available for schools. Products that use voice assistant technology to measure a student’s language comprehension and creation skills are one example.

“If it wasn’t trained on students with an accent, for example, or [those who] speak at home with a different dialect, it can very easily then learn that certain students are wrong and other students are correct, and it can discourage students,” Hebbar said. “It can put students on a slower learning track because of the way that they express themselves.”

Issues like this are common when ed tech companies rely only on data from the limited set of schools that opt into a study, according to Hebbar. Tech companies often don’t collect data on race because of student privacy concerns, she said, nor do they tend to examine how a product works for students from different racial or language backgrounds.

Hebbar said tech companies’ claim that they don’t track race because of data privacy issues is a cop-out. “If they’re not confident that they can track data in a sensitive and careful way,” she said, “then they probably shouldn’t be tracking student data at all.”

Hebbar’s EdTech Equity Project, in collaboration with Digital Promise, launched a product certification program in 2021 to recognize ed tech companies that share plans to incorporate racial equity in their designs. Her group has also produced an AI in Education Toolkit for Racial Equity to help companies during their design process.

It was that toolkit that Amelia Kelly, chief technology officer of SoapBox Labs, used to examine her company’s work. The company, which in 2022 became the first to receive the certification, develops speech recognition technology specifically built to recognize a child’s speech in their natural accent and dialect.* The company also provides its product to other ed tech companies and platforms, such as Scholastic.

Kelly said that as employees built the technology, they tried to acquire the “most diverse data pool we possibly could” so that the technology would work “not just for a small subset of children in affluent areas, but for all children.” Kelly said the SoapBox Labs team has introduced a monthly “assumption review,” in which they challenge their assumptions about everything from product design to testing.

She urged other tech companies to ensure their products aren’t going to harm students: “It’s very easy to trick yourself into thinking your system is working when it’s not if you don’t make the test representative enough.”
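Kelly’s point about representative testing can be illustrated with a small, hypothetical sketch, not SoapBox Labs’ actual code or data: when most of a speech-scoring tool’s test utterances come from one dialect group, the headline accuracy number can look strong while the tool quietly fails students in a smaller group. All group names and figures below are invented for illustration.

```python
# Hypothetical sketch (invented data, not any vendor's code): aggregate
# accuracy can hide a large error gap between dialect groups unless the
# test results are disaggregated.
from collections import defaultdict

# Each record: (dialect_group, model_scored_the_student_correctly)
# 900 test utterances from "dialect_a" (95% scored correctly),
# 100 test utterances from "dialect_b" (60% scored correctly).
test_results = (
    [("dialect_a", True)] * 855 + [("dialect_a", False)] * 45 +
    [("dialect_b", True)] * 60 + [("dialect_b", False)] * 40
)

def accuracy(records):
    """Fraction of records the model scored correctly."""
    return sum(ok for _, ok in records) / len(records)

# The headline number looks reassuring...
print(f"overall accuracy: {accuracy(test_results):.1%}")  # 91.5%

# ...but breaking results out by group shows which students the tool fails.
by_group = defaultdict(list)
for group, ok in test_results:
    by_group[group].append((group, ok))

for group, records in sorted(by_group.items()):
    print(f"{group}: {accuracy(records):.1%} on {len(records)} utterances")
# dialect_a: 95.0% on 900 utterances
# dialect_b: 60.0% on 100 utterances
```

In this invented example, the disparity only appears when results are broken out by group, which is the practical meaning of making “the test representative enough.”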

Hebbar said she also worries that technology designed to help school administrators, particularly in disciplinary decisions, is harming Black and Brown students. As more schools use facial recognition technology to protect against school violence and misbehavior, she said she’s concerned the software might erroneously pick out Black or Brown students for discipline because it was likely trained on historical data in which those students were disciplined at higher rates than white or Asian students.

But Hebbar and other experts say such concerns shouldn’t lead schools and educators to stop using technology or to ban AI outright. The key, according to Jeremy Roschelle, executive director of the learning sciences research team at the nonprofit organization Digital Promise, is for educators to ask for documentation that tech companies are taking these issues seriously, and that they have a plan to address bias.

He encouraged school leaders to look to groups like the Institute for Ethical AI in Education, AI4K12 and the EdSAFE AI Alliance, which have developed frameworks and ethical guidelines for schools to use when choosing emerging technologies for classrooms. The EdSAFE AI Alliance includes some 200 member organizations, including nonprofits and ed tech businesses, that have come together to identify steps companies can take to assess bias in algorithms and support educators using AI, said Jim Larimore, its co-founder and chair.

Roschelle advised educators to look at where technology is being used in their schools and whether it is automating a process that could carry inherent bias. Systems used to, say, detect cheating during a proctored exam, or to predict student behavior and recommend kids for discipline, might be biased, and that has real consequences for kids, he said.

The silver lining, Roschelle said, is that more companies are starting to take these issues seriously and are working to correct them. He said this is due, in part, to the work of ethical AI advocates such as Hebbar’s EdTech Equity Project and Renée Cummings, a University of Virginia professor.

Hebbar said schools can also proactively provide students and educators with the tools to understand how AI works and the risks associated with it. “AI literacy is going to be a really important part of information literacy,” she said. “Students are really going to have to know how to interact with and understand how these tools work.”

Younger generations need to be exposed to these tools and understand how they work, she said, so they can ultimately “go into these fields and build technology that works for them.”

*Correction: This sentence has been updated to clarify that SoapBox does not focus exclusively on ed tech.

This story about racial bias in ed tech was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
