
This summer, the White House persuaded seven major tech companies to make substantial commitments toward the responsible development of artificial intelligence; in early September, eight more joined in. The companies pledged to focus on researching the societal dangers of AI, such as the perpetuation of bias and abuse of privacy, and to develop AI that addresses those dangers.

This is a huge step forward, given AI’s potential to do harm through the use of biased and outdated data. And nowhere is this conversation more relevant than in K-12 education, where AI holds the promise of revolutionizing how teachers teach and students learn. Legislators must begin regulating AI now.

Take speech-recognition technology, for example, which has transformative applications in the classroom: Students can use their voices to demonstrate how well they can read, spell or speak a language and receive real-time feedback. The data generated helps educators tailor their lesson plans and instruction.

Related: ‘We’re going to have to be a little more nimble’: How school districts are responding to AI

However, AI tools can also heighten existing inequities, including when used in speech-recognition tools that don’t adequately reflect the unique speech patterns of many children or account for the breadth of dialects and accents present in today’s classrooms. If the datasets powering voice-enabled learning tools do not represent the diversity of student voices, a new generation of classroom technologies could misunderstand or inaccurately interpret what kids say and, therefore, what they know.

That’s why we must insist on transparency in how AI tools are built and on persistent checks of the data used to build them, verifying accuracy and mitigating bias before these tools enter the classroom and continuing with rigorous testing thereafter.

This will require action from all sides — policymakers, education leaders and education technology developers themselves. As a first step, policymakers around the globe must prioritize writing and enacting policies that establish high bars for the accuracy and equity of AI systems and ensure strong protections for personal data and privacy.

Policy always lags innovation, but when it comes to AI, we can’t afford the same wait-and-see approach many governments took to regulating social media, for example.

Over the last year, I’ve been serving as Ireland’s first AI ambassador, a role designed to help people understand the opportunities and risks of an AI-pervasive society. I now also chair Ireland’s first AI Advisory Council, whose goal is to provide the government with independent advice on AI technology and how it can impact policy, build public trust and foster the development of unbiased AI that keeps human beings at the center of the experience.

I’ve been advocating for more than a decade for policies that apply strict safeguards around how children interact with AI. Such policies have recently been gaining appreciation and, more importantly, traction.

The European Union is moving closer to passing legislation that will be the world’s most far-reaching attempt to address the risks of AI. The new European Union Artificial Intelligence Act categorizes AI-enabled technologies based on the risk they pose to the health, safety and human rights of users. By its very nature, ed tech is categorized as high risk, subject to the highest standards for bias, security and other factors.

But education leaders can’t wait for policies to be drawn up and legislation enacted. They need to set their own guardrails for using AI-enabled ed tech. This starts with the requirement that ed tech companies answer critical questions about the capabilities and limitations of their AI-enabled tools, such as:

  • What’s the racial and socioeconomic makeup of the dataset your AI model is based on?
  • How do you continuously test and improve your model and algorithms to mitigate bias?
  • Can teachers review and override the data your product generates?

District leaders should only adopt technologies that clearly have the right safeguards in place. The nonprofit EdTech Equity Project’s procurement guide for district leaders is a great place to start — offering a rubric for assessing new AI-powered ed tech solutions.

And ed tech companies must demonstrate that their AI is accurate and unbiased before young students use it in a classroom. That means ensuring, for example, that when a voice-enabled tool assesses a child’s literacy skills, it recognizes the child’s challenges and strengths at least as accurately as a teacher sitting with the child would. It means frequently testing and evaluating models to ensure they are accessible to and inclusive of a range of student demographics and perform consistently for each. And it means training product managers and marketers to educate teachers about how the AI works, what data is collected and how to apply new insights to student performance.

Independent assessment of bias is becoming recognized as a critical new standard for ed tech companies that use AI. To address this need, organizations like Digital Promise offer certifications to assess AI-powered tools and validate that they are bias-free.

Related: How college educators are using AI in the classroom

So, what’s the endgame of all this work by companies and district leaders? A whole new generation of AI-powered education tools that remove fallible and subjective human judgment when teaching and assessing kids of all backgrounds for reading and language skills.

Doing this work will ensure that educators have access to tools that support their teaching and meet each child where they are in their individual learning journey. Such tools could level the playing field for all children and deliver on the promise of equity in education.

As AI advances and the laws governing it take shape, we need to acknowledge just how much we still don’t know about the future of this technology.

One thing is crystal clear, however: Now is the time to be smart about the development of AI, and in particular the AI-powered learning tools used by children.

Patricia Scanlon currently serves as Ireland’s first AI ambassador and is the founder and executive chair of SoapBox Labs, a voice AI company specializing in children’s voices. She has worked in the field for more than 20 years, including at Bell Labs and IBM.

This story about regulating AI was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
