
Since the introduction of ChatGPT, educators have been considering the impact of generative artificial intelligence (GAI) on education. Different approaches to AI codes of conduct are emerging, based on geography, school size and administrators’ willingness to embrace new technology.

With ChatGPT barely one year old and generative AI developing rapidly, a universally accepted approach to integrating AI has not yet emerged.

Still, the rise of GAI offers a rare glimpse of hope and promise amid K-12’s historic achievement lows and unprecedented teacher shortages. That promise is why many educators are contemplating how to manage and monitor student AI use. Opinions range widely, and some would like to see AI tools banned outright.

There is a fine line between “using AI as a tool” and “using AI to cheat,” and many educators are still determining where that line is.

Related: How AI can teach kids to write – not just cheat

In my view, banning tech that will become a critical part of everyday life is not the answer. AI tools can be valuable classroom companions, and educators should write their codes of conduct in a way that encourages learners to adapt.

Administrators should respect teachers’ hesitation about adopting AI, but also create policies that allow tech-forward educators and students to experiment.

A number of districts have publicly discussed their approaches to AI. Early policies seem to fall into three camps:

Zero Tolerance: Some schools have told students that use of AI tools will not be tolerated. For example, Texas’ Tomball ISD updated its code of conduct to include a brief sentence on AI-enhanced work, stating that any work a student submits that was completed using AI “will be considered plagiarism” and penalized as such.

Active Encouragement: Some schools encourage teachers to use AI tools in their classrooms. Michigan’s Hemlock Public School District provides its teachers with a list of AI tools and suggests that teachers explore which tools work best with their existing curriculum and lessons.

Wait-and-See: Many schools are taking a wait-and-see approach to drafting policies. In the meantime, they are allowing teachers and students to freely explore the capabilities and applications of the current crop of tools and providing guidance as issues and questions arise. They will use the data collected during this time to inform policies drafted in the future.

A recent Brookings report highlighted the confusion around policies for these new tools. For example, the Los Angeles Unified School District blocked ChatGPT on all school computers while simultaneously rolling out an AI companion for parents. Because there isn’t yet clear guidance on how AI tools should be used, educators are receiving conflicting advice both on how to use AI themselves and on how to guide their students’ use.

New York City public schools banned ChatGPT, then rolled back the ban, acknowledging that the initial decision was hasty, based on “knee-jerk fear,” and didn’t account for the good that AI tools could do in supporting teachers and students. The district also noted that students will need to function and work in a world in which AI tools are part of daily life, and that banning the tools outright could do students a disservice. It has since vowed to provide educators with “resources and real-life examples” of how AI tools have been successfully implemented in schools to support tasks across the spectrum of planning, instruction and analysis.


New York City’s reversal is a good indication that the “Zero Tolerance” approach is waning in larger districts as notable guiding bodies, such as ISTE, actively promote AI exploration.

In addition, the U.S. Department of Education’s Office of Educational Technology is working on policies to ensure safe and effective AI use, noting that “Everyone in education has a responsibility to harness the good to serve educational priorities” while safeguarding against potential risks.

Educators must understand how to use these tools and how the tools can better equip students to navigate both the digital and the real world.

Related: AI might disrupt math and computer science classes – in a good way

Already, teachers and entrepreneurs are experimenting with ways that GAI can make an impact on teacher practice and training, from lesson planning and instructional coaching to personalized feedback.

District leaders must consider that AI can assist teachers in crafting activity-specific handouts, customizing reading materials and formulating assessment, assignment and in-class discussion questions. They should also note how AI can deter cheating by generating unique assessments for each test-taker.

As with many educational innovations, it’s fair to assume that student conduct cases emerging in higher education will help guide the development of GAI use policies more broadly.

All this underscores both the importance and the complexity of drafting GAI policies, leading districts to ask, “Should we create guidelines just for students or for students and teachers?”

Earlier this year, Stanford’s Board on Judicial Affairs addressed the issue in its policies, clarifying that generative AI cannot be used to “substantially” complete an assignment and that any use must be disclosed.

But Stanford also gave individual instructors the latitude to provide guidelines on the acceptable use of GAI in their coursework. Given the relative murkiness of that policy, I predict clearer guidelines are still to come and will have an impact on those being drafted for K-12 districts.

Ultimately, AI codes of conduct that encourage both smart and responsible use of these tools will be in the best interest of teachers and students.

It will not be enough, however, for schools simply to write codes of conduct for AI tools. They’ll need to think through how the presence of AI technology changes the way students are assessed, the way they use problem-solving skills and the way they develop competencies.

Questions like “How did you creatively leverage this new technology?” can become part of the rubric.

Schools’ exploration will help identify best practices, debunk myths and champion AI’s responsible use. Developing AI policies for K-12 schools is an ongoing conversation.

Embracing experimentation, raising awareness and reforming assessments can help schools ensure that GAI becomes a positive force that supports student learning responsibly.

Ted Mo Chen is vice president of globalization for the education technology company ClassIn.

This story about AI tools in schools was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
