AI in Education: Privacy and Security

The age of AI has dawned, and it’s a lot to take in. eSpark’s “AI in Education” series exists to help you get up to speed, one issue at a time. AI privacy and security is up next.

Just when you thought you had this data privacy thing down, along comes a new technology to change the game. The rise of artificial intelligence has both opened up new opportunities and raised new questions about everything from safety and privacy to bias and ethics. Here’s what you need to know to keep yourself and your classroom secure.


Where does the privacy conversation start?

Throughout this article, we’ll talk a lot about “inputs.” Think of inputs as any information you enter into an AI system: your side of a Google Bard conversation, the audio you feed into an AI transcription app, or a large spreadsheet full of student assessment data. Two types of inputs are most relevant to the discussion of AI privacy and security in schools:


Bulk inputs

This category includes any data sets that might be associated with an entire classroom, school, or district. This will very likely not be a concern for most teachers, but may come into play if districts decide to use AI for any large-scale data analysis. In those cases, it will be important to ensure the data is completely siloed from external access. That means strong security protocols, dedicated servers, and reliable firewalls, all things that already top the priority list for most district tech leaders. One of the AI-specific concerns to be aware of is how uniquely adept the technology is at deanonymization (the ability to re-identify supposedly non-identifiable data by cross-referencing multiple sources).
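
To make the deanonymization risk concrete, here’s a minimal sketch, using entirely made-up data, of how two innocuous-looking datasets can be cross-referenced to put names back on “anonymous” records:

```python
import pandas as pd

# Hypothetical "de-identified" assessment export: names are removed,
# but quasi-identifiers (ZIP code, birth date, gender) are left in.
assessments = pd.DataFrame({
    "zip": ["60601", "60601", "60614"],
    "birth_date": ["2011-03-02", "2012-07-19", "2011-11-30"],
    "gender": ["F", "M", "F"],
    "reading_score": [412, 388, 450],
})

# A second dataset (say, a public team roster) that pairs names
# with the same quasi-identifiers.
roster = pd.DataFrame({
    "name": ["Ava L.", "Mia K."],
    "zip": ["60601", "60614"],
    "birth_date": ["2011-03-02", "2011-11-30"],
    "gender": ["F", "F"],
})

# Cross-referencing on the shared columns re-identifies the
# "anonymous" scores, even though no names were ever in the export.
reidentified = assessments.merge(roster, on=["zip", "birth_date", "gender"])
print(reidentified[["name", "reading_score"]])
```

That’s the whole trick: the more datasets a system can hold and join at once, the fewer quasi-identifiers it takes to re-identify someone.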

As with any student data processor, AI systems that analyze large data sets will need to be accessible only to those with a legitimate educational interest in that data. This also means strict access control when a system contains multiple types of data, e.g., attendance records, grades, and HIPAA-protected health information. A school official with a legitimate interest in one part of a dataset shouldn’t automatically be able to see all of it. This kind of data segmentation isn’t new; the need for it is just likely to grow as AI productivity and data management tools take on more of the work.
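
In code, that segmentation often boils down to simple per-category gates. Here’s a minimal sketch, with hypothetical roles and data categories, of what checking for a legitimate educational interest can look like:

```python
# Hypothetical access map: each role sees only the data categories
# tied to its legitimate educational interest.
PERMISSIONS = {
    "attendance_clerk": {"attendance"},
    "teacher": {"attendance", "grades"},
    "school_nurse": {"health"},
}

def fetch_records(role, category, records):
    """Return a category's records only if the role is cleared for it."""
    if category not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} has no legitimate interest in {category}")
    return records[category]

records = {"attendance": ["..."], "grades": ["..."], "health": ["..."]}
fetch_records("teacher", "grades", records)  # allowed
try:
    fetch_records("teacher", "health", records)  # blocked
except PermissionError as err:
    print(err)
```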


Individual inputs

This is where teachers will need to be cautious, especially when interacting with ChatGPT and similar language models. As of now, you should assume there is ZERO expectation of privacy for anything you type directly into those systems. Once a prompt is sent, it may be retained and used to train the model, where it can be incorporated into future responses, reviewed by human moderators, or otherwise exposed beyond your control.

Several organizations, most notably Samsung, have learned this lesson the hard way after employees pasted sensitive and proprietary information into ChatGPT while performing seemingly benign tasks like generating meeting minutes or debugging code. It’s not hard to imagine educational scenarios leading to similar issues: think of a teacher trying to use ChatGPT to generate an IEP from a student’s evaluation notes.


How to stay safe with AI privacy and security

There are already so many ways ChatGPT and various other apps can make your life easier. From lesson planning to crafting compelling parent newsletters, savvy early adopters will be able to reclaim dozens of hours of their time this school year. But all those benefits come with a caveat—teachers will need to get up to speed quickly on the limitations of AI and what responsible use looks like.


1. Be careful what you input

This point can’t be emphasized enough. All use of ChatGPT and similar tools should be done without reference to any individuals, events, or locations that could be pieced together to identify the source of the prompt. Never use real names, especially student names. AI privacy is never a given.
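
One habit worth building is scrubbing identifiers before a prompt ever leaves your machine. Here’s a minimal illustration using made-up names and a hypothetical, hand-maintained list; real PII detection takes far more robust tooling than a couple of regexes:

```python
import re

# Hypothetical roster of names to strip. A real deployment would use
# a dedicated PII-detection tool, not a hand-maintained list.
STUDENT_NAMES = ["Ava Lopez", "Noah Park"]

def scrub(prompt):
    """Replace known names and one obvious identifier with placeholders."""
    for name in STUDENT_NAMES:
        prompt = re.sub(re.escape(name), "[STUDENT]", prompt, flags=re.IGNORECASE)
    # Strip email addresses as a catch-all for one common identifier.
    prompt = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", prompt)
    return prompt

print(scrub("Draft a note to Ava Lopez's parents at lopez.family@example.com"))
# -> "Draft a note to [STUDENT]'s parents at [EMAIL]"
```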


2. Hold products to the same standard

There will be hundreds of new “AI-powered” products, features, and apps marketed to teachers over the next few years. If history tells us anything, it’s that not all of the developers behind those apps will prioritize privacy and security. Look closely at how a product delivers its service: is personally identifiable information part of the workflow? What safeguards are in place to protect it? Is the product compliant with state and federal student data privacy laws?

This burden shouldn’t just fall on teachers. The best thing you can do when trying out new technologies is to loop your tech department in early and often. The education community will eventually have regulations and standards in place to address these concerns, but in the meantime it will be up to district tech teams to make the right decisions for their students.

Technical note: Unlike the privacy concerns described above for direct inputs into ChatGPT’s user interface, there is an added layer of protection when using products that leverage ChatGPT’s API. OpenAI states that data sent via the API is not used to train its models by default, though it may be retained for a limited time for abuse monitoring, so a vendor’s own data practices still matter.
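
For the curious, here’s roughly what that API layer looks like from the developer’s side, sketched with the openai Python package (the model name and prompts are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A product built on the API sends requests like this; per OpenAI's
# stated policy, API traffic is not used for model training by default.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a lesson-planning assistant."},
        {"role": "user", "content": "Outline a 5th-grade lesson on fractions."},
    ],
)
print(response.choices[0].message.content)
```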


3. Find opportunities to teach AI literacy

Today’s students are going to use AI throughout their lives in ways we can’t even hope to anticipate. The next evolution of digital literacy will need to center on these tools and technologies, and there’s no time like the present to start laying the groundwork for what’s to come. Help your students understand the ethical, bias, and privacy concerns that come with AI. Emphasize to them that anything they put into these models may be out in the world forever. Model for them the best ways to safely use the tools in front of them.


Understand the terms of service

We would be remiss not to mention that ChatGPT’s terms of service require users to be at least 13 years old and require parental consent for any user under 18. While you’re not going to be able to stop students from accessing the program on their own time, those terms mean it is not OK for them to use it directly in the classroom. That doesn’t mean students can’t still benefit from generative AI; it just means they will need to do so indirectly, through programs that leverage the technology without providing unfettered access to it.

In the meantime, the best you can do is to make sure any new technology you try out in the classroom meets the standard of privacy and security that’s already in place. Stay aware, stay in the loop, and do what you can to set your students up for success in an ever-changing world.


Additional resources

Want to stay in the loop on the topic of AI in schools? Subscribe to EdTech Evolved today for monthly newsletter updates and breaking news.

