The privacy risks of AI use at school
Whether school system leaders realize it or not, voice-activated artificial intelligence devices such as Alexa and Google Home are becoming a part of classrooms.
“I know of a lot of school districts where teachers use it, and in some districts the [school administrator] knows about it, but in other districts they don’t,” says Eileen Belastock, director of academic technology and chief technology officer for Mount Greylock Regional School District in Massachusetts.
The tools are being used to create more immersive learning experiences in which students use their voices and critical thinking skills to access information. However, such usage comes with drawbacks—and a host of potential data privacy risks.
Belastock is a featured presenter for the Future of Education Technology Conference, January 27-30, 2019, in Orlando, Fla.
What are the primary privacy concerns in using AI devices, particularly voice-activated tools?
As a technology director, my first concern is student safety. How are we protecting the privacy of our students? I’m finding that teachers using these devices are not protecting it.
When I talk to my teachers about the Family Educational Rights and Privacy Act, the Children’s Online Privacy Protection Act and the Children’s Internet Protection Act, they don’t really understand what those laws are and how noncompliance can impact their students, themselves and the district.
On the district end, we need to start developing policy around having these devices in the classroom; otherwise it’s wide open for student voices to be recorded and used to develop algorithms. These devices also circumvent school filtering and firewalls. I had one teacher tell me that she turns it off, but Alexa is always on. As soon as you say one keyword, it activates.
Which aspects of student privacy laws must educators consider before using these devices?
They should consider what constitutes personally identifiable information under FERPA. The Department of Education is very clear that student names, or any kind of identifying information, cannot be divulged. The [app providers] are not asking for parents’ permission to disclose personally identifiable information.
With CIPA, schools must ensure internet safety policies include monitoring that blocks harmful content. With COPPA, when it comes to the collection of personal information of children under 13, we need to get parental consent.
How can administrators build awareness and help teachers avoid pitfalls?
It should be addressed at the beginning of the school year. Also, more teachers should have a voice on technology committees, which is where district regulations and policies should begin.
District leaders need to go into classrooms and hear what’s going on and see why these teachers want to use it. We need to create an approved list of apps that are vetted to make sure that they’re compliant.
So you don’t think an outright ban on these devices is necessary?
I don’t think we should say “No,” but we need to work together. The teacher’s voice is as important as the district leader’s in policy decisions. If a teacher in my district were to bring one in, I’d probably have them take it home until we figured it out. I don’t believe these virtual personal assistants should be in the classroom—there are other AI tools available—but they’re coming.
Emily Ann Brown is an associate editor at DA.