AI is tracking student suicide risk. Here are 6 ways to improve the technology

"There is limited understanding about how such programs work, how they are implemented by schools, and how they may benefit or harm students and their families," researchers say.

Almost anyone you ask, from educators and health care professionals to edtech developers and youth advocates, agrees that AI monitoring tools can effectively identify students at risk of suicide or self-harm. Use of the technology can also reassure parents and educators that school leaders are taking action to address one of the gravest public health threats facing young people.

That consensus comes with a warning, however. AI-based suicide-risk prediction algorithms can also “compromise student privacy and perpetuate existing inequalities,” according to the RAND Corporation’s latest analysis of the technology’s potential to protect young people.

“The adoption of AI and other types of educational technology to partially address student mental health needs has been a natural forward step for many schools during the transition to remote education,” the research nonprofit says in its report. “However, there is limited understanding about how such programs work, how they are implemented by schools, and how they may benefit or harm students and their families.”

AI monitoring tools track student activity on school-issued devices, and on school-administered accounts even when those accounts are used on personal devices. The applications analyze language, keywords and even sentiment to identify threats. Students are automatically opted in to the tracking; they, or their parents, must opt out to block the software.
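Vendors do not publish their detection pipelines, but the keyword-and-sentiment approach the report describes can be illustrated with a short sketch. Everything here, the phrase list, the weights and the alert threshold, is a hypothetical stand-in, not any vendor's actual logic:

```python
# Illustrative sketch of keyword-based risk scoring, assuming a simple
# weighted-phrase model. The phrases, weights and threshold below are
# hypothetical; commercial tools use proprietary, far more complex models.

RISK_PHRASES = {
    "want to die": 1.0,
    "kill myself": 1.0,
    "hurt myself": 0.8,
    "hopeless": 0.3,
}

ALERT_THRESHOLD = 0.8  # hypothetical cutoff for escalating to staff


def risk_score(text: str) -> float:
    """Sum the weights of every risk phrase found in the text."""
    lowered = text.lower()
    return sum(w for phrase, w in RISK_PHRASES.items() if phrase in lowered)


def should_alert(text: str) -> bool:
    """Flag the message for human review when the score crosses the cutoff."""
    return risk_score(text) >= ALERT_THRESHOLD
```

Even a toy model like this hints at why accuracy is contested: an essay quoting song lyrics can trip the same phrases as a genuine cry for help, while coded or misspelled language slips through.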

The companies that produce the software typically identify suicide risks and alert designated school personnel. One such provider, Gaggle, reported issuing more than 235,000 self-harm and violence alerts during the 2020–2021 school year, the report noted.

AI monitoring, however, is only one facet of student health and wellness, and most K-12 schools and their communities still lack sufficient resources to support the overall mental health of young people, the report asserts. Health care providers, parents and other caregivers are not fully aware of how schools are using AI monitoring tools, RAND added.


Ultimately, more data is needed to show how accurately AI algorithms can detect suicide risk and whether the technology is improving student mental health, the researchers concluded.

Installing AI monitoring: Next steps

For school leaders planning to adopt or improve AI monitoring efforts, RAND recommends:

  1. Engaging communities for feedback: Schools should involve parents, health care providers and other community members in developing policy around how AI monitoring alerts will be acted on and who will be informed that a student is at risk of harming themselves. “Through these broader consultations, the use of AI-based monitoring in schools might not be seen purely as a technical solution to a complex problem, but a part of a complementary set of interventions in the broader educational system,” the researchers advised.
  2. Notifying caregivers and students about the surveillance: Districts should make clear what activity is being tracked on websites, email and other messaging platforms, and how alerts are triggered. Parents and students should be made aware of how they can opt out, what data is being collected, where it is stored and who has access to it.
  3. Establishing a consistent process for responding to alerts: Best practices include ensuring responses to alerts are coordinated among school IT personnel, safety staff, counselors and leaders. Schools and districts should also have a crisis response plan in place that covers suicide threats, but administrators should limit reliance on law enforcement, the report counsels.
  4. Tracking outcomes of risk alerts: Schools should review how personnel are intervening with students after alerts are triggered. Administrators should consider working with researchers or other experts to examine whether the process is benefiting student mental health and preventing risky behavior. Schools should also track outcomes such as law enforcement involvement, disciplinary actions and false positives (a minimal sketch of such outcome logging appears after this list).
  5. Helping students understand mental health: Administrators can use the adoption of AI monitoring as an opportunity to have positive conversations with students about mental health and the support that is available to them. These conversations can take place during classroom instruction, on district websites, and at assemblies and parent-teacher events.
  6. Ensuring district anti-discrimination policies guide AI tracking: Research has shown that this technology can disproportionately affect marginalized students based on race, gender and disability. Schools must train civil rights personnel, legal counsel and technology leaders, among others, to ensure AI tracking does not become a method of discrimination.
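The outcome tracking recommended in step 4 can be as simple as logging each alert's disposition and reviewing the totals. This sketch assumes a minimal log format; the outcome categories mirror those RAND says schools should track, but the field names and sample data are invented for illustration:

```python
from collections import Counter

# Hypothetical alert log: one record per alert, tagged with how it was
# ultimately resolved. The field names and sample entries are illustrative.
alerts = [
    {"id": 1, "outcome": "counselor_followup"},
    {"id": 2, "outcome": "false_positive"},
    {"id": 3, "outcome": "law_enforcement"},
    {"id": 4, "outcome": "false_positive"},
    {"id": 5, "outcome": "disciplinary_action"},
]

counts = Counter(a["outcome"] for a in alerts)
total = len(alerts)

for outcome, n in counts.most_common():
    print(f"{outcome}: {n} ({n / total:.0%} of alerts)")

# A rising false-positive share, or heavy reliance on law enforcement,
# is the kind of signal that should prompt a review of the response process.
```
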
Matt Zalaznick
Matt Zalaznick is a lifelong journalist. Prior to writing for District Administration, he worked in daily news all over the country, from the NYC suburbs to the Rocky Mountains, Silicon Valley and the U.S. Virgin Islands. He's also in a band.
