Members of the Senate Health, Education, Labor and Pensions Committee recently cast doubt on the benefits artificial intelligence can bring to education, citing the increasing risks the technology poses to children’s health and wellbeing.
Top of mind for many at the hearing were a string of high-profile self-harm incidents. In one of the most recent and devastating stories to make national headlines, a 16-year-old boy took his own life after interacting for months with ChatGPT, confiding in it his suicidal ideation and asking it questions about making a noose.
“This is heartbreaking, it’s unacceptable and maybe it’s criminal,” said Chairman Bill Cassidy, R-Louisiana, who penned a letter last week to developers demanding better safeguards for kids.
He’s far from the first to do so.
In response to the recent tragedies, 44 U.S. state attorneys general issued a warning to the biggest tech firms in the industry, promising to “use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.”
If the shot across the bow sounds familiar, it’s because it is. As the letter explicitly points out, we’ve been down this road before with the advent of social media.
“Broken lives and broken families are an irrelevant blip on engagement metrics as the most powerful corporations in human history continue to accrue dominance,” the state attorneys general wrote. “The potential harms of AI, like the potential benefits, dwarf the impact of social media.”
In the wake of mounting lawsuits likening its chatbot to a “suicide coach,” OpenAI promised to continue improving how its models respond to signs of mental distress and connect people with care. The company said accounts for children under 18 will have some parental controls.
OpenAI co-founder Sam Altman also announced that his company is lobbying policymakers to treat personal conversations with AI tools as privileged, in the same way that information shared with a doctor or a lawyer is protected.
Leaving safety up to Big Tech is not enough. We witnessed what happened when social media platforms promised to self-police, and young people paid the price. We must make a different choice.
The good news is that many K12 schools already have access to tools that can detect signs of self-harm in real time.
AI inference engines can analyze patterns and flag warning signs of grief and self-harm. Once the inference engine narrows the alerts to those that are most critically urgent, human analysts can notify the appropriate people. The entire process can take under five minutes.
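To make that workflow concrete, here is a minimal, purely illustrative sketch in Python of the triage step between a detection model's raw alerts and the analysts who review them. Every name in it (the Alert record, the Severity levels, notify_analyst, the 0.85 confidence threshold) is a hypothetical placeholder, not a description of how any particular vendor's product works.

```python
# Conceptual sketch (hypothetical names throughout): narrow a detection model's
# raw alerts down to the most urgent ones and route those to a human analyst.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import IntEnum


class Severity(IntEnum):
    LOW = 1        # ambiguous language, logged for context only
    ELEVATED = 2   # concerning pattern, queued for routine review
    CRITICAL = 3   # explicit self-harm signal, escalated immediately


@dataclass
class Alert:
    student_id: str
    excerpt: str
    model_score: float                 # detection model's confidence, 0.0-1.0
    severity: Severity
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def triage(alerts: list[Alert], score_threshold: float = 0.85) -> list[Alert]:
    """Keep only critical, high-confidence alerts, most urgent first."""
    urgent = [
        a for a in alerts
        if a.severity is Severity.CRITICAL and a.model_score >= score_threshold
    ]
    return sorted(urgent, key=lambda a: (a.model_score, a.created_at), reverse=True)


def notify_analyst(alert: Alert) -> None:
    """Placeholder for the real escalation path (on-call counselor, administrator, etc.)."""
    print(f"[ESCALATE] student={alert.student_id} score={alert.model_score:.2f}")


if __name__ == "__main__":
    raw_alerts = [
        Alert("s-001", "essay references loss of a family member", 0.62, Severity.LOW),
        Alert("s-002", "search history includes explicit self-harm terms", 0.94, Severity.CRITICAL),
    ]
    for alert in triage(raw_alerts):
        notify_analyst(alert)
```

The point of the sketch is the division of labor: software does the filtering and ranking so the volume stays manageable, while the decision to act stays with a person.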
Unfortunately, many K12 schools choose not to use these powerful assistants. There has been some reluctance to embrace these tools due to privacy concerns, the risk of false negatives, and the onus they put on districts.
AI changes the equation. What might have been optional before can’t be optional now. Don’t believe me? Since Aug. 1, we’ve identified 92 cases of extreme self-harm among my company’s school partners.
The risk is too high to stall. We need AI transparency.
I urge district and school leaders to start engaging staff and families on this issue. As you put together AI task forces and draft guidelines for acceptable use in classrooms, we cannot forget the most important piece of this new technological era: Ensuring the safety of our children.
We can and should be excited about this innovation, and also courageous enough to have frank conversations about establishing safeguards.
Unfortunately, yesterday’s technological guardrails aren’t enough for generative AI. Ensuring student safety now requires real-time oversight so schools can respond immediately when something goes wrong. And unlike cellphone disruptions, AI is not something we can ban our way out of.
I’m no stranger to the multi-stakeholder politics that drive K12 decision-making, and I’m empathetic to the leaders navigating those complexities.
But as a mom who cannot even begin to understand the grief some parents are experiencing, I scratch my head at why we are not taking more aggressive action. Excuses offered after a tragic loss are simply not enough anymore, especially with AI in our children’s lives.

