California recently passed signature artificial intelligence legislation, SB 243, requiring chatbot providers to adopt child safety protocols. Similarly, the “Statement on Superintelligence” calls for a pause on the development of agentic AI systems until guardrails exist.
These are admirable steps, but they raise important questions about how we’re preparing students to use AI safely and productively.
Should we defer protections to local control, leading to a patchwork of approaches, or treat this as a Sputnik moment and develop a national framework to prepare students to lead in the global economy? And can we put aside our partisan differences and collaborate in an area where we share common ground: the well-being of children?
Building a national AI framework would require willpower and patience. Below is one possible path forward.
Laying the groundwork for a national approach
AI is rapidly transforming classrooms. Students now use AI to draft essays, solve problems and generate artwork. There is genuine hope that AI could bring about significant change.
But before there is widespread, unchecked AI adoption in education, let’s take a collective pause.
A one-year moratorium on unsupervised student use of AI would allow time to establish safety protocols. It would certainly require unprecedented buy-in from tech companies and parents, but could better protect students while we organize a collective approach.
This isn’t about fear, but foresight. AI holds enormous potential to transform learning—especially for students with disabilities and multilingual learners. Its ability to differentiate lessons at scale could be revolutionary, but only if introduced responsibly.
Social media entered children’s lives with little preparation or regulation, and the consequences are undeniable: anxiety, depression, and suicidal ideation have climbed sharply among young people, and academic performance is declining.
AI has its own risks. Deepfakes and voice clones threaten truth itself and can lead to bullying and sexual harassment.
Large language models sometimes produce biased or false content, raising issues of accuracy and fairness. Equally troubling, AI “companions” have provided dangerous advice to vulnerable youth. These aren’t hypothetical risks—they’re here now.
A year of preparation, not stagnation
Just as we require driver’s education before kids drive alone, we should provide AI education before students use it independently.
This moratorium should not be mistaken for inaction. Rather, it should be a year of diligent preparation, designed to lay the foundation for safe and effective use of AI in education. Here are seven important steps:
- Educate parents. Parents often have little visibility into their children’s social media habits. Recent data from Common Sense Media shows that over 70% of students now interact with AI companions, but only 37% of parents are aware it’s happening. A national awareness campaign could educate parents about the risks and possibilities of AI and provide them with the resources to talk with their kids and set limits together.
- Convene big tech and the medical community. Rather than leaving each company to determine safeguards for its own offerings, convene a diverse group that includes leaders from big tech and the medical community to help establish a national student AI safety policy.
- Prepare educators and administrators. Teachers must be equipped and encouraged to understand AI’s strengths, limitations and appropriate classroom applications. And, as they will have the ultimate responsibility to teach AI literacy, professional development is essential, especially for teachers who feel students know more about AI than they do.
- Pilot tools purposefully. Instead of unleashing AI across all schools simultaneously, administrators should identify a diverse set of pilot districts—urban, rural and suburban—to test tools under controlled conditions. These pilots can shed light on benefits and unintended consequences, offering valuable lessons before scaling up.
- Establish a national clearinghouse. Schools are being flooded with AI products, many untested. An independent clearinghouse is needed to evaluate these tools for safety and instructional effectiveness. Teachers and administrators deserve a trusted, evidence-based source of guidance before adopting products that will shape student learning for years to come.
- Listen to student voices. The conversation on AI must include the voices of young people themselves. Let’s give youth agency and include representative groups in our discussions and plans. Their lived experiences with technology are invaluable in shaping guardrails and best practices.
- Build a national AI literacy assessment that all students must pass in order to use AI. Since the PISA international test will start assessing AI literacy skills in 2029, it makes sense to get ahead of this issue. Indeed, with a comprehensive, coordinated effort on our part, America could lead the world in AI literacy.
A responsible path forward
Academic achievement in our country has been stagnant or declining for over 10 years. AI could help reverse this trend and usher in a new era of personalized, student-centered learning. But only if it is introduced with care.
A one-year moratorium on unsupervised student use of AI is a prudent first step. It provides an opportunity to inform parents, prepare administrators and teachers, listen to students, test tools and create safeguards before full-scale adoption.
Implementing such a pause would not be easy; it would require coordination, compromise and a willingness to learn as we go. But even an imperfect effort would move us closer to a thoughtful, evidence-based approach.
If we can come together as educators, parents, policymakers and technology leaders, we can shape an AI future that strengthens education and sets students up for lasting success.