Just over a year since the introduction of ChatGPT to the public, educators in school districts of every size and demographic description are grappling with the massive implications of AI and large language models (LLMs) for their students, teachers and outcomes.
“Wait and see” isn’t an adequate response to the forces being unleashed, but fortunately there’s a solid new framework for progress that education leaders should examine and implement. The recent report from the U.S. Department of Education’s Office of Educational Technology (OET) provides sound AI guidelines to help districts adapt to one of the fastest-moving technology waves ever created.
What set last year’s ChatGPT release apart was that non-computer scientists could interact easily and extensively with an LLM, gaining access to an astonishing assemblage of much of the world’s existing knowledge base. That functionality was simple, fast and cheap, and the combination has driven 180 million unique monthly users to ChatGPT alone.
The OET report recommendations start with a strong admonition for educators to “emphasize humans in the loop.” That’s an idea that calls to mind a current saying in AI circles that goes, “If you’re worried about your job being made obsolete by AI, you shouldn’t be. It will be made obsolete by a human using AI.” Informed, involved human design and oversight are necessary for effective AI deployment.
The report also calls for education leaders to develop a shared vision for AI-assisted education, and argues that building that vision on top of modern, optimized learning frameworks is critical. Its other key recommendations include:
- Building trust with all constituencies in mind to avoid the traps that come with LLMs’ tendency to “hallucinate.”
- Working diligently to involve front-line educators in policy decisions and keeping lines of communication wide open during this time of rapid change.
- Driving the development of “guidelines and guardrails” that promote the safety and transparency of both the technology and educators’ policies toward it.
AI optimism is running high
Against the backdrop of the OET report, early real-world AI educational use cases are emerging—as are the problems and questions they create. Optimism runs high on using AI for personalized learning. Khan Academy is pioneering this with Khanmigo, a chatbot capable of providing individualized guidance on math, science, and humanities problems, and Harvard recently released the CS50 bot to assist students in its Introduction to Computer Science course.
At the same time, the extent of technology development required for these tutoring bots shows that the general capabilities of public LLMs are not sufficient on their own to support one-on-one tutoring, and that specialized AI applications will need to be developed.
These applications will require custom content training of LLMs, plus the capabilities to deliver content, assess learning progress, provide feedback for continuous improvement and safeguard against inappropriate or unexpected responses. Developing such AI applications is non-trivial and will require significant investment; education technology vendors and large, well-funded institutions are best positioned to do so.
That’s just the beginning, of course. LLMs hold the potential to make significant contributions in almost every phase of education, including:
- Curriculum design
- Educational content creation
- Task automation
- Broadening education access
- Systemic performance evaluation
- 24/7 teaching/tutoring assistance
“Creative destruction” ahead
Given the reach and potential of AI, the OET’s call for a shared vision for a given district is a particularly critical one. It will require a broad rethink of what the classroom will look like in a world where each student has their own personalized and tutored learning plan. Will the academic year be structured the same way? Will students be onsite 100% of the time? Will daily schedules be more customized? Will facilities change?
In the past 12 months educational institutions have responded to LLMs in varied ways, ranging from banning their use to actively encouraging it. As time goes by, however, bans simply are not a viable option for education leaders; strategy becomes paramount as the technology becomes pervasive.
Education leaders will need to establish key policies and guardrails to govern the use of AI in education. They’ll also need to ensure appropriate levels of investment across differently resourced districts to help realize the technology’s potential and achieve the greatest degree of equity possible in its deployment.
AI technology has the potential to dramatically transform education in the US. The adoption of the technology and the use cases it drives will doubtless come in fits and starts, with a kind of creative destruction that will likely be both exhilarating and painful.
Education leaders will need to up their AI game to guide their local education agencies through this turbulent period, bringing clear-eyed strategy, thoughtful policies and appropriate training to help harness the technology’s potential and avoid its worst pitfalls.
Patrik Dyberg is a managing director with Alvarez & Marsal Corporate Performance Improvement in Washington, D.C. He specializes in driving substantial business outcomes by leveraging technology.
Paul Tearnen is a managing director with Alvarez & Marsal Public Sector in Seattle, Washington. Tearnen specializes in leading complex business and IT initiatives, standing up new organizations and turning around underperforming teams.