Artificial intelligence holds immense power to enhance the student learning experience. However, large language models are not search engines, asserts new research by Cognitive Resonance. Administrators and educators must understand the risks associated with AI as they find innovative ways to use it in their schools.
Interactions with large language models, also known as LLMs, feel conversational, which often leads users to assume that the technology “knows” or “understands” the topic at hand. This can make LLMs appear more “authoritative” than they actually are, the report contends.
“LLMs do not determine what response would be best suited to your particular needs and they do not necessarily produce responses that are true,” the researchers wrote. “By design, they are intended to be helpful and they try to do this by offering plausible responses to prompts they have been given. But their responses are often wrong.”
To better support educators leading AI-based initiatives in their districts, the report outlines four “hazards” of generative AI. For each hazard, it examines the impact on lesson planning, generating instructional materials, grading, tutoring and administrative tasks. Let’s briefly cover those risks:
1. What are LLMs designed to do? Predict text
As mentioned previously, LLMs should not be treated like search engines. Instead, they’re designed to address the following scenario: “Here’s a fragment of text. Tell me how this fragment might go on. According to your model of the statistics of human language, what words are likely to come next?”
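To make that concrete, here is a minimal, hypothetical sketch of next-word prediction over a tiny toy corpus. Real LLMs use neural networks trained on vastly more text, and this is not how the report describes any particular product, but the underlying task is the same: continue a fragment with statistically likely words, with no check on whether the result is true.

```python
# Toy illustration of next-word prediction: count which word follows each word
# in a tiny sample corpus, then "continue" a fragment by repeatedly picking the
# most frequent follower. There is no notion of truth, only likelihood.
from collections import Counter, defaultdict

corpus = (
    "the student reads the book . the teacher reads the lesson plan . "
    "the student writes the essay ."
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def continue_fragment(fragment, steps=4):
    words = fragment.split()
    for _ in range(steps):
        last = words[-1]
        if last not in followers:
            break
        # Append the statistically most likely next word.
        words.append(followers[last].most_common(1)[0][0])
    return " ".join(words)

print(continue_fragment("the teacher"))  # plausible-sounding, not necessarily meaningful
```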
When it comes to lesson planning, for instance, LLMs may not correctly predict what sequence of lessons would effectively build students’ knowledge.
2. Do LLMs learn the way that humans do? No
By design, LLMs are exposed to a biased sample of cultural practices and values. They do not distinguish useful data from misleading data during their training.
“We do not know precisely how individual companies fine-tune their LLMs,” the report reads. “Numerous scholars have called attention to existing and potential biases that presently exist in LLMs that result from how they are trained.”
When it comes to tutoring, LLMs generally do not learn from their interactions with students, the researchers argue. Their capabilities almost entirely stem from their training data.
3. Can LLMs reason? Not like humans
Similarly, LLMs produce their responses based on pattern matching, not reasoning. Users should be wary of any definitive statement suggesting that this technology is capable of reasoning the way that humans do.
Administrative uses, such as creating professional development on LLMs, should not focus exclusively on “prompt engineering.” Instead, leaders should devote time to helping educators build general knowledge about how LLMs function, and then evaluate the impact on learning outcomes.
4. Does AI make the content we teach in schools obsolete? No
Above all, the effective use of LLMs requires human knowledge and expertise, the report reads. Knowledge cannot be outsourced to AI, and students who lack a broad base of knowledge will not be able to make the most of it.
Take math, for instance. Unlike computers or calculators, LLMs generally do not solve math problems by applying formal mathematical rules. Instead, they treat math-related prompts as text and then predict what text to produce.
“This can lead to outputs that ‘sound right’ but sometimes include mathematical or logical errors that students may miss,” the researchers wrote. “Educators will need to monitor LLM output carefully if used for math instruction.”
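That kind of monitoring can be done mechanically. Below is a minimal sketch, with a made-up problem and a hypothetical (deliberately wrong) model answer, that checks a claimed result against an exact computation; the exact arithmetic is what a calculator does by rule, while the quoted answer string stands in for predicted text.

```python
# A hypothetical LLM response is checked against an exact computation.
# The response text below is invented for illustration; Python applies
# formal arithmetic rules, while an LLM only predicts plausible text.

problem = "What is 478 * 236?"
hypothetical_llm_answer = "478 * 236 = 112,708"  # sounds right, but is not

# Apply the formal rule: actually multiply the numbers.
exact_result = 478 * 236

# Pull the number out of the claimed answer and compare.
claimed = int(hypothetical_llm_answer.split("=")[-1].replace(",", "").strip())

if claimed == exact_result:
    print("Answer checks out:", exact_result)
else:
    print(f"Mismatch: model claimed {claimed}, exact result is {exact_result}")
```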
For more insight, we encourage you to take a deeper look at the comprehensive report here.