If you haven’t yet had the chance, or have perhaps been too hesitant, to experiment with artificial intelligence for education-related purposes, now is the time. Here’s how.
Turnitin, a popular plagiarism detection service used by educators, has announced new offerings as students return to school for the 2024-25 academic year. These expansions are designed to help teachers streamline their work with AI-powered grading, enhanced AI writing detection and customizable, individualized feedback for students.
These innovations come at a time when only 40% of instructors and administrators have adopted AI into their workflow, compared to 59% of students, recent research from Tyton Partners reveals.
“This school year, AI will likely be in every classroom,” Turnitin Chief Product Officer Annie Chechitelli said in a statement. “Yet, there’s still a disconnect between students and educators about what constitutes acceptable generative AI use and access to technology to support learning.”
Here are some details about these resources:
A paper-to-digital add-on
In 2024, AI in education goes far beyond plagiarism detection. If used correctly, it can make the grading process much more streamlined for educators.
This new add-on uses handwriting recognition and AI-assisted question-by-question grading of paper quizzes, tests and assignments, helping teachers provide more precise feedback.
Enhanced Similarity Report
Turnitin first launched its AI writing detection tool in April 2023. It’s received some upgrades since then.
The Similarity Report now helps instructors identify unoriginal or improperly cited student writing. It’s also been redesigned for use as a formative learning tool to bolster writing skills.
These updates come soon after Turnitin launched its AI paraphrasing detection feature, which allows educators to identify when an AI tool was used to paraphrase a piece of text in an attempt to avoid detection.
More DA coverage
Check out District Administration’s latest coverage surrounding AI in K12:
Tackle these 4 big risks when experimenting with AI
Interactions with large language models feel conversational, which often leads users to assume the technology “knows” or “understands” the topic at hand. This can make LLMs appear more “authoritative” than they actually are, a report contends.
AI-powered ransomware is inevitable. Heed this advice
In the next two years, AI-powered ransomware will be able to modify malware code in real time to avoid detection, predicts one expert. Here’s what you can do to prepare.