We’re in a new era of cybersecurity that requires more advanced defenses. Traditional tactics like double extortion and phishing will stick around, but they’ll become much more complex thanks to one significant disruptor: generative artificial intelligence.
AI’s accessibility allows even the most bush-league hackers to cause harm to K12 schools and individual educators. Pikesville High School Principal Eric Eiswert was framed by the school’s former athletic director, Dazhon Darien, who used an AI voice-cloning service to fabricate a recording of Eiswert appearing to make racist and antisemitic remarks.
The technology’s ease of access will ultimately boost cybercriminals’ capability to mount social engineering attacks, says James Turgal, vice president of cyber risk and board relations at Optiv, a cybersecurity services and solutions provider.
“In contrast to traditional phishing and ransomware delivered through social engineering, generative AI is being exploited to enable convincing interaction with victims, including the curation of lure documents, without the translation, spelling and grammatical mistakes that often accompany traditional phishing attacks,” he explains.
AI-powered ransomware can also adapt in real time, he adds, modifying its source code on the fly to evade detection and sidestep the rules that trigger antivirus software.
“This will absolutely increase over the next two years as models evolve and uptake increases,” he predicts.
AI requires a new approach to cybersecurity
Cybercriminals are already using the technology to become more efficient. Regardless of a user’s cybersecurity skills, AI and large language models will make it harder to tell whether an email or password-reset request is genuine or a clever attempt to gain access to sensitive information.
One of the biggest issues is that IT teams now have less time to install new security updates before unpatched software is exploited, says Turgal.
“AI is highly likely to accelerate this challenge as reconnaissance to identify vulnerable devices becomes quicker and more precise.”
K12 leaders should be investing in robust cybersecurity defenses that leverage AI and machine learning for threat detection and response, Turgal advises.
“By analyzing network traffic, user behavior and endpoint activity in real-time, AI-powered solutions can help organizations identify and mitigate ransomware in step with the threat actors’ use of the same technology for bad intentions.”
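To make that idea concrete, the sketch below shows one common form of AI-assisted threat detection: training an anomaly-detection model on a baseline of normal network activity and flagging flows that deviate from it. This is a minimal illustration, not Optiv’s product or a district-ready tool; it assumes Python with scikit-learn and NumPy, and the traffic features and numbers are hypothetical.

# Minimal, illustrative sketch of ML-based anomaly detection on network flows.
# All feature names and values are hypothetical examples, not real district data.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per network flow: bytes sent, bytes received,
# connection duration (seconds), and number of distinct ports contacted.
rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=[5_000, 20_000, 30, 3],
                            scale=[1_000, 5_000, 10, 1],
                            size=(500, 4))

# Train on a baseline of "normal" school-network traffic.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new flows; a prediction of -1 means the model considers the flow anomalous.
new_flows = np.array([
    [5_200, 21_000, 28, 3],    # looks like routine traffic
    [900_000, 500, 2, 250],    # huge upload across many ports: possible exfiltration
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "ANOMALY - investigate" if label == -1 else "normal"
    print(flow, status)

In practice, districts would feed real flow logs, endpoint telemetry and authentication events into commercial or open-source platforms that apply the same idea continuously and at far greater scale.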
Navigating 2024-25
In addition to AI-powered attacks, more common tactics will continue to cause headaches for technology leaders. Phishing attacks are likely to increase, and Turgal predicts an exponential rise in “vishing” attacks that use deepfake synthetic media of teachers’ or administrators’ voices and videos.
Cybercriminals with access to quality training data, significant expertise and resources are more likely to execute sophisticated AI cyberattacks, says Turgal.
“AI will almost certainly make cyberattacks against the education sector more impactful because threat actors will be able to analyze data faster and more effectively and use it to train AI models,” he says.