
Artificial intelligence (AI) is a double-edged sword: the same technology that can draft emails or write software code can also be exploited for nefarious purposes. Cybercriminals are already leveraging AI to enhance email-based attacks such as phishing, crafting highly convincing scams and rapidly generating new malware variants.
This has sparked seismic change in cybersecurity, driving equal parts opportunity and risk. MSPs need to recognize that AI can dramatically increase the scale, sophistication, and personalization of email attacks, making them harder to detect and defend against than traditional phishing.
How criminals exploit AI
Recently, a group of researchers from Columbia University and the University of Chicago worked with Barracuda to analyze a large dataset of unsolicited and malicious emails covering February 2022 to April 2025.
As part of the analysis, researchers developed detectors to automatically identify whether a malicious or unsolicited email was generated using AI. This approach rested on the assumption that emails sent before the public release of ChatGPT in November 2022 were almost certainly authored by humans, giving the researchers a reliable baseline for measuring the detector's false-positive rate.
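The researchers' actual detector is not described here, but the baseline idea can be sketched in a few lines of Python. Everything below is a hypothetical illustration, not Barracuda's method: `looks_ai_generated` is a deliberately crude stand-in for a real classifier, and the date cutoff is the only detail taken from the article.

```python
from datetime import date

# ChatGPT's public release; emails dated earlier are assumed human-written.
CHATGPT_RELEASE = date(2022, 11, 30)

def looks_ai_generated(text: str) -> bool:
    """Toy stand-in for a real AI-text classifier (illustrative only):
    flags prose made of long, uniform sentences."""
    sentences = [s for s in text.replace("!", ".").split(".") if s.strip()]
    avg_words = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    return avg_words > 15

def false_positive_rate(emails):
    """Emails dated before ChatGPT's release are treated as human-written,
    so any 'AI' flag on them counts as a false positive."""
    pre_release = [(d, t) for d, t in emails if d < CHATGPT_RELEASE]
    flagged = sum(looks_ai_generated(t) for _, t in pre_release)
    return flagged / max(len(pre_release), 1)
```

The design point is the labeling trick, not the classifier: because no pre-November-2022 email can be ChatGPT-generated, that slice of the dataset doubles as ground truth for calibrating how often the detector cries wolf.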
Analysis of the dataset showed a sharp increase in AI-generated content after ChatGPT’s release. According to the researchers, AI is predominantly utilized in spam, with over 51 percent of spam emails being generated by AI as of April 2025. In contrast, AI use in BEC attacks, which are targeted and precise and often involve high-level victims, is growing more slowly, having reached 4 percent by April 2025.
Why this should be on every MSP’s radar
According to Gartner, by 2026, 60 percent of enterprises are expected to suffer major security incidents linked to GenAI misuse or vulnerabilities in AI pipelines.
Last year, the FBI’s Internet Crime Complaint Center (IC3) warned the public about criminals using AI to facilitate financial fraud. The report highlighted that criminals can now craft messages faster, allowing them to reach a broader audience with credible content. Additionally, foreign criminal actors targeting victims in the United States are employing generative AI tools for language translation and to reduce grammatical and spelling errors.
This confirms what many MSPs already know: Combining traditional email security with GenAI-aware defenses is no longer optional – it’s mission-critical.
Building generative AI-resilient email security
To protect clients, MSPs must rethink email defense with AI-driven threats in mind and look for solutions built to counter AI-based email attacks. Features like automated incident response and LLM-aware threat detection can help MSPs:
- Stop AI-driven phishing and impersonation by analyzing tone, context, and unusual email formatting, not just keywords.
- Provide real-time threat intelligence and response at scale.
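As a rough illustration of what "analyzing context, not just keywords" means in the first bullet, here is a minimal rule-based scorer. The signal names, regex patterns, and weights are invented for this sketch and are far simpler than what commercial LLM-aware detectors actually do:

```python
import re

# Hypothetical content signals (illustrative only).
SUSPICIOUS_SIGNALS = {
    "urgency": re.compile(r"\b(urgent|immediately|wire transfer|act now)\b", re.I),
    "impersonation": re.compile(r"\b(ceo|cfo|payroll|invoice)\b", re.I),
}

def score_email(sender_domain: str, expected_domain: str, body: str) -> int:
    """Crude risk score combining a context check (sender domain mismatch,
    e.g. a lookalike domain) with content signals in the body."""
    score = 0
    if sender_domain.lower() != expected_domain.lower():
        score += 2  # spoofed or lookalike sender domain
    for pattern in SUSPICIOUS_SIGNALS.values():
        if pattern.search(body):
            score += 1
    return score
```

Even this toy version shows why keyword lists alone fall short against AI-written phishing: a fluent, typo-free message sails past spelling checks, so the domain mismatch and pressure-tactic context carry most of the signal.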
Education also remains a powerful and effective protection against AI-generated attacks. MSPs should invest in security awareness training for employees to help them understand the latest threats and how to spot them, and they should encourage employees to report suspicious emails.
The bottom line for MSPs
AI is revolutionizing both how businesses operate and how malicious actors attack them. MSPs are well positioned to become heroes for their clients by implementing advanced email security solutions and proving their value as trusted defenders in an AI-driven threat landscape.
This article was originally published at Managed Services Journal.
Photo: metamorworks / Shutterstock
This post originally appeared on Smarter MSP.

