
CrazyHunter Ransomware
CrazyHunter is a ransomware campaign targeting healthcare that weakens endpoint defenses and escalates privileges before encrypting systems at scale.

The theme of Cybersecurity Awareness Month in October 2025 is Stay Safe Online. It draws attention to recognizing and reporting scams, one of the Core Four steps individuals can take to protect themselves from cyber threats, and our key focus this week.
When it comes to scams, artificial intelligence (AI)-driven social engineering is changing the cyber threat landscape and making some defense strategies obsolete. Many customers use the threat intelligence we gather through direct threat actor engagement to ground their cybersecurity awareness training in real-world adversary tactics and techniques. Intelligence-informed training and drills are critical for key roles at higher risk of being targeted by specific threats (such as CEO-impersonation fraud) and for employees whose system privileges would magnify the impact of phished credentials.
The Impact of AI on Social Engineering
The widespread availability of AI tools has lowered barriers in cost, effort and skill to execute targeted phishing. In the past three years, we have seen AI transform social engineering from generic scams to highly sophisticated, personalized campaigns built to exploit specific human trust signals across text, audio and video/multimedia channels. AI has enhanced the scale, precision and psychological impact of social engineering attacks.
Threat actors are also expanding beyond credential harvesting, using AI-generated lures impersonating senior executives for business email compromise (BEC), fake corporate announcements and public opinion manipulation. AI tools are also fueling remote work fraud, face-swap sextortion and scams against individuals. As detailed in our new white paper, Precision Deception: Rise of AI-Powered Social Engineering, these threats go beyond fraud, posing significant risks to psychological well-being, organizational reputation and societal integrity.
The broad application of AI makes it challenging to track how social engineering tactics and techniques are changing. Understanding this evolving landscape is essential to building defenses against increasingly convincing spear-phishing, vishing, voice impersonation and deepfakes, which traditional awareness training and automated detection struggle to counter.
Classic red flags, like spelling and grammar errors, are no longer reliable when large language models (LLMs) can instantly generate fluent, contextually relevant content using near-perfect brand mimicry. AI-driven polymorphic phishing that dynamically changes each email’s content and characteristics can beat phishing filters. Vishing and voice deepfakes face fewer digital obstacles, while flawless AI-generated faces help fraudsters defeat know-your-customer (KYC) controls.
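To illustrate why polymorphic phishing defeats signature-based filters, the minimal sketch below compares two hypothetical AI-rewritten variants of the same lure (the sample texts are invented for illustration). An exact-hash blocklist never matches a rewritten variant, and even surface-level string similarity stays low although the intent is identical.

```python
import hashlib
from difflib import SequenceMatcher

# Two hypothetical AI-generated variants of the same phishing lure.
variant_a = "Hi Dana, your mailbox quota is full. Verify your account here to avoid interruption."
variant_b = "Hello Dana, your inbox storage has been exceeded. Please confirm your account to keep access."

# A filter keyed on exact content hashes misses the rewrite:
# every variant produces a brand-new signature.
sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()
print("identical signatures:", sig_a == sig_b)  # False

# Surface-level similarity is also low, even though the intent is the same,
# so near-duplicate matching on wording alone is unreliable.
ratio = SequenceMatcher(None, variant_a, variant_b).ratio()
print("similarity ratio:", round(ratio, 2))
```

This is why detection increasingly has to key on intent and delivery signals (sender authentication, link reputation, behavioral context) rather than on the message text itself.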
New AI video generation tools will likely fuel deepfake threats. That is why we recommend (see Recommendations below) that organizations conduct regular drills for executives and staff using real examples of deepfake calls and AI-generated messages. Media literacy training can also help employees recognize synthetic media cues.
Why Our Analysts Focus on Human Trust Signals
AI has shifted social engineering from volume-based attacks to campaigns built on quality and on tactics that adapt to a target's response. That is why our analysts focus on how modern attacks exploit human trust signals (psychological cues and behaviors) across text, voice and visual channels. Attackers leverage distinct trust signals, such as language, tone, urgency and visual appearance, and employ different AI capabilities on each channel.
| Channel | Focus and Target | Key AI-Powered Tactics/Examples |
|---|---|---|
| Text-based Attacks | Mimics language-based trust signals, focusing on language and context for broad or personalized deception via emails, chats or messaging platforms. | AI enables high-volume, polymorphic, and personalized campaigns. LLMs generate fluent, multilingual, and brand-consistent content for spear-phishing and polymorphic phishing that evades detection. |
| Voice-based Attacks | Exploits trust in the human voice—tone, urgency, real-time interaction—targeting high-value individuals through impersonated calls or messages. | Neural voice cloning (audio deepfakes) replicates voices with minimal data, enabling vishing and CEO impersonation fraud. Real-time voice conversion tools allow live impersonation. |
| Visual and Multimedia Deception | Targets trust in appearance, using synthetic images and deepfake videos. | Deepfake videos mimic facial expressions and speech for executive impersonation or disinformation. AI-generated images create hyper-realistic profiles for BEC and credential theft. |
Our analysts used this framework to analyze the tools, tactics, and targeting strategies threat actors use in AI-driven social engineering as well as to anticipate how AI adoption will evolve. We also provide channel-specific recommendations to address gaps in traditional detection systems and user awareness training.
AI Adoption in Underground Services
While generative AI may have transformed social engineering attacks, our analysis through July 2025 shows that, in phishing, most threat actors still rely on familiar, cost-effective phishing-as-a-service (PhaaS) platforms. The recently dismantled LabHost PhaaS offered end-to-end phishing management for about US $200 per month. The white paper also examines SheByte, a new PhaaS and possible successor to LabHost that lets users generate phishing templates with AI.
Recent underground offerings include jailbroken LLMs and prompts, AI-driven call center platforms for phishing, voice bots to elicit one-time passcodes for payment fraud and deepfake video generation services. Numerous threat actors now offer deepfake videos, fake documents, and media manipulation services for face-swapping, lip-syncing, voice-overs, facial manipulation and photo-to-video conversions, often for bypassing KYC controls.
Despite this diversity of offerings, our analysts found that AI is mainly used for content drafting and localization. There remain significant barriers to full AI-driven automation, including the cost and complexity of training and integrating AI models into attacker infrastructure and delivery systems. Meanwhile, mass phishing, BEC and traditional social engineering remain highly effective against users without awareness training.
AI-assisted vishing and voice deepfakes present unique challenges. Voice impersonation relies on social cues and psychological pressure, facing fewer digital obstacles than email phishing. Voice clones are employed to manipulate victims into transferring funds, divulging credentials or granting access to sensitive information — often under the guise of urgency or authority.
What is the Outlook for AI-driven Social Engineering?
Our analysts expect selective escalation rather than mass adoption of AI. Deepfake-enabled impersonation calls targeting executives and AI-voiced fraud against high-value targets will likely increase, along with a surge in synthetic media during elections, geopolitical events, and social debates.
Widespread adoption in cybercrime will depend on lower model hosting costs and the emergence of “state-of-the-art” AI kits akin to today’s PhaaS offerings. Until then, generative AI will continue to refine existing tactics for financially motivated actors.
Recommendations
To counter the growing threat of AI-powered impersonation, organizations should implement a multi-layered defense strategy:
- Conduct regular drills for executives and staff using real examples of deepfake calls and AI-generated messages.
- Provide media literacy training so employees can recognize synthetic media cues.
- Ground awareness training in current threat intelligence, prioritizing roles at higher risk of targeted impersonation and employees with elevated system privileges.
- Supplement traditional phishing filters and KYC controls with channel-specific detection that accounts for polymorphic content, cloned voices and synthetic imagery.

By adopting these measures, organizations can significantly reduce their risk from AI-driven social engineering and impersonation attacks.
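One technical layer that complements training is automated screening of sender authentication results before a message reaches a high-risk inbox. The sketch below parses the standard Authentication-Results header (RFC 8601) and flags messages that fail DMARC; the message content and domain names are invented for illustration, and a production filter would rely on the mail gateway's own verdicts rather than a hand-rolled parser.

```python
import email
from email import policy

# Hypothetical raw message, as an MTA might deliver it after authentication checks.
RAW_MESSAGE = b"""\
From: "CEO" <ceo@example.com>
To: finance@example.com
Subject: Urgent wire transfer
Authentication-Results: mx.example.com; spf=pass; dkim=fail; dmarc=fail
Content-Type: text/plain

Please process the attached invoice today.
"""

def auth_verdicts(raw: bytes) -> dict:
    """Extract per-mechanism verdicts from the Authentication-Results header."""
    msg = email.message_from_bytes(raw, policy=policy.default)
    header = msg.get("Authentication-Results", "")
    verdicts = {}
    for part in str(header).split(";")[1:]:  # skip the leading authserv-id
        part = part.strip()
        if "=" in part:
            mechanism, _, result = part.partition("=")
            verdicts[mechanism.strip()] = result.split()[0]
    return verdicts

verdicts = auth_verdicts(RAW_MESSAGE)
suspicious = verdicts.get("dmarc") != "pass"
print(verdicts)
print("quarantine for review:", suspicious)
```

A failed DMARC verdict on a message claiming executive authority and urgency is exactly the combination the campaigns described above rely on, so routing such messages to quarantine or a verification workflow removes the time pressure the attacker needs.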

