The ransomware landscape has grown even more complex and volatile in 2025. Groups like FunkSec and Black Basta have reportedly used generative artificial intelligence (GenAI) and large language models (LLMs) to create ransomware code and enhance their social engineering attacks, respectively. Although adversaries are using multiple extortion techniques — sometimes even quadruple extortion — double extortion remains the attackers’ favored method. Despite high-profile law enforcement takedowns, threat actors show strong resilience — regrouping, rebranding, or forming new groups to quickly fill any vacuum created by the dissolution of a dominant group.
Adversaries are rapidly integrating AI and LLMs to increase the scale, sophistication, and efficiency of their operations. Ransomware groups such as FunkSec use GenAI to generate malicious code, create new ransomware variants, and deploy chatbots that negotiate with victims. Attackers also employ AI to craft convincing phishing emails and to conduct voice phishing (“vishing”) attacks that impersonate company personnel. Advanced persistent threat groups have begun using GenAI on a limited scale as well: Forest Blizzard (aka Fancy Bear) and Emerald Sleet reportedly leveraged LLMs to mimic official documents in phishing campaigns and to conduct vulnerability research, respectively. Meanwhile, emerging tools such as WormGPT, DarkGPT, and FraudGPT are helping cybercriminals increase both the scale and effectiveness of their attacks.