
Adversarial Intelligence: Red Teaming Malicious Use Cases for AI

March 19, 2024

Recorded Future threat intelligence analysts and R&D engineers collaborated to test four malicious use cases for artificial intelligence (AI) to illustrate “the art of the possible” for threat actor use. We tested the limitations and capabilities of current AI models, ranging from large language models (LLMs) to multimodal image models and text-to-speech (TTS) models. All projects were undertaken using a mix of off-the-shelf and open-source models, without fine-tuning or training, to simulate realistic threat actor access.

Based on the availability of current tools and the outcome of these four experiments, we assess that malicious uses of AI in 2024 will most likely emerge from targeted deepfakes and influence operations. Deepfakes can already be produced with open-source tools and used to impersonate executives in social engineering campaigns that combine AI-generated audio and video with conference-call and VoIP software. The cost of producing content for influence operations will likely fall by a factor of 100, and AI-assisted tools can help clone legitimate websites or spin up fake media outlets. Malware developers can also abuse AI alongside readily available detection rules, such as YARA rules, to iteratively modify malware strains until they evade detection. Finally, threat actors of all resource levels will likely benefit from using AI for reconnaissance, including identifying vulnerable industrial control system (ICS) equipment and geolocating sensitive facilities from open-source intelligence (OSINT).

Current limitations center on whether open-source models can perform close to state-of-the-art (SOTA) commercial models and on whether the security guardrails on commercial solutions can be bypassed. Given the diverse applications of deepfake and generative AI models, multiple sectors are anticipated to make significant investments in these technologies, which will enhance the capabilities of open-source tools as well. This dynamic was previously observed in the offensive security tool (OST) space, where threat actors extensively adopted open-source frameworks or leaked, closed-source tools such as Cobalt Strike. Significant decreases in cost and time will likely lead to a wider variety of threat actors of all technical levels using these attack vectors against a growing number of organizations.

In 2024, organizations need to widen their conception of their attack surface to include their executives' voices and likenesses, their websites and branding, and public imagery of their facilities and equipment. Furthermore, organizations need to begin preparing for more advanced uses of AI, such as self-augmenting malware capable of evading YARA detections, which would require the adoption of stealthier detection methods such as Sigma or Snort rules.
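To make the YARA-evasion scenario concrete, the sketch below shows, in Python, the general shape of an iterate-and-rescan loop: mutate a sample, re-check it against a published string-based rule, and repeat until the rule no longer fires. It assumes the open-source yara-python bindings; the rule, the sample, and the `rewrite_with_llm` helper are hypothetical placeholders and do not represent the harness used in the report's experiments.

```python
# Minimal sketch of an AI-assisted "mutate until the YARA rule misses" loop.
# Assumptions: yara-python is installed (pip install yara-python), and
# rewrite_with_llm() is a hypothetical wrapper around an LLM that rewrites
# source code while preserving behavior. Illustrative only.
import yara

# A trivial rule keyed on a hardcoded string, standing in for a published detection.
RULE = yara.compile(source=r'''
rule demo_hardcoded_marker {
    strings:
        $s = "EVIL_BEACON_V1"
    condition:
        $s
}
''')

def is_detected(sample_bytes: bytes) -> bool:
    """Return True if the published rule still matches the sample."""
    return bool(RULE.match(data=sample_bytes))

def rewrite_with_llm(source_code: str, rule_text: str) -> str:
    """Hypothetical LLM call: ask the model to rewrite the code so the rule's
    literal strings no longer appear, without changing program behavior."""
    raise NotImplementedError("placeholder for an LLM API call")

def iterate_sample(source_code: str, rule_text: str, max_rounds: int = 5) -> str:
    """Repeatedly rewrite and rescan until the string-based rule stops firing."""
    candidate = source_code
    for _ in range(max_rounds):
        if not is_detected(candidate.encode()):
            return candidate  # current variant evades the string-based rule
        candidate = rewrite_with_llm(candidate, rule_text)
    return candidate
```

Because a loop like this needs only a local copy of the rule and repeated model calls, it illustrates why detections keyed on fixed strings are easy to iterate against, whereas behavior- and telemetry-based detections (for example, Sigma or Snort rules) are harder targets for this kind of automated rewriting.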


About the Provider

Recorded Future
Recorded Future is a privately held cybersecurity company founded in 2009 with headquarters in Somerville, Massachusetts. The company specializes in the collection, processing, analysis, and dissemination of threat intelligence.

TOPICS

Adversarial Intelligence, Artificial Intelligence