Adversarial Intelligence: Red Teaming Malicious Use Cases for AI
Recorded Future threat intelligence analysts and R&D engineers collaborated to test four malicious use cases for artificial intelligence (AI) to illustrate “the art of the possible” for threat actor use. We tested the limitations and capabilities of current AI models, ranging from large language models (LLMs) to multimodal image models and text-to-speech (TTS) models. All […]