Jump to content

Red teaming

From Hackerpedia
Revision as of 00:24, 20 January 2026 by imported>Unknown user
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)

Languages: English | Français

Red teaming

In the AI context, means a structured testing effort, often adopting adversarial methods, to find flaws and vulnerabilities in an AI system, including unforeseen or undesirable system behaviors or potential risks associated with the misuse of the system.


Source: NIST AI 100-2e2025 | Category: