Red teaming: Difference between revisions
Appearance
imported>Unknown user No edit summary |
imported>Unknown user No edit summary |
(No difference)
| |
Revision as of 02:53, 15 January 2026
Red teaming
In the AI context, means a structured testing effort, often adopting adversarial methods, to find flaws and vulnerabilities in an AI system, including unforeseen or undesirable system behaviors or potential risks associated with the misuse of the system.
Source: NIST AI 100-2e2025 | Category: