Jump to content

Red teaming: Difference between revisions

From Hackerpedia
imported>Unknown user
No edit summary
 
imported>Unknown user
No edit summary
 
(2 intermediate revisions by the same user not shown)
(No difference)

Latest revision as of 00:24, 20 January 2026

Languages: English | Français

Red teaming

In the AI context, means a structured testing effort, often adopting adversarial methods, to find flaws and vulnerabilities in an AI system, including unforeseen or undesirable system behaviors or potential risks associated with the misuse of the system.


Source: NIST AI 100-2e2025 | Category: