Jump to content

Artificial intelligence red-teaming

From Hackerpedia
Revision as of 00:12, 15 January 2026 by imported>Unknown user
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)

Languages: English | Français

Artificial intelligence red-teaming

A structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.


Source: NIST SP 800-218A | Category: