Artificial intelligence red-teaming

A structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.

Source: NIST SP 800-218A