
Poisoning attacks

Attacks in which an adversary interferes with a model during its training stage, such as by inserting malicious training data (data poisoning) or by modifying the training process itself (model poisoning). The data-poisoning case is illustrated in the sketch below.
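
A minimal sketch of the data-poisoning case, using label flipping on a plain NumPy label vector. The flip_labels helper and the poison_fraction parameter are illustrative names, not drawn from NIST AI 100-2e2025.

 # Data poisoning via label flipping (illustrative sketch).
 import numpy as np

 def flip_labels(y, poison_fraction=0.1, num_classes=2, seed=0):
     """Return a copy of y with a random fraction of labels flipped."""
     rng = np.random.default_rng(seed)
     y_poisoned = y.copy()
     n_poison = int(len(y) * poison_fraction)
     idx = rng.choice(len(y), size=n_poison, replace=False)
     # Shift each chosen label to a different class so it is always wrong.
     y_poisoned[idx] = (y_poisoned[idx]
                        + rng.integers(1, num_classes, size=n_poison)) % num_classes
     return y_poisoned

 # Example: corrupt 20% of a toy binary-label vector before training.
 y_clean = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
 y_dirty = flip_labels(y_clean, poison_fraction=0.2)

Training on y_dirty instead of y_clean corresponds to the adversary having inserted mislabeled examples into the training set; model poisoning would instead tamper with the training procedure itself (for example, the update rule or aggregation step).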


Source: NIST AI 100-2e2025