
Poisoning attacks

From Hackerpedia

Adversarial attacks in which an attacker interferes with a machine learning model during its training stage, for example by inserting malicious training data (data poisoning) or by modifying the training process itself (model poisoning).

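As a concrete illustration (not part of the NIST definition), the following minimal Python sketch shows one simple form of data poisoning, label flipping: a fraction of the training labels is flipped before the victim trains a classifier, degrading the model learned from the tampered data. The dataset, model, and poisoning rate are hypothetical choices made only for this example.

# Minimal sketch of a label-flipping data poisoning attack.
# The dataset, classifier, and 30% poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary classification data standing in for a victim's training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, rate, rng):
    """Flip the labels of a random fraction of training points (data poisoning)."""
    poisoned = labels.copy()
    n_flip = int(rate * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: 0 <-> 1
    return poisoned

# Train one model on clean data and one on poisoned data, then compare.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, rate=0.3, rng=rng)
)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))

Model poisoning, by contrast, would tamper with the training process itself (for example, the updates or parameters produced during training) rather than with the training data.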

Source: NIST AI 100-2e2025