adversarial perturbations
"Adversarial perturbations" refer to deliberately introduced alterations or distortions made to digital inputs, such as images or text, in order to deceive or mislead artificial intelligence systems. These perturbations are carefully designed to be imperceptible to humans, but can cause significant misclassifications or misleading results when processed by machine learning models.
Similar Concepts
- adversarial anomaly detection
- adversarial attacks
- adversarial autoencoders
- adversarial deep learning
- adversarial detection and defense
- adversarial examples
- adversarial feature learning
- adversarial image synthesis
- adversarial input synthesis
- adversarial machine learning
- adversarial privacy attacks
- adversarial reinforcement learning
- adversarial risk analysis
- adversarial robustness
- adversarial training