adversarial attacks
Adversarial attacks are deliberate attempts to manipulate or deceive artificial intelligence systems by feeding them carefully crafted inputs. These inputs may look harmless, or even indistinguishable from normal data, to a human observer, yet they mislead machine learning models into making incorrect or unintended predictions or decisions.
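A common way to craft such an input is the Fast Gradient Sign Method (FGSM): the attacker nudges each input feature in the direction that increases the model's loss. The sketch below is a minimal, illustrative example on a toy logistic-regression model; the weights, input, and step size `eps` are invented for demonstration and the (deliberately large) `eps` exaggerates the effect so the prediction flips.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Craft an adversarial example by stepping the input in the
    direction that increases the loss (sign of the input gradient)."""
    p = sigmoid(np.dot(w, x) + b)
    # Gradient of the binary cross-entropy loss w.r.t. the input x:
    # dL/dx = (p - y_true) * w
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy model and an input the model classifies correctly (true class 1).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([1.0, -1.0, 1.0])
y_true = 1.0

p_clean = sigmoid(np.dot(w, x) + b)    # confidently predicts class 1
x_adv = fgsm_perturb(x, w, b, y_true, eps=2.0)
p_adv = sigmoid(np.dot(w, x_adv) + b)  # pushed toward class 0
```

On real image classifiers the same idea works with a much smaller `eps`, which is why the perturbed input can remain visually indistinguishable from the original while still flipping the model's decision.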
Similar Concepts
- adversarial anomaly detection
- adversarial deep learning
- adversarial detection and defense
- adversarial examples
- adversarial examples and attacks on ai systems
- adversarial feature learning
- adversarial image classification
- adversarial machine learning
- adversarial perturbations
- adversarial privacy attacks
- adversarial risk analysis
- adversarial robustness
- adversarial training
- adversary profiling
- targeted attacks