adversarial robustness
Adversarial robustness is the ability of a system or model to maintain correct behavior under malicious attempts to fool or manipulate it. It involves designing or training algorithms that handle unexpected or adversarial inputs accurately, even when those inputs are crafted specifically to deceive the model or degrade its performance.
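A common way to probe robustness is to craft a small, targeted perturbation of an input and check whether the model's prediction changes. Below is a minimal sketch of one such attack, the Fast Gradient Sign Method (FGSM), written against an assumed differentiable PyTorch classifier; the model, data, and epsilon value are illustrative placeholders, not part of the original entry.

```python
# Minimal FGSM sketch (assumes PyTorch is available); illustrative only.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial example by stepping along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Shift each feature by epsilon in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage with a hypothetical linear classifier on random data.
model = nn.Linear(10, 3)
x = torch.randn(4, 10)
y = torch.randint(0, 3, (4,))
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude is bounded by epsilon
```

Adversarial training, listed among the similar concepts below, folds such crafted inputs back into the training loop so the model learns to classify them correctly.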
Similar Concepts
- adversarial anomaly detection
- adversarial attacks
- adversarial deep learning
- adversarial detection and defense
- adversarial examples
- adversarial feature learning
- adversarial image classification
- adversarial machine learning
- adversarial perturbations
- adversarial privacy attacks
- adversarial reinforcement learning
- adversarial risk analysis
- adversarial training
- adversarial transferability
- robustness