adversarial robustness

Adversarial robustness is the ability of a system or model to withstand malicious attempts to fool or manipulate it. It involves designing or training algorithms that behave correctly on unexpected or adversarial inputs, even when those inputs are crafted specifically to deceive the model or degrade its performance.
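
A common technique for improving adversarial robustness is adversarial training: perturbed inputs are generated during training and the model is optimized to handle them correctly. The sketch below is illustrative only, assuming a PyTorch classifier with inputs scaled to [0, 1]; the model, optimizer, and epsilon budget are hypothetical placeholders, and the perturbation uses the fast gradient sign method (FGSM).

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Craft an FGSM adversarial example by stepping in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input feature in the direction that increases the loss,
    # then clamp back to the assumed valid input range [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One adversarial-training step: train on perturbed inputs instead of clean ones."""
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the perturbation budget epsilon and the attack used during training are tuned to the threat model being defended against; stronger multi-step attacks are often substituted for FGSM.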
