adversarial perturbations

"Adversarial perturbations" refer to deliberately introduced alterations or distortions made to digital inputs, such as images or text, in order to deceive or mislead artificial intelligence systems. These perturbations are carefully designed to be imperceptible to humans, but can cause significant misclassifications or misleading results when processed by machine learning models.
