Adversarial examples

Adversarial examples are inputs deliberately crafted to mislead or deceive machine learning models. The perturbations may be imperceptible to humans, with the input appearing indistinguishable from a normal one, yet they cause the model to produce incorrect or unexpected outputs.
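One common way to craft such inputs is the Fast Gradient Sign Method (FGSM), which nudges the input by a small step in the direction that increases the model's loss. The sketch below is illustrative only: the toy logistic-regression weights and the input vector are hypothetical, chosen so a small perturbation flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y_true, eps):
    """FGSM sketch: x_adv = x + eps * sign(dL/dx), where L is the
    cross-entropy loss of a logistic-regression model p = sigmoid(w@x + b)."""
    p = sigmoid(w @ x + b)        # model's predicted probability
    grad_x = (p - y_true) * w     # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and input (not from any real classifier).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])         # w@x + b = 1.5, confidently positive

x_adv = fgsm(x, w, b, y_true=1.0, eps=0.6)

print(sigmoid(w @ x + b) > 0.5)      # original prediction: True
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed prediction flips: False
```

Each coordinate of the input moves by at most `eps`, so the adversarial input stays close to the original even though the model's decision changes.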
