Adversarial transferability

"Adversarial transferability" refers to the phenomenon in machine learning where a model trained to perform a specific task can be deceived by adversarial examples generated for a different but related task, even if the model has never been exposed to those examples during training. In simpler terms, it means that an attack on one model can also be successful on another model.