Adversarial Example
Adversarial examples are inputs to a neural network that cause the network to produce incorrect outputs.
They are formed by deliberately adding a small perturbation to an otherwise ordinary input from the data set. The perturbed input, which often looks unchanged to a human, causes the model to give a wrong output with high confidence. Such inputs are called adversarial examples (or adversarial samples), and feeding them to a model is usually regarded as an adversarial attack on the neural network.
The phenomenon was first described by Christian Szegedy et al. in their ICLR 2014 paper, "Intriguing properties of neural networks."
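To make the idea concrete, the sketch below crafts an adversarial example with the Fast Gradient Sign Method (FGSM, a later technique from Goodfellow et al., not the L-BFGS-based attack of the original paper): the input is nudged a small step in the direction of the sign of the loss gradient, which is often enough to flip a classifier's prediction. The toy model and random data here are hypothetical and only illustrate the mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """FGSM: add a small perturbation in the direction of the sign of the
    loss gradient with respect to the input, which tends to increase the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()      # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range

# Toy demonstration with a randomly initialized classifier (hypothetical setup).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
images = torch.rand(4, 1, 28, 28)            # fake MNIST-like batch
labels = model(images).argmax(dim=1)         # treat current predictions as "true" labels
x_adv = fgsm_attack(model, images, labels, epsilon=0.1)
flipped = (model(x_adv).argmax(dim=1) != labels).sum().item()
print(f"{flipped} of 4 predictions changed by the adversarial perturbation")
```

In practice the perturbation budget epsilon is kept small so the adversarial image remains visually indistinguishable from the original while still changing the model's prediction.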