Adversarial Attack

An adversarial attack is a technique for finding small perturbations to a model's input that change the model's predictions. These perturbations are typically imperceptible to the human eye yet reliably alter the model's output. For example, adding carefully crafted pixel-level noise to an image can cause a classifier to mislabel it with high confidence. By exposing such vulnerabilities and potential security risks, adversarial attacks have significant research and practical value.
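
A classic instance is the Fast Gradient Sign Method (FGSM), which perturbs the input in the direction of the sign of the loss gradient: x_adv = x + ε · sign(∇ₓ L(θ, x, y)). The sketch below is a minimal illustration assuming a PyTorch image classifier with inputs scaled to [0, 1]; the model, inputs, and epsilon value are hypothetical placeholders, not part of the original article.

```python
# Minimal FGSM sketch. Assumes a PyTorch classifier and inputs in [0, 1];
# the model, data, and epsilon here are illustrative placeholders.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of x intended to flip the model's prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbation = epsilon * x_adv.grad.sign()
    # Clamp so the adversarial example remains a valid image.
    return (x_adv + perturbation).clamp(0.0, 1.0).detach()
```

In a typical usage, `x_adv = fgsm_attack(model, images, labels)` produces inputs that look unchanged to a human observer but are often misclassified, even though each pixel shifts by at most epsilon.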