HyperAI

Adversarial Purification

Adversarial Purification is an adversarial defense method that uses a generative model to remove adversarial perturbations from input data, restoring the input to a clean state before it reaches the classifier. Its goal is to improve robustness and security: the defended model maintains accurate predictions even under malicious attacks. Because purification operates on the input rather than on the model, it can protect a pretrained classifier without retraining it, which gives the method significant practical value in defending deep learning systems against adversarial example attacks.
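The core idea can be illustrated with a deliberately simplified sketch. The toy setup below is an assumption for illustration only (it is not a specific published method): clean inputs are taken to lie on a low-dimensional subspace, the "generative model" is replaced by a projection onto that subspace, and purification removes the off-manifold component that an adversarial perturbation adds. Real systems (e.g., diffusion-based purification) play the same role with a learned generative model instead of a projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean data manifold": a 1-D subspace of R^3 spanned by a unit vector.
# In practice this manifold is implicit in a trained generative model.
basis = np.array([1.0, 2.0, 2.0]) / 3.0
clean = 0.9 * basis  # a clean input lying on the manifold

# An adversarial perturbation typically pushes the input off the data
# manifold; here we model that with a component orthogonal to `basis`.
perturbation = 0.3 * np.array([2.0, -2.0, 1.0]) / 3.0
adversarial = clean + perturbation

def purify(x: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Project x back onto the learned subspace.

    Stand-in for a generative model that maps a perturbed input back
    to the nearest point on the clean-data manifold.
    """
    return basis * np.dot(x, basis)

purified = purify(adversarial, basis)

# The perturbation moved the input away from its clean version;
# purification removes the off-manifold component.
print(round(float(np.linalg.norm(adversarial - clean)), 3))  # distance before
print(round(float(np.linalg.norm(purified - clean)), 3))     # distance after
```

Because the perturbation in this sketch is exactly orthogonal to the manifold, projection recovers the clean input perfectly; with a real generative model the recovery is approximate, and the defense succeeds as long as the purified input lands back in the region the classifier handles correctly.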
