
Backdoor Attack

Backdoor attacks maliciously inject poisoned samples into a model's training set so that, at test time, the trained model misclassifies any input containing the backdoor trigger as the attacker's chosen target class, while behaving normally on clean inputs. The goal of such attacks is to manipulate the model's predictions covertly. Studying them has significant research value and practical importance, particularly in computer vision, where it helps assess and improve the security and robustness of models.
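
As a concrete illustration, the sketch below shows a minimal BadNets-style data-poisoning step: a small pixel patch is stamped into a fraction of the training images, and those images are relabeled to the target class. This is a simplified example assuming grayscale images stored as NumPy arrays in [0, 1]; the patch size and value, the 5% poison rate, and the target class 7 are all illustrative choices, not part of any specific benchmark or attack paper.

```python
import numpy as np

def add_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger patch into the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Add the trigger to a random fraction of samples and relabel them.

    A model trained on the returned set tends to learn the shortcut
    'trigger patch -> target_class' while keeping clean accuracy high.
    """
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class  # attacker-chosen target class
    return images, labels, idx

# Hypothetical MNIST-like data: 1000 grayscale 28x28 images, 10 classes.
clean_images = np.random.rand(1000, 28, 28).astype(np.float32)
clean_labels = np.random.randint(0, 10, size=1000)

poisoned_images, poisoned_labels, poisoned_idx = poison_dataset(
    clean_images, clean_labels, target_class=7, poison_rate=0.05
)
print(f"Poisoned {len(poisoned_idx)} of {len(clean_images)} training samples")
```

In practice the poison rate is kept low and the trigger small so that clean-data accuracy is essentially unchanged, which is what makes the attack covert; at inference time, stamping the same patch onto any input with `add_trigger` would steer the backdoored model toward the target class.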

No benchmark data available for this task