Model Poisoning
Model poisoning is an attack in which an adversary injects malicious samples into the training data, causing the model to deviate from its intended behavior during training and degrading its performance and decision accuracy. The goal is to make the model produce incorrect outputs on specific tasks or to reduce its overall generalization ability. Studying and defending against such attacks improves system security and increases the robustness and trustworthiness of models.
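
A minimal sketch of one common form of poisoning, label flipping, is shown below. The dataset, the logistic regression model, and the 30% flip rate are illustrative assumptions, not part of any specific attack described above; the point is simply that corrupting a fraction of the training labels measurably degrades test accuracy.

```python
# Illustrative sketch: label-flipping poisoning on a toy binary classifier.
# Assumes scikit-learn; dataset, model, and flip rate are arbitrary choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, rate, rng):
    """Flip a `rate` fraction of binary labels to simulate poisoned samples."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the chosen labels
    return y_poisoned

rng = np.random.default_rng(0)

# Train one model on clean labels and one on partially flipped labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, flip_labels(y_train, rate=0.3, rng=rng)
)

# The poisoned model's test accuracy drops relative to the clean baseline.
print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```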