Model Extraction

Model extraction attacks, also known as model stealing attacks, aim to replicate a target model's behavior (and in some cases recover its parameters) by repeatedly querying the model and analyzing its responses, typically without access to its internals or training data. The attacker's goal is to construct a substitute model that closely mimics the performance of the target model. This type of attack is significant in adversarial machine learning: it provides a way to assess the security and robustness of deployed models, while also posing a challenge to the protection of model intellectual property.
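
To make the process concrete, the sketch below simulates a simple query-based extraction attack: the attacker sends synthetic inputs to a black-box prediction interface, records the labels it returns, and trains a substitute model on those query/response pairs. Everything here is an illustrative assumption rather than a specific published attack; the victim model, the victim_predict interface, and the Gaussian query distribution are all stand-ins.

```python
# Minimal sketch of a query-based model extraction attack.
# Assumption: the attacker can call victim_predict() but cannot
# inspect the victim's weights or training data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the target model. In a real attack this would be a
# remote API; it is trained locally here only so the example runs.
X_secret = rng.normal(size=(1000, 4))
y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
victim.fit(X_secret, y_secret)

def victim_predict(x):
    # Black-box query interface: the attacker only sees predicted labels.
    return victim.predict(x)

# Step 1: synthesize query inputs (the attacker has no real training data).
X_query = rng.normal(size=(2000, 4))

# Step 2: label the queries by calling the victim's prediction API.
y_query = victim_predict(X_query)

# Step 3: train a substitute model on the collected query/response pairs.
substitute = DecisionTreeClassifier(random_state=0)
substitute.fit(X_query, y_query)

# Step 4: measure agreement, i.e. how often the substitute
# reproduces the victim's predictions on fresh inputs.
X_test = rng.normal(size=(1000, 4))
agreement = (substitute.predict(X_test) == victim_predict(X_test)).mean()
print(f"substitute/victim agreement: {agreement:.2%}")
```

The agreement rate is the usual success metric for such attacks: the closer the substitute's predictions track the victim's, the more completely the model's functionality has been extracted.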