LLM Jailbreak
LLM Jailbreak refers to the use of adversarial techniques, most commonly carefully crafted prompts, to make a large language model bypass its built-in restrictions and safety constraints so that it produces outputs or performs tasks it would otherwise refuse or limit. The goal is to probe the model's latent capabilities and to improve its adaptability and flexibility on complex tasks that fall outside its default behavior. In practice, jailbreaking is used to tap a model's deeper potential, shape its output, and extract more value from it in specific scenarios.