
Anthropic Unveils Structured Outputs for Reliable JSON Responses in API

Anthropic has introduced Structured Outputs for its top models, a new feature now available in public beta for Claude Sonnet 4.5 and Claude Opus 4.1. It guarantees that model responses strictly adhere to a supplied JSON schema or tool definition, addressing a long-standing challenge for developers who need reliable, machine-readable output from LLMs.

The key benefit is predictability. When integrating AI into larger systems, the output format must remain consistent. Without this feature, developers often resorted to fragile regex patterns or ad-hoc post-processing to extract data. With Structured Outputs, the model is guaranteed to return valid JSON that matches the specified schema. Note that this guarantees format correctness only: the model may still hallucinate content, so accuracy is not guaranteed.

To get started, set up a dedicated development environment. Using Miniconda on WSL2 Ubuntu, create a new environment and install the required packages: anthropic, httpx, jupyter, beautifulsoup4, requests, and pydantic. Then obtain an Anthropic API key from the console.

The feature supports two ways to define the output structure: a raw JSON schema or a Pydantic model class. The latter is more Pythonic and easier to maintain.

In the first example, a script scrapes Wikipedia pages for famous scientists and uses the model to extract structured data (name, birth details, claim to fame, Nobel Prize year, and death information) into a consistent JSON format. The code defines a JSON schema with required fields and passes it via the output_format parameter; the API call includes the anthropic-beta header to enable the beta feature. The response is parsed directly into a Python dictionary, ready for use. Running this on Einstein, Feynman, Maxwell, and Guth produced accurate, well-formatted summaries, confirming that the model returns structured results reliably. The second example demonstrates a practical application in software engineering: automated code review and refactoring.
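A minimal sketch of the request for the first, JSON-schema-based example. The field names, the model identifier, and the exact beta flag string are assumptions based on this summary rather than verbatim values from Anthropic; check the official structured-outputs documentation before relying on them.

```python
import json

# Hypothetical schema for the scientist-extraction example; the field
# names are illustrative, not prescribed by Anthropic.
scientist_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "birth_year": {"type": "integer"},
        "birth_place": {"type": "string"},
        "known_for": {"type": "string"},
        "nobel_prize_year": {"type": ["integer", "null"]},
        "death_year": {"type": ["integer", "null"]},
    },
    "required": ["name", "birth_year", "birth_place", "known_for"],
    "additionalProperties": False,
}

def build_request(article_text: str) -> dict:
    """Assemble keyword arguments for client.beta.messages.create().

    In the Python SDK, the betas list is what sets the anthropic-beta
    header mentioned in the article; the output_format shape and the
    beta flag value here are assumptions to verify against the docs.
    """
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 1024,
        "betas": ["structured-outputs-2025-11-13"],
        "output_format": {"type": "json_schema", "schema": scientist_schema},
        "messages": [{
            "role": "user",
            "content": f"Extract the scientist's details from:\n\n{article_text}",
        }],
    }

if __name__ == "__main__":
    req = build_request("Albert Einstein (born 14 March 1879, Ulm) ...")
    print(json.dumps(req["output_format"], indent=2))
    # With a real API key, the live call would look roughly like:
    #   msg = anthropic.Anthropic().beta.messages.create(**req)
    #   data = json.loads(msg.content[0].text)
```

Because the schema marks fields as required and forbids extra properties, the parsed dictionary can be consumed downstream without defensive checks.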
A flawed Python function containing an SQL injection vulnerability is fed to the model. Using a Pydantic model class, the developer specifies that the output must include a safety verdict, a list of bugs with severities and descriptions, the refactored code, and an explanation. The model returns a clean JSON object matching the schema. The script parses it and displays the findings clearly: the SQL injection is flagged as Critical, along with resource leaks and poor naming. It then prints the securely rewritten code, which uses parameterized queries and proper context management.

This approach enables seamless integration into CI/CD pipelines. A GitHub Action, for example, could reject pull requests when the model detects critical issues.

The power of this feature lies in turning LLMs from conversational tools into reliable, schema-conformant components. It enables a clean separation of data and metadata, supports strict typing, and eliminates error-prone text parsing. Anthropic's Structured Outputs mark a major step toward making AI usable in production systems: developers can build robust, automated workflows confident that outputs will match the expected format, opening new possibilities for AI-powered applications across industries. For full details, refer to Anthropic's official documentation on structured outputs.
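The Pydantic side of the code-review example can be sketched offline: define the models, emit their JSON schema (which is what gets passed as the output_format schema), and validate the model's reply on the way back in. The class and field names here are illustrative assumptions, not Anthropic's or the article's exact definitions.

```python
from enum import Enum
from pydantic import BaseModel

class Severity(str, Enum):
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

class Bug(BaseModel):
    severity: Severity
    description: str

class CodeReview(BaseModel):
    safe: bool                # the safety verdict
    bugs: list[Bug]           # findings with severity and description
    refactored_code: str      # the securely rewritten function
    explanation: str

# Pydantic emits the JSON schema to send with the request ...
schema = CodeReview.model_json_schema()

# ... and validates the guaranteed-valid JSON reply into typed objects.
# This reply is a hand-written stand-in for an actual model response.
sample_reply = '''{
  "safe": false,
  "bugs": [{"severity": "Critical",
            "description": "SQL built by string formatting allows injection."}],
  "refactored_code": "cursor.execute(\\"SELECT * FROM users WHERE id = ?\\", (user_id,))",
  "explanation": "Use a parameterized query and a context manager for the connection."
}'''

review = CodeReview.model_validate_json(sample_reply)
print(review.safe, review.bugs[0].severity.value)
```

A CI gate then reduces to a one-liner such as failing the build when `not review.safe` or when any bug has `Severity.CRITICAL`.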