Harmonic's AI chatbot Aristotle claims to deliver hallucination-free math answers with formal verification

Harmonic, an AI startup co-founded by Robinhood CEO Vlad Tenev, has launched the beta version of its AI chatbot app on iOS and Android. The app gives users access to the startup’s AI model, named Aristotle, which the company claims provides “hallucination-free” answers to math-related questions. This is a notable assertion, as many current AI models struggle with reliability and accuracy in complex reasoning tasks.

Harmonic is dedicated to building what it calls “mathematical superintelligence,” or MSI. The startup’s long-term goal is a system that can assist users across disciplines requiring strong mathematical skills, such as physics, statistics, and computer science. “Aristotle is the first product available to people that does reasoning and formally verifies the output,” said Tudor Achim, Harmonic’s CEO and co-founder, in an interview with TechCrunch. “Within the domains that Aristotle supports, which are quantitative reasoning areas, we actually guarantee that there’s no hallucinations.” The company plans to expand access to Aristotle beyond the app by releasing an API for enterprise use and a web version for consumers.

The beta launch follows a recent $100 million Series B funding round led by Kleiner Perkins, which valued the startup at $875 million. Achim said the investment reflects confidence in Harmonic’s vision and its progress toward achieving MSI.

Several major tech companies are working to improve AI’s ability to solve mathematical problems. While AI that can handle math is valuable on its own, the field is also seen as a testbed for core reasoning skills: systems that excel at mathematical reasoning may later be applied to broader domains.

Harmonic attributes Aristotle’s accuracy to its use of the open-source programming language and proof assistant Lean, which lets the model generate solutions that can be verified algorithmically. Before delivering an answer, the system checks its correctness with a non-AI process, a method similar to those used in safety-critical fields like medical devices and aviation.

Hallucination remains an unsolved problem: research has shown that even top AI models frequently produce incorrect or misleading information, and OpenAI’s latest reasoning models, for example, have been found to hallucinate more than their predecessors. Against that backdrop, achieving accurate, hallucination-free AI within a specific domain is a significant milestone.

Harmonic claims that Aristotle achieved a gold medal performance on the 2025 International Math Olympiad (IMO) through a formal test, meaning the problems were converted into a machine-readable format. Google and OpenAI have also developed AI models that scored gold in the same competition, but through informal tests conducted in natural language.
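To make the verification idea concrete, here is a minimal Lean 4 sketch, not Harmonic’s actual pipeline: every statement a Lean file asserts must come with a proof that Lean’s kernel can check mechanically, so a wrong answer simply fails to compile.

```lean
-- Minimal illustration of Lean-style verification (not Harmonic's pipeline).
-- If this file elaborates without errors, Lean's kernel has checked every
-- proof term; a false claim cannot be given a proof and would be rejected.

-- Arithmetic facts checked by computation and by a decision procedure:
example : 2 + 2 = 4 := rfl
example : 7 * 8 = 56 := by decide

-- Reusing a lemma from Lean's core library; the kernel re-checks the
-- resulting proof term when it is used:
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- An incorrect statement, by contrast, has no proof:
-- example : 2 + 2 = 5 := rfl   -- uncommenting this line breaks the build
```

Because the checker is a deterministic, non-AI program, a “no hallucinations” claim of this kind is framed as a property of the verifier rather than of the language model that drafts the solution.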
