HyperAI

Multiverse Computing Secures $215M to Shrink AI Models Without Losing Performance

13 days ago

Spanish startup Multiverse Computing has raised a €189 million (approximately $215 million) Series B round to develop and expand its compression technology, CompactifAI. Inspired by quantum-computing principles, CompactifAI can compress existing large language models (LLMs) by up to 95% while, the company says, preserving their performance.

Multiverse currently offers compressed versions of popular open-source LLMs, including Llama 4 Scout, Llama 3.3 70B, Llama 3.1 8B, and Mistral Small 3.1. A slim version of DeepSeek R1 is planned for the near future, with more open-source and reasoning models to follow. Proprietary models, such as those from OpenAI, are not supported. These "slim" models, as Multiverse calls them, are available via Amazon Web Services (AWS) and can also be licensed for on-premises use.

According to the company, its compressed models run 4 to 12 times faster than their uncompressed counterparts, cutting inference costs by 50% to 80%. For example, running the compressed Llama 4 Scout Slim on AWS costs 10 cents per million tokens, compared with 14 cents for the non-compressed version.

Multiverse also claims that some of its models can be miniaturized and made energy-efficient enough to run on a variety of devices, including personal computers, smartphones, cars, drones, and even the low-cost Raspberry Pi. That opens possibilities ranging from smarter consumer devices to interactive AI in everyday settings, such as AI-powered talking Santas during the holiday season.

The company's success is underpinned by the expertise of its founders. Co-founder and CTO Román Orús is a professor at the Donostia International Physics Center in San Sebastián, Spain, and is known for his work on tensor networks: computational tools that simulate quantum systems on classical hardware and are often used to compress deep learning models. CEO Enrique Lizaso Olmos, a mathematician with a background in academia, has spent much of his career in banking and previously served as deputy CEO of Unnim Bank.

Together, they have built a team with 160 patents and more than 100 global customers, including major organizations such as Iberdrola, Bosch, and the Bank of Canada.

The Series B was led by Bullhound Capital, an investor in technology companies such as Spotify, Revolut, DeliveryHero, Avito, and Discord. Additional investors include HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Toshiba, and Capital Riesgo de Euskadi – Grupo SPR. With this latest investment, Multiverse has raised around $250 million in total capital to date.

CompactifAI addresses one of the most pressing challenges in AI: the high computational and financial cost of running large, powerful models. By making these models cheaper and more accessible, Multiverse could empower a broader range of businesses and developers to use advanced AI capabilities, driving innovation and democratizing access to cutting-edge technology.
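To give a flavor of how tensor-network-style compression works in principle, the sketch below factors a dense neural-network weight matrix into two thin low-rank factors via a truncated SVD, the simplest such decomposition. This is purely illustrative; CompactifAI's actual pipeline is proprietary, and the layer size and rank here are made-up assumptions.

```python
# Illustrative only: a truncated SVD is the simplest tensor-network-style
# factorization. It is NOT Multiverse's method; it just shows how a dense
# weight matrix can be replaced by two smaller factors with little error
# when the weights are redundant (approximately low-rank).
import numpy as np

def compress_layer(weight, rank):
    """Factor a dense weight matrix into two low-rank factors A @ B."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # shape (out_features, rank)
    b = vt[:rank, :]             # shape (rank, in_features)
    return a, b

rng = np.random.default_rng(0)
# A rank-deficient 512x512 matrix stands in for the redundancy that real
# LLM weight matrices often exhibit.
w = rng.standard_normal((512, 32)) @ rng.standard_normal((32, 512))

a, b = compress_layer(w, rank=32)
original = w.size              # 262,144 parameters
compressed = a.size + b.size   # 32,768 parameters, an 87.5% reduction
error = np.linalg.norm(w - a @ b) / np.linalg.norm(w)
print(f"params: {original} -> {compressed}, relative error: {error:.2e}")
```

Because the example matrix is exactly rank 32, the reconstruction error is negligible while the parameter count drops by 87.5%; real weights are only approximately low-rank, so practical pipelines trade some accuracy for compression and typically fine-tune afterward.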
