Cerebras Inference Cloud Now Available in AWS Marketplace, Enabling Faster and More Efficient AI Applications
At the RAISE Summit in Paris, France, Cerebras Systems announced the launch of Cerebras Inference Cloud on AWS Marketplace, making its ultra-fast AI inference capabilities instantly available to enterprise customers. This move aims to enable a new era of high-performance, interactive, and intelligent agentic AI applications.

“Now customers can easily procure Cerebras’s ultra-fast inference through their AWS accounts and workflows, allowing them to tackle problems that were previously out of reach,” said Chris Grusz, Managing Director of Technology Partnerships at AWS.

By integrating with AWS Marketplace, Cerebras Inference Cloud offers AWS customers a streamlined process for purchasing and managing high-speed inference solutions. This integration supports the development of agentic applications that are not only faster to build and deploy but also significantly more responsive. Customers can now combine Cerebras’s powerful inference capabilities with advanced frameworks and developer tools to create cutting-edge AI systems.

“We are excited to bring the power of Cerebras inference to millions of builders and enterprises in AWS Marketplace,” said Alan Chhabra, Executive Vice President of Worldwide Partnerships at Cerebras. “From financial services to large language model (LLM)-powered developer tools, this expansion opens up possibilities for building the fastest and most efficient AI applications ever seen.”

Babak Pahlavan, Founder and CEO of NinjaTech AI, added, “With Cerebras on AWS Marketplace, the world’s fastest AI computing system is now available with the simplicity of AWS cloud services. This will make it easier for NinjaTech AI and other companies to develop and deploy AI agents that perform at unprecedented speeds.”

Cerebras Systems is a leader in AI supercomputing, known for its groundbreaking Wafer-Scale Engine-3 (WSE-3), the largest and fastest commercially available AI processor.
The company’s CS-3 system eliminates the complexity of distributed computing, allowing businesses to build and deploy large AI models with ease. Cerebras Inference Cloud delivers groundbreaking inference speeds, enabling customers to create and run sophisticated AI applications. Leading organizations, including research institutions, corporations, and government entities, rely on Cerebras for developing proprietary models and for training open-source models that have been downloaded millions of times. Cerebras solutions are available both through the Cerebras Cloud and via on-premises installations, offering flexibility and performance to meet diverse needs. For more information on Cerebras Systems and its products, visit cerebras.ai or follow the company on LinkedIn, X, and Threads.
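For developers evaluating the service, Cerebras inference is exposed through an OpenAI-compatible chat-completions API. The minimal sketch below constructs (but does not send) such a request using only the Python standard library; the endpoint URL and model id are assumptions for illustration, so check the Cerebras documentation for current values.

```python
import json
import urllib.request

# Assumed values for illustration -- consult the Cerebras docs for current ones.
API_URL = "https://api.cerebras.ai/v1/chat/completions"
MODEL = "llama3.1-8b"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Construct an OpenAI-style chat-completion request (not sent here)."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Summarize wafer-scale computing in one sentence.", "YOUR_API_KEY")
# urllib.request.urlopen(req) would send it; omitted to keep the sketch offline.
```

Because the interface follows the OpenAI wire format, existing agent frameworks and client libraries that speak that format can typically be pointed at the service by swapping the base URL and API key.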