Lightning AI and Voltage Park Merge to Form AI-Optimized Cloud Platform
Since 2024, Lightning AI has grown from $18 million to over $500 million in annual recurring revenue (ARR), powering the rise of a new generation of AI development. Now merged with Voltage Park, the combined company serves over 400,000 developers, startups, and enterprises seeking a unified, efficient way to build and run AI applications. The merger unites Lightning AI’s developer-first AI software platform with Voltage Park’s large-scale GPU infrastructure, creating a single, end-to-end AI cloud designed specifically for the demands of generative AI.

Traditional cloud providers like AWS were built for CPU-based workloads such as websites and web services, not the GPU-intensive tasks required by modern AI. This mismatch has produced a fragmented ecosystem of single-purpose tools, each handling one piece of the AI lifecycle such as training, inference, or data preparation, forcing teams to juggle multiple platforms, vendors, and costs. “Imagine using an iPhone but having to carry a separate calculator, flashlight, and radio,” said William Falcon, founder and CEO of Lightning AI. “That’s where AI tooling is today.”

Lightning AI solves this by integrating purpose-built AI software with on-demand, high-performance GPU infrastructure. The platform offers access to over 35,000 H100, B200, and GB300 GPUs, enabling virtually unlimited burst capacity. Users can train models, deploy them into production, and run large-scale inference, all within one seamless system, eliminating the need to stitch together tools from different vendors or manage complex procurement.

The merger brings significant value to both customer bases. Voltage Park users gain built-in AI software, including model serving, team collaboration, observability, and large-scale inference, without needing third-party tools.
Meanwhile, Lightning AI users gain enterprise-grade infrastructure at neocloud price points, combining the reliability of hyperscalers with the simplicity and cost-efficiency of modern AI-native platforms. “We’re software-first and infrastructure-native,” said Saurabh Giri, CPTO of Lightning AI and former CPTO of Voltage Park. “Most neoclouds sell raw GPUs without deep software. Most AI platforms rely on third-party clouds. We’re the first to build the stack end-to-end for AI.”

The platform is built on PyTorch Lightning, a framework used by over 5 million developers and enterprises. Lightning AI’s vision, rooted in Falcon’s experience pretraining large models at Facebook AI Lab, has always been to make AI development accessible to everyone, from students to Fortune 100 companies. The merger marks a pivotal step toward that goal.

Industry leaders see the move as a paradigm shift. “The next phase of AI will be won by teams that control the entire stack,” said Timo Mertens, CTO of Cantina Labs. “Tight integration between software, optimization, and owned compute is no longer optional—it’s essential.” Misha Laskin, CEO of Reflection AI, added that the combined platform fills a critical gap for frontier research, enabling faster, more efficient AI development.

Existing customers face no disruption. Lightning continues to support multiple cloud providers, so teams can use the platform alongside AWS or other clouds and, when needed, burst workloads into Lightning’s own GPU fleet for extra capacity. The company also plans to expand its GPU marketplace through deeper partnerships.

With its unified, AI-native architecture, Lightning AI is redefining what a cloud platform can be, designed from the ground up for the generative AI era. As the industry moves toward vertical integration, Lightning AI positions itself at the forefront of a new computing age.
