Dell Embraces On-Premises AI with New Servers and Cooling Solutions
Dell Technologies has been making substantial efforts to bring AI within reach of more businesses. During the opening keynote of Dell Technologies World 2025 in Las Vegas, founder and CEO Michael Dell emphasized that enterprise adoption is key to realizing AI's economic potential. He highlighted collaborations with AI firms such as CoreWeave, G42, and Mistral, and described a massive AI project involving 110,000 GPUs, 27,500 GPU nodes, and 6,000 network switches. While such projects are ambitious, Dell noted that most businesses won't need setups of that scale and can still benefit from AI by integrating smaller, domain-specific models into their operations.

To facilitate this, Dell launched the Dell AI Factory last year, a program that provides enterprises with the hardware they need to quickly design and manage their AI infrastructure. More than 3,000 AI factories have been deployed so far, and Dell claims a 60 percent cost-efficiency advantage over public clouds.

This year, Dell is expanding the lineup with new Nvidia-powered PowerEdge servers and enhanced storage. The new PowerEdge XE9780 and XE9785 are air-cooled systems supporting up to 192 Nvidia Blackwell Ultra GPUs, customizable up to 256 GPUs per Dell IR7000 rack, and they offer up to four times faster LLM training with eight-way Nvidia HGX B300 accelerators. The PowerEdge XE9712 incorporates Nvidia's liquid-cooled GB300 NVL72 rack, while the PowerEdge XE7745, set to launch in July, will feature Nvidia RTX Pro 6000 Blackwell Server Edition GPUs, accommodate up to eight GPUs in a 4U chassis, and be supported in Nvidia's Enterprise AI Factory validated design.

On the storage front, Dell's ObjectScale object storage portfolio supports AI deployments with a dense software-defined system and integrated Nvidia BlueField-3 and Spectrum-4 networking for better performance and scalability. The portfolio will also support S3 over RDMA, promising 230 percent higher throughput and 80 percent lower latency than traditional S3. A new reference architecture pairs PowerScale storage and Project Lightning, which aims to be the world's fastest parallel file system, with PowerEdge XE servers.

One of the primary challenges for on-prem AI adoption is managing the heat and power demands of AI workloads. Seamus Jones, director of server engineering at Dell, noted that these concerns echo longstanding data center issues. Dell's latest solutions are designed to address them, including the PowerCool enclosed rear door heat exchanger (eRDHx), which captures up to 100 percent of IT-generated heat and reduces cooling costs by up to 60 percent, alongside its liquid-cooling technologies. Armando Acosta, a Dell product planner, explained that customers often run into facility power thresholds that limit how many high-density AI systems they can support: the PowerEdge XE9780 alone draws 12 kilowatts, so a handful of units can quickly exceed a site's budget (a rough illustration follows below). By improving cooling and power efficiency, Dell aims to lower these barriers and make the technology more accessible.

Industry insiders view Dell's efforts positively, seeing them as essential steps toward democratizing AI for a broader range of enterprises. The company's history of innovative hardware, together with its focus on cost-efficiency and data security, aligns well with the needs of businesses that want to integrate AI into their processes without the heavy overhead of public clouds.
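To make the power-threshold point concrete, here is a minimal back-of-envelope sketch in Python. Only the roughly 12 kW draw of a PowerEdge XE9780 comes from the article; the facility budget and the cooling-overhead multiplier are illustrative assumptions, not Dell figures.

```python
# Back-of-envelope power budget: how quickly 12 kW AI servers exhaust a
# facility's capacity. All figures except the ~12 kW per-server draw cited
# in the article are illustrative assumptions.

SERVER_DRAW_KW = 12.0        # approximate draw of one PowerEdge XE9780 (per the article)
COOLING_OVERHEAD = 1.3       # assumed PUE-style multiplier for conventional air cooling
FACILITY_BUDGET_KW = 200.0   # assumed power threshold for an enterprise server room

def servers_within_budget(budget_kw: float, draw_kw: float, overhead: float) -> int:
    """Return how many servers fit under the facility power threshold."""
    return int(budget_kw // (draw_kw * overhead))

if __name__ == "__main__":
    n = servers_within_budget(FACILITY_BUDGET_KW, SERVER_DRAW_KW, COOLING_OVERHEAD)
    print(f"{n} servers fit in a {FACILITY_BUDGET_KW:.0f} kW room "
          f"at {SERVER_DRAW_KW:.0f} kW each with a {COOLING_OVERHEAD}x cooling overhead")
    # More efficient cooling (e.g. capturing waste heat at the rack) lowers the
    # overhead factor, letting more GPU nodes fit under the same power threshold.
```

Under these assumed numbers, only about a dozen such servers fit in a 200 kW room, which is why cooling efficiency directly determines how much AI capacity a site can host.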
As Dell continues to enhance its AI Factory and develop more sustainable cooling methods, it is positioning itself to play a pivotal role in the evolving enterprise AI landscape.