IBM Launches Its In-House "Spyre" Accelerator Family and Expands Its Anthropic Partnership to Advance Enterprise AI
IBM has launched its proprietary "Spyre" AI accelerators and strengthened its enterprise AI strategy through a partnership with Anthropic, marking a significant step in its enterprise AI ambitions. Though IBM missed the initial wave of large language model development, its deep expertise in high-performance computing (HPC), mainframe systems, and software integration positions it to help enterprises adopt generative AI without starting from scratch.

At its TechXchange 2025 conference, IBM unveiled the Spyre XPU, a custom AI accelerator developed by IBM Research that is now ready for commercial deployment. The accelerators will ship in eight-card bundles, initially as sidecars for System z mainframes starting October 28, with Power Systems support following on December 12. Each bundle includes Red Hat Enterprise Linux and the RHEL AI Inference Server, with integration into OpenShift AI and watsonx.data governance tools planned for Q1 2026. Spyre supports multiple precision formats (INT4, INT8, FP8, FP16), and the eight cards can be combined into a single virtual card with 1 TB of memory and 1.6 TB/s of bandwidth, delivering over 2.4 petaops at FP16. A standout feature is Spyre's ability to live-migrate inference workloads alongside CPUs on Power Systems, a capability Nvidia and AMD GPUs do not offer. This enables seamless, high-availability AI operations, a major advantage for mission-critical enterprise environments.

Complementing the hardware is Project Bob, an AI-powered integrated development environment (IDE) designed to modernize legacy code. Built on Anthropic's Claude Sonnet 4.5 alongside Granite, Llama 3, and Mistral models, Project Bob replaces older, siloed code assistants for COBOL and RPG. In internal testing, 6,000 IBM developers use it, half of them daily; IBM reports an average productivity boost of 45%, with engineers treating it like a junior developer.
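As a quick sanity check on the bundle-level figures above, the following sketch derives per-card numbers under the assumption that the aggregates split evenly across the eight cards; IBM quotes only the combined virtual-card specifications, so the per-card split is illustrative:

```python
# Back-of-the-envelope per-card figures for an eight-card Spyre bundle,
# derived from the reported aggregates: 1 TB memory, 1.6 TB/s bandwidth,
# and 2.4 petaops at FP16. Assumes an even split across cards and decimal
# (1 TB = 1000 GB) units; neither is confirmed in the article.

CARDS_PER_BUNDLE = 8

bundle_totals = {
    "memory_gb": 1000,       # 1 TB aggregate memory
    "bandwidth_gb_s": 1600,  # 1.6 TB/s aggregate bandwidth
    "fp16_teraops": 2400,    # 2.4 petaops aggregate FP16 throughput
}

per_card = {name: total / CARDS_PER_BUNDLE for name, total in bundle_totals.items()}

for name, value in per_card.items():
    print(f"{name}: {value:g} per card")
# memory_gb: 125 per card
# bandwidth_gb_s: 200 per card
# fp16_teraops: 300 per card
```

Under those assumptions each card would contribute roughly 125 GB of memory, 200 GB/s of bandwidth, and 300 teraops at FP16, which is a useful mental model when comparing the bundle against discrete GPU offerings.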
The partnership with Anthropic, represented by CEO Dario Amodei and IBM's Dinesh Nirmal, appears to be a strategic distribution play. IBM likely secures discounted licenses to embed Claude models into its watsonx platform and tools, effectively making Anthropic a white-label provider. Unlike Amazon and Google, IBM is not taking an equity stake, suggesting a focus on ecosystem integration rather than direct model training.

IBM's broader vision centers on delivering usable, trustworthy AI, not just access to models. As Nirmal noted, only 5% of enterprise AI projects yield ROI, largely because of usability gaps rather than a shortage of models. Spyre and Project Bob aim to bridge that gap with a full-stack, enterprise-ready platform. With prebuilt AI agents for IT operations, cybersecurity, and supply-chain forecasting, plus industry-specific use cases in finance, healthcare, and public services, IBM is targeting real-world deployment. The company emphasizes flexibility, letting customers choose their entry points and scale incrementally. While pricing details remain under wraps, IBM claims competitive rates and satisfied early adopters. Whether Spyre can compete with GPU-based solutions remains to be seen, but its integration with IBM's dominant mainframe and Power Systems install base gives it a unique edge in the enterprise AI race.
