Tech Leaders from GM, Zoom, and IBM Discuss AI Model Trade-offs for Enterprise Adoption
Choosing between open, closed, or hybrid AI models is both a technical and a strategic decision for enterprises. At the VB Transform conference on July 9, 2025, AI architecture experts from General Motors, Zoom, and IBM shared their views on the trade-offs involved. Barak Turovsky, who became GM's first chief AI officer in March, pointed to the constant noise surrounding each new model release and leaderboard update. Even so, he acknowledged the significant impact of open-sourcing AI model weights and training data, which he credits with helping OpenAI and others launch their groundbreaking models. Turovsky also noted that AI models cycle between open and closed states, underscoring how dynamic the field remains.

Enterprises must weigh several factors when choosing AI models, including cost, performance, trust, and safety. Turovsky recommended a flexible approach: a company might use open models for internal applications while opting for closed models for production and customer-facing solutions, or vice versa.

IBM's AI Strategy

Armand Ruiz, IBM's vice president of AI platform, explained that IBM initially developed its own large language models (LLMs) but soon found this approach insufficient as more powerful models emerged. The company responded by integrating platforms such as Hugging Face, giving customers a wide range of open-source models to choose from. IBM recently launched a model gateway that provides enterprises with a single API for switching between different LLMs.

Still, a growing number of models can lead to confusion rather than clarity. Ruiz emphasized that IBM focuses on the feasibility of the use case in the initial phases, rather than on which specific model is being used. This simplifies decision-making and ensures the solution meets the customer's requirements before the team moves on to detailed customization or distillation.
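To make the gateway idea concrete, here is a minimal sketch of a single API that routes completion requests to interchangeable backends. This is purely illustrative: the backend names, the `ModelGateway` class, and its `complete` method are assumptions for the example, not IBM's actual interface, and a real gateway would call provider SDKs or HTTP endpoints rather than local stubs.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical stand-ins for real model backends; in practice these
# would wrap provider SDKs or REST calls.
def _call_open_model(prompt: str) -> str:
    return f"[open-llm] {prompt}"

def _call_closed_model(prompt: str) -> str:
    return f"[closed-llm] {prompt}"

@dataclass
class ModelGateway:
    """Routes a completion request to whichever backend the caller names."""
    backends: Dict[str, Callable[[str], str]]

    def complete(self, model: str, prompt: str) -> str:
        if model not in self.backends:
            raise KeyError(f"unknown model: {model}")
        return self.backends[model](prompt)

gateway = ModelGateway(backends={
    "open-llm": _call_open_model,
    "closed-llm": _call_closed_model,
})

# Callers switch models by changing one argument, not their integration code.
print(gateway.complete("open-llm", "Summarize the Q3 report."))
```

The point of the pattern is that swapping models becomes a one-line change for the application, which is what makes side-by-side evaluation of open and closed options practical.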
"First, we try to simplify the analysis paralysis with all those options and focus on the use case," Ruiz said. "Then we figure out the best path for production."

Zoom's Hybrid Approach

Xuedong Huang, Zoom's chief technology officer, described the company's AI Companion, which offers two primary configurations. The first integrates Zoom's own small language model (SLM) with larger foundation models; the second lets customers use only Zoom's model if they are wary of managing too many third-party solutions. Zoom recently partnered with Google Cloud to implement an agent-to-agent protocol for AI Companion, extending its enterprise workflow capabilities.

Zoom's SLM, which has 2 billion parameters, was developed without using customer data. Although relatively small, it can outperform some industry-specific models, particularly when handling complex tasks in tandem with larger models. This hybrid approach draws on the strengths of both types of models across a variety of scenarios.

"The small model will perform very specific tasks, while the larger model handles more complex, general tasks," Huang said. "It's like Mickey Mouse and the elephant dancing together—each has its unique strengths, but they work harmoniously as a team."

These discussions underscore the evolving landscape of AI models in enterprise settings, where flexibility and a tailored approach are essential to balancing innovation and practicality.
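The hybrid pattern Huang describes can be sketched as a simple router that sends narrow, specific tasks to the small model and escalates broader ones to a large model. The heuristic below (word count plus a few trigger keywords) is an assumption made for illustration; Zoom has not published its routing logic, and a production system would more likely use a learned classifier.

```python
def route(task: str, *, complexity_threshold: int = 12) -> str:
    """Toy router: short, narrow prompts go to the small model;
    longer or reasoning-heavy prompts escalate to the large model.
    The threshold and keyword list are illustrative, not Zoom's."""
    word_count = len(task.split())
    needs_reasoning = any(k in task.lower() for k in ("why", "plan", "analyze"))
    if word_count > complexity_threshold or needs_reasoning:
        return "large-model"
    return "small-model"

print(route("Summarize this meeting"))
# "Summarize this meeting" is short and specific -> small model.
print(route("Analyze why the deal fell through and plan next steps"))
# Reasoning keywords trigger escalation -> large model.
```

The design choice mirrors the "Mickey Mouse and the elephant" division of labor: the cheap model handles the high-volume specific work, and the expensive model is reserved for tasks that actually need it.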