OpenAI Enhances Security Measures to Shield AI Models from Corporate Espionage
OpenAI has reportedly tightened its security measures to shield its intellectual property from corporate espionage. According to the Financial Times, the company accelerated its security overhaul after Chinese startup DeepSeek released a competing AI model in January, which OpenAI claims was improperly developed using "distillation" techniques to copy its models.

The enhanced security includes a range of stringent policies and practices. One such measure is "information tenting," which restricts staff access to sensitive algorithms and new products. During the development of OpenAI's o1 model, for instance, only verified team members who had been explicitly briefed on the project could discuss it in shared office spaces, according to the FT.

OpenAI has also isolated proprietary technology on offline computer systems, implemented biometric access controls (such as fingerprint scanning) for secure office areas, and adopted a "deny-by-default" internet policy that requires explicit approval for any external network connection, significantly limiting the risk of data exfiltration. In addition, the company has bolstered physical security at its data centers and expanded its cybersecurity team.

These measures appear to be driven by concerns about foreign adversaries attempting to steal OpenAI's intellectual property. However, they also reflect a broader effort to address internal security issues amid intense talent poaching in the American AI industry and frequent leaks of CEO Sam Altman's comments.

OpenAI has not yet responded to requests for comment on these security enhancements.
