OpenAI’s Greg Brockman Reveals Internal GPU Battles: "Pain and Suffering" in Quest for Computing Power

OpenAI president Greg Brockman described the internal process of allocating GPUs within the company as an emotionally taxing ordeal, calling it "pain and suffering." Speaking on the "Matthew Berman" podcast, Brockman explained that deciding which teams receive access to the company's limited computing resources is one of the most difficult challenges in running OpenAI. He noted that every team at OpenAI has ambitious projects, and when new ideas emerge, the response is often enthusiastic — but the reality of scarce resources makes prioritization unavoidable. "It's so hard because you see all these amazing things, and someone comes and pitches another amazing thing, and you're like, yes, that is amazing," Brockman said.

The company divides its GPU capacity between two main areas: research and applied products. The chief scientist and research lead manage internal allocations within the research division, while senior leadership — including CEO Sam Altman and Fidji Simo, head of applications — determines the overall balance between research and product development.

At the operational level, a small internal team handles the day-to-day logistics of shifting GPU assignments. Kevin Park, a key figure in this effort, is responsible for reallocating hardware from projects that are winding down to new initiatives that need immediate support. Brockman recounted a typical exchange: "You go to him and you're just like, 'OK, like we need this many more GPUs for this project that just came up.' And he's like, 'All right, there's like these five projects that are sort of winding down.'"

This constant reallocation underscores the extreme scarcity of computing power that OpenAI has repeatedly warned about. Brockman emphasized that access to GPUs directly affects team productivity and that the emotional stakes are high. "People really care," he said. "The energy and emotion around, 'Do I get my compute or not?' is something you cannot understate."
The struggle for compute is not unique to OpenAI. The company has long stressed that every new GPU it acquires is immediately consumed. Kevin Weil, OpenAI's chief product officer, previously said on the "Moonshot" podcast that "the more GPUs we get, the more AI we'll all use," comparing the effect to how increased bandwidth enabled the video revolution. Altman recently announced that OpenAI is launching new compute-heavy products, some of which will be restricted to Pro subscribers or carry additional fees due to their high infrastructure costs. He framed the push as an experiment in testing the limits of AI systems: "We also want to learn what's possible when we throw a lot of compute, at today's model costs, at interesting new ideas."

Other tech giants are facing similar challenges. Meta's Mark Zuckerberg recently stated that the company is making "compute per researcher" a core competitive advantage, investing heavily in both GPUs and custom-built infrastructure to maintain its edge.