
What's Next for AlphaFold?

In 2017, shortly after completing his PhD in theoretical chemistry, John Jumper learned that Google DeepMind was quietly shifting from its well-known mission of building AI that could beat humans at games to a new, secret project: using artificial intelligence to predict protein structures. He applied immediately.

Three years later, he stood at the center of a scientific revolution. Working alongside CEO Demis Hassabis, Jumper co-developed AlphaFold2, a system capable of predicting the 3D structure of proteins with near-atomic precision. Its accuracy matched that of experimental methods that can take years, yet it delivered results in hours, solving a 50-year-old grand challenge in biology. "This is why I founded DeepMind. It's also why I've spent my entire career in AI," Hassabis once said. In 2024, Jumper and Hassabis were awarded the Nobel Prize in Chemistry for the breakthrough.

Five years ago, the world was stunned by AlphaFold2's debut. Now the initial excitement has settled, but its impact endures. How is it being used today? And what comes next? I sat down with Jumper to find out. "These past five years have been surreal," Jumper laughed. "I can't even remember what it was like not to be interviewed by so many journalists."

Since AlphaFold2, DeepMind has released AlphaFold Multimer, which predicts the structures of multi-protein complexes, and AlphaFold3, which can model not just proteins but also DNA, RNA, and small molecules. The team has also applied AlphaFold to UniProt, the world's most comprehensive protein database, predicting the structures of around 200 million proteins, nearly all known proteins. Yet Jumper remains cautious: "This doesn't mean every prediction is certain. It's a database of predictions, and it carries all the limitations of prediction."

Why is solving protein structure so hard? Proteins are life's molecular machines. They make up muscles, feathers, and horns; transport oxygen; send signals; fire neurons; aid digestion; and power the immune system.
All of this function depends on their 3D shape. But predicting that shape from a chain of amino acids is extraordinarily difficult: the number of structures a protein could in principle fold into is astronomically large, and finding the correct one is like searching for a single coin in the universe.

Jumper and his team used the Transformer neural network architecture, the same one behind large language models, to capture long-range relationships in protein sequences. But he credits their real breakthrough to speed: "We built a system that could give wrong answers incredibly fast. That allowed us to experiment boldly." They fed the model every piece of relevant information, including evolutionary data from diverse species, and the results far exceeded expectations. "We knew we had a paradigm shift. We knew we'd cracked it."

What surprised him most was how quickly researchers began using AlphaFold. "Typically, real impact comes from later generations, after the problems are ironed out. But I've been amazed by how responsibly scientists use it, neither over-trusting nor under-trusting. They match their confidence to the system's reliability."

One standout application? Bee disease research. A team used AlphaFold to study a protein linked to colony collapse disorder. "I never imagined AlphaFold would end up in bee science," Jumper said.

Then there are uses beyond its original purpose. One is protein design. David Baker, a fellow 2024 Nobel laureate, used AlphaFold's predictive power to accelerate synthetic protein design. His team's RoseTTAFold and experiments with AlphaFold Multimer now allow them to test whether a designed structure is feasible before any lab work, cutting design time roughly tenfold. "If AlphaFold is confident, go ahead. If it's uncertain, skip it. That alone changes the game."

Another unexpected use: AlphaFold as a structural search engine. Two research groups sought the protein on sperm that binds to an egg. They knew the egg protein but not its partner.
Using AlphaFold, they predicted how it might interact with all 2,000 sperm-surface proteins. The model confidently identified one candidate, and experiments confirmed it. "No one would have done 2,000 structure comparisons before. Now you can. That's how AlphaFold is changing science."

I reconnected with Kliment Verba, a molecular biologist at UCSF who adopted AlphaFold early on. "It's indispensable. We use it every day." But he is also clear about its limits: predicting interactions between multiple proteins, or between proteins and small molecules, remains challenging. "Sometimes you get a prediction and wonder: is this real or not? It's like ChatGPT, confidently saying both truth and nonsense." Still, his lab uses AlphaFold for virtual experiments, testing promising leads on the computer before investing in lab work. "It hasn't replaced experiments, but it's massively boosted efficiency."

Now a wave of new tools is emerging, as startups and labs build on AlphaFold for drug discovery. MIT and the AI drug company Recursion recently launched Boltz-2, which predicts not just protein structure but also how drug molecules bind to targets. Last month, Genesis Molecular AI released Pearl, an interactive model that incorporates user-provided data to guide predictions, claiming superior accuracy on drug-related tasks.

Will this speed up drug development? Jumper remains cautious. "Protein structure prediction is just one step in biology. We're not missing just one structure to cure disease." He compares it to the era when determining a single protein structure cost $100,000: "if it were that easy, someone would've done it already." Still, he hopes the tool will keep expanding its reach: "Now that we have a powerful hammer, let's use it to hit more nails."

His next goal? Merging AlphaFold's deep structural expertise with the broad reasoning abilities of large language models. "We already have systems that read scientific literature and reason through problems.
And we have models that predict protein structures at superhuman levels. The real challenge is making them work together."

That echoes DeepMind's AlphaEvolve, where one LLM generates hypotheses and another evaluates them. It has already led to real discoveries in math and computer science. When I asked if he's working on something similar, he smiled. "I can't say much. But if LLMs play a bigger role in scientific discovery in the future, I won't be surprised. It's a vast, open frontier."

As for his own path? "It's a little unsettling," Jumper said. "I'm probably the youngest chemistry laureate in 75 years." He added: "I'm likely at the midpoint of my career. My strategy is to start small, follow the threads, and let ideas unfold. My next paper doesn't need to be a Nobel contender. That's a trap."
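The "structural search engine" workflow described earlier, where researchers predicted a complex for every candidate partner and trusted only the confident hits, can be sketched roughly as follows. This is an illustrative sketch, not the teams' actual pipeline: the protein names, the `predict_complex` function, its mock scores, and the 0.8 cutoff are all hypothetical stand-ins for a real AlphaFold Multimer run and its per-complex confidence score.

```python
def predict_complex(bait: str, candidate: str) -> float:
    """Hypothetical stand-in for an AlphaFold Multimer run: a real
    pipeline would fold the (bait, candidate) pair and return the
    model's confidence in the predicted interface."""
    mock_scores = {"SPERM_042": 0.91, "SPERM_001": 0.34, "SPERM_002": 0.12}
    return mock_scores.get(candidate, 0.20)

def screen_candidates(bait, candidates, cutoff=0.8):
    """Score every candidate partner against the bait, rank by
    confidence, and keep only confident hits -- mirroring the
    'trust it when it is confident' usage described above."""
    scored = sorted(((c, predict_complex(bait, c)) for c in candidates),
                    key=lambda pair: pair[1], reverse=True)
    return [(c, s) for c, s in scored if s >= cutoff]

hits = screen_candidates("EGG_RECEPTOR", ["SPERM_001", "SPERM_042", "SPERM_002"])
print(hits)
```

In the real screens, the candidate list ran to roughly 2,000 sperm-surface proteins, and the single confident prediction was then confirmed experimentally; the pattern, predict everything cheaply and let the model's own confidence do the filtering, is what makes the approach tractable.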
