
GPT-5 Boosts Wet Lab Efficiency 79x by Designing Novel Cloning Protocol, Showcasing AI’s Potential in Biological Research

A recent experiment demonstrates the potential of advanced AI models such as GPT-5 to accelerate biological research in real laboratory settings. Working with Red Queen Bio, researchers tested how well the model could improve a standard molecular biology protocol, Gibson assembly, through iterative, data-driven optimization with minimal human input. The goal was to enhance cloning efficiency, a core technique in genetic engineering, protein design, and functional genomics.

Over multiple rounds, GPT-5 autonomously proposed changes to the protocol, analyzed the experimental results, and refined its approach. The model introduced a novel method called RecA-Assisted Pair-and-Finish HiFi Assembly (RAPF-HiFi), which incorporates two proteins not previously used together in standard cloning: RecA from E. coli and the phage T4 gene 32 single-stranded DNA-binding protein (gp32). These proteins act in sequence: gp32 untangles DNA ends and RecA guides them to their correct partners, after which the standard Gibson enzymes complete the assembly.

The result was a 79-fold increase in cloning efficiency over the baseline HiFi Gibson protocol. The improvement was validated across replicates and confirmed through control experiments that ruled out alternative explanations: when RecA, or both RecA and gp32, were omitted, performance dropped significantly, showing that both proteins are essential.

GPT-5 also optimized the transformation step, the process of getting DNA into bacterial cells, by proposing a simple but effective change: pelleting the cells at 4°C, removing half the volume, and resuspending them before adding DNA. This change increased transformation efficiency more than 30-fold, despite the fragility typically associated with high-efficiency chemically competent cells.

Both innovations emerged without human guidance beyond clarifying questions. The model used a fixed prompting strategy and relied solely on feedback from experimental outcomes to refine its proposals.
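The iterative loop described above, in which the model proposes a protocol variant, the lab runs it, and the measured efficiency feeds back into the next proposal, can be sketched in Python. All names here (`propose_revision`, `run_experiment`) are illustrative stand-ins, not the actual system: the real loop paired GPT-5 with wet-lab Gibson assembly assays rather than these mock functions.

```python
# Minimal sketch of a closed-loop protocol-optimization cycle.
# The model and the wet-lab assay are replaced by toy stand-ins.

def propose_revision(history):
    """Stand-in for the model: suggest the next protocol variant
    based on all (variant, efficiency) results seen so far."""
    if not history:
        return "baseline HiFi Gibson"
    best_variant, _ = max(history, key=lambda r: r[1])
    return best_variant + " + tweak"

def run_experiment(variant):
    """Stand-in for a wet-lab assay: returns a mock cloning
    efficiency that grows with each accumulated tweak."""
    return 1.0 + 2.0 * variant.count("tweak")

def optimize(rounds):
    """Run the propose -> assay -> record loop for a fixed
    number of rounds and return the best variant found."""
    history = []
    for _ in range(rounds):
        variant = propose_revision(history)
        efficiency = run_experiment(variant)
        history.append((variant, efficiency))
    return max(history, key=lambda r: r[1])

best, eff = optimize(rounds=5)
print(best, eff)
```

The key design point mirrored here is that the only channel from lab to model is the recorded experimental outcome; the prompting strategy itself stays fixed across rounds.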
This closed-loop system demonstrated the model's ability to engage in genuine scientific reasoning, propose novel mechanisms, and iterate toward better solutions.

The experiments were conducted under strict biosecurity controls: a benign experimental system was used, the scope was limited, and model behavior was monitored to inform risk assessments and safeguard development. The findings align with the broader Preparedness Framework for AI safety, which emphasizes robust evaluation and mitigation strategies as AI systems interact more deeply with the physical sciences.

To scale the process, the team developed a robotic lab system called Robot on Rails. It translates natural-language protocols into robotic actions, uses real-time vision to locate labware, and plans safe, accurate movements. Robot-executed experiments produced lower absolute colony counts (about tenfold fewer than manual runs), but the fold-changes were similar, suggesting room for improvement in precision and handling.

These results point to a transformative vision: AI systems that continuously learn from real-world experiments, collaborate with scientists, and accelerate discovery. Although current systems still require human oversight and refinement, they signal a future in which AI acts not just as a literature assistant but as a co-investigator in the wet lab.

The work also raises important questions about safety and control. The ability of AI to design and execute complex biological procedures underscores the need for strong, adaptive safeguards. As models grow more capable, the focus must remain on responsible development, rigorous evaluation, and transparent risk management.
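The observation that robot runs gave roughly tenfold fewer colonies yet similar fold-changes follows directly from how fold-change is computed: each condition is compared to the baseline within the same run, so a uniform drop in absolute counts cancels out. The colony counts below are illustrative numbers chosen to show the arithmetic, not data reported from the study.

```python
# Fold-change compares each condition to its own run's baseline,
# so a uniform scale factor on absolute counts cancels out.
# Colony counts below are illustrative, not reported data.

manual = {"baseline": 100, "optimized": 7900}
robot = {"baseline": 10, "optimized": 790}  # ~10x lower absolutes

def fold_change(counts):
    """Ratio of optimized-condition colonies to baseline colonies."""
    return counts["optimized"] / counts["baseline"]

print(fold_change(manual))  # 79.0
print(fold_change(robot))   # 79.0
```

This is why fold-change is the right metric for comparing manual and robotic execution, while the lower absolute counts separately flag a precision and handling gap to close.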
