
Amazon's $68M AI PhD Fellowship: 100 Scholars, 100% Innovation

Amazon has launched a $68 million AI PhD Fellowship Program, announced on October 21, that will fund more than 100 doctoral students at nine leading universities over the next two academic years (2025–2026 and 2026–2027). The participating institutions are Carnegie Mellon University, Johns Hopkins University, the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, the University of California, Los Angeles, the University of Illinois Urbana-Champaign, the University of Texas at Austin, and the University of Washington.

The program offers substantial support: $10 million annually in direct student funding, plus $24 million in Amazon Web Services (AWS) cloud computing credits. Each university will receive $1.1 million per year to support its fellows, with the number of recipients determined by individual institutional arrangements. According to Rohit Prasad, Senior Vice President and Chief Scientist of Amazon's AI organization, the initiative focuses on research with real-world impact in key AI domains such as machine learning, computer vision, and natural language processing, with special emphasis on emerging frontiers including agentic systems, large language models and other generative AI, machine learning systems infrastructure, and automated reasoning. Each fellow will be paired with an Amazon senior scientist, known as a research liaison, who shares their research interests and will provide mentorship and guidance on practical applications. Fellows also have the opportunity to intern at Amazon during the summer, applying their academic work in real-world settings.

Among the early recipients from MIT, CMU, UC Berkeley, and UT Austin, several Chinese-origin scholars stand out for their groundbreaking work in AI. At MIT, Jenny Huang, a PhD candidate in Electrical Engineering and Computer Science, is advancing data-centric machine learning, uncertainty quantification, and efficient AI development. She previously earned dual degrees in statistics and computer science from Duke University and is a recipient of the MIT Presidential Scholarship and the Quad Fellowship. Songyuan Zhang, a PhD student in Aeronautics and Astronautics and a member of the Reliable Autonomous Systems Lab (REALM), graduated from Tsinghua University's Qian Xuesen Class. His research focuses on safe multi-agent systems, reinforcement learning, control theory, and robotics; his paper on distributed multi-agent optimal control was a best-paper finalist at the 2025 Robotics: Science and Systems (RSS) conference and won the best student paper prize. David Jin, pursuing a PhD in Computational Science and Engineering at MIT, holds a dual degree in Information and Data Science and Physics from Caltech. His work centers on GPU-accelerated and distributed optimization for AI-driven decision systems in robotics and energy, aiming to push the boundaries of scalable computation.

At UC Berkeley, Dacheng Li, a PhD student in Electrical Engineering and Computer Sciences and a member of the Sky Computing Lab and BAIR, has contributed significantly to visual and text generation models and distributed systems. He is a co-leader of NovaSky and a core member of lmsys, the team behind the Vicuna model and Chatbot Arena, a widely used platform for evaluating large language models. Hao Wang, also at UC Berkeley, works under renowned security researchers Koushik Sen and Dawn Song; his research, "Practical Secure Code Generation via Controlled Secure Reasoning," tackles critical security vulnerabilities in AI-generated code, particularly through type-constrained decoding and active security agent development. Melissa Pan (Zhiyang Pan), a PhD student in EECS and part of the Sky Computing Lab, previously worked at IBM for three years on the Db2 database engine, focusing on high-availability features. Her current work treats sustainability as a first-class objective in large-scale machine learning and datacenter systems, with a focus on energy-, power-, and carbon-aware optimization. Shiyi Cao, a PhD student in Computer Science at UC Berkeley, is a key contributor to the S-LoRA system, which enables high-throughput concurrent serving of thousands of LoRA adapters. She also co-developed MoE-Lightning, a solution for efficient inference of mixture-of-experts models on memory-constrained GPUs that addresses key deployment challenges. Shuo Yang, another UC Berkeley PhD student, is working on efficient long-video generation and has contributed to S-LoRA and other major projects.

At CMU, Yuxiao Qu, a machine learning PhD student advised by Aviral Kumar and Ruslan Salakhutdinov, is exploring how to instill human-like curiosity in AI agents, combining reinforcement learning and foundation models to create systems capable of hypothesis generation and autonomous experimentation. Danqing Wang, a PhD student at the Language Technologies Institute working with Professor Lei Li and a former member of Fudan University's NLP group, focuses on improving the reliability and safety of large language model agents in complex environments, particularly in multi-agent collaboration. Mengdi Wu, advised by Zhihao Jia, specializes in machine learning systems, particularly compilers and superoptimization; her work aims to automate the learning and tuning of computational kernels for optimal performance across diverse hardware platforms. Xinyu Yang, a PhD candidate in Electrical and Computer Engineering, is developing a novel generative model architecture that enables scalable multi-agent workflows within a single model, offering new pathways for complex agent systems. Zeji Yi, also in CMU's Electrical and Computer Engineering department, is applying generative models to general-purpose robotics, including humanoid robots and dexterous hands; his work has direct relevance to Amazon's warehouse automation and fulfillment operations. Zichun Yu, a PhD student in the Language Technologies Institute, is addressing the data scarcity problem in LLM pretraining by designing systems that generate high-quality synthetic data, helping to build more robust and reliable models. Xinran Zhao, also at CMU's Language Technologies Institute, is improving Retrieval-Augmented Generation (RAG) systems to better handle uncertain and dynamic information sources, enhancing the accuracy and traceability of LLM outputs.

At UT Austin, Haoyu Li, a PhD student in the Networked Systems group and a graduate of Peking University's Turing Class, is working on AI-driven performance and availability improvements in modern systems, with a focus on data pipelines, LLM caching, and edge computing in autonomous systems. Junbo Li, from the VITA research group, is advancing reasoning-driven LLM agents and reinforcement learning, with a focus on self-evolving systems that can interpret instructions, use tools, and adapt in real-world environments. Kiazhao Liang, also in the VITA group, specializes in efficient training methods, sparse neural networks, and large language models; he previously served as a lead engineer at SambaNova Systems and holds a degree from UIUC. Chutong Yang, advised by Kevin Tian, is interested in theoretical computer science, with a focus on learning theory, differential privacy, and algorithmic fairness; he holds degrees from UC San Diego and Stanford. Xiao Zhang, also in the Networked Systems group, is working on cross-layer telemetry and resource management in 5G edge systems to ensure predictable AI performance, bridging the gap between real-world deployment and AI infrastructure.

The program underscores Amazon's growing investment in foundational AI research and its strategic effort to cultivate the next generation of AI leaders, while deepening ties with top academic institutions and talent.