Stanford Faculty Navigate AI in Research: Balancing Innovation, Ethics, and Scholarly Integrity

As artificial intelligence tools grow more prevalent in academic research at Stanford, faculty across disciplines are carefully evaluating how to integrate the technology without undermining scholarly integrity, ethical standards, or the essential role of human judgment.

Kathryn Olivarius, an associate professor of history, approaches AI with caution. Her work on 19th-century U.S. history depends on deep engagement with physical archives, many of which remain undigitized, and on original interpretation. “ChatGPT or generative AI is not the archive plugged in,” she said, noting that students often assume all historical sources are available online. While she has used AI as a “very good copy editor,” she firmly rejects using it to generate research drafts. “For most academics, your thinking and good ideas come through the hard work of writing,” she said. “I don’t see myself ever outsourcing that part of my research.”

Olivarius also raised ethical concerns, calling the current moment in AI adoption “the Wild West.” She warned that presenting an idea that isn’t genuinely your own may amount to plagiarism. She tested AI by having it write an essay in her field and found that it made serious interpretive errors that only a subject-matter expert could detect. “The problem is, when you’re not an expert, you won’t know what it got wrong,” she said, underscoring the risk of misinformation in non-specialist hands.

In response, other faculty are helping researchers use AI more thoughtfully. Jooyeon Hahm, head of Data Science Training and Consultation at the Center for Interdisciplinary Digital Research, has seen consultation requests shift from general AI awareness to specific technical guidance. Researchers are now asking about cost-effective API usage, model performance, and how to avoid common pitfalls. “This shift reflects a maturing relationship with AI,” Hahm said, describing a move from curiosity toward critical, ethically grounded integration. There is also growing demand for a deeper understanding of how AI works, including the mechanics of transformer architectures and the limitations of large language models. In qualitative research, scholars are using LLMs for coding and data extraction, but they remain vigilant about hallucinations and inaccurate outputs that could compromise results. “I don’t think AI is fundamentally reshaping the skills researchers need,” Hahm said. “If anything, the core skills of critical reading, writing, and thinking have become more important, not less.”

Jef Caers, a professor of earth and planetary sciences, works in a field where AI is already a key tool. His research on decision-making under uncertainty in mineral exploration and geothermal energy relies on AI to process massive, complex datasets that would be impossible to analyze manually. He sees AI not just as a time-saver but as a way to improve decision quality. “When people do mineral exploration, they often overlook environmental impact,” he said. By incorporating sustainability and community concerns early, AI can help create more responsible, long-term solutions. “AI helps optimize operations not just for productivity, but for broader values like environmental protection and social equity,” he explained. While generative AI attracts much of the attention, Caers believes its role in research is often exaggerated. “AI won’t understand the full complexity of these systems,” he said. “It can’t replace human judgment.”

As Stanford develops policies and best practices for AI use, faculty agree that the real challenge isn’t adopting the technology but ensuring that it enhances, rather than weakens, the core values of academic research: rigor, accountability, and intellectual collaboration.
