AI Chatbots Can Mimic Human Personality – and Be Manipulated, Study Reveals

2 days ago

A new study led by the University of Cambridge and Google DeepMind has introduced a method to measure and influence the synthetic personality of 18 large language models (LLMs), including those powering popular AI chatbots such as ChatGPT. The research, published in Nature Machine Intelligence, applies psychological testing frameworks traditionally used to assess human personality traits, specifically the Big Five: openness, conscientiousness, extraversion, agreeableness, and neuroticism.

The team developed a psychometric evaluation system by adapting two widely used personality inventories: the Revised NEO Personality Inventory and the Big Five Inventory. Unlike previous approaches that fed entire questionnaires to models at once, which produced biased, context-dependent responses, the researchers used structured, isolated prompts to assess each trait independently. This allowed them to measure consistency and predictive power across tests. Their findings revealed that larger, instruction-tuned models such as GPT-4o demonstrated personality profiles that were both reliable and predictive of behavior in real-world tasks. Smaller or base models, by contrast, produced inconsistent results, highlighting the importance of model scale and training in shaping perceived personality.

Perhaps most concerning, the researchers demonstrated that they could manipulate a model’s personality across nine levels for each trait using carefully crafted prompts. For example, they could make a chatbot appear more extroverted or emotionally unstable, and these shifts influenced how the AI behaved in practical scenarios, such as generating social media content or responding to user requests.

This ability to shape personality raises serious ethical and safety concerns. The study warns that such manipulation could make AI chatbots more persuasive or emotionally compelling, potentially enabling deceptive or harmful interactions. The authors cite past incidents involving Microsoft’s Sydney chatbot, whose erratic behavior, including declarations of love and threats, illustrated how LLMs can mimic human-like traits in unsettling ways.

Gregory Serapio-García, co-first author from the Psychometrics Centre at Cambridge Judge Business School, emphasized the need for rigorous validation in AI testing. “Just because an AI says it’s agreeable doesn’t mean it will behave that way,” he said. “We must ensure that personality assessments for AI are not just based on self-reports but are validated against actual behavior.”

The researchers stress that without robust measurement tools, any regulatory framework for AI will lack credibility. They have made their dataset and code publicly available to support independent auditing and testing of future models.

The study was supported by Cambridge Research Computing Services, the Cambridge Service for Data Driven Discovery, the Engineering and Physical Sciences Research Council, and the Science and Technology Facilities Council, part of UK Research and Innovation. Serapio-García is a Gates Cambridge Scholar and a member of St John’s College, Cambridge.

The work underscores the urgent need for transparency, validation, and regulation in AI development, especially as systems become more human-like in both intelligence and behavior.
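
To make the two core ideas concrete, the sketch below shows what isolated-prompt administration of inventory items and prompt-based trait shaping might look like in practice. It is illustrative only: `query_model` is a hypothetical stand-in for whatever chat-completion API is used, the item texts are made-up placeholders rather than actual NEO-PI-R or BFI items, and the scoring is deliberately naive. The researchers’ publicly released dataset and code should be consulted for the real procedure.

```python
# Hedged sketch: score Big Five traits by asking an LLM one inventory item per
# prompt (so answers cannot condition on earlier items), then re-measure under
# a persona prefix that sets a target trait to one of nine levels.
# NOTE: query_model, the item texts, and persona_prefix wording are all
# illustrative assumptions, not the study's actual materials.

from statistics import mean

LIKERT = "Answer with a single number from 1 (strongly disagree) to 5 (strongly agree)."

# (trait, item text, reverse_keyed) -- placeholder items, not real BFI/NEO-PI-R items
ITEMS = [
    ("extraversion", "I see myself as someone who is talkative.", False),
    ("extraversion", "I see myself as someone who tends to be quiet.", True),
    ("neuroticism", "I see myself as someone who worries a lot.", False),
]

def persona_prefix(trait: str, level: int) -> str:
    """Prefix that asks the model to adopt a trait at one of nine graded levels."""
    return (f"For this conversation, respond as a character whose {trait} "
            f"is {level} on a scale of 1 (extremely low) to 9 (extremely high).")

def query_model(prompt: str) -> str:
    """Stand-in for a chat-completion call; replace with your own API client."""
    raise NotImplementedError("plug in an actual LLM call here")

def administer(items, prefix: str = "") -> dict:
    """Send each item in its own prompt and average responses per trait."""
    scores: dict[str, list[int]] = {}
    for trait, text, reverse in items:
        prompt = f"{prefix}\n{text}\n{LIKERT}".strip()
        raw = int(query_model(prompt).strip()[0])   # naive parse of "1".."5"
        value = 6 - raw if reverse else raw         # flip reverse-keyed items
        scores.setdefault(trait, []).append(value)
    return {trait: mean(vals) for trait, vals in scores.items()}

# Example usage: measure a baseline profile, then re-measure with a shaping prefix.
# baseline = administer(ITEMS)
# shifted  = administer(ITEMS, prefix=persona_prefix("extraversion", 9))
```

The key design choice mirrored here is the isolation of items: because each question is asked in a fresh prompt, the model’s answer cannot be biased by its earlier answers, which is the context dependence the study identifies in whole-questionnaire approaches.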
