Study Shows ChatGPT Essays Lack Personal Touch Compared to Student Work, Raising Ethical Concerns in Education
AI-generated essays, particularly those produced by ChatGPT, do not yet match the quality and engagement of work produced by actual university students, according to new research from the University of East Anglia (UEA) in the UK. In a study published today, researchers compared 145 essays written by real students with an equal number of essays generated by ChatGPT. While the AI essays demonstrated impressive coherence and grammatical accuracy, they lagged significantly behind in personal engagement.

Prof Ken Hyland, from UEA's School of Education and Lifelong Learning, highlighted the growing concern among educators about the potential misuse of AI tools like ChatGPT. Since its public release, ChatGPT has sparked significant anxiety, as teachers worry that students might use it to complete their assignments, facilitating academic dishonesty and weakening essential literacy and critical thinking skills.

To address these concerns, the study focused on "engagement markers," such as questions, personal commentary, and direct appeals to the reader. These elements are crucial for creating interactive and persuasive essays.

The analysis revealed that student-written essays consistently employed a variety of engagement strategies, enhancing clarity, connection, and the strength of their arguments. These essays included rhetorical questions, personal insights, and direct addresses to the reader, all of which contributed to more engaging and nuanced writing.

In contrast, ChatGPT-generated essays, while linguistically fluent, were notably impersonal. They adhered to academic writing conventions but failed to inject a personal touch or demonstrate a clear stance. The AI essays generally avoided questions and minimized personal commentary, resulting in less engaging and less persuasive content.
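To give a rough sense of what counting "engagement markers" might look like in practice, here is a minimal sketch in Python. This is purely illustrative and is not the researchers' actual method: the marker categories, regular expressions, and sample sentence below are assumptions chosen for demonstration.

```python
import re

# Hypothetical marker categories of the kind the study describes:
# questions, direct appeals to the reader, and personal commentary.
ENGAGEMENT_PATTERNS = {
    "questions": re.compile(r"\?"),
    "reader_address": re.compile(r"\b(you|your|we|us|our)\b", re.IGNORECASE),
    "personal_commentary": re.compile(r"\b(I|my|personally)\b"),
}

def count_engagement_markers(essay: str) -> dict:
    """Return raw counts of each marker category found in the essay text."""
    return {name: len(pattern.findall(essay))
            for name, pattern in ENGAGEMENT_PATTERNS.items()}

# A toy "student-style" sentence rich in engagement markers.
student = "Why does this matter? In my view, we owe it to our readers to ask."
counts = count_engagement_markers(student)
# counts -> {'questions': 1, 'reader_address': 2, 'personal_commentary': 1}
```

A real corpus study would of course rely on far more sophisticated, context-aware annotation than simple pattern matching, but the sketch conveys the basic idea of tallying how often a text reaches out to its reader.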
"ChatGPT's inability to create a personal connection or convey a strong perspective stems from its training data and statistical learning methods, which prioritize coherence over conversational nuance," explained Prof Hyland. The study's findings could help educators develop better strategies to identify machine-generated essays, thereby preventing cheating and promoting integrity in academic assessments. However, the researchers emphasize that AI tools should not be outright banned but instead integrated as educational aids. Prof Hyland stated, "When students come to school, college, or university, we’re not just teaching them how to write; we’re teaching them how to think. This is a skill that no algorithm can replicate." The study, titled "Does ChatGPT Write Like a Student? Engagement Markers in Argumentative Essays," was conducted in collaboration with Prof Kevin Jiang from Jilin University in China and is published in the journal Written Communication. By understanding the limitations of AI in generating truly engaging and personal content, educators can better guide their students to develop critical literacy and ethical awareness in the digital age.
