
Tech Companies Ignore AI Cheating by Students, Pushing Responsibility to Educators

Tech companies are largely ignoring the growing use of their AI tools for cheating in schools, despite mounting evidence that students are leveraging advanced AI agents to complete assignments, take quizzes, and even impersonate classmates online. These companies, aware that young users represent their future customer base, actively court students through free trials, promotional giveaways, and referral incentives. OpenAI offered free yearlong access to ChatGPT Plus during finals season, while Google and Perplexity provide premium AI tools at no cost to students.

Perplexity has gone further, openly embracing its reputation as a cheating aid. In October, the company ran Facebook and Instagram ads showing students using its AI browser Comet to complete multiple-choice homework. One ad features a student admitting peers use the tool to cheat, while another shows an actor claiming Comet can take quizzes on their behalf. When a video surfaced online demonstrating an AI agent completing a real homework assignment, Perplexity CEO Aravind Srinivas reposted it with the comment, "Absolutely don't do this," underscoring the irony of promoting a tool while simultaneously warning against its misuse. Perplexity's spokesperson Beejoli Shah dismissed concerns by comparing AI tools to the abacus, arguing that all learning aids have been misused throughout history. "Cheaters in school ultimately only cheat themselves," she said.

But educators argue that the scale and sophistication of today's AI agents make this a fundamentally different problem. AI agents can now autonomously navigate websites, submit assignments, and mimic human behavior, making them difficult to detect. College instructional designer Yun Moh demonstrated this by showing how an AI agent pretending to be a student completed a class assignment on Canvas, a widely used learning platform owned by Instructure. Moh reached out to Instructure's team, requesting that AI agents be blocked from impersonating students.
After nearly a month, the company responded, acknowledging the issue but stating it could not fully prevent external AI agents from operating on students' devices or bypassing platform safeguards. Instructure's spokesperson Brian Watkins emphasized that AI agents running locally on student devices cannot be controlled by the platform. While the company has some measures to verify third-party access, it admits it cannot stop unauthorized AI use. IT teams have struggled to detect patterns like rapid-fire submissions, as agents adapt their behavior to avoid detection.

In September, Instructure did take action against another AI tool: Google's "homework help" button in Chrome, which allowed students to use Google Lens to snap pictures of quiz questions and instantly search for answers. Educators raised alarms, prompting Google to pause the feature temporarily. Google said the button was just a test of an existing shortcut and that student feedback influenced the decision. Still, a company blog post written by an intern praised the tool as "a lifesaver for school," suggesting future iterations are likely.

Meanwhile, OpenAI has introduced a "study mode" in ChatGPT that avoids giving direct answers, and vice president Leah Belsky has stated that AI should not function as an "answer machine." But these efforts come after widespread misuse has already taken root. Educators like Moh and Mills are calling for tech companies to take responsibility. The Modern Language Association's AI task force has urged companies to give teachers control over how AI tools are used in classrooms. Yet, despite these appeals, enforcement remains in the hands of teachers, who are left to manage the fallout from tools released before ethical guidelines exist. Ultimately, the burden falls on educators to navigate a landscape shaped by corporate innovation, not oversight.
As AI tools evolve faster than policies can keep up, the promise of AI in education risks being undermined by its misuse—while the companies behind it remain largely unaccountable.
