
KPMG Survey Reveals 57% of Workers Conceal AI Use from Bosses, Citing Job Security and Lack of Training


A recent KPMG study, conducted in collaboration with the University of Melbourne, sheds light on a concerning workplace trend: a significant number of employees are hiding their use of artificial intelligence (AI) tools from their bosses and colleagues. The survey, "Trust, attitudes, and use of artificial intelligence: a Global Study 2025," polled 48,340 people across 47 countries between November 2024 and January 2025 and found that 57% of workers have concealed their AI use and presented AI-generated content as their own. The findings also show how deeply AI has permeated the modern workplace: 58% of respondents intentionally use AI for work tasks, and about one-third use it at least once a week.

According to Nicole Gillespie, a professor of management and chair of trust at the University of Melbourne's business school, covert AI use is driven by several factors. One is the pressure to stay competitive with peers who may be leveraging AI to boost productivity and efficiency; employees fear falling behind and jeopardizing their job security if they do not adopt the tools. The allure of AI's benefits can also tempt some to keep using it even when doing so conflicts with company policy.

The lack of transparency and proper training poses significant risks, however. Only 47% of employees globally have received AI training, a widespread knowledge gap that has fostered irresponsible practices: 66% of employees use AI tools without verifying the accuracy of the outputs, 48% have uploaded company data to public AI platforms, and 56% have made mistakes because of AI-generated content. These behaviors expose organizations to errors, data breaches, and regulatory trouble.

Sam Gloede, global trusted AI transformation leader at KPMG International, emphasized that a lack of transparency can erode trust in AI systems, which is essential for their effective and safe implementation. Trust, she noted, is a crucial strategic asset for organizations, enabling innovation and growth. For AI to be trusted, it must demonstrate robust technical capability, be tailored to specific purposes, and be reliable.

To mitigate these risks, the study recommends that organizations improve AI literacy and governance. Employees need foundational training in what AI is and its ethical implications, along with role-specific training to make the best use of it in their tasks. The authors also suggest creating an environment where employees can use AI openly, share their experiences, and experiment safely, fostering a culture of transparency and continuous learning that reduces the likelihood of hidden, risky practices.

Interestingly, the study found that trust in AI is higher in emerging economies than in advanced ones. In countries such as India, Nigeria, and Saudi Arabia, 82% of respondents expressed high trust in AI, compared with 65% in more developed nations. This higher trust correlates with better AI literacy and training, underscoring the importance of both to successful adoption.

KPMG, a leading professional services network, positions itself as a consultant for organizations navigating the complexities of AI integration, and the firm's work in this area emphasizes the need for structured AI strategies to build trust and mitigate risk.
Gillespie and Gloede's insights highlight the dual challenge of covert AI use by employees and the critical role of training and governance in ensuring safe, effective adoption. Their recommendations offer organizations a roadmap for harnessing AI's benefits while maintaining integrity and security.
