In a striking case that underscores rising concerns over the misuse of artificial intelligence in professional settings, a senior partner at KPMG Australia has been fined A$10,000 for using AI tools to cheat on an internal training course about responsible AI use. The partner uploaded official course materials into an external AI platform to generate answers for an internal assessment on ethical AI practices, breaching company policy and prompting disciplinary action.
The incident came to light during routine internal compliance checks introduced by the firm in 2024. According to company sources, the partner’s use of AI was flagged by KPMG’s internal detection systems, and the individual was required to retake the assessment. The case is part of a wider pattern at the firm: 28 staff members have been found to have used generative AI tools to complete internal training exams since July. Most were employees at manager level or below, but the involvement of a senior partner and registered company auditor has amplified scrutiny.
KPMG Australia’s chief executive, Andrew Yates, acknowledged that the rapid adoption of AI tools across industries has made monitoring and regulating their use within organisations increasingly challenging. He noted that while some employees may view AI as a convenient shortcut, such misuse undermines the intent of internal training designed to promote ethical and responsible AI deployment. KPMG plans to disclose incidents of AI-related cheating in its annual reports and is stepping up efforts to tighten monitoring, improve detection technologies and strengthen enforcement of its internal AI policies.
The case highlights broader concerns about AI-assisted cheating across sectors, not just in corporate training but also in academic and professional examinations. Globally, institutions are grappling with how to uphold integrity when AI is used to automate answers, complete tasks and generate content. Independent studies have shown that AI-generated responses can go undetected in traditional assessments, raising questions about the effectiveness of detection and enforcement mechanisms. As generative AI becomes more widespread, firms and regulators may need clearer guidelines and stronger oversight to prevent misuse and protect the credibility of professional qualifications.