The Hidden Risk of Shadow AI: A Case Study
They say what you don’t know can’t hurt you, but CPAs know better. Often it is the unknowns, the hidden risks, that pose the greatest threat.
One such risk is the growing use of “shadow AI”: the practice of employees using AI without proper oversight or organizational awareness. Shadow AI represents a clear and present danger to the reputation of the CPA in question, their employer, and the profession itself, a profession built on the pillars of transparency, trust, technical excellence and a commitment to ethical practice.
According to a survey by KPMG, 48% of Canadians acknowledged using AI in ways that may not fully align with workplace guidelines, often due to uncertainty about appropriate usage. The same survey found that 55% have relied on AI outputs at work without evaluating the information.
The explosion of generative AI presents a unique opportunity for CPAs to focus on strategic and advisory services and increase productivity. However, progress on its adoption has been uneven, at best. In a survey of CPA Ontario members, only 29% of Ontario CPAs said their organizations currently use generative AI frequently or occasionally, while 49% reported their organizations use it rarely or never.
Time, tide and innovation wait for no one. The absence of awareness, guidelines and official policy on the use of AI doesn’t necessarily mean the technology isn’t being used.
Given their important governance role, CPAs have a part to play in establishing robust internal AI controls. The following case study lays out a scenario where the unsanctioned use of AI leads to very real consequences, and what could have been done to prevent the issue from arising in the first place.
Case Study:
*Note that this case study is fictitious and used for illustrative purposes only.
Herbert, Clarke and Heinlein (HCH) LLP is a medium-sized CPA firm that specializes in serving clients active in international shipping. While HCH LLP’s management has been debating how to incorporate AI into the firm’s work, no final decisions have been made on which tools to implement, the training that would be required or the governance of its use. Thus far, the firm has issued no clear guidance to staff on the use of AI.
Despite this lack of official guidance, one HCH LLP staff member has begun using generative AI tools to assist with data analysis and summaries for audit and compilation engagements. Without the knowledge of senior leadership, confidential business information for one of HCH LLP’s clients has been uploaded into an open-source large language model (LLM). The CPA in question performs no due diligence on the results the AI tool produces, allowing hallucinations and errors to be introduced into the firm’s work.
The firm’s routine quality management reviews begin uncovering substantial errors in the audit work, prompting an internal investigation. Once it is discovered that the CPA has been uploading client data into an open-source LLM, the firm has no choice but to disclose the issue to the client, who submits a complaint to CPA Ontario’s Standards Enforcement team.
The firm is concerned that the complaint could lead the CPA Ontario Professional Conduct Committee to open an investigation and a possible referral to the Discipline Committee, which could result in sanctions and publicity damaging the reputation of both the firm and the profession.
What could the firm have done differently?
While HCH LLP had not made a final decision on how to incorporate AI tools into its work, the obligations set out in the CPA Code of Professional Conduct governing the disclosure of confidential information apply equally to the use of artificial intelligence, and so do the accountabilities and responsibilities that accompany them.
HCH LLP should have put an AI policy in place for all staff members, with clear guidelines for its use. That policy should include restrictions on the use of confidential client data and disclosure requirements for when AI is used in the course of day-to-day work. HCH LLP should also have implemented training to ensure that the policy is understood by every member of the firm.
Artificial intelligence is no longer a hypothetical. It is the here and now. As its adoption continues to accelerate at a breakneck pace, it is incumbent on CPAs to ensure its use is grounded in ethics, governance and trust.

