AI, ethics and the critical role of today’s accountant

Accountants have a crucial role to play in verifying data derived from Generative AI at a time when ethical concerns surrounding the technology continue to grow, writes Philip Maguire

The term artificial intelligence (AI) has been around since 1956, with “traditional” AI focusing primarily on analysing data to make predictions. More recently, Generative AI (GAI) technologies such as OpenAI’s ChatGPT and Google’s Gemini have shifted the focus to generating content, with results often indistinguishable from human content.

GAI does not think; rather, it guesses the most likely answer to a question. These models generate human-like text and realistic images and videos.

For the accounting profession, AI’s strength lies in data collection, data analytics and report-writing capabilities—but there are also numerous concerns.

AI: implications for accountants

Accountants are struggling to grasp the implications of AI, with ethical considerations among our most pressing concerns.

The bedrock of AI is controversial: its collection of data is built on the abuse of copyright law.

The right of the author to be compensated, or at least acknowledged, when their work is being referenced, has been in place in Britain and America for over 300 years.

AI has overridden the principles of copyright protection through its voracious accumulation of data. Many AI models depend on Large Language Models (LLMs) that absorb millions of books, for example, to understand patterns in centuries of human literature.

In addition to the ethical concerns surrounding copyright, there are concerns that AI is not solving society’s problems but rather, in Western democracies at least, accelerating the spread of misinformation or “fake news”.

This, in turn, contributes to society’s mistrust of those in authority. Will AI eliminate low-wage jobs while providing employment to more highly educated employees, such as programmers? If so, it will widen the income divide in society.

Dario Amodei, co-founder of Anthropic and former Vice President of Research at OpenAI, has stated that the cost of training GAI models increased from tens of millions of dollars in previous years to US$1 billion in 2024.

This figure is expected to rise to $10 billion per year with the majority of spend dedicated to data centres, computer chips and electricity.

The International Energy Agency expects that, by 2026, data centres will consume 1,000 terawatt hours of energy annually. This is equivalent to Japan’s yearly energy needs.

AI computing requires water to cool the computers. By 2027, AI could require between 4.2 billion and 6.6 billion cubic metres of water annually—equivalent to the water requirements of four to six Denmarks.

Jim Covello, Goldman Sachs’ Head of Global Equity Research, has noted that companies using AI have yet to experience any resulting improvement in revenue.

The only companies making money from the technology are those providing the infrastructure, such as chipmaker Nvidia.

The unique risks of AI

Despite this, many organisations today are using, or experimenting with, AI tools to replace tasks or processes.

Since the finance department is charged with safeguarding an organisation’s assets, the accountant should ask the following questions:

  • Governance: Who in the organisation is responsible for AI? How will the organisation formulate policies to address the appropriate use of GAI? Do the Board of Directors and Audit Committee understand technology risks such as GAI, cyber threats, cloud computing and blockchain?
  • Regulatory: How will the organisation comply with the laws and regulations governing GAI? This is a particular challenge given how quickly these requirements are evolving.
  • Business case for GAI: Has the organisation identified appropriate processes where GAI can add value? Has the organisation defined parameters on the application of GAI?
  • Selection of AI technology: Does GAI correspond with the company’s strategic plan and information technology environment? If GAI is outsourced to another entity, does this third party provide a service auditor’s report? How easily can the GAI software be modified to suit the unique requirements of the organisation?
  • Staff competency: Can the organisation hire competent employees who understand GAI? How are staff trained to maintain their understanding of GAI?
  • Fraud: GAI creates additional fraud risks such as the ability to create fictitious vendors or inflate revenues. How will internal controls adapt to prevent, or detect in a timely manner, the unique risk profile of GAI?
  • Data privacy: What sensitive data is GAI using, or creating, and what controls are in place to protect this data?
  • Prompts: How is access to GAI restricted so that individuals cannot initiate unauthorised prompts to the software?
  • Security: Does GAI increase vulnerability to cyberattack? What additional controls are required to prevent data poisoning or malicious prompts?

  • Model performance: Is the GAI technology periodically evaluated to ensure it continues to add value? How will hallucinations, which can lead to unreliable results, be identified? What independent sources have been identified so that GAI-generated results can be verified?

How to address these risks

It is in relation to this last point—model performance—that the accountant can add most value.

A unique skill accountants possess is the ability to reconcile. Bank reconciliations and inventory counts, for example, verify that the balances in the general ledger agree with third-party data.

A common question US recruiters today ask C-level candidates (Chief Executive Officer, Chief Financial Officer and so on) is: “what can you do that GAI cannot?” A typical answer might be “think, apply judgement, distinguish between right and wrong”.

While this is a valid response, the volume and complexity of data generated by GAI makes it essential that this data can be independently verified—calling upon the accountants’ unique ability to provide assurance through reconciliation.

Although systems contain various preventive controls, such as usernames and passwords, detective controls are the most effective at identifying anomalies.

Below are two examples of the efficacy of detective controls. Although these examples do not relate to AI systems, both demonstrate the importance of confirming data to independent sources.

Example 1: The largest fraud in corporate America, Bernie Madoff’s Ponzi scheme, was detected by analyst Harry Markopolos.

Markopolos, who worked for a firm competing with Bernard Madoff Investment Securities LLC, reconciled the financial returns of Madoff securities to the stock market returns.

Markopolos realised these returns were inflated to such an extent that there was no possible explanation other than fraudulent reporting.

He uncovered this fraud nine years before it was eventually acknowledged by the US Securities and Exchange Commission.

Example 2: A more recent example is the sub-postmaster scandal in Britain. The scandal originated with errors in Horizon, the software program used by the Post Office.

The Post Office concluded, despite many indications to the contrary, that its sub-postmasters were stealing money. As a result, close to 900 sub-postmasters were convicted of fraud between 1999 and 2015.

Many lost their wealth, several served time in jail and 15 people took their own lives. What is particularly upsetting about these cases is the role of the judges presiding over the court proceedings.

An important attribute of adjudicating a case is to seek independent evidence that supports or refutes the charges.

The prevailing mindset of most of these judges was along the lines of: “if the system says that this is the number, then it must be right”.

Had they requested evidence of wrongdoing, it would have become obvious that the funds were not stolen and that the fault lay with the software program.

Although some proponents of GAI are suggesting a system can reconcile itself, this is not possible. To be effective, a reconciliation must balance to an independent source for verification.

Once the accountant identifies what data have been generated or materially influenced by GAI, it is simply a matter of reconciling the data to a trusted, third-party source.
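In practice, such a reconciliation can be largely automated once both data sets are available as line-item totals. The sketch below is purely illustrative; the account names, figures and tolerance are assumptions, not taken from any real system.

```python
# Illustrative sketch: reconcile GAI-generated balances to an independent source.
# All account names and figures here are hypothetical.

gai_balances = {          # balances produced or influenced by a GAI tool
    "1000-Cash": 52_400.00,
    "1200-Receivables": 18_750.00,
    "2000-Payables": -9_310.00,
}

third_party_balances = {  # e.g. bank statement or customer confirmations
    "1000-Cash": 52_400.00,
    "1200-Receivables": 17_500.00,   # confirmation differs from the GAI figure
    "2000-Payables": -9_310.00,
}

def reconcile(generated, independent, tolerance=0.01):
    """Return accounts where the generated figure differs from the independent source."""
    exceptions = {}
    for account in sorted(set(generated) | set(independent)):
        diff = generated.get(account, 0.0) - independent.get(account, 0.0)
        if abs(diff) > tolerance:
            exceptions[account] = round(diff, 2)
    return exceptions

# Any exception must be investigated and explained before the GAI figures are relied upon.
print(reconcile(gai_balances, third_party_balances))
```

The key design point is the one made above: the comparison source must be independent of the system being verified; reconciling a GAI output against another output of the same system provides no assurance.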

The accountant, therefore, has an essential role to play as a custodian of data in the age of AI. It is up to our profession to speak up now and take charge of this new technology as it continues to evolve and proliferate.

Philip Maguire, FCA/FCPA, is a principal in Glenidan Consultancy Ltd in Canada. His practice focuses on internal controls over financial reporting for publicly listed companies on the Toronto Stock Exchange. Maguire also teaches continuing professional development courses in Canada, England and Wales and Ireland