Caribbean Digital Compass


AI auditing for MSMEs: Is it up to the job?


Michele Marius

AI is everywhere. Moreover, organisations are being continually encouraged to integrate generative AI models into their systems and processes, as greater efficiency, increased productivity and cost savings could be realised. However, what is not always appreciated is that not only is your organisation using AI, but companies from which you purchase goods and services may also be using AI, which raises several issues that ought to be considered.

A recent article by the Association of Chartered Certified Accountants (ACCA) highlighted how digital technology is transforming the audit process and advocated for the adoption of technology in auditing, particularly by smaller firms. Though there might be the perception that smaller audit firms are less likely to engage with advanced technology, they tend to be more agile than their larger counterparts in selecting and using software solutions that are becoming increasingly powerful. Further, various audit software solutions were suggested in the article that could improve audit quality and the bottom line for these firms.

But again, if audit firms, which by the nature of their function are supposed to verify compliance, are themselves using AI, what risks (if any) could clients be exposed to? And since AI use is virtually inevitable, how can clients ensure that those risks are being managed by their auditors?

How can AI be used in the audit process?

Although the ACCA article would have focused primarily on financial auditing, AI tools have the potential to revolutionise both financial auditing and internal auditing.

First, AI tools can facilitate enhanced data analysis, such as detecting fraud and assessing risk, and can support the audit planning process. In detecting fraud, AI algorithms can analyse vast datasets to identify unusual patterns, anomalies, and potential fraud indicators, including suspicious transactions and money laundering schemes. Further, AI can help evaluate risks more accurately by analysing historical data, market trends, and regulatory changes, allowing auditors to prioritise their efforts and focus on areas of higher risk and potential control weaknesses. It can also provide insights and recommendations based on the data analysed, thereby enhancing auditors’ judgment and decision-making.
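To make the anomaly-detection idea concrete, here is a deliberately simple Python sketch that flags unusually large transactions using a z-score test. The transaction figures and threshold are invented for illustration; the tools auditors actually use rely on far richer models.

```python
# Toy sketch: flag transactions whose z-score exceeds a threshold.
# All figures are invented; real fraud detection uses far richer models.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.5):
    """Return (index, amount) pairs that deviate sharply from the mean."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [(i, a) for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

transactions = [120, 95, 110, 130, 105, 98, 12500, 115, 102]
print(flag_anomalies(transactions))  # → [(6, 12500)]
```

Even this crude version surfaces the outlying transaction for human review, which is the essence of the approach; production systems simply do so across many more features and far larger datasets.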

Second, the considerable processing capability of AI also allows routine tasks to be automated, such as data entry, document review, and accounts/transactions reconciliation, freeing up auditors to focus on more complex and value-added activities and resulting in a more efficient (and possibly faster) auditing exercise. Finally, unlike human workers, who can make errors or exercise poor judgement, AI can help ensure consistent application of auditing standards and reduce the risk of human error – provided the platforms are properly trained.
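As a minimal illustration of automated reconciliation, the Python sketch below matches two toy ledgers on reference and amount, and surfaces the exceptions for an auditor to review. All entries are invented, and real reconciliation engines handle fuzzier matching (dates, partial payments, currency differences).

```python
# Toy sketch: reconcile an internal ledger against a bank statement by
# exact (reference, amount) match and list the unmatched exceptions.
ledger = [("INV-001", 250.00), ("INV-002", 99.50), ("INV-003", 410.00)]
bank   = [("INV-001", 250.00), ("INV-003", 410.00), ("INV-004", 75.25)]

def reconcile(ledger, bank):
    ledger_set, bank_set = set(ledger), set(bank)
    return {
        "matched": sorted(ledger_set & bank_set),
        "ledger_only": sorted(ledger_set - bank_set),  # needs follow-up
        "bank_only": sorted(bank_set - ledger_set),    # needs follow-up
    }

result = reconcile(ledger, bank)
print(result["ledger_only"])  # → [('INV-002', 99.5)]
```

The point of the automation is not that the matching is clever, but that the machine clears the bulk of routine matches so human attention goes only to the exceptions.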

Possible drawbacks to using AI when auditing

On the other hand, the use of AI ought to be carefully managed. First, AI platforms are only as good as the data upon which they have been trained; in other words, there is a heavy reliance on high-quality data, especially to contextualise the analysis that must be performed. Inaccurate or biased data can lead to erroneous outputs, which auditors and their clients could be relying on to make high-stakes decisions.

Second, it is crucial to appreciate that although AI models are manmade, to some degree, they are also black boxes, as it can be difficult to explain how they arrive at certain outputs. As a result, it can be challenging to understand the rationale behind a model’s decisions, especially with complex models, even though that understanding is crucial for audit transparency and accountability.

Third, tied to the datasets used to train AI models, there is always a risk of bias. For example, the models used may have been developed and trained based on datasets generated in developed countries, which may not make allowances for contextual differences in developing regions such as the Caribbean. Hence, if the data used to train AI models is biased, the models themselves can perpetuate and even amplify those biases.

Fourth, in light of the vast datasets AI models use, the fact that it can be difficult to expunge information entered into a model, and the growing concerns about data privacy and security, the systems and protocols needed to address such issues must be carefully considered. For example, the handling and processing of sensitive financial data would require robust data security measures and governance structures to be established to prevent unauthorised access and use.

Finally, AI models and the environments in which they are used are continually evolving. They thus require ongoing maintenance and updates to ensure their accuracy and effectiveness and, consequently, access to expertise that may not be resident locally, especially in (certain) Caribbean countries.

Questions organisations ought to ask when engaging auditors

Although the drawbacks outlined are worrying, if there is a legal or regulatory obligation to have audits regularly conducted, they cannot be avoided. Instead, clients ought to ensure that they can ask their auditors the right questions to ensure that potential challenges have been addressed or, at the very least, have been considered.

First, cognisant of the important role that data and datasets play in developing accurate and robust AI models, data quality and data integrity are crucial. Robust data quality controls are essential to generate reliable AI-driven insights. Further, biased data leads to biased AI outputs, potentially having an impact on financial decisions and regulatory compliance. Questions that can help you probe this with the auditing team include the following:

Second, noting that both your organisation and the auditors may be relying on the outputs produced from the AI models, the confidence that can be placed on the outputs ought to be interrogated. It is thus important to have some understanding of – and the auditors ought to be able to explain – how the models were developed, to have some insight into the robustness of the outputs and the extent to which they could be trusted. It is also crucial to understand whether appropriate steps are being taken to keep the models accurate and up to date. To that end, there ought to be regular model monitoring and retraining. Questions that could be asked include:
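To picture what routine model monitoring might involve, here is a toy Python sketch of a drift check that compares recent inputs against a training-time baseline. The figures and tolerance are invented for illustration; real monitoring would apply proper statistical tests (such as population stability index or Kolmogorov–Smirnov tests) across many features.

```python
# Toy sketch: flag drift when the mean of recent data moves more than
# `tolerance` baseline standard deviations away from the baseline mean.
# All figures are invented for illustration.
from statistics import mean, stdev

def drifted(baseline, recent, tolerance=2.0):
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) > tolerance * sigma

baseline_amounts = [100, 102, 98, 101, 99, 103, 97, 100]
recent_amounts = [140, 150, 138, 145, 152]
print(drifted(baseline_amounts, recent_amounts))  # → True
```

A check like this is what gives substance to an auditor's claim that models are "regularly monitored": drift beyond tolerance should trigger investigation and, where warranted, retraining.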

Third, a natural follow-up to having confidence in an AI model’s outputs is the auditors’ ability to proffer some explanation on how the model reached certain conclusions. Being able to explain AI outputs is crucial for understanding and trusting such results, especially when high-stakes decisions must be made. Further, to ensure responsible AI use, it is important that the auditors can identify and address when an AI model produces ‘off outputs’, that is, outputs that may be erroneous, suggest bias, or are outside expected norms. Questions that could be asked to explore these matters include:
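As a simple illustration of catching ‘off outputs’, the sketch below flags model scores that fall outside an assumed expected range so a human can review them before they inform any decision. The range and scores are invented; in practice the norms would come from the model’s specification and historical behaviour.

```python
# Toy sketch: route out-of-range model outputs to a human review queue.
# The expected range and scores are invented assumptions.
EXPECTED_RANGE = (0.0, 1.0)  # assumed valid range for a risk score

def review_queue(scores, expected=EXPECTED_RANGE):
    low, high = expected
    return [s for s in scores if not (low <= s <= high)]

print(review_queue([0.2, 0.85, 1.7, -0.1, 0.5]))  # → [1.7, -0.1]
```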

Fourth, consistent with the emphasis being placed on AI governance, there ought to be strong internal controls to ensure the responsible and ethical use of AI. More importantly, the adopted governance policy must be implemented, with the requisite systems established to monitor its implementation and to update and improve the policy and processes as appropriate. The following two questions can help you start a conversation on this issue:

Finally, one of the stated benefits of using AI is its automation capability, which can reduce human involvement in the data entry, processing and analysis phases. However, human oversight is still crucial, especially since the expertise needed to manage the AI models may be far removed from the auditing personnel with whom you will be interacting. AI models must be comprehensively managed. The processes and outputs must be closely monitored so that appropriate and early interventions and corrections can be made to maintain the responsible and ethical use of AI, especially since outputs may be used to respond to complex or high-stakes situations.

Although the above questions may appear intimidating, it is vital to be proactive in ensuring that you are well informed about how AI will be used in the auditing process, and that the auditors have put the requisite systems in place to facilitate ethical and responsible AI use. Your organisation’s relationship with its auditors is one built on trust. In being called upon to fairly and correctly evaluate your organisation and to provide advice or recommendations as considered appropriate, auditors wield considerable power. They must thus be seen to be meticulous and prudent in how they exercise their influence and be able to authoritatively manage and defend the use of any assistive tools upon which they choose to rely.