Can AI transform the way we work with compliance?
Compliance has been fundamental to human activity and societal structures throughout the ages, and it is easy to forget how omnipresent it is in our daily lives, where virtually all the things we surround ourselves with and the services we consume have been through some kind of compliance process.
Compliance is ambiguous. On the one hand it is about living up to demands and requirements, saving lives and taking care of people and the planet: the value part. On the other hand it is about proving that you are compliant, that demands and requirements are fulfilled.
The process of aligning production and operations to standards and requirements can be painstaking and costly, but also rewarding in that it helps to professionalize the company and set it up to deliver on high standards and compete on equal terms. Much of this work is about building a culture and educating workers, but a lot of effort also goes into generating vast amounts of procedures, documentation and data.
No doubt, the new capabilities of generative AI such as ChatGPT can play a part in reducing the effort here.
And what about the proving part, demonstrating that compliance is achieved and maintained and that the data can be trusted?
Most people would agree that the process of proving that your company, product or process is compliant is generally manual, time-consuming, slow, costly, lacking in transparency, and lacking in trust in the underlying data, especially throughout large, complex supply chains. And apart from the key outcome, which is the ticket to trade, most do not think that the process itself adds much value.
So what can AI offer to take away these pains? Clearly, if AI can help to automate, improve efficiency, and enable transparency and trust, it will help to change the game.
As industrial assets and production lines become more digital and readings from core processes are generated by machines, a lot of the checks can be automated using traditional mathematical models, especially where high quality external data such as tracking data, weather data or grid data is also available. For the models to work, they need access to high quality data with enough contextual information, and this is not always readily available from existing operational processes. Organizations need to invest in their operational data in the same way most of them invest in controlling the quality of their financial data. Data integration across multiple systems to capture and aggregate data can also be very costly.
But even so, there is a growing number of examples of automated assurance of reporting and compliance, especially in the finance sector, where transaction events are captured by machines. We also see it in other industries: in pharma, where regulation drives digitization of quality control; in energy, where data from smart meters is used to document energy savings and emissions; and in transport, where data from onboard systems is used to document emissions.
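To make this concrete, here is a minimal sketch of what such a rule-based check could look like. The Reading structure, field names and the emission limit are purely illustrative assumptions; real checks would run against data from operational systems combined with the external sources mentioned above.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical reading from an operational system; in practice this would come
# from historians, IoT platforms or external sources (tracking, weather, grid data).
@dataclass
class Reading:
    asset_id: str
    timestamp: datetime
    emissions_kg_per_h: float

# Illustrative limit; real limits come from the applicable permit or regulation.
EMISSION_LIMIT_KG_PER_H = 50.0

def check_compliance(readings: list[Reading]) -> list[dict]:
    """Flag readings that exceed the permitted limit so a human can review them."""
    exceptions = []
    for r in readings:
        if r.emissions_kg_per_h > EMISSION_LIMIT_KG_PER_H:
            exceptions.append({
                "asset": r.asset_id,
                "time": r.timestamp.isoformat(),
                "value": r.emissions_kg_per_h,
                "limit": EMISSION_LIMIT_KG_PER_H,
            })
    return exceptions

if __name__ == "__main__":
    sample = [
        Reading("boiler-1", datetime(2024, 1, 1, 8), 42.0),
        Reading("boiler-1", datetime(2024, 1, 1, 9), 57.5),  # exceeds the limit
    ]
    for e in check_compliance(sample):
        print("Non-compliant reading:", e)
```

The check itself is trivial; the hard part, as noted above, is getting trustworthy, well-contextualized data into it.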
And even if traditional mathematical models take you a long way, machine learning (a part of AI) can create entirely new opportunities through its ability to detect and even predict patterns in vast amounts of data. So for this type of data, traditional approaches can be augmented, and in some cases replaced, by AI approaches.
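As an illustration of that machine learning side, the sketch below runs an off-the-shelf anomaly detector (scikit-learn's IsolationForest) on synthetic stand-in data. The features, numbers and contamination rate are assumptions; the point is simply that unusual patterns can be surfaced for human review rather than caught only by fixed rules.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for operational data: rows are observations,
# columns are process measurements (e.g. temperature, flow rate, energy use).
rng = np.random.default_rng(seed=0)
normal = rng.normal(loc=[70.0, 1.2, 400.0], scale=[2.0, 0.05, 10.0], size=(500, 3))
unusual = rng.normal(loc=[85.0, 0.8, 460.0], scale=[2.0, 0.05, 10.0], size=(5, 3))
data = np.vstack([normal, unusual])

# Unsupervised anomaly detector; 'contamination' is the expected share of outliers.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(data)  # -1 = anomaly, 1 = normal

# Flag suspicious observations for human review rather than auto-rejecting them.
flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} observations flagged for review:", flagged)
```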
It is perhaps more challenging to automate validation and compliance checks in processes that involve less tangible data sources, like documents, files, images, physical inspections and interviews. Here you will need competent humans in the loop for the foreseeable future.
But here AI can make a difference by helping humans process claims and evidence much more effectively, as we have seen in our own work turning documents into analyzable data to support advanced search, auto classification and data extraction.
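As a rough sketch of that kind of document work, the snippet below trains a tiny text classifier on synthetic snippets standing in for text already extracted from documents. The categories, example texts and model choice are illustrative only; real systems work on far larger corpora and usually stronger models, but the pipeline shape is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, synthetic training set standing in for text extracted from documents
# (in practice the text would come from OCR or PDF parsing of real evidence).
texts = [
    "certificate of conformity issued for product batch",
    "this certificate confirms compliance with the standard",
    "audit report covering the supplier's quality management system",
    "findings and observations from the annual audit",
    "laboratory test results for tensile strength of samples",
    "test report with measured emission values per sample",
]
labels = ["certificate", "certificate", "audit_report",
          "audit_report", "test_result", "test_result"]

# A simple bag-of-words classifier to auto-classify incoming documents.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

print(classifier.predict(["attached audit findings from the site visit"]))
```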
But with the latest advances in AI and large language models (like ChatGPT), there is a new paradigm available for the human in the loop, for example to query and interrogate claims and evidence and produce arguments for those claims. This can potentially lead to another step towards automation of some types of validation and checks.
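A hedged sketch of what that human-in-the-loop querying could look like is shown below, assuming the official OpenAI Python client. The claim, evidence, prompt and model name are all illustrative; any real setup would add retrieval of evidence from a document store, access control, audit logging and human review of the output.

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed and OPENAI_API_KEY is set

client = OpenAI()

# Illustrative claim and evidence; in practice these would be retrieved from a
# document store, and the model's assessment would be reviewed by a competent human.
claim = "Supplier X's factory ran entirely on renewable electricity in 2023."
evidence = ("Utility statement: 87% of electricity purchased in 2023 was certified "
            "renewable; the remainder came from the national grid mix.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption; use whatever your organization approves
    messages=[
        {"role": "system", "content": "You assess whether evidence supports a compliance claim. "
                                      "Answer 'supported', 'partially supported' or 'not supported' and explain briefly."},
        {"role": "user", "content": f"Claim: {claim}\nEvidence: {evidence}"},
    ],
)
print(response.choices[0].message.content)
```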
However, processing claims and evidence is just part of the challenge, especially across large, complex supply chains. Regulators are now targeting supply chains with new regulations to drive transparency and trust, introducing requirements for digital passports, independent assurance and due diligence. So just orchestrating the collection of claims and evidence, keeping track of suppliers through all tiers and of products and all their parts, and ensuring that every bit of information can be trusted, is a tall order for everybody involved, from brands to suppliers to auditors to consultants.
So there are many pieces of the puzzle that need to come together, but this will gradually happen, driven by regulations and the need to comply and keep the ticket to trade, and AI, in its different forms, offers whole new possibilities.
So, what about predicting the next non-compliance event, so you can intervene before it happens? It is a compelling vision, and it is already feasible where you have large amounts of structured data of high quality. But there will also be a lot of business-critical, ethical and privacy challenges to tackle. Compliance is a highly regulated function, and some of the regulations govern matters of life and death, typically in safety-related areas, so there is no room for error.
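As a sketch of what such prediction could look like where that structured data exists, the snippet below trains a standard classifier on synthetic historical records. The features and labels are invented for illustration, and the output is a risk score used to prioritize human follow-up, not an automated verdict.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic historical records: each row is a period for one site, with a few
# illustrative features (e.g. overdue maintenance, audit findings, staff turnover)
# and a label indicating whether a non-compliance event followed.
rng = np.random.default_rng(seed=1)
X = rng.normal(size=(1000, 3))
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Probabilities, not verdicts: high-risk sites are prioritized for human follow-up.
risk = model.predict_proba(X_test)[:, 1]
print("Top 5 highest predicted risks:", np.sort(risk)[-5:])
```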
It will also be critical to document the AI itself: both the data it is trained on and how the algorithm works, for example to be aware of biases.
The EU is close to issuing the world's first law on artificial intelligence (the AI Act), which regulates both how AI models must be documented and which applications they can be used for. For example, applications classified as unacceptable risk, such as government-run social scoring of the type used in China, are banned, as may also be the case for predictive AI systems in policing and criminal justice. Equally, there may be applications in compliance processes that need to be restricted, for example due to privacy concerns.
But in sum, compliance professionals should be excited by the potential of AI-enabled solutions and tools to automate highly manual tasks and enable more transparency and trust, even if the ‘minority report for compliance’ will remain science fiction for a while yet (if it is not banned by the EU AI Act).