Currently a pilot accreditation project for UKAS, the sole National Accreditation Body for the United Kingdom, ISO/IEC 42001 provides a framework for organisations to design, implement, and maintain effective Artificial Intelligence Management Systems.
To find out more about ISO/IEC 42001 and how LeftBrain is helping our clients to prepare for the standard, we spoke with LeftBrain’s Information Security Analysts, Matthew Bensley and Lucas Jensan.
—
It’s interesting that AI is something that now has to be taken more seriously. Here at LeftBrain we’re already accredited and helping our clients with ISO 27001, the world’s best-known standard for information security management, and ISO 9001, which covers quality management. Both standards have been around for a while and are regarded as industry benchmarks. But with the rapid rise of AI, the international community recognised the need for a similar framework to manage its use safely, ethically and transparently.
At LeftBrain, we’ve recently done an internal audit of all the AI platforms we’re using and carried out a risk assessment, so we can help our clients do the same. Until the ISO/IEC 42001 framework is introduced, there isn’t a standardised approach to deciding which AI models to adopt and which are acceptable to use. The waters are really muddy right now. The question is: how do we manage this? That’s what the ISO/IEC 42001 standard aims to fix.
By far the greatest risk is ignorance. Many organisations haven’t even begun to think about which business uses of AI are acceptable and which aren’t. Beyond ignorance, we’ve boiled it down to five key risks for UK small to medium businesses that want to use AI to get ahead.
Most of the clients we’re currently advising are in the tech industries (MarTech, FinTech, HealthTech and more), where the standard is highly relevant for companies developing AI-based products and services. However, as AI becomes more integrated into day-to-day business operations across the board, ISO/IEC 42001 will eventually be applicable to organisations of any size or industry.
The simplest control businesses can put in place right now, without having to adopt a whole standard, is to outline which AI tools are acceptable and which ones aren’t. We don’t fully understand what AI models are doing with our data, so the biggest risk is users inputting sensitive information that could be stored, shared, or misused without their knowledge. It ultimately comes down to your risk tolerance and whether you trust your employees to be cautious with their prompts, or whether you decide to limit them to specific, vetted tools.
There are also some effective technical controls you can implement. For example, some of our clients have opted to block access to AI tools at the DNS level. This prevents employees from navigating to those websites on company devices. Alternatively, simply having a clear policy that states whether AI tools are allowed, and if they are, specifying which ones are permitted, can be highly effective. By communicating this to employees, you have established your stance. If someone deviates from that policy, the responsibility lies with them, not the organisation.
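To give a flavour of the DNS-level approach, here’s a minimal Python sketch that generates a dnsmasq-style blocklist from a list of disallowed AI tool domains. The domain names, output filename and resolver setup are illustrative assumptions rather than recommendations; in practice, most businesses would configure this through their DNS filtering provider or device management tooling.

```python
# Minimal sketch: generate a dnsmasq blocklist that sinkholes
# disallowed AI tool domains on a company-managed resolver.
# The domains below are illustrative placeholders, not a vetted list.

BLOCKED_AI_DOMAINS = [
    "chat.example-ai.com",
    "api.example-llm.net",
]

def dnsmasq_blocklist(domains: list[str]) -> str:
    """Return dnsmasq config lines resolving each domain
    (and its subdomains) to 0.0.0.0."""
    return "\n".join(f"address=/{d}/0.0.0.0" for d in domains) + "\n"

if __name__ == "__main__":
    # Output path is an assumption; place the file wherever your
    # resolver picks up drop-in configuration.
    with open("ai-blocklist.conf", "w") as f:
        f.write(dnsmasq_blocklist(BLOCKED_AI_DOMAINS))
```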
My top tip: don’t bury your head in the sand! It’s better to have a rough-and-ready policy in place than to risk an AI-related data breach that could cost you.
Would you like to find out more about preparing for ISO/IEC 42001:2023 and mitigating AI-related risks for your business?