Getting ahead of Artificial Intelligence risks: preparing for ISO/IEC 42001:2023

AI
Information Security

With the rapid development and adoption of Artificial Intelligence, it feels like the wild west out there. Most businesses are struggling to keep up with the changes and have little visibility into how or why their employees are using AI, let alone the potential business risks involved. That’s where ISO/IEC 42001:2023 comes in.

Currently a pilot accreditation project for UKAS, the sole National Accreditation Body for the United Kingdom, ISO/IEC 42001 provides a framework for organisations to design, implement, and maintain effective Artificial Intelligence Management Systems.

To find out more about ISO/IEC 42001 and how LeftBrain is helping our clients to prepare for the standard, we spoke with LeftBrain’s Information Security Analysts, Matthew Bensley and Lucas Jensan.

Lucas, why do you think there’s now a need for the ISO/IEC 42001 standard, and what issues is it designed to address?

It’s interesting that AI is now something that has to be taken more seriously. Here at LeftBrain we’re already accredited in, and helping our clients with, ISO 27001, the world’s best-known standard for information security management, and ISO 9001, which covers quality management. Both of these standards have been around for a while and are regarded as industry benchmarks. But with the rapid rise of AI, the international community realised the need for a similar framework to manage its use safely, ethically and transparently.

At LeftBrain, we’ve recently done an internal audit of all the AI platforms we’re using and carried out a risk assessment, so that we can help our clients do the same. Until the ISO/IEC 42001 framework is introduced, there isn’t a standardised approach to deciding which AI models to adopt and which are acceptable to use. The waters are really muddy right now. The question is, how do we manage this? That’s what the ISO/IEC 42001 standard aims to fix.

Matt, we’re already helping some of our clients prepare for when ISO/IEC 42001 becomes approved by UKAS as an industry standard. What are the most common AI-related risks you’ve come across so far?

By far the greatest risk is ignorance. Many organisations haven’t even begun to think about the acceptable and unacceptable business uses of AI. Beyond ignorance, we’ve boiled it down to five key risks for UK small to medium businesses that want to use AI to get ahead:

  1. Data privacy and security – AI tools could accidentally expose sensitive customer data, financial details, or business strategies if not handled securely.
  2. Inaccuracy, bias and hallucinations – AI content may be misleading, biased, or factually incorrect, leading to poor business decisions. AI can make up facts or behave unpredictably due to limitations in its training data.
  3. Workforce and productivity impact – AI automation could improve efficiency but may also require reskilling staff.
  4. Over-reliance on AI – Depending too much on AI for customer interactions, marketing, or decision-making could reduce human oversight and damage customer trust.
  5. Compliance and legal risks – AI use must align with GDPR, intellectual property laws, and industry regulations, or the business could face fines or reputational damage.

Matt, who is ISO/IEC 42001 for?

Most of the clients we’re currently advising are in the tech industries (MarTech, FinTech, HealthTech and more), where the standard is highly relevant for companies developing AI-based products and services. However, as AI becomes more integrated into day-to-day business operations across the board, ISO/IEC 42001 will eventually be applicable to organisations of any size, in any industry.

Lucas, do you have any top tips around what businesses can be doing now to mitigate against AI-related risks? 

The simplest control businesses can put in place right now, without having to adopt a whole standard, is to outline which AI tools are acceptable and which ones aren’t. We don’t fully understand what AI models are doing with our data, so the biggest risk is users inputting sensitive information that could be stored, shared, or misused without their knowledge. It ultimately comes down to your risk tolerance and whether you trust your employees to be cautious with their prompts, or whether you decide to limit them to specific, vetted tools.

There are also some effective technical controls you can implement. For example, some of our clients have opted to block access to AI tools at the DNS level. This prevents employees from navigating to those websites on company devices. Alternatively, simply having a clear policy that states whether AI tools are allowed, and if they are, specifying which ones are permitted, can be highly effective. By communicating this to employees, you have established your stance. If someone deviates from that policy, the responsibility lies with them, not the organisation.
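As a rough illustration of the “vetted tools” approach described above, the allow/deny decision can be sketched in a few lines. The domain names and lists below are hypothetical examples, not a recommended blocklist; a real deployment would enforce this at the DNS filter or proxy rather than in application code:

```python
# Minimal sketch of a vetted-AI-tools policy check.
# The domains below are illustrative placeholders only.
APPROVED_AI_TOOLS = {"vetted-assistant.example.com"}
BLOCKED_AI_TOOLS = {"unvetted-chatbot.example.com"}


def is_allowed(domain: str) -> bool:
    """Allow only explicitly vetted AI tool domains; deny everything else.

    This mirrors a default-deny policy: anything not on the approved
    list is treated as unapproved, even if it isn't explicitly blocked.
    """
    if domain in BLOCKED_AI_TOOLS:
        return False
    return domain in APPROVED_AI_TOOLS
```

The design choice worth noting is the default-deny stance: new, unvetted tools are blocked automatically until someone consciously approves them, which matches the policy-first approach described above.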

My top tip: don’t bury your head in the sand! It’s better to have a rough and ready policy in place than risk an AI-related data breach that could cost you. 

Would you like to find out more about preparing for ISO/IEC 42001:2023 and mitigating against AI-related risks for your business?

Schedule a call

Matthew Bensley and Lucas Jensan
Friday 28th February 2020