Generative AI tools have moved from novelty to mainstream workplace tool in an astonishingly short time. ChatGPT, Microsoft Copilot, Google Gemini, and dozens of specialist AI tools are now in regular use across organisations of all sizes — often with little formal governance around how they’re used or what data is being shared with them.
Almost every small business I engage with is asking ‘What about ISO 27001 and AI?’ They want to know where AI tools fit in the ISMS, what risks they introduce, and what auditors expect to see.
Why AI Creates Information Security Risks
AI tools — particularly large language model (LLM) tools — introduce risks that many organisations haven’t fully considered:
- Data leakage: When staff paste client data, confidential documents, or sensitive information into a public AI tool, that data may leave your control entirely — and could be used to train future models.
- GDPR exposure: Using a public AI tool to process personal data may constitute data processing under GDPR — requiring a lawful basis and a data processing agreement. Most free AI tools offer neither.
- Integrity risks: AI tools hallucinate — producing confident-sounding but incorrect information. If staff use AI-generated outputs in reports, client work, or business decisions without verification, integrity risks follow.
- Shadow AI: Just as shadow IT saw staff adopt unauthorised cloud tools, shadow AI is now a real governance challenge. Staff may be using AI tools the organisation doesn’t know about, hasn’t assessed, and has no agreement with.
- AI-enabled attacks: AI tools make it dramatically easier for attackers to craft convincing phishing emails, impersonation attacks, and social engineering scripts at scale — with near-perfect grammar and personalised context.
Consumer AI vs Enterprise AI: Information Security Comparison
Not all AI tools carry the same risk — the version matters as much as the tool
How ISO 27001 Helps You Manage AI Risks
ISO 27001 doesn’t mention AI specifically — the 2022 version predates the mainstream adoption of generative AI tools. But the framework provides exactly the right structure for managing AI risks:
- Risk assessment (Clause 6.1.2): treat AI tools as a risk source like any other, with documented treatment decisions
- Acceptable use (Control 5.10): define which tools are approved and what data may be used with them
- Classification of information (Control 5.12): tie permitted AI use to your existing data classifications
- Supplier relationships (Controls 5.19–5.22): assess AI vendors just like any other third-party service
- Awareness and training (Control 6.3): make sure staff understand the risks and the rules
Documenting Your AI Governance for ISO 27001
For ISO 27001 audit purposes, you’ll want to be able to show:
- Risk assessment entries covering AI-related risks, with treatment decisions
- Policy coverage — your acceptable use policy (or a dedicated AI usage policy) addresses AI tools
- Supplier assessments for any AI tools in use, distinguishing between approved enterprise tools and unapproved consumer tools
- Training records showing staff have been made aware of AI-specific risks
- A list of approved AI tools — even a simple register of which tools are in use and have been assessed
You don’t need a 20-page AI governance framework. A few clear policy additions, relevant risk register entries, and evidence of staff awareness are a solid starting point.
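The “simple register” mentioned above can be as lightweight as a structured file or a few lines of code. A minimal sketch in Python — the tool names, tiers, and classification limits are illustrative assumptions, not recommendations:

```python
from dataclasses import dataclass

# Data classification levels, lowest to highest sensitivity.
LEVELS = ["Public", "Internal", "Confidential"]

@dataclass
class AITool:
    name: str                 # e.g. "ChatGPT (free tier)" -- illustrative
    tier: str                 # "consumer" or "enterprise"
    dpa_in_place: bool        # is a data processing agreement signed?
    max_classification: str   # highest data class permitted with this tool

def is_permitted(tool: AITool, data_classification: str) -> bool:
    """True if data at this classification may be used with the tool."""
    return LEVELS.index(data_classification) <= LEVELS.index(tool.max_classification)

# Illustrative register entries -- limits here are assumptions for the sketch.
register = [
    AITool("ChatGPT (free tier)", "consumer", False, "Public"),
    AITool("Microsoft Copilot for M365", "enterprise", True, "Confidential"),
]
```

Even this toy version captures the decisions an auditor wants to see: which tools exist, which tier you are on, whether a DPA is in place, and what data is allowed near them.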
The Emerging Regulatory Picture
Beyond ISO 27001, organisations in the UK and EU should be aware of:
EU AI Act — the EU’s comprehensive AI regulation, which came into force in 2024. It introduces risk-based requirements for AI systems, with higher obligations for “high-risk” AI applications (e.g. AI used in hiring, credit scoring, healthcare). UK organisations that sell into the EU or use EU data may be affected.
UK Government AI approach — the UK is currently taking a lighter-touch, pro-innovation approach to AI regulation, but this landscape is evolving. NCSC guidance on AI security is worth monitoring.
ICO guidance on AI — the ICO has published detailed guidance on the use of generative AI tools and GDPR compliance. If you use AI to process personal data, this is essential reading.
A Pragmatic Approach
For most small and medium-sized organisations, the immediate priorities are:
- Understand what AI tools are in use — conduct a quick shadow AI audit. Ask people what tools they’re using.
- Define a permitted use policy — establish which tools are approved and what data can be used with them.
- Add AI to your risk assessment — it’s a material risk for most organisations now.
- Train your staff — make sure everyone understands the risks and the rules.
- Assess your key AI suppliers — particularly if you’re using enterprise AI tools that process business data.
This doesn’t need to be a heavyweight programme. Proportionality applies as much to AI governance as to any other ISMS element.
Get Started
Free Templates
Free
The 14 mandatory documents. The starting point for any ISO 27001 project.
A great way to get started without the commitment.
Full Toolkit
£85
130+ documents: policies, risk register, audit pack, staff communications and everything else you need to build a working ISMS.
DIY Course
£285
The Do-It-Yourself course introduces the standard, its requirements, and then shows you how to implement it, stage by stage.
Includes the full toolkit & email consultancy.
More support?
Coaching
~£3,500
I can guide you through the standard and help you tailor it to your business through a series of coaching workshops.
Includes the full toolkit, personal consultancy, and first-pass guarantee.
Related Guides
- ISO 27001 Control 5.10 — Acceptable Use
- ISO 27001 Control 5.12 — Classification of Information
- ISO 27001 vs GDPR
FAQs
Does ISO 27001 cover AI tools, or do we need a separate framework?
ISO 27001:2022 doesn’t mention AI specifically — the standard predates mainstream generative AI adoption. But it doesn’t need to. The existing framework covers AI risks through controls you already have: your risk assessment (Clause 6.1.2) should include AI-related risks, your acceptable use policy (Control 5.10) should address approved tools, and your supplier management process (Controls 5.19–5.22) should assess AI tools just like any other third-party service. You don’t need a separate AI framework — you need to apply your existing ISMS to a new risk area.
Can our staff use ChatGPT for work tasks?
It depends entirely on what they’re doing with it. Using the free consumer tier of ChatGPT to summarise publicly available information or draft internal communications that contain no sensitive data is generally lower risk. Using it to process client data, personal data, confidential documents, or anything classified as Internal or above is a different matter entirely — the free tier offers no data processing agreement, and your data may be used to train future models. Your acceptable use policy should define this clearly, with specific rules about which data categories are permitted with which tools. Without that policy, staff are making judgement calls you haven’t authorised them to make.
What’s the difference between Microsoft Copilot and ChatGPT for ISO 27001 purposes?
Substantially different from a compliance standpoint. Microsoft Copilot for M365 is an enterprise product covered by Microsoft’s data processing agreement — your prompts are not used to train models, data retention is documented, and the service holds security certifications including ISO 27001. The free tier of ChatGPT is a consumer product with none of those contractual protections. This distinction matters both for your ISO 27001 supplier assessment and for GDPR compliance. The tool name matters less than which tier and agreement you’re operating under.
What is shadow AI, and how do we find out if it’s happening in our organisation?
Shadow AI is staff using AI tools that the organisation hasn’t approved, assessed, or even knows about — the AI equivalent of shadow IT. The quickest way to find out what’s in use is simply to ask. A short survey or team meeting asking “what AI tools do you use regularly for work?” typically surfaces more than expected. You can also review browser history policies, check for AI-related Chrome extensions, or look at software spend on company cards. The goal isn’t to ban everything — it’s to understand what’s in use so you can make deliberate decisions about what to approve, what to restrict, and what to put controls around.
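One of the technical checks mentioned above — looking for AI-related browser extensions — can be partially scripted once you have an inventory of extension names (for example, exported from a browser-management console). A toy sketch; the keyword list and the example names are illustrative assumptions, and any hit still needs human review:

```python
# Flag browser extension names that look AI-related. The keyword list is an
# illustrative assumption -- tune it to the tools you actually care about.
AI_KEYWORDS = ("gpt", "copilot", "gemini", "claude", "ai assistant", "chatbot")

def flag_ai_extensions(extension_names: list[str]) -> list[str]:
    """Return the subset of extension names matching an AI-related keyword."""
    return [
        name for name in extension_names
        if any(kw in name.lower() for kw in AI_KEYWORDS)
    ]

# Hypothetical inventory for the sake of the example.
inventory = ["AdBlock", "ChatGPT for Chrome", "Grammarly", "Copilot Sidebar"]
print(flag_ai_extensions(inventory))
# prints ['ChatGPT for Chrome', 'Copilot Sidebar']
```

Keyword matching is crude — it will miss rebranded tools and may flag innocuous ones — so treat the output as a starting list for the conversation, not a verdict.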
Does using AI tools affect our ISO 27001 certification?
Not directly — using AI tools doesn’t automatically invalidate or jeopardise your certification. What matters is whether you’ve addressed AI risks within your ISMS. If your risk assessment doesn’t mention AI, your acceptable use policy is silent on it, and your staff have no guidance on what they can and can’t do, an auditor may raise observations or nonconformities around risk management and policy coverage. The good news is that adding AI governance to an existing ISMS is relatively straightforward — a few policy additions, risk register updates, and a training session can give you solid audit evidence without a heavyweight programme.