ISO 27001 and AI: What Organisations Need to Consider

AI tools like ChatGPT and Microsoft Copilot create new information security risks. Here's how ISO 27001 and AI relate — and what you need to document.

Generative AI tools have moved from novelty to mainstream workplace tool in an astonishingly short time. ChatGPT, Microsoft Copilot, Google Gemini, and dozens of specialist AI tools are now in regular use across organisations of all sizes — often with little formal governance around how they’re used or what data is being shared with them.

Almost every small business I engage with is asking the same question: what about ISO 27001 and AI? Where do AI tools fit in the ISMS? What risks do they introduce? And what do auditors expect to see?


Why AI Creates Information Security Risks

AI tools — particularly large language model (LLM) tools — introduce risks that many organisations haven’t fully considered:

1. Data Leakage via AI Prompts (confidentiality risk)

When staff paste client data, confidential documents, or sensitive information into a public AI tool, that data may leave your control entirely — and could be used to train future models. In 2023, Samsung employees shared confidential semiconductor designs via ChatGPT prompts; Samsung subsequently banned the tool internally.

2. Processing Personal Data Without a DPA (compliance risk)

Using a public AI tool to process personal data may constitute data processing under GDPR — requiring a lawful basis and a data processing agreement. Most free consumer AI tools offer neither, so using them with personal data may breach GDPR Article 28 obligations.

3. Unverified AI Outputs Used as Facts (integrity risk)

AI tools hallucinate — producing confident-sounding but incorrect information. If staff use AI-generated outputs in reports, client work, or business decisions without verification, integrity risks follow. AI errors have appeared in legal submissions, client communications, and published materials when outputs weren’t checked before use.

4. Shadow AI (governance risk)

Just as shadow IT saw staff adopt unauthorised cloud tools, shadow AI is now a real governance challenge. Staff may be using AI tools the organisation doesn’t know about, hasn’t assessed, and has no agreement with. Most organisations underestimate how many AI tools are already in active use across their teams; a shadow AI audit typically reveals more than expected.

5. AI-Enhanced Phishing & Social Engineering (external threat)

AI tools make it dramatically easier for attackers to craft convincing phishing emails, impersonation attacks, and social engineering scripts at scale — with near-perfect grammar and personalised context. Security awareness training needs to acknowledge this changed threat landscape: the “look for poor spelling” heuristic is no longer reliable.

Each of these risk areas should appear in your ISO 27001 risk assessment with a documented likelihood, impact, and treatment decision. AI is now a material risk for most organisations.

Consumer AI vs Enterprise AI: What's the Difference for Information Security?

Not all AI tools carry the same risk — the version matters as much as the tool.

Consumer / free tools: ChatGPT (free & Plus), Claude.ai (free), Google Gemini (consumer), Bing Chat.
Enterprise tools: Microsoft Copilot for M365, Google Workspace AI, ChatGPT Enterprise, Claude for Work.

Data Processing Agreement
Consumer / free: not available. Using these tools to process personal data likely breaches GDPR Article 28 obligations.
Enterprise: DPA provided. Enterprise agreements include data processing terms covering your obligations under GDPR.

Prompts Used for Model Training
Consumer / free: often yes (opt-out available). Free tiers typically use conversation data to improve models; opt-outs exist but require action.
Enterprise: no. Enterprise agreements explicitly exclude customer data from model training.

Data Retention & Deletion
Consumer / free: variable or unclear. Retention periods and deletion practices for prompts and outputs are often not clearly documented.
Enterprise: documented and contractual. Enterprise agreements specify data retention periods and deletion commitments.

Security Certifications
Consumer / free: varies by provider. Some providers hold ISO 27001 or SOC 2 certifications for their infrastructure; others don’t publish these for consumer tiers.
Enterprise: typically ISO 27001 and SOC 2 Type II. Enterprise tiers are covered by security certifications, and audit reports are often available on request.

Suitable for Business / Client Data
Consumer / free: no, public information only. Should not be used with confidential, personal, or commercially sensitive data without explicit assessment.
Enterprise: yes, subject to your classification policy. Appropriate for business data within the scope of your information classification policy.

ISO 27001 Supplier Assessment
Consumer / free: difficult to assess adequately. Limited security documentation is available, and a formal supplier assessment is unlikely to pass for confidential data use.
Enterprise: assessable under Controls 5.19–5.22. Security documentation, certifications, and DPA terms are available to complete a supplier assessment.

Consumer tools: use with care. Restrict them to public information only. They must not be used with client data, personal data, or anything classified as Internal or above. Establish this in your acceptable use policy and information classification scheme.

Enterprise tools: assess and approve. Conduct a supplier assessment under Controls 5.19–5.22, confirm a DPA is in place, and add the tool to your approved tools list. Document the assessment in your ISMS for audit evidence.

How ISO 27001 Helps You Manage AI Risks

ISO 27001 doesn’t mention AI specifically — the 2022 version predates the mainstream adoption of generative AI tools. But the framework provides exactly the right structure for managing AI risks:

6.1.2 Risk assessment
AI tools must appear in your risk assessment. Document each AI-related risk, assess likelihood and impact using your standard criteria, and record your treatment decision — approve, mitigate, or prohibit. Risks addressed: data leakage, GDPR / DPA, shadow AI, AI-enhanced phishing.

5.10 Acceptable use of information
Your acceptable use policy must address AI tools explicitly: which tools are approved, what data categories are permitted, which tools are prohibited, and what verification is required before using AI-generated outputs in client-facing work. Risks addressed: data leakage, shadow AI, hallucination.

5.12 Classification of information
Your classification scheme should explicitly state which data categories may be used with external AI tools. A practical rule: Confidential and above → no public AI tools; Internal → needs assessment; Public → generally acceptable. Risks addressed: data leakage, GDPR / DPA.

5.19–5.22 Supplier management
AI tools are third-party services and must be assessed like any other supplier. For each tool in use, assess: data processed, security certifications, DPA availability, data retention practices, and what happens to prompts and outputs. Risks addressed: GDPR / DPA, data leakage, shadow AI.

6.3 Awareness & training
AI security must be part of your annual awareness training. Staff need to understand the risks of sharing sensitive data with public AI tools, which tools are approved, how to verify outputs before use, and how to recognise AI-enhanced phishing. Risks addressed: AI-enhanced phishing, data leakage, hallucination, shadow AI.

8.12 Data leakage prevention
If you have DLP tooling, assess whether it can detect and block uploads of sensitive data to AI platforms. Some security platforms now include AI-specific DLP capabilities that can identify and prevent data being sent to unapproved AI services. Risks addressed: data leakage, shadow AI. (A minimal sketch of this decision logic, combining the 5.12 rule with a simple content check, follows below.)
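
To make the 5.12 rule of thumb and the 8.12 idea concrete, here is a minimal Python sketch of a pre-prompt gate: it applies the classification rule first, then a few illustrative pattern checks, before text is allowed to go to an external AI tool. The tool names, classification labels, and patterns are hypothetical placeholders, not a real DLP product or a complete rule set.

```python
import re

# Hypothetical tool lists: replace with the tools you have actually assessed and approved.
APPROVED_ENTERPRISE_TOOLS = {"copilot-m365", "chatgpt-enterprise"}
PUBLIC_TOOLS = {"chatgpt-free", "gemini-consumer"}

# Very rough indicators of data that should never reach a public AI tool (illustrative only).
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                                   # 16-digit card-like number
    re.compile(r"(?i)\b(confidential|client[- ]privileged)\b"),  # classification markers in the text
]

def may_send_to_ai(text: str, classification: str, tool: str) -> bool:
    """Apply the Control 5.12 rule of thumb before a prompt leaves the organisation."""
    if classification in {"Confidential", "Restricted"}:
        return False  # Confidential and above: no external AI tools
    if classification == "Internal":
        return tool in APPROVED_ENTERPRISE_TOOLS  # Internal: assessed enterprise tools only
    # Public information: still block if the text carries obvious sensitive markers
    if any(p.search(text) for p in SENSITIVE_PATTERNS):
        return False
    return tool in APPROVED_ENTERPRISE_TOOLS | PUBLIC_TOOLS

print(may_send_to_ai("Draft an intro about ISO 27001 and AI", "Public", "chatgpt-free"))     # True
print(may_send_to_ai("Summarise this client contract ...", "Confidential", "copilot-m365"))  # False
```

Real DLP tooling works at the network or endpoint level rather than in application code, but the decision it enforces is essentially this: classification first, then content inspection.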

Documenting Your AI Governance for ISO 27001

For ISO 27001 audit purposes, you’ll want to be able to show:

  1. Risk assessment entries covering AI-related risks, with treatment decisions
  2. Policy coverage — your acceptable use policy (or a dedicated AI usage policy) addresses AI tools
  3. Supplier assessments for any AI tools in use, distinguishing between approved enterprise tools and unapproved consumer tools
  4. Training records showing staff have been made aware of AI-specific risks
  5. A list of approved AI tools — even a simple register of which tools are in use and have been assessed (a minimal sketch of such a register follows below)

You don’t need a 20-page AI governance framework. A few clear policy additions, relevant risk register entries, and evidence of staff awareness are a solid starting point.
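
As an illustration of point 5 above, here is one minimal way to hold that register as structured data and export it alongside your other ISMS records. The fields, tool entries, dates, and file name are illustrative placeholders; a spreadsheet serves exactly the same purpose.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AIToolRecord:
    """One row in a simple approved-AI-tools register (field names are illustrative)."""
    tool: str
    tier: str            # consumer / enterprise
    status: str          # approved / restricted / prohibited
    permitted_data: str  # highest classification permitted with this tool
    dpa_in_place: bool
    assessed_on: str     # date of the supplier assessment
    owner: str           # who owns the relationship and the next review

register = [
    AIToolRecord("Microsoft Copilot for M365", "enterprise", "approved", "Internal", True, "2025-01-15", "IT Manager"),
    AIToolRecord("ChatGPT (free tier)", "consumer", "restricted", "Public", False, "2025-01-15", "IT Manager"),
]

# Export as CSV so the register can sit alongside other ISMS evidence.
with open("ai_tool_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(register[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(record) for record in register)
```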


The Emerging Regulatory Picture

Beyond ISO 27001, organisations in the UK and EU should be aware of:

EU AI Act — the EU’s comprehensive AI regulation, which came into force in 2024. It introduces risk-based requirements for AI systems, with higher obligations for “high-risk” AI applications (e.g. AI used in hiring, credit scoring, healthcare). UK organisations that sell into the EU or use EU data may be affected.

UK Government AI approach — the UK is currently taking a lighter-touch, pro-innovation approach to AI regulation, but this landscape is evolving. NCSC guidance on AI security is worth monitoring.

ICO guidance on AI — the ICO has published detailed guidance on the use of generative AI tools and GDPR compliance. If you use AI to process personal data, this is essential reading.


A Pragmatic Approach

For most small and medium-sized organisations, the immediate priorities are:

  1. Understand what AI tools are in use — conduct a quick shadow AI audit. Ask people what tools they’re using (see the log-based sketch below).
  2. Define a permitted use policy — establish which tools are approved and what data can be used with them.
  3. Add AI to your risk assessment — it’s a material risk for most organisations now.
  4. Train your staff — make sure everyone understands the risks and the rules.
  5. Assess your key AI suppliers — particularly if you’re using enterprise AI tools that process business data.

This doesn’t need to be a heavyweight programme. Proportionality applies as much to AI governance as to any other ISMS element.
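
If you can export proxy or DNS logs, a quick tally of traffic to known AI services is a useful complement to simply asking people (step 1 above). Here is a minimal sketch, assuming a CSV export with a "domain" column; the domain list and file name are illustrative and will need adapting to your environment.

```python
import csv
from collections import Counter

# Illustrative starting list of AI service domains: extend it for the tools relevant to you.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Google Gemini",
    "claude.ai": "Claude",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def summarise_ai_usage(log_path: str) -> Counter:
    """Count requests to known AI domains in an exported proxy/DNS log (CSV with a 'domain' column)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").lower().strip()
            for known, name in AI_DOMAINS.items():
                if domain == known or domain.endswith("." + known):
                    hits[name] += 1
    return hits

if __name__ == "__main__":
    for tool, count in summarise_ai_usage("proxy_log_export.csv").most_common():
        print(f"{tool}: {count} requests")
```

This tells you which services are being reached, not what data is going to them, so treat it as an input to the conversation-based audit rather than a replacement for it.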

Get Started

Free Templates

Free

The 14 mandatory documents. The starting point for any ISO 27001 project.

A great way to get started without the commitment.

Get the free toolkit →

Templates

Full Toolkit

£85

130+ documents: policies, risk register, audit pack, staff communications, and everything else you need to build a working ISMS.

Buy now →

Do-It-Yourself

DIY Course

£285

The Do-It-Yourself course introduces the standard, its requirements, and then shows you how to implement it, stage by stage.

Includes the full toolkit & email consultancy.

View the course →

More support?

Coaching

~£3,500

I can guide you through the standard and help you tailor it to your business through a series of coaching workshops.

Includes the full toolkit, personal consultancy, and first-pass guarantee.

Explore coaching →



FAQs

Does ISO 27001 cover AI tools, or do we need a separate framework?

ISO 27001:2022 doesn’t mention AI specifically — the standard predates mainstream generative AI adoption. But it doesn’t need to. The existing framework covers AI risks through controls you already have: your risk assessment (Clause 6.1.2) should include AI-related risks, your acceptable use policy (Control 5.10) should address approved tools, and your supplier management process (Controls 5.19–5.22) should assess AI tools just like any other third-party service. You don’t need a separate AI framework — you need to apply your existing ISMS to a new risk area.

Can our staff use ChatGPT for work tasks?

It depends entirely on what they’re doing with it. Using the free consumer tier of ChatGPT to summarise publicly available information or draft internal communications that contain no sensitive data is generally lower risk. Using it to process client data, personal data, confidential documents, or anything classified as Internal or above is a different matter entirely — the free tier offers no data processing agreement, and your data may be used to train future models. Your acceptable use policy should define this clearly, with specific rules about which data categories are permitted with which tools. Without that policy, staff are making judgement calls you haven’t authorised them to make.

What’s the difference between Microsoft Copilot and ChatGPT for ISO 27001 purposes?

Substantially different from a compliance standpoint. Microsoft Copilot for M365 is an enterprise product covered by Microsoft’s data processing agreement — your prompts are not used to train models, data retention is documented, and the service holds security certifications including ISO 27001. The free tier of ChatGPT is a consumer product with none of those contractual protections. This distinction matters both for your ISO 27001 supplier assessment and for GDPR compliance. The tool name matters less than which tier and agreement you’re operating under.

What is shadow AI, and how do we find out if it’s happening in our organisation?

Shadow AI is staff using AI tools that the organisation hasn’t approved, assessed, or even knows about — the AI equivalent of shadow IT. The quickest way to find out what’s in use is simply to ask. A short survey or team meeting asking “what AI tools do you use regularly for work?” typically surfaces more than expected. You can also review browser history policies, check for AI-related Chrome extensions, or look at software spend on company cards. The goal isn’t to ban everything — it’s to understand what’s in use so you can make deliberate decisions about what to approve, what to restrict, and what to put controls around.

Does using AI tools affect our ISO 27001 certification?

Not directly — using AI tools doesn’t automatically invalidate or jeopardise your certification. What matters is whether you’ve addressed AI risks within your ISMS. If your risk assessment doesn’t mention AI, your acceptable use policy is silent on it, and your staff have no guidance on what they can and can’t do, an auditor may raise observations or nonconformities around risk management and policy coverage. The good news is that adding AI governance to an existing ISMS is relatively straightforward — a few policy additions, risk register updates, and a training session can give you solid audit evidence without a heavyweight programme.


Written by

Alan Parker

Alan Parker is an ISO 27001 consultant who has helped dozens of UK small businesses achieve certification — often without a dedicated security team or a large budget. With over 30 years in IT governance and qualifications including ITIL v3 Expert, ITIL v4 Bridge, and PRINCE2 Practitioner, Alan writes in plain English for busy teams who need to get things done. Named IT Project Expert of the Year (2024, UK).