AI and Your ISMS: What ISO 27001 Auditors Are Starting to Ask About
- The Cyber Policy Pro
- Dec 1, 2025
- 7 min read
Three months ago, I got a call from a client who’d just wrapped up their ISO 27001 surveillance audit. They were caught off guard when the auditor asked a question that wasn’t on anyone’s radar last year: “Do you have an inventory of all the AI tools your employees are using?”
They didn’t. And they’re not alone.
If you’re maintaining an ISO 27001 certification right now, here’s something you need to know: auditors are starting to ask about artificial intelligence. Not as a nice-to-have conversation topic, but as a serious compliance concern that’s showing up in real audit findings.
Who’s Using What?
Here’s what’s happening. Your employees are using AI tools. Lots of them. Recent data shows that 96% of enterprise employees are now using generative AI applications at work, and 38% of them are sharing sensitive work information with these tools without company approval.
That’s not a typo – over a third of your workforce is potentially feeding confidential data into ChatGPT, Claude, Gemini, or dozens of other AI platforms you’ve never heard of.
The Samsung incident from 2023 is the perfect example. Engineers in their semiconductor division were pasting proprietary source code into ChatGPT to check for errors and uploading confidential meeting notes to generate summaries. They were just trying to be productive. But they were also creating unmonitored pathways for sensitive IP to leave the organization entirely.
This is what security professionals are now calling “shadow AI” – the use of artificial intelligence tools without IT approval or oversight. And it’s becoming a nightmare for ISO 27001 compliance.
What Auditors Are Actually Asking
I’ve been talking to colleagues who’ve gone through recent audits, and the questions are getting more specific. Auditors want to know if you can identify where AI is being used in your environment. They’re asking about policies governing AI tool usage. They’re inquiring whether you’ve assessed the data handling practices of AI vendors your company relies on.
The most common question seems to be about inventory. Can you tell an auditor which AI tools are in use across your organization? Most companies can’t, because they don’t even know. That shiny new feature in your CRM that quietly rolled out AI-powered summaries last month? That’s AI usage. The marketing team’s favorite content tool that added generative capabilities? Also AI. Your developers using GitHub Copilot on personal accounts? Definitely AI.
Traditional asset management and access control frameworks weren’t designed to catch these things. These tools often appear as browser-based services or plugins that don’t require installation. They’re not showing up in your endpoint management console, and they’re probably not on any approved software list.
Finding the Leaks You Can't See
According to Cisco’s 2025 study, 46% of organizations have already experienced internal data leaks through generative AI. Think about what that means for your ISO 27001 controls around data confidentiality and access management.
Your access control policy probably has detailed procedures for who can access what data within your systems. But what happens when an employee copies customer information from your CRM and pastes it into an AI chatbot to help draft a proposal? That data just left your controlled environment and went to a third-party server somewhere. Depending on the AI service’s terms, that data might be retained, used for training, or stored in a jurisdiction that doesn’t meet your compliance requirements.
This is a direct challenge to ISO 27001’s fundamental requirement to protect the confidentiality, integrity, and availability of information. Your carefully crafted information classification policy and your access controls are meaningless if employees are routinely extracting data and feeding it into uncontrolled external systems.
Where ISO 27001 Controls Already Apply (And Where They Don’t)
Here’s the thing that’s easy to miss: you don’t necessarily need brand new policies to address AI. Your existing ISO 27001 framework actually covers a lot of this if you know where to look.
Your Acceptable Use Policy should already address unauthorized software and data handling. It just needs to explicitly include AI tools in that scope. Your third-party risk management process should already have vendor assessment procedures. You need to extend those to evaluate AI service providers, with specific questions about data retention, training data usage, and model security.
The access control requirements in Annex A already give you a framework for managing who can use what tools. You just need to apply that thinking to AI platforms. Your security awareness training program (which ISO 27001 requires anyway) needs to evolve to cover AI-specific risks like data leakage through prompts and the limitations of AI-generated outputs.
But here’s where it gets tricky. ISO 27001 focuses on information security – confidentiality, integrity, and availability. AI introduces concerns that go beyond traditional information security. Things like algorithmic bias, model explainability, AI ethics, and accountability for AI-generated decisions aren’t really addressed by ISO 27001. That’s why ISO published a separate standard – ISO 42001 – specifically for AI management systems.
I’m not saying you need ISO 42001 certification tomorrow. But I am saying that auditors are starting to recognize that AI governance is a gap in many organizations’ ISMS, and they’re beginning to ask about it.
Questions to Keep in Your Back Pocket
If an auditor asks you about AI tomorrow, you should be able to answer these basic questions. Do you know which AI tools are approved for use in your organization? Have you defined what types of data employees are allowed to input into AI systems? Do you have a process for employees to request approval for new AI tools?
Can you demonstrate that you’ve assessed the security and data handling practices of any AI services your organization depends on? Are your employees trained on the risks of sharing sensitive information with AI platforms? Do you have any monitoring in place to detect unauthorized AI usage?
Most organizations right now would struggle to answer even half of these questions confidently. And that’s creating real audit findings.
Your Vendors Are Deep Into AI, Too
Here’s another angle that’s emerging: auditors are starting to ask about AI usage by your vendors and service providers. Your SaaS platforms are all adding AI features. Your cloud providers are rolling out AI-powered tools. Your outsourced services might be using AI to process your data.
Do you know which of your critical vendors are using AI to handle your information? Have you updated your vendor assessment questionnaires to ask about this? Are your contracts and data processing agreements clear about whether and how vendors can use AI with your data?
This extends your ISO 27001 third-party risk management requirements into territory most organizations haven’t explored yet. Your Statement of Applicability probably says you’ve implemented controls for supplier relationships. But have you actually assessed AI usage by suppliers? Probably not, because nobody was asking that question six months ago.
You Already Have the Framework: Apply It to AI and Ensure It’s Followed
The good news is you don’t need to panic or start from scratch. If you’ve already built a solid ISO 27001 program, you have the foundation. You just need to extend it to explicitly address AI.
Start with discovery. You need visibility into what AI tools are actually being used across your organization. This probably means talking to department heads, reviewing browser usage logs, checking for unapproved SaaS applications, and honestly just asking people what tools they’re using. You might be surprised.
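If you want a quick first pass before investing in dedicated tooling, here’s a minimal sketch of that log review in Python. It assumes you can export web proxy or DNS logs to CSV with a column for the destination hostname; the “host” column name and the domain list are illustrative placeholders, not an exhaustive inventory of AI services.

```python
# discovery_sketch.py - flag proxy/DNS log entries that touch known AI services.
# Assumes a CSV export with a "host" column; adjust the column name and
# domain list for your own environment.
import csv
from collections import Counter

# Illustrative starting list - extend with whatever your org cares about.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "api.openai.com",
    "claude.ai", "gemini.google.com", "copilot.microsoft.com",
}

def is_ai_service(host: str) -> bool:
    """True if host matches, or is a subdomain of, a known AI domain."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in AI_DOMAINS)

def scan(log_path: str) -> Counter:
    """Count log entries per AI-service hostname."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if is_ai_service(row.get("host", "")):
                hits[row["host"]] += 1
    return hits

if __name__ == "__main__":
    for host, count in scan("proxy_log.csv").most_common():
        print(f"{count:6d}  {host}")
```

Even a crude scan like this gives you something concrete to bring to department heads: “we saw 4,000 hits to this service last month – who’s using it and for what?”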
Once you know what’s out there, make decisions about what’s acceptable and what’s not. Create an approved list of AI tools that meet your security requirements. Define clear policies about what data can and cannot be input into AI systems. Customer names and contact information for a marketing campaign might be acceptable in an approved tool. Source code, strategic plans, and confidential contracts probably shouldn’t go anywhere near an external AI service.
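One way to make those decisions concrete (and auditable) is to write the approved-tool and data-classification matrix down in a machine-readable form. Here’s a toy sketch – the tool names and classification labels are hypothetical stand-ins for whatever your information classification policy actually defines:

```python
# ai_usage_policy.py - a toy encoding of "which data may go to which tool".
# Tool names and classification labels are placeholders; map them onto
# your own information classification policy.

# Data classifications, least to most sensitive.
PUBLIC, INTERNAL, CONFIDENTIAL, RESTRICTED = range(4)

# Approved tools and the most sensitive classification each may receive.
APPROVED_TOOLS = {
    "copilot-enterprise": CONFIDENTIAL,  # enterprise terms, no training on inputs
    "marketing-gen-tool": INTERNAL,      # vetted, but weaker retention terms
}

def may_use(tool: str, classification: int) -> bool:
    """Allow only approved tools, and only up to their cleared data level."""
    ceiling = APPROVED_TOOLS.get(tool)
    return ceiling is not None and classification <= ceiling

assert may_use("copilot-enterprise", CONFIDENTIAL)
assert not may_use("marketing-gen-tool", CONFIDENTIAL)  # data too sensitive
assert not may_use("random-browser-plugin", PUBLIC)     # not approved at all
```

Whether you keep this in code, a spreadsheet, or a policy appendix matters less than having one unambiguous source of truth you can show an auditor.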
Update your existing policies rather than creating entirely new ones. Your Acceptable Use Policy, your Data Handling Policy, your Third-Party Risk Management procedures – these should all be extended to explicitly cover AI. Don’t make the mistake of creating a standalone “AI Policy” that exists in isolation from your broader ISMS. That’s just creating documentation burden without integration.
Implement some level of monitoring. This doesn’t have to be sophisticated AI-specific DLP tools on day one. Start with basic network monitoring to identify connections to known AI services. Review application integrations in your sanctioned SaaS platforms for AI features that weren’t there before. Set up regular audits where you ask teams what tools they’re using.
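Building on the discovery scan above, a recurring check can simply diff what you observe against your approved list and surface anything new. A sketch, again with illustrative names, meant to run from cron or a scheduled task:

```python
# ai_monitor_sketch.py - compare observed AI services against the approved list.
# Reuses the scan() helper from discovery_sketch.py above; names are illustrative.
from discovery_sketch import scan

APPROVED = {"copilot.microsoft.com"}  # hosts backing your sanctioned tools

def unapproved_usage(log_path: str) -> dict[str, int]:
    """Return observed AI hosts (with hit counts) not on the approved list."""
    return {h: c for h, c in scan(log_path).items() if h not in APPROVED}

if __name__ == "__main__":
    findings = unapproved_usage("proxy_log.csv")
    if findings:
        print("Unapproved AI services observed - review with the owning teams:")
        for host, count in sorted(findings.items(), key=lambda kv: -kv[1]):
            print(f"  {count:6d}  {host}")
```

The point isn’t the script – it’s that you can show an auditor a repeatable process that runs on a schedule and produces findings someone actually reviews.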
And absolutely update your security awareness training to address AI. Your employees need to understand why pasting that customer list into ChatGPT to help write an email is a problem. They need to know what the approved alternatives are. And they need to feel like they can ask for approval to use new tools rather than just using them under the radar.
Looking Forward: ISO 42001 and Beyond
The lead editor of ISO 27001:2022 said in a recent interview that she wouldn’t be surprised to see AI-specific controls added to the next version of ISO 27001 or referenced in ISO 27002. ISO has already published ISO 42001 as a complete standard for AI management systems, and it follows the same structure as ISO 27001, which means there’s significant overlap.
I’m not predicting that ISO 42001 certification will become mandatory anytime soon. But I am seeing a pattern where advanced organizations are starting to look at it, especially those in heavily regulated industries or those whose business model depends heavily on AI. And having ISO 27001 already in place makes implementing ISO 42001 significantly easier – organizations with mature ISO 27001 programs can achieve ISO 42001 compliance 30-40% faster than those starting from scratch.
We’re Here to Help
AI is no longer a future consideration for your ISO 27001 program. It’s a present-day compliance issue that auditors are actively asking about. The organizations that are getting caught flat-footed are the ones who assumed their existing security controls were sufficient without explicitly considering how AI fits into their ISMS.
The organizations that are handling this well are the ones treating AI as a natural extension of their existing risk management and security frameworks. They’re not rebuilding everything from scratch, but they’re not ignoring it either.
Your next surveillance audit will almost certainly include questions about AI. You need to have answers that go beyond “we haven’t really thought about that.” You need to demonstrate that you’ve identified where AI is being used, assessed the risks, implemented appropriate controls, and integrated AI governance into your overall ISMS.
And if you’re updating your policies to address these AI considerations, you need documentation that’s comprehensive enough to satisfy auditors but practical enough that your organization will actually follow it. That’s exactly the balance our policy templates are designed to strike. We’ve already updated our ISO 27001 policy packages to include AI-related considerations and controls, so you don’t have to figure this out from scratch. Check out our products at CyberPolicyPro.com and get your ISMS ready for the questions auditors are asking right now.