2026-04-09 AI Policy and Governance

The policy problem, part 1: your IT policy in the age of AI – where to start (without panic)

Let’s be honest, you’re probably reading this because your organisation is awash with AI tools, your existing IT policies look like they were written for dial-up modems, and someone—probably you—has been tasked with figuring out how to stop the whole thing from going completely off the rails. You’re not alone. I’ve been in those rooms, seen the panic, and heard the whispered fears about what employee X is doing with ChatGPT and that sensitive client data. The good news? You don’t need a perfect, 100-page AI policy by next Tuesday. You need a starting point, and that’s exactly what we’re going to tackle today.

The elephant in the room: shadow AI is already here

Before we even talk about policy, let’s acknowledge the uncomfortable truth: AI tools are already being used across your organisation. Not just the ones IT approved, but the ones individuals found, signed up for, and are now using to ‘boost productivity’. I call it shadow AI, and it’s the wild west out there. Your marketing team is probably whipping up copy with generative AI, your developers might be using AI code assistants, and your HR department could be experimenting with AI for drafting job descriptions. All of this is happening, often with the best intentions, but almost certainly without a clear understanding of the risks involved.

Your existing IT policies, bless their cotton socks, were simply not built for this. They were designed for a world of defined software, on-premise servers, and clear data boundaries. Generative AI, with its vast, often opaque training data, its ability to hallucinate, its rapid evolution, and its ubiquitous accessibility, throws a spanner in the works. Data privacy? Intellectual property? Bias? Security vulnerabilities? Rapid change? Your old policies just don’t have the language or the framework to address these challenges effectively.

So, the goal isn’t to build a fortress overnight. It’s about laying down some foundational planks, ‘good enough for now’ measures that will allow you to navigate the immediate chaos. Think of it as building a bridge whilst crossing a very fast-flowing river. It needs to be pragmatic, iterative, and focused on preventing immediate disaster, not achieving theoretical perfection.

Step one: take stock—what AI tools are actually being used?

Before you can govern something, you need to know what ‘something’ actually is. This is where most organisations stumble. They try to write a policy in a vacuum, based on hypothetical threats, whilst their employees are happily feeding proprietary data into public Large Language Models (LLMs).

Your first, most crucial step is to conduct a rapid inventory of AI tools currently in use. And I mean rapid. Don’t overthink it, don’t try to make it perfect. This isn’t a deep audit; it’s a snapshot.

How do you do this? Start with a simple, direct approach:

  • Internal Survey: Send out a short, anonymous (or semi-anonymous, depending on your culture) survey. Ask teams and individuals: ‘Which AI tools are you currently using for work-related tasks?’ Be clear that this isn’t a witch hunt; it’s about understanding the landscape so you can support them safely.
  • IT System Logs: Your IT team might be able to pull logs of web traffic or software installations to identify commonly accessed AI services (a rough log-scanning sketch follows this list). This won’t catch everything, but it can give you a quick overview of popular tools.
  • Conversations: Talk to department heads. Ask them what their teams are experimenting with. You’ll be surprised what comes out in a casual chat.

The point here isn’t to catch everyone out. It’s to get a realistic picture of the ‘shadow AI’ problem. You need to know which tools are gaining traction, where people are finding value, and crucially, where the biggest risks might be hiding. You can’t put guardrails on a road you haven’t even mapped.

Step two: pinpoint the immediate high-risk areas

Once you have a rough idea of what’s being used, your next step is to identify the immediate high-risk areas. You can’t tackle everything at once, so focus your energy where the potential for damage is greatest.

Think about the types of data your organisation handles and the nature of your work:

  • Sensitive Customer Data: Are employees using public LLMs or AI tools that might process personally identifiable information (PII) or other sensitive client data? This is a massive red flag. GDPR, CCPA, and a host of other regulations are not going to be sympathetic if you’ve allowed this to happen.
  • Proprietary Code/Intellectual Property: Are your developers feeding your unique algorithms or trade secrets into AI code assistants that might then use that data to train their models, potentially making your IP public or less valuable? This is a significant business risk.
  • Regulated Information: If you’re in finance, healthcare, or any other regulated industry, the stakes are even higher. Are AI tools being used to process or generate content related to compliance, patient records, or financial transactions without proper oversight?
  • High-Impact Decision Making: Is AI being used in any way to inform critical business decisions, hiring, or customer service, where a ‘hallucination’ or biased output could have severe consequences?

Prioritise these areas. The goal here isn’t to shut everything down, but to understand where the most immediate and severe threats lie. This will inform your ‘red lines’.
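
If it helps to make that prioritisation concrete, here’s an illustrative sketch that scores each inventoried tool by the sensitivity of the data it touches and how many people use it. The categories, weights, and example entries are entirely made up; the point is only that a rough, explicit ranking beats arguing from gut feel.

```python
# An illustrative prioritisation sketch: score each inventoried tool by the
# most sensitive data category it touches, scaled by how many people use it.
# Every name, weight, and entry below is made up for the example.

DATA_RISK_WEIGHTS = {
    "customer_pii": 5,
    "regulated_information": 5,
    "proprietary_code": 4,
    "high_impact_decisions": 4,
    "public_marketing_copy": 1,
}

# (tool name, data categories it touches, approximate number of users)
inventory = [
    ("public chatbot", ["customer_pii", "public_marketing_copy"], 40),
    ("AI code assistant", ["proprietary_code"], 12),
    ("HR drafting tool", ["high_impact_decisions"], 5),
]

def risk_score(categories: list[str], users: int) -> int:
    """Weight of the most sensitive category, scaled by breadth of use."""
    return max(DATA_RISK_WEIGHTS.get(c, 1) for c in categories) * users

# Highest scores first: these are the tools to deal with this week, not next quarter.
for tool, categories, users in sorted(
    inventory, key=lambda entry: risk_score(entry[1], entry[2]), reverse=True
):
    print(f"{risk_score(categories, users):4d}  {tool}  ({', '.join(categories)})")
```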

Step three: establish basic ‘red lines’—what absolutely cannot happen

Now that you know what’s out there and where the biggest risks are, it’s time to draw some very clear, very simple lines in the sand. These are your non-negotiables, your immediate ‘red lines’ for AI use. Forget about comprehensive policy for a moment; think about absolute prohibitions.

This isn’t about stifling innovation; it’s about preventing catastrophe. And frankly, your employees need to know this, because many simply don’t understand the implications of what they’re doing.

Here are some examples of foundational ‘red lines’ you should establish today:

  • Never input client PII or sensitive customer data into any public-facing AI tool. This is non-negotiable. If it’s not an approved, secure, enterprise-grade solution with a data processing agreement that meets your standards, it’s off-limits for sensitive data.
  • Never input proprietary code, trade secrets, or confidential business information into public generative AI services. Assume anything you put into a public LLM can and will be used to train its model and potentially become public knowledge. If it’s your competitive edge, keep it out.
  • Always verify AI-generated output for accuracy before use. AI models hallucinate. They make things up. They get facts wrong. This isn’t a maybe; it’s a certainty. Treat AI output as a draft, never a final product, especially for client communications, legal documents, or financial reports.
  • Do not use AI for making critical decisions without human oversight and verification. This applies to everything from hiring decisions to financial projections. AI can augment, but it cannot replace human judgement in high-stakes scenarios.

These are simple, actionable rules. They don’t require a legal degree to understand, and they address the most immediate and common risks. Communicate these widely, clearly, and repeatedly. Put them on your intranet, send out an email, bring them up in team meetings. Make sure everyone knows what the absolute ‘no-go’ zones are.
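
Red lines also stick better when there’s at least a lightweight technical backstop behind the communication. As one illustrative example, and assuming you have (or build) an internal form or wrapper that sits in front of public AI tools, a pre-submission check along these lines can catch the most obvious slips. The two patterns below are placeholders; real PII detection needs a proper data loss prevention product, not a pair of regexes.

```python
# A minimal pre-submission check, assuming an internal form or wrapper sits in
# front of public AI tools. The two patterns are illustrative placeholders --
# real detection needs a proper DLP product, not a pair of regexes.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK National Insurance number": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE
    ),
}

def check_before_submission(text: str) -> list[str]:
    """Return red-line warnings found in the draft text; empty means no match."""
    return [
        f"Possible {label} detected; remove it before using a public AI tool."
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

if __name__ == "__main__":
    draft = "Summarise this complaint from jane.doe@example.com about her refund."
    for warning in check_before_submission(draft):
        print(warning)
```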

Involve your people, not just your lawyers

This isn’t a task for IT alone, or legal alone. You need to involve key stakeholders from the outset. That means IT, security, legal, HR, departmental leaders, and crucially, those ‘AI people’—the practitioners who are actually using these tools and understand their capabilities and limitations. They’re your early adopters, your ‘power users’, and their input is invaluable. They can help you understand the real-world use cases and identify practical solutions, not just theoretical risks.

By involving them, you foster a sense of ownership and understanding, rather than imposing rules from on high. This isn’t about stopping innovation; it’s about enabling safe, responsible, and sustainable innovation. You want your teams to leverage AI’s power, but within a framework that protects your organisation, your customers, and your future.

This is just the beginning

I know this might feel like a lot, but trust me, these initial steps will put you miles ahead of most organisations still burying their heads in the sand. This is not about achieving perfection, but about establishing a baseline of sanity in a rapidly changing landscape. Your policy will be a living document, evolving as technology changes and as your organisation learns. This is just Part 1 of ‘The Policy Problem’ because, let’s face it, there’s a lot more to unravel.

Feeling less panicked? Good. This is just the start. In Part 2 of ‘The Policy Problem’, we dig into the specifics—what your AI usage policy should actually say, section by section.