2026-04-13 AI Policy and Governance

The AI policy vacuum: why your old IT policy just won't cut it anymore

Let’s be brutally honest for a moment. You’ve probably got an AI problem festering in your organisation right now, and you might not even know it. Or, worse, you know it, but you’re hoping it’ll just sort itself out. Spoiler alert: it won’t. Everyone’s having a dabble with AI tools—ChatGPT, Copilot, Midjourney, a hundred others—and why wouldn’t they? They’re powerful, often free, and seemingly innocuous. But while your teams are busy being ‘innovative’ and ‘efficient’, there’s a gaping chasm forming beneath your feet: the AI policy vacuum.

Some of you, especially those in IT or legal, might be thinking, “Hold on, we’ve got policies for everything. Data privacy, acceptable use, security, shadow IT—it’s all covered.” And I commend you for that. But let me tell you, with the confidence of someone who’s seen this play out in the messy middle: your existing IT policies, robust as they might be for traditional technology, are fundamentally inadequate for governing AI. They’re like trying to catch a greased pig with a fishing net—completely the wrong tool for the job. And the longer you pretend they’re sufficient, the bigger the bloody mess you’ll have to clean up.

This isn’t just another software update; it’s a paradigm shift. And if you’re a leader trying to make sense of this, a practitioner suddenly tasked with being ‘the AI person’, or an IT/security professional battling the proliferation of ungoverned tools, then this is for you. We’re going to dig into why your old rulebook is failing, and what you need to start doing about it. This is Part 1 of ‘The Policy Gap’ series, and it’s time to face facts.

Why AI isn’t just another tool

Before we pick apart your existing policies, we need to understand why AI breaks them. It’s not just another application you install on a server or a SaaS subscription you manage. AI is fundamentally different in several critical ways:

  • Data Interaction: Traditional software processes data you explicitly input or store. AI consumes vast amounts of data for training, often from myriad sources, and then generates entirely new data based on complex patterns. This changes the game for privacy, ownership, and accuracy.
  • Autonomy and Decision-Making: Many AI systems aren’t just following a script; they’re making inferences, predictions, and even decisions with varying degrees of autonomy. Who’s accountable when an AI makes a bad call, say a screening model that wrongly rejects a qualified applicant? Your old policies probably don’t have a clause for ‘algorithmic error’.
  • Opacity (The Black Box): Ever tried to fully understand why a complex AI model produced a specific output? It’s often incredibly difficult, sometimes impossible. This lack of explainability makes traditional auditing and compliance checks a nightmare.
  • Rapid Evolution: The pace of AI development is blistering. New models, capabilities, and risks emerge almost weekly. A policy written today could be obsolete by next quarter. Your traditional IT policies are built for stability, not this kind of relentless change.
  • User Accessibility: Unlike enterprise software that typically requires IT-managed deployment, many powerful AI tools are freely accessible in a browser. Anyone with an internet connection can start using them, bypassing traditional procurement and security gates entirely.

So, when someone tells you,