Beyond the Lab #002

AI Requires Governance by Design, Not Policy

Organisations are trying to write AI policies before they understand what they're governing. You can't regulate chatbot access into safety — we've spent decades designing systems around human error, and AI is no different. Build AI into specific workflows where governance is structural, not instructional. Design the constraints into the system instead of expecting perfect behaviour.

Published: 15 Feb 2026
Read time: 7 min

Your organisation has an AI problem — but it's not the one you think.

Most organisations I talk to about AI fall into one of two buckets. The first: no AI policy exists, so nobody touches anything. The second: AI policy does exist — and people are too damn terrified to actually use the tool because they're worried they'll get it wrong.

Both buckets produce the same outcome. No learning happens. No capability builds. The organisation stays frozen while the technology keeps moving.

The Two Kinds of Paralysis

The governance-first instinct feels responsible. It feels like the adult-in-the-room move. And I get it — the concerns aren't fabricated. A friend of mine who works in digital transformation kept circling back to governance and risk as the core blocker to AI adoption. I went away and researched her points. A couple of examples stood out:

  • Samsung banned ChatGPT in March 2023 after three employees leaked proprietary code and meeting transcripts within 20 days — despite already having policies warning against it.[^1]
  • A New York lawyer was fined $5,000 after submitting ChatGPT-generated case citations that turned out to be entirely fabricated (Mata v. Avianca, 2023).[^2]

These aren't hypotheticals. They're documented failures that validate the instinct to govern first.

But here's what I keep seeing in practice: the governance-first approach creates its own risk. The risk of standing still. You can't write good policy for something you haven't used — and you can't build capability with a tool everyone's been told not to touch.

The paralysis looks different on the surface. Underneath, it's identical.

You Can't Regulate Chatbot Access Into Safety

The deeper problem isn't insufficient governance frameworks. It's that most organisations are trying to govern an inherently ungovernable thing: open-ended chatbot access.

Think about what we're asking when we hand someone ChatGPT and a policy document. We're asking them to exercise perfect judgment — every time — about what's sensitive, what's confidential, what might produce a hallucination, and what the downstream consequences are of getting any of that wrong. Under time pressure. With a tool they barely understand.

We've spent decades designing systems around the fact that humans make mistakes with technology. Human-centred design exists precisely because we learned — painfully, repeatedly — that "just tell people to be careful" doesn't work. Form validation exists because people mistype. Confirmation dialogs exist because people click too fast. Safety interlocks on industrial machinery exist because people get tired and distracted. Every UX pattern we take for granted is a monument to the principle that you design around human error, you don't policy your way out of it.

And yet with AI, we've somehow decided the answer is "write a document telling people to be careful with their prompts."

Writing AI policies before understanding what you're governing is like writing traffic laws before you've seen a car. You might get a few obvious ones right — don't crash into things, probably — but the nuance of speed limits, lane markings, right-of-way rules? Those emerged from watching how cars actually behaved on actual roads with actual humans behind the wheel. The same principle applies here. You need to see how AI behaves in your specific context before you can govern it meaningfully.

None of this means abandon all rules. Basic guardrails have their place — don't paste client data into public tools, verify anything AI produces before it leaves your desk. But those are a floor, not a strategy.

Governance by Design, Not by Instruction

Key insight: building custom solutions that enforce governance by design is the real opportunity.

This isn't a new idea. I just finished building an income management tool for an organisation — it handles everything from the moment money hits the bank account through to reconciliation against budgets, individual funders, and line items, all the way to syncing back to Xero. Over 30 categories of income. Tied funding, untied funding, agreements, edge cases — context that staff previously had to track down, hold in their heads, and apply correctly every time.

Now all of that logic lives in the system. Good data, good design, structural constraints. Nobody needs to remember which funder requires which reporting treatment. The tool knows. The governance is in the architecture, not in a procedures manual people forget to read.
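To show what "the governance is in the architecture" can mean in practice, here's a minimal sketch in Python. Everything in it is hypothetical: the funder names, the FundingRule fields, and the allocate_income helper illustrate the pattern, not the actual tool described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FundingRule:
    funder: str
    tied: bool                 # tied funding must land against an agreed line item
    reporting_treatment: str   # e.g. "quarterly_acquittal", "annual_summary"

# The rules live in the system as data, not in a procedures manual.
FUNDING_RULES = {
    "state_grant": FundingRule("State Grant", tied=True, reporting_treatment="quarterly_acquittal"),
    "donations": FundingRule("General Donations", tied=False, reporting_treatment="annual_summary"),
}

def allocate_income(funder_key: str, amount: float, line_item: Optional[str] = None) -> dict:
    """Allocate a bank transaction; the rule, not the user's memory, decides what's required."""
    rule = FUNDING_RULES.get(funder_key)
    if rule is None:
        raise ValueError(f"Unknown funder '{funder_key}': add a rule before allocating.")
    if rule.tied and not line_item:
        # Structural constraint: tied funding cannot be booked without a line item.
        raise ValueError(f"{rule.funder} is tied funding; a budget line item is required.")
    return {"funder": rule.funder, "amount": amount,
            "line_item": line_item, "reporting": rule.reporting_treatment}

print(allocate_income("state_grant", 12_500.00, line_item="Youth program wages"))
```

Nobody has to remember that the state grant is tied; the allocation simply can't be saved without a line item.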

The same principle applies to AI — and it's where the real opportunity sits. Say your sales team needs to create client-facing decks. The worry: someone pastes confidential data into ChatGPT, or the AI hallucinates a revenue number that ends up in front of a client. The policy-first response is a document saying "don't put sensitive information into AI tools." The governance-by-design response is building something where that mistake can't happen — a tool that connects directly to your finance system, uses guided inputs for client context, and keeps the AI inside a workflow with structural constraints rather than floating in the open as a general-purpose tool.

The user doesn't need to be an expert in prompt engineering or data classification — they just need to use the tool as designed. Make the right thing easy and the wrong thing hard.

Not every AI application needs this level of engineering. But the principle scales down. Even a simple internal tool that pre-loads context, constrains inputs, and validates outputs is structurally safer than a raw chatbot with a policy PDF attached.
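Here's a hedged sketch of that shape in Python. The helpers fetch_approved_figures and call_llm are placeholders for a read-only finance query and a model client, the figures are invented, and the output check is deliberately blunt; the point is that the data source, the allowed inputs, and the validation live in the structure rather than in the user's judgement.

```python
import re

ALLOWED_TONES = {"formal", "friendly"}  # guided input, not a free-text prompt box

def fetch_approved_figures(client_id: str) -> dict:
    # Placeholder for a read-only query against the finance system.
    return {"FY24 revenue": 1_250_000.0, "FY24 growth": 0.18}

def call_llm(prompt: str) -> str:
    # Placeholder for your model client; returns a canned draft so the sketch runs.
    return "FY24 revenue was 1,250,000 with growth of 18%."

def draft_deck_summary(client_id: str, tone: str) -> str:
    if tone not in ALLOWED_TONES:
        raise ValueError(f"tone must be one of {sorted(ALLOWED_TONES)}")
    figures = fetch_approved_figures(client_id)
    prompt = (
        "Write a one-paragraph client summary using ONLY these figures:\n"
        + "\n".join(f"- {name}: {value}" for name, value in figures.items())
        + f"\nTone: {tone}. Do not invent numbers."
    )
    draft = call_llm(prompt)
    # Structural check: every number in the draft must trace back to the source data,
    # either directly or as a percentage of a source ratio. Otherwise the draft is blocked.
    for token in re.findall(r"(?<![A-Za-z0-9])\d[\d,\.]*", draft):
        value = float(token.replace(",", "").rstrip("."))
        known = any(abs(value - v) < 1e-6 or abs(value - v * 100) < 1e-6 for v in figures.values())
        if not known:
            raise ValueError(f"Unverified figure {token!r} in AI output; blocked before it reaches a client.")
    return draft

print(draft_deck_summary("acme-001", tone="formal"))
```

The user never pastes raw financial data anywhere, and a draft containing a number that can't be traced back to the source data simply doesn't get through.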

So Where Do You Start?

Here's what I see when I work with organisations: people don't want to talk about "AI strategy" in the abstract. They want to look at invoice management — or whatever tedious workflow has frustrated them for years — and ask "could AI help with this?"

That's the right instinct. It's specific. Everyone in the room already understands the problem. The value of improvement is immediately felt. And — this is the part that matters for governance — working on a concrete, bounded problem surfaces actual risks in your actual context.

Maybe the AI misreads a line item format your vendor uses. Maybe it handles GST calculations differently than your finance team expects. Maybe the failure mode is something nobody anticipated because it's specific to how your organisation processes invoices. Those risks only surface when you trial something real — not when you theorise about it in a steering committee.
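A trial like that doesn't need much engineering to have its constraints built in. As a minimal sketch, assuming hypothetical field names and Australia's 10% GST rate, here's the kind of arithmetic check that catches a misread line item before it reaches the ledger:

```python
# Hypothetical field names; GST_RATE assumes the Australian 10% rate.
GST_RATE = 0.10
TOLERANCE = 0.02  # a couple of cents of rounding slack

def check_extracted_invoice(extracted: dict) -> list:
    """Return a list of problems; an empty list means the record can proceed."""
    problems = []
    subtotal = extracted.get("subtotal")
    gst = extracted.get("gst")
    total = extracted.get("total")
    if None in (subtotal, gst, total):
        problems.append("Missing one of subtotal/gst/total: route to a human.")
        return problems
    if abs(subtotal * GST_RATE - gst) > TOLERANCE:
        problems.append(f"GST {gst} is not {GST_RATE:.0%} of subtotal {subtotal}.")
    if abs(subtotal + gst - total) > TOLERANCE:
        problems.append(f"Total {total} does not equal subtotal plus GST.")
    return problems

# Example: the AI misread a line item, so the extracted total no longer adds up.
print(check_extracted_invoice({"subtotal": 500.00, "gst": 50.00, "total": 560.00}))
```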

You can explore policy while doing R&D and trialling in parallel — why not both? Start small. Pick a boring problem. Build the constraints in. See what happens.

That's how you build the understanding that makes governance meaningful rather than performative.

The Takeaway

Something I keep sitting with: most AI governance conversations I hear are really about chatbot access and prompt guidelines. And I think that's solving the wrong problem. The chatbot panic — someone pastes something sensitive, someone trusts a hallucination — isn't really a chatbot problem. It's an "unstructured tool with no guardrails" problem.

The real question is: what would it look like to build AI into one specific workflow where the constraints are designed in, not hoped for?

If you want help figuring out what that looks like, you know where to find me 👋.


References

[^1]: Forbes (2023). "Samsung bans use of AI tools like ChatGPT after spotting misuse of the chatbot". Three separate incidents occurred within 20 days in March 2023, including engineers pasting proprietary source code and meeting transcripts into ChatGPT despite existing company warnings.

[^2]: Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023). Lawyers Peter LoDuca and Steven Schwartz were fined $5,000 after submitting a legal brief containing six fictitious case citations generated by ChatGPT. The case became one of the most widely reported examples of AI hallucination in professional practice.


Louis Razuki

Founder & Guide

I write about working with AI — the tools, the mindsets, the builds that actually deliver. Three years of daily AI practice distilled into experiments, insights, and honest takes on what's real and what's just hype.
