
Does AI Control Us, or Do We Control AI?

  • 4 days ago
  • 7 min read

Executive Contributor Steve Butler

Creator of Butler's Six Laws of Epistemic Opposition and the Seven Laws of Agentic Safety, together forming the constitutional AI safety framework. He is the CEO of Luminary Diagnostics.

Ask most senior leaders whether they control AI in their organisation, and they will say yes without hesitation. They will point to their governance committees (often many of them!), their AI policy, their oversight framework, and their risk register. They will tell you about the training they have rolled out and the guardrails they have put in place. They will sound entirely confident, and almost all of them will be entirely wrong.



Not dishonestly wrong. Not carelessly wrong. Wrong in the specific way most governance failures happen: through a gap between what appears to be in place and what would actually hold under pressure.


This is the question I want to put directly to you in this article. Not as a philosophical exercise. Not as a thought experiment. As an operational test.


Picture a boardroom after an AI incident. Something went wrong, a decision was made, a line was crossed. There are consequences the organisation didn't anticipate, and the board asks the obvious question: "Who had the power to stop this?"


Here is what they find: the governance committee meets monthly, and the human-in-the-loop is a junior analyst given just seconds to review each decision. Then there's the governance policy, a document carefully written and properly approved, sitting in a shared drive. So the uncomfortable answer to the board's question is the same as in every organisation that hasn't genuinely solved this problem.


Nobody in the room had the power to stop it. The authority was in the algorithm.


This is not a technology failure. The technology did exactly what it was designed to do. It is a governance failure, the kind that happens when organisations confuse the appearance of control with the reality of it. A policy is not a breakpoint. A monthly committee is not oversight. A junior analyst with seconds to decide is not a human in the loop in any meaningful sense of that phrase.


The chair at the head of the table was occupied. The authority was not.


The question this article considers, simply put, is this: "Does AI control us, or do we control AI?"


The answer most organisations don't want to examine


The first thing I want to be clear about is what "control" actually means. Control is not having a policy. Policies describe intentions; they do not produce outcomes. A policy that says "AI-assisted decisions will be reviewed by a human" does not tell you whether that human had the information, the time, the independence, the authority, and sometimes even the awareness to actually change the outcome. It merely tells you someone signed off on a document.


Neither is control having an oversight committee. Committees can create the appearance of accountability without any individual carrying the weight of it, and when accountability is collective, it is often, in practice, nobody's.


Control is not monitoring either. Monitoring alone merely tells you what happened after it happened. It does not give you the ability to stop it, change it, or redirect it at the moment it matters.


No, genuine control requires something more specific and considerably harder to demonstrate, and the ability to demonstrate it is becoming vital. It requires a named human being, in a real position, with real authority and awareness of what they are responsible for, who can see what the AI system is doing, understands it well enough to challenge it, and has the practical ability to override it at the moment a decision is being shaped, not after it has already been made.


That is what meaningful human authority looks like, and it is far rarer than most organisations realise. Increasingly, they will only find out when it is too late.


How control quietly slips away with AI


It does not happen suddenly; that is what makes it so difficult to catch. The first way control slips is through fluency. Modern AI systems speak with extraordinary confidence. They produce outputs that are polished, coherent, and internally consistent regardless of whether the underlying reasoning is sound, and sometimes when it is little more than unsubstantiated projection. In short, guesswork. When something sounds authoritative, humans instinctively defer to it, not because they are weak or incurious, but because we are wired to read confidence as a signal of knowledge. That wiring gets us into trouble again and again, and with AI it is dangerous. When the signal of confidence is decoupled from actual reliability, as it is in almost all current AI systems, authority starts to migrate from the humans to the outputs without anyone explicitly deciding to allow it. It just happens.


The second way control slips is through speed. AI operates faster than human deliberation, and the gap is widening. Recommendations arrive before the people who need to assess them have had time to think, and decisions get made to keep pace with the system. The human in the loop begins to function as a rubber stamp rather than a governor: present in the process, but no longer meaningfully in control of it.


The third way control slips is through diffusion. When AI is embedded across an organisation, the question of who is responsible for a given output becomes genuinely difficult to answer. The data team, the model, the system owner, the business unit, the executive who approved deployment: responsibility spreads across so many hands that it effectively disappears. Everyone assumes that someone else owns it, and diffused responsibility is not shared responsibility. It is no responsibility. What could possibly go wrong?


The question that actually tests control


There is a single question that cuts through all of this. It is not "do you have a governance policy?" It is not "have you completed an AI risk assessment?" It is not "does your AI system have guardrails?"

The question is this, and it becomes more critical as control slips away: "Can a named human being, in a specific role, be shown to hold real and exercisable authority over AI-shaped decisions at the moment they are being made?"


Not in theory. Not in the policy. In practice. Today. Under pressure.


If the answer is an honest yes, you can demonstrate control. If the answer is no, or if you are uncertain, as will often be the case, then control has already started to drift. The AI is not malicious; it has not taken over in any dramatic sense. But the decisions your organisation makes are being shaped by systems that nobody can meaningfully override, and that is dangerous. Very dangerous.


Why this matters right now


Regulators have noticed. The EU AI Act's obligations are coming into force in phases. The UK's Financial Conduct Authority has issued guidance on AI in financial services. Courts are beginning to see cases where the question "who decided this?" is answered with "the algorithm." Insurers are asking whether the governance claims their clients make can actually be evidenced. I said at the start of 2026 that the most expensive question this year would be "I didn't know it could do that." I am being proved right.


The crucial point here is that the organisations that will navigate this transition well are not the ones with the most sophisticated AI. They are the ones who can answer the control question with evidence rather than assertion. Not "we believe we are in control", but "here is the named person, here is their authority, here is the moment they exercise it, and here is the proof."

That is the shift from AI governance as a compliance exercise to AI governance as a constitutional reality. It is the difference between an organisation that can withstand scrutiny and one that cannot.


What to do with the question


If you are a board member, a CRO, a head of risk, a head of compliance, or a senior executive with AI embedded in your operations, the most useful thing you can do right now is ask the question directly inside your organisation.


Not "do we have a policy?" but "who, specifically, holds meaningful human authority over our most consequential AI-shaped decisions, and can they prove it?"


If the answer comes back clear, with names and evidence, you are in a better position than most. If the answer comes back vague, or committee-shaped, or policy-pointing, you now know where your governance gap is.


An assessment from Luminary Diagnostics is designed to answer exactly that question, systematically, rigorously, and in a form that produces evidence rather than opinion. It maps where meaningful human authority is real, where it is assumed, and where it has already drifted. Not a score. A constitutional determination.


Because the question is not going away, and the organisations that answer it with evidence will be in a fundamentally different position from those that answer it with paperwork.


Does AI control us, or do we control AI? The answer depends on what you can prove, not what you believe.


Follow me on Instagram, and visit my website for more info!

Read more from Steve Butler

Steve Butler, CEO of Luminary Diagnostics

Steve Butler is the founder of the Execution Governance as a Service (EGaaS) category, architecting the future of intelligent, accountable enterprise. His work transforms risk from a reactive problem into a proactive, embedded safeguard against catastrophic failures like Drift, Collapse, and Pollution. As the Chief Strategy & Operations Architect, he proves that true autonomy can only be earned and must be governed by verifiable truth. He is also the author of multiple books that diagnose the fundamental illusions in the AI age and provide the solution: Sentinel, the Epistemic Citadel.

This article is published in collaboration with Brainz Magazine’s network of global experts, carefully selected to share real, valuable insights.

