Your Next Security Breach Will Come From AI Agents With Valid Credentials
Written by Amer Altaf, Executive Contributor
Amer Altaf is Founder and CEO of Arkava.ai, a sovereign AI agentic automation consultancy serving UK and European enterprises, and Managing Editor of The Control Layer, a leading publication covering AI, cybersecurity, geopolitics, and leadership.
Shopify’s Chief Information Security Officer has spent fourteen years building the security architecture behind one of the world’s largest commerce platforms. Now, as AI agents begin making purchases autonomously, with no screen, no checkout button, and no human in the loop, he believes the entire model for how we secure online commerce must change. In a candid conversation on The Control Layer podcast, Andrew Dunbar revealed how Shopify is rethinking identity, trust, and security for a world where the buyer is no longer a person. This is what he told me.

What happens when the buyer is not a person?
When did you last buy something online? Now imagine you never had to again. Your AI agent browses catalogues, compares prices, negotiates terms, and pays all on your behalf. You never see a product page. You never click “add to basket.” You simply get a notification that your agent has completed the transaction.
This is not a thought experiment. Shopify, Google, and a growing coalition of major platforms have launched the Universal Commerce Protocol, an open standard that makes commerce programmable for machines. Shopify has seen a 15x year-over-year increase in agentic shopping that leads to purchases on its platform. When a merchant enables UCP, their store generates a manifest file that broadcasts its capabilities to any compliant AI agent. The shopfront is no longer a website designed for human eyes. It is an API designed for machine intelligence.
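To make the idea of a capability-broadcasting manifest concrete, here is a minimal sketch of what such a file might contain. The field names and structure are illustrative assumptions for this article, not the published UCP schema:

```python
import json

# Hypothetical sketch of a merchant manifest that advertises machine-readable
# capabilities to compliant agents. Field names are illustrative assumptions,
# not the actual UCP specification.
manifest = {
    "protocol": "ucp",
    "version": "1.0",
    "merchant": "example-store.example.com",
    "capabilities": ["catalogue.search", "pricing.quote", "checkout.create"],
    "payment_processors": ["example-psp"],
    "credential_provider": "https://idp.example/.well-known/agent-auth",
}

# An agent would fetch and parse this rather than rendering a web page:
print(json.dumps(manifest, indent=2))
```

The point of the sketch is the shift in audience: every entry describes an action a machine can invoke, rather than a page a human can read.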
Andrew Dunbar has been at Shopify since 2012, when the company was a team of one hundred people. He was the first member of the security team. He built the function from the ground up over fourteen years, before which he served as an IT Security Specialist for Global Affairs Canada. His perspective on agentic commerce is not theoretical. He is the person responsible for making sure it does not fall apart.
When I asked Dunbar about his gut reaction to AI agents entering commerce, his answer was revealing. His first concern was not the technology.
“My first reaction wasn’t about the technology. The technology is not the risk. It was the human behaviour. The idea that people are handing over access to sensitive information to an agent to act on their behalf is a totally novel thing in the world of computers.”
Dunbar sees this as a generational opportunity, not just a threat. He drew a striking analogy: just as mobile operating systems benefited from every lesson learned from the security failures of desktop computing, the agentic era has the chance to build trust and authentication in from the start rather than bolting it on after the fact.
“We have the chance to ensure that authentication and trust are built in from the start, not bolted on later.”
5 agentic commerce risks your security team is not ready for
1. Identity fractures when the buyer is a machine
When a human makes a purchase, identity verification follows familiar patterns: a password, a biometric check, and a session cookie. When an AI agent makes a purchase, the identity question fractures. Whose identity is the agent acting under? The consumer who delegated the task? The developer who built the agent? The platform hosting it?
Traditional authentication was not designed for this level of indirection. The NIST AI Agent Standards Initiative, launched in February 2026, now treats agent identity as first-class infrastructure, not an afterthought bolted onto human identity systems. Mastercard’s Verifiable Intent protocol goes further, linking identity, intent, and action into a single privacy-preserving record that confirms who authorised the agent, what instructions were given, and what transaction resulted.
Shopify’s UCP addresses this by separating four distinct roles in every transaction: the platform, the business, the credential provider, and the payment processor. Each has its own permissions, its own rules, and its own trust boundary. As Dunbar explained, the protocol ensures that no single participant needs to trust the others blindly, every step has its own verification built in.
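One way to picture that separation is the sketch below. The four role names come from the article; the permission sets and the per-step verification logic are invented purely for illustration:

```python
from dataclasses import dataclass, field

# Illustrative model of four-role separation: each participant verifies only
# the actions inside its own trust boundary. Permissions and checks are
# assumptions for this sketch, not the real UCP mechanics.
@dataclass
class Role:
    name: str
    permissions: set = field(default_factory=set)

    def verify(self, action: str) -> bool:
        # A role never vouches for actions outside its own boundary.
        return action in self.permissions

platform = Role("platform", {"route_request"})
business = Role("business", {"approve_order"})
credential_provider = Role("credential_provider", {"attest_identity"})
payment_processor = Role("payment_processor", {"capture_payment"})

# A transaction proceeds only if every step passes its own role's check,
# so no single participant has to trust the others blindly:
steps = [
    (credential_provider, "attest_identity"),
    (platform, "route_request"),
    (business, "approve_order"),
    (payment_processor, "capture_payment"),
]
assert all(role.verify(action) for role, action in steps)
```

The design consequence is the one Dunbar describes: a compromise of any one role cannot silently authorise actions that belong to another.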
2. The security signals humans relied on no longer exist
For two decades, the security industry trained consumers to look for signals: the padlock icon, the HTTPS prefix, the browser warning. All of that assumed a human sitting in front of a screen making decisions.
“A lot of our security industry has been built around a premise of your computer will tell you if something bad is about to happen. You’ll get warnings, you’ll see HTTPS, the padlock sign, and you’ll get a prompt. All of that was designed to bring humans along into the process of security on the internet. Once you start acting agentically, you’re outside the context of the browser. A lot of that is not there anymore.”
This is not a minor adjustment. It is the disappearance of the entire visual trust layer that consumers and security teams have depended on. When an AI agent operates outside the browser, outside the screen, the signals that generations of internet users were taught to rely on simply do not exist.
3. Detecting a compromised agent requires a fundamentally new approach
Perhaps the most unsettling challenge in agentic commerce is detection. Traditional fraud systems were trained on human behaviour: how people click, scroll, hesitate, and type. Machines do not hesitate. They do not deviate. A compromised agent may execute thousands of transactions that appear entirely legitimate to every security system in the chain.
Dunbar’s approach to this problem is to go deeper than monitoring outcomes. His team monitors intent.
“With AI agents, you can go a step deeper. You can get chain of thought logging, chain of thought attribution, where you cannot just expect to see what is the outcome of the thing that was being done, but what was the agent thinking at the time when it did that thing.”
He then made a point that stopped me in my tracks:
“You get to ask, what were you thinking when you did that? Which, when you’re dealing with people, you can’t.”
This is the paradigm shift. For the first time in the history of security, you can interrogate the reasoning behind a decision, not just the decision itself. Organisations that fail to build this observability into their agentic systems will be flying blind.
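As a rough sketch of what decision-level observability might look like, the snippet below records an agent's stated reasoning alongside each action, so "what were you thinking when you did that?" can be answered after the fact. The log schema is an assumption for illustration, not Shopify's implementation:

```python
import json
import time

# Minimal sketch of chain-of-thought attribution: capture the "why" next to
# the "what" for every agent action. Schema is illustrative only.
decision_log = []

def log_decision(agent_id: str, action: str, reasoning: str, outcome: str) -> dict:
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,
        "reasoning": reasoning,  # the agent's stated rationale at decision time
        "outcome": outcome,
    }
    decision_log.append(record)
    return record

log_decision(
    agent_id="shopper-agent-7",
    action="checkout.create",
    reasoning="Item matched the user's price ceiling and preferred brand.",
    outcome="order_confirmed",
)

# An investigator can later replay the reasoning behind any transaction:
for record in decision_log:
    print(json.dumps(record))
```

Even a log this simple changes the investigative question from "what did the agent do?" to "why did it decide to do it?", which is the observability gap the section describes.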
4. The most urgent threat is AI-enabled attackers
When I asked Dunbar what he would tell every CISO in the world if he had five minutes, his answer was not about agentic commerce. It was about what is happening right now.
“The most important threat that we need to deal with right now is AI-enabled attackers. Every company is dealing with the fact that attackers now have easy access to deep fakes, to social engineering methodologies, to the ability to craft bespoke malware.”
His top priority is phishing-resistant multi-factor authentication everywhere. Not because it is new advice, but because in a world where an attacker can clone your CEO’s voice and call an employee with a convincing request, there is no substitute for cryptographically secured authentication.
5. Third-party AI use is a blind spot in your risk landscape
Dunbar’s second piece of advice for CISOs was equally direct: understand how your vendors and suppliers are using AI. Most organisations have a reasonable grasp of their own AI adoption. Very few have visibility into how the companies they depend on are deploying it.
“When it comes to AI, you’re not limited to just your own company’s use of AI. You need to understand, how are your vendors and your suppliers using it? How do they govern it? Make this part of your vendor security due diligence.”
This is the supply chain risk that most vendor assessment questionnaires do not yet capture. If a critical supplier has deployed agentic workflows that interact with your data or your systems, that is your risk surface whether you chose it or not.
What to do on Monday morning
Dunbar’s recommendations are specific. First, deploy phishing-resistant MFA across every employee identity, not next quarter but now. Second, audit your AI landscape, including how your vendors and suppliers use it, and make AI governance part of your vendor due diligence. Third, shift your monitoring mindset from infrastructure to decisions: instrument chain-of-thought logging, not just API calls, so you can understand why an agent did what it did, not just what it did.
“We’ve very traditionally operated at the perimeter or in a zero-trust way with a device, a user, and a perimeter. But really take a new mindset to how you could think about securing AI and go a little bit deeper into expectations of understanding their way of thinking, how they arrived at the thing that they’re doing.”
Fourth, engage with the emerging standards now. UCP is open. The NIST AI Agent Standards Initiative is accessible. Microsoft’s Zero Trust for AI framework is published. The organisations that shape these standards will have a significant advantage over those that adopt them reactively.
Hear the full conversation
This article captures the headlines. The podcast captures the nuance, including how Shopify’s UCP separates four transaction roles to prevent credential leakage, why Dunbar runs dedicated bug bounty events for agentic attack surfaces, and the subscription-box analogy that reframes how consumers will learn to trust autonomous agents.
Episode 1 of The Control Layer, “Who Controls the Agent?” is available now on YouTube, Apple Podcasts, and Spotify. Listen and subscribe at The Control Layer.
Follow The Control Layer for weekly analysis on AI, cybersecurity, geopolitics, and leadership where the decisions that shape the next fifty years are examined before they are made.
Prepare your organisation for the agentic era
If your organisation sells online, and nearly all do, agentic commerce is not optional. It is the next operating reality. Arkava is a sovereign AI agentic automation consultancy helping UK and European enterprises, defence contractors, and public sector organisations harness AI with guaranteed data sovereignty and compliance. Whether you need to assess your agentic risk exposure, build trust architectures for autonomous systems, or develop governance frameworks that keep pace with the technology, Arkava delivers Trusted Intelligence with Tangible Results.
Read more from Amer Altaf
Amer Altaf, Founder & CEO, Arkava®
Amer Altaf is Founder and CEO of Arkava.ai, a UK-based sovereign AI agentic automation consultancy helping enterprises, defence contractors and public sector organisations harness AI with guaranteed data sovereignty and compliance.
He is also Managing Editor of The Control Layer, an influential publication exploring the intersection of artificial intelligence, cybersecurity, technology, geopolitics and leadership.
With over two decades of enterprise technology leadership, Amer founded Arkava to bridge the gap between complex AI capabilities and measurable business outcomes.
His mission: Trusted Intelligence, Tangible Results. Read more at thecontrollayer.arkava.ai