Visa AI Agents Can Pay for You—But Should They?


What Visa Actually Launched in March 2026

On March 17, 2026, Visa launched its Agentic Ready programme — a structured framework that lets issuing banks test and validate payments initiated by AI agents on behalf of consumers. The rollout began in Europe, with 21 banks signed up for the first phase, including Barclays, HSBC UK, Revolut, Commerzbank, Banco Santander, Nationwide Building Society, Nexi Group, Raiffeisen Bank International, and DZ Bank. Swiss issuer Cornèrcard also joined as an early partner.

The programme is not a concept demo: Visa has already completed hundreds of real, production-grade agent-initiated transactions. In the most concrete demonstration to date, Banco Santander used a Visa credential issued in Spain to have an AI agent purchase a book. The agent triggered authorization, tokenized payment, and network settlement with no manual input from the consumer.

That sequence — a machine making a real purchase on your behalf, against your real financial account, with no human touching the checkout — is the actual product now entering mainstream testing. The technology works. The harder questions are about what happens when it goes wrong.

How Agent Authentication Works (And What It Relies On)

Visa’s approach to agent authentication centers on tokenization. Instead of exposing a consumer’s actual card number to the AI agent, Visa substitutes it with a unique digital token tied to that account. The agent uses the token, not the real credential. Biometric verification — fingerprint or face scan — links the token to a verified account holder at setup time, establishing a chain of consent before any agent transaction occurs.
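The shape of that setup step can be sketched in a few lines. This is a minimal illustration, not Visa's actual API: the names (`enroll_agent`, `AgentCredential`) and the in-memory vault are hypothetical, and real tokenization happens inside the network and issuer systems. The point it shows is structural: the agent only ever receives the token, and no token is issued without a passing biometric check.

```python
import secrets
from dataclasses import dataclass

@dataclass
class AgentCredential:
    """Credential handed to the agent; the real card number never appears here."""
    token: str
    consumer_id: str

# token -> real PAN mapping, held issuer-side only (illustrative stand-in for a vault)
_vault: dict[str, str] = {}

def enroll_agent(consumer_id: str, real_pan: str, biometric_ok: bool) -> AgentCredential:
    """Issue a network token bound to a biometrically verified account holder."""
    if not biometric_ok:
        raise PermissionError("biometric verification failed; no token issued")
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = real_pan  # the mapping stays with the issuer, never with the agent
    return AgentCredential(token=token, consumer_id=consumer_id)
```

Because the agent holds only `token`, a compromised agent leaks a revocable credential rather than the underlying card number.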

Issuers also get configurable spending controls: pre-set limits on what an agent can spend, under what conditions, and with which merchants. The consumer defines those parameters in advance. Visa layers its standard risk scoring on top, the same fraud detection that runs on every card transaction today.
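A spending-control check of that kind reduces to evaluating every proposed purchase against the consumer's pre-set parameters. The sketch below is illustrative and assumes hypothetical names (`SpendingPolicy`, `agent_may_spend`); Visa's actual control surface is issuer-configured, not a public Python API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpendingPolicy:
    """Consumer-defined limits an agent must satisfy (illustrative, not Visa's API)."""
    per_txn_limit: float
    monthly_limit: float
    allowed_merchants: frozenset[str]

def agent_may_spend(policy: SpendingPolicy, spent_this_month: float,
                    amount: float, merchant: str) -> bool:
    """True only if the proposed purchase fits every pre-set constraint."""
    return (amount <= policy.per_txn_limit
            and spent_this_month + amount <= policy.monthly_limit
            and merchant in policy.allowed_merchants)
```

Note what the check cannot do: it validates that a purchase is inside the rules, not that it is the purchase the consumer actually wanted, which is exactly the gap discussed below.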

This is a genuinely clever architecture. Tokenization is battle-tested at scale; extending it to agent credentials is evolutionary rather than revolutionary. The consent model — set rules once, let the agent operate within them — is similar to how standing orders or direct debits work today, just with more granularity and an AI making real-time decisions about when to trigger within those rules.

The weakness is in that last clause. A standing order fires on a fixed schedule for a fixed amount. An AI agent decides dynamically — which means the consumer’s pre-set parameters have to anticipate scenarios the consumer hasn’t imagined yet.

The SCA Problem No One Has Fully Solved

Strong Customer Authentication (SCA) is the regulatory requirement that sensitive transactions be verified with two independent factors — something you know, have, or are. It’s the reason your bank sends an SMS code when you log in from a new device. Under the EU’s Payment Services Directive 2 (PSD2) and its successor PSD3, SCA is mandatory for electronic payments above certain thresholds.

Agent-initiated payments create a structural conflict with SCA. As Addleshaw Goddard noted in February 2026, “SCA becomes impossible to perform in a world where transactions are taking place autonomously without direct payer customer interaction.” Biometric consent at setup time establishes initial authorization, but it doesn’t clearly satisfy per-transaction SCA requirements for each subsequent agent purchase.

The legal grey zone matters for payment service providers (PSPs). If a PSP cannot demonstrate that a specific transaction was duly authorized by or on behalf of the customer, its liability exposure increases significantly. Visa’s tokenization and consent framework helps establish a chain of authorization — but regulators in the UK and EU haven’t yet issued clear guidance on whether agent-delegated consent satisfies PSD3’s SCA requirements for individual transactions. That guidance is expected later in 2026, and until it arrives, participating banks are operating in acknowledged uncertainty.
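The gap can be made concrete by looking at what evidence a PSP can actually attach to an agent-initiated payment. In the hypothetical record below, every transaction points back at the single setup-time consent event rather than a fresh two-factor check, so the per-transaction SCA field can only ever be false. The names are illustrative, not drawn from any real PSP schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class AuthorizationEvidence:
    """What a PSP can produce for an agent-initiated payment (illustrative)."""
    txn_id: str
    delegation_id: str      # points at the one-time biometric consent event
    delegated_at: datetime  # when the consumer verified, not when this purchase ran
    per_txn_sca: bool       # no fresh two-factor check happened at purchase time

def evidence_for_agent_txn(txn_id: str, delegation_id: str,
                           delegated_at: datetime) -> AuthorizationEvidence:
    # Every agent transaction reuses setup-time consent; whether that satisfies
    # PSD3's per-transaction SCA requirement is the open regulatory question.
    return AuthorizationEvidence(txn_id, delegation_id, delegated_at, per_txn_sca=False)
```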

Who Owns the Mistake When an AI Overspends?

Liability in agentic finance is genuinely unresolved. The consumer authorized the agent to act within certain parameters. The agent acted within those parameters — but triggered an unintended purchase due to a misunderstanding of context, a data error, or a capability gap. Who makes the consumer whole?

TRM Labs’ analysis of agentic financial crime risks puts it plainly: “Autonomy redistributes, but does not eliminate, accountability.” The question is where it redistributes to. The consumer delegated authority. The bank issued the credential and runs the rails. The AI provider built the agent. The merchant accepted the payment. Each party has some claim to having acted correctly; the liability falls through gaps between them.

Emerging frameworks are proposing Know Your Agent (KYA) requirements to sit alongside traditional Know Your Customer (KYC) rules — essentially requiring that financial institutions verify and register the AI agents they allow to transact on customer accounts, much as they verify account holders. The World Economic Forum’s January 2026 analysis of AI agent trust flagged this as the foundational governance question: without KYA, dispute resolution has no clear ownership chain when things go wrong.
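What a minimal KYA register might look like, under the stated assumption that these frameworks are still proposals: the class and method names below are hypothetical, and the only property being demonstrated is that an unverified agent can never transact and that every registered agent resolves to an accountable provider and consumer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegisteredAgent:
    agent_id: str
    provider: str     # who built the agent
    consumer_id: str  # whose account it transacts on
    verified: bool

class AgentRegistry:
    """Minimal Know-Your-Agent register: no unverified agent may be admitted."""

    def __init__(self) -> None:
        self._agents: dict[str, RegisteredAgent] = {}

    def register(self, agent: RegisteredAgent) -> None:
        if not agent.verified:
            raise ValueError(f"agent {agent.agent_id} failed verification")
        self._agents[agent.agent_id] = agent

    def owner_chain(self, agent_id: str) -> tuple[str, str]:
        """Resolve who to hold accountable for a dispute: (provider, consumer)."""
        a = self._agents[agent_id]
        return a.provider, a.consumer_id
```

The `owner_chain` lookup is the part the WEF analysis is pointing at: without a register like this, there is no deterministic answer to "whose agent was that?" when a dispute arrives.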

For consumers, the practical concern is more immediate: current credit card chargeback rights and fraud protection frameworks assume a human made the purchase decision, or that fraudulent use occurred without consent. An AI agent operating within a consumer’s granted parameters doesn’t fit neatly into either bucket — it was authorized, but perhaps not for this specific purchase, under these specific circumstances.

What the Adoption Numbers Suggest

Despite the open questions, adoption isn’t waiting for answers. According to a Wolters Kluwer survey cited by Neurons Lab, 44% of finance teams will be using agentic AI in 2026 — a greater than 600% increase over the prior year. Visa reports more than 100 partners in its Intelligent Commerce ecosystem, with 30+ actively building in its sandbox and 20+ agents integrating directly with the Visa network.

The scale signals that the financial industry has decided agentic payments are inevitable and is building infrastructure rather than waiting for a regulatory green light. This is the same pattern that played out with mobile payments and open banking: industry moves first, regulators formalize after, edge cases get resolved by litigation and guidance over years.

For engineering teams building on top of agentic payment APIs, the current moment is a good time to design conservative defaults into consent flows — making rollback easy, audit trails complete, and spending constraints explicit and human-readable. The infrastructure from Visa works. The risk isn’t in the rails; it’s in the assumptions baked into agent decision logic about what the consumer actually wanted.
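Those three defaults can be sketched together: every agent purchase gets an append-only audit entry that stores the exact constraint text the consumer saw, and reversal is a first-class operation rather than an afterthought. This is an illustrative pattern, not any vendor's API; the names (`record_agent_txn`, `rollback`) are hypothetical.

```python
from datetime import datetime, timezone

# Append-only audit trail: entries are added and flagged, never deleted.
AUDIT_LOG: list[dict] = []

def record_agent_txn(txn_id: str, amount: float, merchant: str,
                     constraint_text: str) -> dict:
    """Record an agent purchase with the human-readable rule that permitted it."""
    entry = {
        "txn_id": txn_id,
        "amount": amount,
        "merchant": merchant,
        "constraint_shown_to_user": constraint_text,  # the rule in plain language
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "reversed": False,
    }
    AUDIT_LOG.append(entry)
    return entry

def rollback(txn_id: str) -> bool:
    """Mark a transaction reversed; refund initiation would hang off this flag."""
    for entry in AUDIT_LOG:
        if entry["txn_id"] == txn_id:
            entry["reversed"] = True
            return True
    return False
```

Storing the constraint text verbatim matters for disputes: it lets the consumer, the bank, and a regulator all see the same rule the agent was operating under at the moment of purchase.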

If you’re watching the enterprise AI agent space more broadly, the Snowflake and OpenAI $200M partnership on AI agents over enterprise data is the adjacent story — similar trust and authorization questions, applied to data access rather than financial transactions. And for the bigger picture on where agentic deployment currently stands, the 2026 pilot-to-production transition is still the defining context.


Enjoyed this? Get one AI insight per day.

Join engineers and decision-makers who start their morning with vortx.ch. No fluff, no hype — just what matters in AI.