Trust & Identity Infrastructure
KYC verifies customers. KYB verifies businesses. KYA verifies the agents acting on their behalf.
The Trust Problem
There is a number that should keep every payments and commerce leader awake at night: 16 percent. That is the proportion of US consumers who currently trust AI to make payments on their behalf. In the UK, only 29 percent would trust AI to make even small automated payments.
This is not a technology problem. The technology to execute agent-initiated transactions exists today. The problem is trust.
The rails are built. Santander and Mastercard have completed live agent payments through production infrastructure. Stripe's Shared Payment Tokens are in production. Visa has processed hundreds of agent-initiated transactions with ecosystem partners. What does not exist is widespread consumer confidence.
What Trust Means in Agentic Commerce
Identity: Is this agent what it claims to be? Who built it? Who deployed it?
Authority: Is this agent authorised to act on behalf of this specific consumer, for this specific transaction, up to this specific amount?
Integrity: Have the agent's instructions been tampered with? Is it doing what the consumer actually asked it to do?
Accountability: If something goes wrong, who is responsible? Can we trace the chain of decisions back to a verified human?
In traditional commerce, trust is inferred from credential possession. If someone has your card number, CVV, and billing address, the system assumes they are you (or at least authorised by you). Fraud systems look for anomalies, but the baseline assumption is that credential equals authorisation.
In agentic commerce, this assumption breaks down completely. The agent always has the credentials. It was given them deliberately. The question is not whether the agent has access, but whether this specific action, at this specific moment, for this specific amount, was actually what the consumer intended.
This is why trust infrastructure, not payment infrastructure, is the layer that will determine how fast agentic commerce scales.
Know Your Agent: The Framework
KYA (Know Your Agent) verifies the AI agents acting on behalf of customers and businesses, just as KYC verifies customers and KYB verifies businesses.
The term entered academic literature in February 2025. Within months, it jumped from academia to production. Visa launched the Trusted Agent Protocol in October 2025. Sumsub launched AI Agent Verification in January 2026. Trulioo partnered with Worldpay. The concept is no longer theoretical.
KYA answers three fundamental questions about any AI agent operating in a commerce environment: Who made this agent? Who does it represent? What can it do?
Unlike KYC, which is typically a point-in-time verification, KYA must be continuous. An agent's authorisation can change between transactions. A consumer might revoke permission, adjust spending limits, or restrict the agent to specific merchant categories. The trust verification has to happen in real time, not once at registration.
The Four Pillars of KYA
Agent Identity: Unique, persistent identification that remains stable across interactions. This includes cryptographic signatures, digital certificates, and verifiable credentials.
Authority Binding: Linking the agent's actions to a verified human principal. Every agent action can be traced back to informed, bounded consent from a real person.
Runtime Controls: Policy enforcement during execution. Spending limits, merchant restrictions, geographic constraints, time-based permissions. These are the guardrails that prevent authorised agents from exceeding their mandate.
Audit Trail: Tamper-evident logging of every agent decision and action. When something goes wrong, the audit trail lets you reconstruct what happened, who authorised it, and where the breakdown occurred.
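The four pillars translate naturally into data an agent registry would hold. The sketch below is illustrative Python, assuming nothing about any vendor's schema; every field name is an assumption chosen to mirror the pillars above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch of the four KYA pillars as data. Field names are
# assumptions, not any production registry's schema.

@dataclass
class AgentIdentity:
    agent_id: str        # unique, persistent identifier
    public_key: str      # used to verify the agent's signatures
    developer: str       # who built it
    deployer: str        # who deployed it

@dataclass
class AuthorityBinding:
    principal_id: str    # verified human the agent acts for
    consent_scope: str   # what the principal actually authorised

@dataclass
class RuntimeControls:
    spending_limit: float
    allowed_merchant_categories: list
    allowed_regions: list

@dataclass
class AuditEvent:
    timestamp: str
    action: str
    outcome: str

@dataclass
class AgentRecord:
    identity: AgentIdentity
    authority: AuthorityBinding
    controls: RuntimeControls
    audit_trail: list = field(default_factory=list)

    def log(self, action: str, outcome: str) -> None:
        # Every action appends to the audit trail (pillar four).
        self.audit_trail.append(AuditEvent(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=action,
            outcome=outcome,
        ))
```

Note that the record keeps identity, authority, and controls as separate objects: a consumer can tighten the runtime controls without touching the agent's identity, which is exactly the continuous-verification property KYA requires.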
Gartner expects 40 percent of enterprise applications to embed task-specific AI agents by 2026. Gartner also forecasts that one in four enterprise breaches could be tied to AI agent exploitation by 2028. Without KYA, every one of those embedded agents is a potential attack vector.
The Trust Protocol Landscape
Three major approaches to trust infrastructure are emerging in 2026. They are not mutually exclusive. In fact, the most robust implementations will layer multiple approaches.
Verifiable Intent: Mastercard and Google
Announced March 5, 2026, Verifiable Intent is the most architecturally ambitious trust protocol in the market. Co-developed by Mastercard and Google, endorsed by IBM, Worldpay, Fiserv, Getnet, Checkout.com, Basis Theory, and Adyen, it creates cryptographic proof of consumer authorisation at the moment an AI agent initiates a transaction.
The specification creates a tamper-resistant record that links three things together: consumer identity, specific purchase instructions, and merchant transaction data. All in a single privacy-preserving authorisation record. Built on FIDO Alliance, EMVCo, IETF, and W3C standards, with no proprietary infrastructure required.
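The tamper-resistance property is easier to see in code. The sketch below uses a simple HMAC over the bound fields; the actual Verifiable Intent specification builds on FIDO, EMVCo, IETF, and W3C standards rather than this construction, so treat it purely as an illustration of how binding consumer identity, instruction, and merchant data into one signed record makes tampering detectable.

```python
import hashlib
import hmac
import json

# Hedged sketch: a tamper-evident authorisation record binding consumer
# identity, purchase instruction, and merchant data. An HMAC stands in
# for the standards-based signatures the real specification uses.

def sign_intent(secret: bytes, consumer_id: str, instruction: str,
                merchant_data: dict) -> dict:
    payload = json.dumps({
        "consumer_id": consumer_id,
        "instruction": instruction,
        "merchant": merchant_data,
    }, sort_keys=True)
    mac = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}

def verify_intent(secret: bytes, record: dict) -> bool:
    expected = hmac.new(secret, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison; any change to the payload breaks the MAC.
    return hmac.compare_digest(expected, record["mac"])
```

If a compromised agent alters the instruction after authorisation, verification fails, which is the property that makes the record legally useful in a dispute.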
The strategic significance is in the positioning. While other protocols handle how transactions are executed, Verifiable Intent handles whether those transactions were authorised in the way the consumer intended. That is a narrower but arguably more legally significant question.
Trusted Agent Protocol: Visa
Visa's Trusted Agent Protocol (TAP), launched in October 2025, takes a different approach. Where Verifiable Intent focuses on proving consumer authorisation, TAP focuses on distinguishing legitimate AI agents from malicious bots at the checkout.
This is a critical operational distinction. Fraud systems are built to be conservative. When they see automated behaviour, they reject. Agent-initiated commerce looks exactly like the patterns fraud systems are designed to mistrust: continuous operation, efficient retries, optimisation for completion. Without TAP, issuers see volume without explanation and clamp down.
TAP provides the signal that lets fraud systems relax appropriately. It says: this is not a bot attack. This is a legitimate agent, verified against Visa Intelligent Commerce standards, acting within its authorised scope on behalf of a verified consumer.
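Conceptually, the signal gives a fraud system a third category between "human" and "bot". The sketch below shows that triage; the registry contents and header name are assumptions for illustration, not Visa's actual protocol fields.

```python
# Hedged sketch of the signal an agent-trust protocol provides: a check
# that lets a fraud system distinguish verified agents from unverified
# automation. Header and registry names are illustrative assumptions.

TRUSTED_AGENTS = {
    "agent-123": {"scope": "retail", "verified": True},
}

def classify_request(headers: dict) -> str:
    agent_id = headers.get("X-Agent-Id")
    if agent_id is None:
        return "human-or-unknown"      # fall through to normal fraud rules
    entry = TRUSTED_AGENTS.get(agent_id)
    if entry and entry["verified"]:
        return "trusted-agent"         # relax bot defences appropriately
    return "unverified-automation"     # treat as a potential bot attack
```

The point of the third category is operational: continuous, efficient, retry-heavy traffic from a trusted agent no longer has to look identical to a credential-stuffing attack.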
Visa is working with over 100 partners worldwide. Over 30 are actively building in the VIC sandbox. Over 20 agents and agent enablers are integrating directly.
Human-Binding Verification: Sumsub and Trulioo
Sumsub launched its AI Agent Verification tool in January 2026, positioning it as the only solution that provides "human binding": linking every AI agent action to a verified human identity.
The Sumsub approach works in three steps: Detect whether an action is automated. Evaluate the risk level of the automated action. Apply targeted verification when warranted, including liveness checks to confirm a real human is present and authorised.
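The three steps above can be sketched as a simple decision flow. The risk heuristic and thresholds here are invented for illustration and are not Sumsub's actual scoring logic.

```python
# Hedged sketch of the detect / evaluate / verify flow described above.
# The risk heuristic and threshold are illustrative assumptions.

def handle_action(action: dict) -> str:
    # Step 1: detect whether the action is automated.
    if not action.get("automated", False):
        return "proceed"
    # Step 2: evaluate the risk of the automated action
    # (here: amount relative to the authorised limit).
    risk = action.get("amount", 0) / max(action.get("limit", 1), 1)
    if risk < 0.5:
        return "proceed"
    # Step 3: apply targeted verification when warranted, e.g. a
    # liveness check confirming a real human principal is present.
    return "require-liveness-check"
```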
Trulioo, which has partnered with Worldpay and integrated with Google's AP2 protocol, has published the most comprehensive KYA white paper to date. Their Digital Agent Passport concept envisions every agent carrying verifiable credentials that merchants, payment providers, and regulators can quickly authenticate.
The identity verification approach is complementary to the network-level approaches of Verifiable Intent and TAP. Verifiable Intent proves the consumer authorised the transaction. TAP proves the agent is legitimate. Human binding proves a real person remains in control of the delegation chain.
On-Chain Identity: The Decentralised Alternative
Not everyone agrees that trust should be controlled by payment networks and platform gatekeepers. The crypto ecosystem is building an alternative that starts from a fundamentally different question: can AI agents be trusted without relying on a central platform?
ERC-8004: Agent Identity on Ethereum
ERC-8004, Ethereum's "Trustless Agents" standard, deployed to mainnet on January 29, 2026. It establishes blockchain-native identity infrastructure for AI agents. Each token is a credential NFT containing structured identity data: a unique agent identifier, a capability manifest defining what the agent can do, and a reputation score built from on-chain feedback.
Within 24 hours of launch, agents managing millions of dollars in deposits began registering. The standard allows agents to discover each other, build portable reputation, and transact across organisational boundaries without gatekeepers.
Crypto-Native Payment Rails
Coinbase's x402 is a crypto-native payment standard for AI agents. It enables agents to transact autonomously in stablecoins using smart contract logic. Conditions like "transfer on fulfilment of predefined criteria" are embedded directly in code. Settlement occurs without human intervention.
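The "conditions embedded directly in code" idea can be sketched in a few lines. This is plain Python standing in for smart contract logic, not the x402 standard itself; the field names and the delivery-confirmation criterion are assumptions.

```python
# Hedged sketch of condition-gated settlement in the style described above:
# the transfer condition lives in code and settlement happens without a
# human in the loop. Plain Python, not an actual smart contract.

def settle(order: dict, balances: dict) -> bool:
    # Predefined criterion: settle only once fulfilment is confirmed.
    if not order.get("delivery_confirmed", False):
        return False
    buyer, seller, amount = order["buyer"], order["seller"], order["amount"]
    if balances.get(buyer, 0) < amount:
        return False
    balances[buyer] -= amount
    balances[seller] = balances.get(seller, 0) + amount
    return True
```

On a real chain the condition, the balances, and the transfer would all live inside the contract, which is what removes the need for a trusted intermediary.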
The key difference between the two approaches: payment networks build trust within controlled ecosystems, using existing regulatory frameworks and merchant relationships. Crypto protocols build trust at the protocol level, using cryptographic proofs and on-chain reputation.
For consumer-facing commerce (retail, travel, services), network-based trust protocols will dominate in the near term. For agent-to-agent transactions (automated procurement, API marketplaces, micro-services), crypto rails offer advantages. Sub-cent transactions, pay-per-use billing, and cross-border settlement are native to the protocol. For enterprise deployments, the likely approach is layered: network tokens for consumer-facing transactions, with crypto infrastructure for machine-to-machine payments.
The Fraud Surface
Bots now account for almost 50 percent of all internet traffic, with bad bots near one third. CrowdStrike reports that the average breakout time for attackers fell to 29 minutes in 2025, with the fastest observed at just 27 seconds. Thales reports that 59 percent of companies have experienced deepfake-driven attacks, and 48 percent report reputational damage tied to AI misinformation.
Agentic commerce amplifies every one of these risks. Without KYA, every embedded agent is an attack vector. Consider these scenarios:
Agent Impersonation: A malicious actor creates an agent that appears legitimate but is designed to intercept consumer credentials or redirect transactions. Without KYA, there is no way to distinguish this agent from a legitimate one.
Mandate Manipulation: An agent is given permission to buy a winter jacket under $200. A compromised version instead buys $2,000 worth of electronics. Without Verifiable Intent, the consumer's recourse is limited.
Agent-to-Agent Attacks: In a multi-agent environment where buyer agents negotiate with seller agents, a malicious seller agent could manipulate pricing, hide terms, or exploit the buyer agent's decision logic. The attack surface is code manipulating code at machine speed.
Scale and Speed: A single compromised agent can execute thousands of transactions before human monitoring detects the anomaly. At 29-minute breakout times, a sophisticated attacker can deploy, exploit, and extract value before traditional fraud systems react. This is why KYA must include runtime controls and behavioural monitoring, not just identity verification at registration.
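The mandate-manipulation scenario above is exactly what a runtime control catches: a $2,000 electronics purchase tested against a sub-$200 winter-jacket mandate fails before execution. The sketch below is a minimal illustration; field names are assumptions.

```python
# Hedged sketch of the runtime check that would stop the mandate-manipulation
# scenario: every transaction is tested against the consumer's current
# mandate before execution. Field names are illustrative assumptions.

def within_mandate(mandate: dict, transaction: dict) -> bool:
    if transaction["amount"] > mandate["max_amount"]:
        return False
    if transaction["category"] not in mandate["allowed_categories"]:
        return False
    return True
```

Because the check runs per transaction, it also addresses the continuous-verification requirement: revoking permission or lowering a limit takes effect on the very next attempted purchase.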
Gartner predicts that one in four enterprise breaches by 2028 could stem from AI agent exploitation. The window to build trust infrastructure is now, before the attack surface becomes unmanageable.
Building Your KYA Policy
This section provides a practical framework for designing a Know Your Agent policy for your organisation. Whether you are a payment provider, a merchant, a platform, or a regulator, the same core structure applies.
Agent Onboarding
Every agent that interacts with your systems should go through a structured onboarding process:
- Registration: The agent receives a unique identifier and cryptographic keys. The developer and deployer are verified.
- Capability Documentation: Technical specifications define what the agent can do, what data it can access, and what actions it can take.
- Permission Configuration: Access controls limit the agent's interactions with your systems. Spending limits, merchant categories, geographic restrictions.
- Integration Testing: Verify the agent operates within defined parameters before allowing production access.
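The registration step above might look like the following sketch: issue a unique identifier and a signing secret, and hold the agent in a sandbox state until integration testing passes. In practice this would use an asymmetric keypair and a real developer/deployer verification workflow; the names here are assumptions.

```python
import secrets

# Hedged sketch of agent registration: unique ID, signing secret, and a
# sandbox status gate before production access. Illustrative only.

def register_agent(developer: str, deployer: str, registry: dict) -> dict:
    # Developer and deployer would be verified here (KYB-style checks).
    agent_id = f"agent-{secrets.token_hex(8)}"
    signing_key = secrets.token_hex(32)
    registry[agent_id] = {
        "developer": developer,
        "deployer": deployer,
        "status": "sandbox",   # production access only after integration testing
    }
    return {"agent_id": agent_id, "signing_key": signing_key}
```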
Runtime Governance
Onboarding is necessary but not sufficient. Agents must be monitored continuously:
- Transaction monitoring: Real-time analysis of agent behaviour against expected patterns. Anomaly detection for volume, velocity, amount, and merchant mix.
- Mandate verification: For each transaction, verify the agent's current authorisation against the consumer's active permissions.
- Behavioural analysis: Track patterns over time. An agent that gradually increases transaction amounts may indicate a compromised deployment.
- Kill switches: The ability to immediately revoke an agent's access if anomalous behaviour is detected. Response time matters. At machine speed, even minutes of delay can result in significant losses.
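The monitoring and kill-switch steps above can be combined in a small sketch: count transactions per agent per minute and revoke access the moment the velocity threshold is breached. The threshold and class shape are illustrative assumptions.

```python
# Hedged sketch of runtime governance: velocity monitoring with an
# immediate kill switch. Thresholds are illustrative assumptions.

class AgentMonitor:
    def __init__(self, max_tx_per_minute: int = 10):
        self.max_tx_per_minute = max_tx_per_minute
        self.revoked = set()
        self.counts = {}

    def record(self, agent_id: str, minute: int) -> None:
        key = (agent_id, minute)
        self.counts[key] = self.counts.get(key, 0) + 1
        if self.counts[key] > self.max_tx_per_minute:
            # Kill switch: revoke access the instant the limit is breached,
            # rather than waiting for a human review cycle.
            self.revoked.add(agent_id)

    def is_active(self, agent_id: str) -> bool:
        return agent_id not in self.revoked
```

Revoking in the same code path that detects the anomaly is the design point: at machine speed, queueing the decision for human review is itself a loss event.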
Audit and Accountability
Every agent action should generate a tamper-evident audit record that captures: the agent's identity and verified human principal; the specific mandate (what was the consumer's instruction?); the action taken; the outcome; any deviations from the mandate and the reason.
This audit trail is not just for fraud investigation. It is for dispute resolution, regulatory compliance, and continuous improvement of your trust infrastructure.
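One common way to make an audit trail tamper-evident is hash chaining: each entry includes the hash of the previous entry, so altering any record breaks every hash after it. The sketch below illustrates the technique; the record fields follow the list above but are assumptions, not a prescribed schema.

```python
import hashlib
import json

# Hedged sketch of a tamper-evident audit trail via hash chaining.
# Any alteration to an earlier record invalidates the chain.

def append_record(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": digest})

def verify_chain(chain: list) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

For dispute resolution this matters because the log itself can be handed to a counterparty or regulator, who can verify its integrity without trusting the party that produced it.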
Regulatory Alignment
The EU AI Act takes full effect for high-risk AI systems on August 2, 2026. Many agentic AI systems in commerce will be deemed high-risk, triggering obligations for documentation, risk assessment, transparency, and human oversight. Fines reach up to 35 million euros or 7 percent of global turnover. The Colorado AI Act takes effect in June 2026, requiring risk management policies. Build your KYA policy to exceed current requirements, because requirements are moving faster than most compliance teams expect.
What would change in your fraud detection strategy if you treated every agent action as autonomous, rather than assuming human oversight?
Key Takeaways
- Trust is the bottleneck: Only 16 percent of US consumers trust AI to make payments. Closing this gap requires infrastructure, not reassurance.
- KYA is the framework: Know Your Agent answers three questions: Who made this agent? Who does it represent? What can it do? It must be continuous, not point-in-time.
- Three trust approaches: Verifiable Intent (cryptographic proof), TAP (agent vs. bot), and human binding (verified person). They are complementary layers.
- Decentralised alternative: ERC-8004 and x402 offer blockchain-native agent identity without central gatekeepers. Best for agent-to-agent and machine-to-machine transactions.
- The fraud surface is expanding: 50 percent of internet traffic is bots. 29-minute breakout times. KYA is not optional. It is the minimum trust infrastructure for agent deployment.