Visa is warning that the rise of AI-driven “agentic commerce” is creating a fast-moving environment for fraud.
The company says attackers are already exploiting autonomous shopping agents and generative tools to launch cyberattacks at a scale that traditional security systems were not designed to handle.
In a Nov. 20 analysis, Visa said cybercriminals are quickly shifting tactics to target AI shopping agents — systems that compare prices, identify merchants and complete transactions automatically on a user’s behalf. Because these agents eliminate human decision points, attackers can automate scams from end to end, generating synthetic websites, spoofed brands and fake customer-service agents that are difficult for both consumers and machines to detect.
Visa’s Payment Fraud Disruption unit reported a more than 450% surge in dark-web posts discussing “AI Agent” tools over the past six months compared with the previous period. The company also recorded a 25% increase in malicious bot-initiated transactions globally, including a 40% jump in the U.S.
Visa detects fraud risk amid rise of agentic commerce
One emerging threat involves cybercriminals manipulating the logic that AI agents use to find the “best” deal. Visa said cybercriminals can engineer counterfeit merchants to appear legitimate and pass automated checks, prompting AI agents to complete purchases with stored credentials. Attackers can then harvest the payment data and immediately deploy it for unauthorized transactions.
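To make the mechanics concrete, here is a deliberately simplified sketch of that selection step. It is not drawn from Visa’s analysis or from any real shopping-agent framework; the merchant fields, the passes_automated_checks signals and the choose_best_offer function are hypothetical. The point is that once a counterfeit storefront satisfies the same automated signals a legitimate one does, price-first selection logic has no reason to reject it.

```python
# Toy illustration (not any real agent framework) of why a price-optimizing
# agent is exposed: if a counterfeit merchant passes the same automated checks
# as real merchants and lists the lowest price, the agent routes the purchase
# and its stored credentials to it. All names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Offer:
    merchant: str
    price: float
    has_valid_tls: bool       # automated signals a naive agent might rely on
    listed_in_directory: bool

def passes_automated_checks(offer: Offer) -> bool:
    # Signals an attacker can deliberately satisfy when engineering a storefront.
    return offer.has_valid_tls and offer.listed_in_directory

def choose_best_offer(offers: list[Offer]) -> Offer:
    # "Best deal" reduced to lowest price among offers that pass the checks.
    eligible = [o for o in offers if passes_automated_checks(o)]
    return min(eligible, key=lambda o: o.price)

offers = [
    Offer("legit-store.example", 49.99, True, True),
    Offer("spoofed-brand.example", 39.99, True, True),  # counterfeit, but passes
]
print(choose_best_offer(offers).merchant)  # spoofed-brand.example wins the sale
```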
Another concern is the evolution of social-engineering attacks. Visa recently uncovered a network of scam websites that used embedded conversational AI agents to impersonate customer support. These agents engaged victims for days or weeks, offering help, discouraging them from contacting their bank and delaying fraud reports long enough for the scammers to operate undetected.
Visa said criminals are also using agentic AI to build full-scale fraudulent ecosystems. In minutes, they’re creating:
- Convincing storefronts
- Synthetic corporate identities
- Fabricated compliance documents
- Automated payment flows
The speed and volume of these operations can overwhelm legacy detection tools built around slower anomaly tracking.
The company said mitigating these risks requires new verification capabilities, including systems that can identify synthetic content, track rapid operational changes and confirm the identity and intent of AI agents in real time. Visa pointed to its Trusted Agent Protocol, a standards-based framework that applies time-based transaction challenges, verifies agent identity and feeds continuous telemetry into its risk models.
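Visa’s analysis does not publish implementation details, so the following is only a minimal illustrative sketch of what a time-based transaction challenge with agent-identity verification could look like in principle. It is not the Trusted Agent Protocol; the shared-secret provisioning, message format and function names (issue_challenge, verify_agent_response) are assumptions for illustration.

```python
# Hypothetical sketch of a time-based challenge for an AI shopping agent.
# NOT Visa's Trusted Agent Protocol; all details below are illustrative only.
import hashlib
import hmac
import time

CHALLENGE_WINDOW_SECONDS = 30  # how long a challenge response stays valid (assumed)

def issue_challenge(agent_id: str, shared_secret: bytes) -> str:
    """Agent side: derive a short-lived response tied to the agent's identity
    and the current time window, so a replay from an earlier window fails."""
    window = int(time.time()) // CHALLENGE_WINDOW_SECONDS
    msg = f"{agent_id}:{window}".encode()
    return hmac.new(shared_secret, msg, hashlib.sha256).hexdigest()

def verify_agent_response(agent_id: str, shared_secret: bytes, response: str) -> bool:
    """Network side: authorize only if the response matches the expected value
    for the current (or immediately previous) time window."""
    now = int(time.time()) // CHALLENGE_WINDOW_SECONDS
    for window in (now, now - 1):  # tolerate small clock skew
        msg = f"{agent_id}:{window}".encode()
        expected = hmac.new(shared_secret, msg, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, response):
            return True
    return False

# Usage: the agent computes the HMAC with a secret provisioned out of band and
# attaches it to the payment request; the network verifies before authorizing.
if __name__ == "__main__":
    secret = b"provisioned-agent-secret"  # assumed out-of-band provisioning
    challenge = issue_challenge("agent-123", secret)
    print(verify_agent_response("agent-123", secret, challenge))  # True
```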
Visa noted that it has invested more than $13 billion in technology and security over the past five years and continues to block more than 500 fraudulent transactions per minute using AI-driven defenses. Its zero-liability guarantee remains in place for consumers affected by unauthorized charges.
But the company emphasized that no single organization can address the emerging risks alone. Visa said the cross-border nature of agentic AI crime requires payment networks, financial institutions, regulators, law-enforcement agencies, and technology providers to coordinate intelligence-sharing and develop common verification standards.