AI is the new default for product discovery
Exploding Topics surveyed 1,009 US consumers in early 2026 and found that 77.6% had used AI tools for shopping in the previous six months, with 43.21% doing so weekly [5]. ChatGPT led the pack, used by 77.56% of those AI shoppers, followed by Google Gemini at 58.21%. Yet nearly one in three of those same shoppers won’t let AI spend a dollar on their behalf, and the most common amount consumers would trust an AI agent to spend autonomously is zero [5]. That gap between enthusiastic research adoption and near-total payment refusal is now the most consequential conversion problem in AI-assisted ecommerce.
What consumers actually do with AI during shopping sessions tells a clear story. Product research tops the list at 68.5%, followed by price comparison at 55.19% [5]. About 44% of shoppers now start their buying journey with an AI tool, while another 44.8% use AI as a supplement alongside traditional retail sites [5]. The fully AI-driven purchase journey, where a consumer starts and finishes inside an AI interface, remains vanishingly rare at 2-9%. Cross-Border Magazine’s analysis of US ecommerce growth confirms the broader trend: generative AI usage for shopping jumped from 38% in 2024 to 51% in 2025, and the 2026 numbers suggest the curve hasn’t flattened [6].
AI has also proven to be a genuine demand generator, not just a research shortcut. According to the same Exploding Topics data, 68.64% of respondents said AI influenced them to buy items they wouldn’t have purchased otherwise, and 37.18% reported that AI made product research “much easier” [5]. Adoption is accelerating, too: 39.1% said they use AI for shopping “much more” than they did six months ago. For marketers, this means generative engine optimization (GEO) is no longer optional if you want to appear in the discovery layer where nearly half of all shopping journeys now begin. And all that demand creation runs headlong into a wall the moment the shopper reaches the payment step.
Why the trust breaks down at checkout
Riskified’s Q1 2026 survey of 2,000 US and UK consumers found that 55% are uncomfortable with AI making purchases on their behalf, a dramatic reversal from Q4 2025 when 70% expressed comfort with the idea [9]. That 25-point swing in a single quarter is striking, and it happened while adoption for research tasks was still climbing. Something specific spooked consumers about the transactional side of AI, even as they grew more dependent on it for everything leading up to the transaction.
Fraud fear is the most cited reason. In the Riskified data, 53.9% of respondents said AI increases the risk of payment fraud [11]. They’re not wrong to worry: BNY reported that AI-enabled scams surged 1,210% in 2025 [1], and new account fraud climbed 13% to $7 billion in the same year [4]. Over 75% of US firms experienced payments fraud in 2025 [2]. Consumers are reading these headlines, and the result is a rational, evidence-based reluctance to hand financial credentials to AI systems.
But fraud isn’t the only driver. There’s a deeper psychological issue around control and opacity. When AI recommends a product, the consumer still clicks “add to cart,” reviews the price, enters a shipping address, and confirms the order. Every step is visible and reversible. Agentic commerce, where an AI agent handles the entire purchase flow autonomously, removes those checkpoints. The Exploding Topics survey found that 51.45% of consumers are at least somewhat uncomfortable storing card details with AI tools [5]. Even among weekly AI shoppers, the most enthusiastic cohort, only 50.69% were comfortable with card storage [5].
“What we’re seeing is a widening gap between adoption and trust… Shoppers want the convenience and personalization AI can deliver, but they’re not yet willing to hand over control or responsibility.”
Jeff Otto, CMO, Riskified
There’s also a question of who consumers believe should be held accountable when things go wrong. Riskified found that 50.8% think the AI platform should bear responsibility for unauthorized purchases, while 23.2% point to the retailer and only 18.7% accept personal responsibility [9]. This fragmented liability picture means no single party has a clear incentive to build the trust infrastructure consumers are demanding, and until that changes, the checkout wall stays up.
How the payment trust gap hurts conversions
I’ve been tracking AI-assisted shopping flows for the past year, and the pattern is remarkably consistent: AI generates interest, surfaces options, and narrows consideration sets faster than any previous tool, then the consumer exits the AI environment entirely to complete the purchase on a retailer’s own site. That handoff is where conversion leaks. Every additional click, every context switch, every moment of re-entering information on an unfamiliar checkout page is a chance for the shopper to abandon. The irony is that AI makes the top of the funnel more efficient while making the bottom of the funnel more fragmented.
The numbers bear this out. While 61.5% of consumers used AI for product discovery, 46.5% said they trust no company to make autonomous purchases on their behalf [11]. That’s not a soft preference; it’s a hard rejection of the entire agentic commerce model as currently constructed. When consumers do set an autonomous spending cap, the median is $50, and the mode is $0 [5]. Even among weekly AI shoppers, only 20.87% would allow an AI agent to spend more than $100 [5]. For any retailer selling products above that threshold (which is most of them outside consumables), agentic checkout is functionally off the table for the vast majority of shoppers.
This creates a specific problem for retailers who have invested heavily in AI-powered personalization and recommendation engines. The AI does its job well: 68.64% of consumers report buying things they wouldn’t have purchased without AI influence [5]. But that demand generation doesn’t translate into frictionless conversion because the trust architecture for payments hasn’t kept pace with the trust architecture for recommendations. Retailers are essentially building sophisticated AI-powered demand engines that feed into the same old checkout flows, and the disconnect between those two experiences is itself a source of friction. A shopper who just had a fluid, conversational AI interaction is now staring at a five-step checkout form, and the cognitive gap between those experiences is growing wider as AI discovery gets better.
There’s a subtler issue, too. About 27.52% of consumers believe AI shopping tools primarily serve the interests of AI companies, while 27.32% think they serve brands [5]. That skepticism about whose interests AI represents compounds the payment trust problem. If a consumer already suspects the recommendation was optimized for someone else’s margin, they’re even less likely to let that same system handle their money.
Three strategies to bridge the AI trust gap
The most straightforward approach is meeting consumers where their stated preferences already are: 73.9% of respondents in the Riskified survey said they expect biometric verification or one-time passwords before any AI-initiated transaction [11]. That’s not a nice-to-have; it’s a prerequisite. Retailers and payment processors who build visible, human-in-the-loop confirmation steps into AI-assisted checkout will convert at higher rates than those who try to push fully autonomous flows. The FIDO Alliance is already developing standards for trusted AI agent interactions, which would give agents verifiable authorization credentials without requiring consumers to hand over raw card numbers [12]. Adoption of those standards, once finalized, should be a priority for any retailer building agentic commerce capabilities.
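The verification expectation above is a flow-design decision as much as a policy one. As a minimal sketch, assuming a hypothetical checkout service (the `Purchase` type, `checkout`, and the `verify` callback are all illustrative names, not any real payment API), the rule consumers are asking for amounts to: any AI-initiated transaction must pass a visible step-up check before authorization.

```python
# Hypothetical sketch of a human-in-the-loop gate for AI-initiated purchases.
# Names (Purchase, requires_step_up, checkout) are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class Purchase:
    merchant: str
    amount_usd: float
    initiated_by_agent: bool  # True when an AI agent, not the human, started checkout


def requires_step_up(purchase: Purchase) -> bool:
    """Mirror the stated consumer expectation: every AI-initiated
    transaction triggers biometric or OTP verification."""
    return purchase.initiated_by_agent


def checkout(purchase: Purchase, verify) -> str:
    """`verify` is a callback where a real OTP/biometric prompt would plug in."""
    if requires_step_up(purchase):
        if not verify(purchase):
            return "declined: verification failed"
    return "authorized"


# An agent-initiated purchase is held until the human confirms;
# a human-initiated one proceeds as before.
print(checkout(Purchase("Acme", 120.00, initiated_by_agent=True), verify=lambda p: True))
print(checkout(Purchase("Acme", 120.00, initiated_by_agent=False), verify=lambda p: False))
```

The design point is that the gate is unconditional for agent-initiated flows: the human stays in the loop regardless of amount, which matches the 73.9% expectation rather than trying to negotiate it away.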
Second, retailers should consider tiered autonomy models that respect the spending thresholds consumers have already set for themselves. If the median autonomous spend cap is $50, design the AI checkout experience around that reality instead of fighting it. Let AI agents handle reorders of consumables and low-cost items autonomously while routing higher-value purchases through a confirmation step that gives the consumer visible control. This isn’t a retreat from agentic commerce; it’s a realistic on-ramp. Weekly AI shoppers, the cohort most likely to expand their comfort zone over time, already show higher card-storage comfort at 50.69% [5]. Building trust with low-stakes transactions is how you eventually earn permission for high-stakes ones.
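The tiered model above reduces to a simple routing rule. Here is a minimal sketch, assuming a per-user spend cap seeded from the $50 median the survey reports (the function name and default are illustrative, not any vendor's implementation):

```python
# Hypothetical sketch of the tiered-autonomy routing described above.
# The $50 default reflects the median autonomous spend cap consumers report [5];
# in practice the cap would be configurable per user.
AUTONOMOUS_CAP_USD = 50.00


def route_agent_purchase(amount_usd: float, cap_usd: float = AUTONOMOUS_CAP_USD) -> str:
    """Low-value purchases (e.g. consumable reorders) complete autonomously;
    anything above the user's cap routes to a visible confirmation step."""
    if amount_usd <= cap_usd:
        return "auto-complete"
    return "route-to-human-confirmation"


print(route_agent_purchase(12.99))   # a consumable reorder stays autonomous
print(route_agent_purchase(249.00))  # a high-value item gets human sign-off
```

The cap is the trust dial: as a shopper's comfort grows through successful low-stakes transactions, raising their personal `cap_usd` widens the autonomous tier without ever removing the confirmation path.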
Third, and this is where I think most retailers are underinvesting, the liability question needs a clear answer. When 50.8% of consumers say the AI platform should be responsible for unauthorized purchases and 23.2% say the retailer should be responsible [9], the worst possible response is ambiguity. Retailers who proactively publish clear, consumer-friendly liability policies for AI-assisted purchases will differentiate themselves. Think of how PayPal’s buyer protection program reduced friction for early ecommerce by giving consumers a clear recourse path. AI-assisted checkout needs an equivalent guarantee, and the retailers or platforms that offer it first will capture disproportionate trust. AmEx and J.P. Morgan are reportedly building agentic commerce capabilities [11], and the financial institutions that pair those capabilities with explicit fraud guarantees will likely set the standard.
Agentic AI in commerce
Google’s AP2 protocol for verifiable agent authorization and the FIDO Alliance’s work on agent authentication standards [12] suggest that the infrastructure for trusted agentic payments is being built, but the timeline for consumer adoption is likely longer than the hype cycle implies. McKinsey’s fintech analysis points to AI and digital assets as converging forces in financial services [10], yet the consumer data tells a story of retrenchment, not acceleration. Trust dropped 25 points in a single quarter [9], and AI-enabled fraud is growing faster than AI-enabled trust-building.
I think the industry is making a familiar mistake: assuming that because consumers adopt a technology for one purpose, they’ll naturally extend it to adjacent purposes. People adopted smartphones for communication and then for payments, but that transition took nearly a decade and required Apple Pay, tokenization, biometric authentication, and explicit bank guarantees before it reached mainstream comfort. Agentic AI commerce is asking consumers to make a similar leap, from “AI helps me find things” to “AI spends my money,” and the trust infrastructure isn’t remotely close to where mobile payments were when Apple Pay launched. Forbes has documented how personalization enthusiasm is curdling into what they call “paranoia” as AI capabilities expand faster than consumer comfort [3].
For marketers, the practical implication is that GEO and AI-optimized discovery will remain the high-ROI investment for the foreseeable future, while agentic checkout should be treated as an experimental channel with limited near-term revenue impact. The 77% adoption figure for AI-assisted shopping is real and growing, and brands that aren’t visible in AI-generated recommendations are already losing share of consideration. But the conversion architecture still runs through traditional checkout, and optimizing that handoff from AI discovery to human-controlled payment is where the immediate money is. The retailers who win in 2026 and 2027 won’t be the ones who push hardest for autonomous AI purchasing; they’ll be the ones who build the smoothest bridge between AI-powered discovery and trust-verified checkout, accepting the gap as a feature of consumer psychology rather than a bug to be engineered away.
Sources
- How AI Is Changing Payments Fraud and Fraud Prevention – BNY
- Over 75% of US Firms Experienced Payments Fraud in 2025
- Personalization To Paranoia – Why Consumers Pull Back As AI Expands
- Fraud Losses Stabilize, But AI-Driven Threats Are Eroding Trust
- New data: 77% use AI to shop. Nearly 1 in 3 won’t let it spend.
- The AI is Driving E-Commerce Growth in the USA
- AI shopping hits a trust ceiling even as AI adoption rises
- Consumer trust in AI drops
- Fintech industry trends: AI, digital assets, and more
- Many Consumers Say “No Thanks” to Agent-Based Payments
- FIDO Alliance to Develop Standards for Trusted AI Agent Interactions

