Building the business case for AI commerce optimization
Three framings that work for an executive who has to defend the budget — and the two that consistently land flat. The argument shapes are durable; the supporting numbers have to come from the catalog's own data.
Reference reading: For the underlying economics framing, see the economics of AI commerce. For the technical scope of what “optimization” actually means, see the 6 dimensions of AI readiness.
The hardest part of shipping AI commerce work in 2026 is often not the technical work. It’s getting an executive who isn’t watching the ChatGPT and Perplexity surface to fund the catalog work that makes a brand visible there.
The technical case is straightforward: structured data, clean feeds, identifier coverage, conversational fields. The executive case is harder because the channel is small enough today that “we should optimize for it” can read as a distraction from channels producing revenue right now.
This post is about the framings that consistently land — and the ones that don’t. The argument shapes below are durable; the supporting numbers in any specific pitch have to come from the catalog’s own data, not borrowed industry multipliers.
What the executive is actually deciding
The decision is not “is AI commerce real” — it generally is. The decision is “is the time and attention worth more spent here than elsewhere on the roadmap.”
Three things shape that decision. Lumio cannot supply the specific numbers — they vary by catalog and by category — but the questions are consistent:
- What is the marginal cost of AI commerce optimization, given how much of the work the catalog has already done for Google SEO?
- What is the per-customer value of a customer acquired via the AI surfaces, in the catalog’s own data?
- What is the cost of waiting — does ranking-signal calibration compound such that catalogs that wait fall further behind?
The job of the business case is to put honest answers to these three questions in front of the decision in a form the executive can act on.
Framing 1: The marginal-cost frame
This frame is the easiest to operationalize because it doesn’t require a separate team, separate budget, or separate roadmap slot.
The argument: a catalog already running competent Google SEO already has much of what AI commerce optimization requires. Schema.org markup, identifier coverage, feed management — these are not new disciplines. The work is finishing the cases the SEO team deprioritized because they didn’t move organic rankings: complete schema on every product (not just the top SKUs), identifier coverage on private label, conversational metafields, parity between Google Merchant Center and other commerce-data destinations.
The reframe: this isn’t a new program. It’s closing the long tail of work the SEO team would have done eventually. The cost is incremental; the surface count it unlocks is multiplicative.
Operationalizing the frame: audit the catalog and produce a concrete inventory of what’s already in place vs. what’s missing. The marginal cost is the work needed to close the gap. That estimate is real and specific to the catalog; it doesn’t require borrowed industry benchmarks.
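The audit itself can start as a script rather than a project. A minimal sketch, assuming the catalog can be exported as a list of product records; the field names here are illustrative placeholders, not a fixed schema:

```python
# Gap audit sketch: count, per field, how many products are missing it.
# REQUIRED_FIELDS is an assumption about what "AI-ready" means for this
# catalog; swap in the fields the real audit cares about.
REQUIRED_FIELDS = ["gtin", "brand", "description", "schema_markup", "conversational_copy"]

def audit_catalog(products):
    """Return a {field: missing_count} inventory for the proposal's scope line."""
    gaps = {field: 0 for field in REQUIRED_FIELDS}
    for product in products:
        for field in REQUIRED_FIELDS:
            if not product.get(field):
                gaps[field] += 1
    return gaps

# Two invented records to show the shape of the output.
catalog = [
    {"gtin": "00012345678905", "brand": "Acme", "description": "Blue widget",
     "schema_markup": True, "conversational_copy": ""},
    {"gtin": "", "brand": "Acme", "description": "Red widget",
     "schema_markup": False, "conversational_copy": ""},
]
print(audit_catalog(catalog))
```

The output is the marginal-cost estimate in raw form: each nonzero count is a backfill task that can be sized in hours.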
Framing 2: The unit-economics frame
This frame works for finance audiences, but only if the catalog has the data to back it up. Without the data, the frame is unsupported — and unsupported assertions in a CFO conversation are worse than the original problem.
The argument: an AI-acquired customer may be unusually valuable. The plausible mechanism is selection — the agent did filtering work in conversation before the click, which biases toward articulated, higher-fit customers.
Three things would need to be true for this argument to land:
- The conversion rate of AI-referred sessions is measurably higher than other organic sessions, in the catalog’s data.
- AOV is measurably higher.
- 12-month LTV is measurably higher.
Lumio does not have published industry-wide multipliers for these. The honest version of the proposal is: “We hypothesize the per-customer economics are different. We propose 12 weeks of cohort instrumentation, then revisit budget allocation with our own numbers.”
The reframe: don’t argue from borrowed multipliers. Argue from the catalog’s own data once it’s measured. The proposal is an instrumentation step first; the budget argument follows.
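The instrumentation step is small enough to sketch. This assumes session logs with a referrer host, a converted flag, and an order value; the referrer hostnames and field names are assumptions for illustration, not a published classification list:

```python
# Cohort split sketch: AI-referred vs. other sessions, with the two
# metrics the unit-economics frame needs first (conversion rate, AOV).
# The hostname set is an assumed starting point; maintain it from the
# catalog's own referrer data.
AI_REFERRERS = {"chatgpt.com", "chat.openai.com", "www.perplexity.ai", "copilot.microsoft.com"}

def cohort_metrics(sessions):
    """Sessions, conversion rate, and AOV per cohort."""
    cohorts = {"ai": [], "other": []}
    for s in sessions:
        key = "ai" if s["referrer_host"] in AI_REFERRERS else "other"
        cohorts[key].append(s)
    out = {}
    for name, group in cohorts.items():
        orders = [s for s in group if s["converted"]]
        out[name] = {
            "sessions": len(group),
            "conversion_rate": len(orders) / len(group) if group else 0.0,
            "aov": sum(s["order_value"] for s in orders) / len(orders) if orders else 0.0,
        }
    return out

# Invented sessions to show the output shape.
sessions = [
    {"referrer_host": "chatgpt.com", "converted": True, "order_value": 120.0},
    {"referrer_host": "chatgpt.com", "converted": False, "order_value": 0.0},
    {"referrer_host": "www.google.com", "converted": True, "order_value": 60.0},
    {"referrer_host": "www.google.com", "converted": False, "order_value": 0.0},
]
print(cohort_metrics(sessions))
```

Twelve weeks of this, plus LTV tracking on the same cohort keys, produces the numbers the budget conversation actually needs.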
Framing 3: The compound-cost frame
This frame works for operators who think in competitive position rather than current revenue.
The argument: AI agents are still developing how they rank product recommendations, and a meaningful share of the calibration looks likely to come from the catalogs that populate the most complete structured data. As more catalogs ship the data, the floor likely rises — what counts as “well-specified” today probably becomes “average” later.
The reframe: the question isn’t “what does this earn this year.” It’s “what does the catalog look like to AI agents over the next two to three years, given the work the comparison set is doing right now.”
The hedge: the speed of this compounding is uncertain. AI agent ranking algorithms are not published. The mechanism is plausible but not proven; treat it as a directional argument, not a quantified one.
Two framings that backfire
These come up often enough to be worth naming as anti-patterns.
“AI is going to disrupt search and we need to be ready”
This framing fails because it’s both unfalsifiable and overstated. Search isn’t disappearing in the near term. The executive listening has heard “the next big disruption” enough times to discount it.
It also positions the work as speculative rather than concrete. The catalog work is concrete (add structured data, sync the feed, populate identifiers). The disruption framing makes it sound like a bet on a future state, which makes it easy to defer.
The replacement: skip the disruption rhetoric. Argue from concrete operator-level facts — what catalogs are shipping, what surfaces are publicly published (e.g., Microsoft’s October 2025 ads guidance for AI search), what specific changes produce what specific surfacing.
“Look at how much ChatGPT traffic we already have”
This framing fails when the dashboarded number is small enough not to carry the argument. A CMO hearing “we get 1.2% of sessions from AI referrers” reasonably concludes the channel doesn’t matter. Quoting a higher “real” number from a corrected-attribution argument typically reads as an excuse for under-instrumented data, not as evidence of channel size.
The replacement: lead with marginal cost (Framing 1). Channel size is a measurement project, not an argument anchor.
Sequencing the conversation
The most effective business cases combine the framings in this order:
- Open with the marginal-cost frame. Position the work as finishing what the SEO team already started, not as a new program. This lowers the perceived size of the ask before the rest of the conversation.
- Bring in the unit-economics frame as a measurement proposal. “Here’s what we’d measure to find out whether AI-acquired customers behave differently. Revisit in 12 weeks.”
- Close with the compound-cost frame for the strategic argument. Hedge it explicitly — directional, not quantified.
Opening with the strategic argument (which is the most exciting one to make) often gets dismissed as speculation before the marginal-cost frame can land.
What the proposal actually looks like
A proposal that lands has four parts:
- Scope. The specific catalog work (structured data audit, feed sync, identifier backfill, conversational fields). The hours required come from the catalog’s own audit, not from a borrowed estimate.
- Cost. Concrete hours or dollars, derived from the audit.
- Measurement. The cohort instrumentation that will produce the revisit numbers — AI-referrer classification, LTV-by-acquisition-channel tracking.
- Revisit. A specific date (typically 90–180 days out) at which the team will report cohort behavior and recommend whether to continue investing.
The revisit is what most proposals skip and what most executives quietly want. “Try it for a quarter, look at the numbers, decide” is a much easier yes than “fund a permanent new program.”
What to ship this month
Three concrete moves to operationalize the case:
- Audit how much of the structured data and feed work is already in place. The remaining work is the marginal cost. This becomes the scope line in the proposal.
- Pull the catalog’s existing AI-referrer data, even imperfect GA4 default channel grouping numbers. Note the growth rate over the last 12 months. Use the catalog’s own number, not a borrowed one.
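The growth-rate pull is a one-liner once the monthly counts are exported. A sketch with invented numbers, assuming a trailing-12-month series of AI-referred session counts from GA4:

```python
# Trailing-12-month growth check on AI-referred sessions.
# The series is invented for illustration; the real one comes from the
# catalog's own GA4 export, imperfect channel grouping and all.
monthly_ai_sessions = [180, 210, 260, 300, 340, 420, 500, 560, 640, 730, 810, 930]

def trailing_growth(series):
    """Multiple between the first and last month of the window."""
    return series[-1] / series[0]

print(f"12-month growth: {trailing_growth(monthly_ai_sessions):.1f}x")
```

Even if the absolute session count is small, a steep multiple is the catalog's own number and belongs in the proposal; the level does not.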
- Draft the proposal in the four parts above. Scope, cost, measurement, revisit. Keep it under two pages.
The business case for AI commerce optimization in 2026 isn’t “this channel will be huge soon.” It’s “the work is mostly done, the per-customer value may be different (worth measuring), the cost of not finishing is rising, and the revisit point is 90 days away.” That argument lands. The disruption story doesn’t.