Do AI agents read your JSON-LD? The honest picture in 2026
May 1, 2026 · 10 min read


Recent testing shows ChatGPT, Claude, and Perplexity ignore JSON-LD when they fetch a page directly. That finding is real — and it's not the whole story. Here's how AI agents actually meet your products, and where structured data still does the work.

Tags: json-ld · structured-data · ai-commerce · research

Reference guide: For the comprehensive primer on JSON-LD for ecommerce, see JSON-LD for AI shoppers. For the Schema.org Product reference, see Product schema for Shopify.

A test made the rounds late last year that cast doubt on the practice of half the AI-search industry. The headline reads the way it sounds: AI chatbots don't read JSON-LD when they fetch your page. The data behind the headline is real, the methodology is clean, and the finding is harder to dismiss than structured-data defenders would like.

It is also incomplete. Read on its own, the test makes JSON-LD look pointless. Read in context — alongside how products actually reach AI agents in 2026 — it reframes what JSON-LD does, where it does the work, and what else has to be true for the work to matter.

The honest picture has three paths, not one. JSON-LD is load-bearing on two of them. Brands that read the headline and pull schema out of their stack are about to lose visibility on the path that matters most to ecommerce.

What the Searchviu test actually found

Searchviu’s October 2025 test built a single product page and distributed eight prices across five different content sources: visible HTML, JavaScript-rendered text, JSON-LD, Microdata, and RDFa. They then queried ChatGPT, Claude, Perplexity, and Gemini five to ten times each and measured which prices each system could surface.

The results, in their words: “JSON-LD Schema Markup is NOT extracted by ANY system during direct fetch.” No system found the price that lived only in JSON-LD. Across the eight prices total:

  • Gemini found 4/8, including the JavaScript-rendered price
  • ChatGPT found 3/8, all from visible HTML or visible Microdata
  • Perplexity found 1/8, after indexing
  • Claude found 0/8

The test is narrow. One page, one product, one window of time. But the finding is consistent enough across the systems Searchviu tested that it is hard to write off. The pattern the test suggests — though one study can’t settle it — is that when an AI agent fetches a live URL and tries to read the page directly, the JSON-LD block may not be where it consistently looks.

That is the part of the story most coverage stops at. It is also the part of the story that says the least about whether JSON-LD matters for shopping queries.

The path the test actually measured

Searchviu measured one thing precisely: the direct-fetch path. A user asks an AI agent a question, the agent fetches a specific URL in real time, and the agent extracts what it can from what came back over HTTP. For most general-purpose queries — research, analysis, summarization — that is the dominant path.

For shopping queries, it is rarely the path that matters. AI shopping agents almost never decide which products to recommend by fetching candidate URLs live and parsing them. They decide from a pre-built product index, populated long before any shopper ever asks the question.

Three paths actually carry products into AI agent answers. The direct-fetch path is the one that gets tested. The other two are the ones that move the needle for ecommerce.

Path one: direct fetch (the path Searchviu tested)

A live URL, a live HTTP request, a live extraction. JSON-LD ignored; visible HTML and JavaScript-rendered content read inconsistently.

This path matters for:

  • One-off content questions where an agent verifies a specific page
  • Niche product detail lookups when the agent already knows the URL
  • Citation-style flows where the agent surfaces a source link

It does not dominate shopping. When a shopper asks ChatGPT for “a good travel stroller under $400,” ChatGPT does not fetch a hundred candidate URLs and parse them. It queries an index it already built.

Path two: the index path

Search engines and AI platforms build product indexes from catalogs they crawl, ingest, or partner with. Google’s index has parsed Schema.org structured data for years. Microsoft’s Bing crawler reads JSON-LD as well — Microsoft introduced JSON-LD support in Bing Webmaster Tools in 2018 and has reaffirmed it through the Copilot era; their October 2025 ads guidance for AI search explicitly recommends structured data as part of preparing content for inclusion in AI-generated answers.

When AI agents pull from these indexes — Google AI Overviews pulling from Google’s index, Copilot pulling from Bing’s, ChatGPT referencing Bing-derived sources — they inherit the structured data those crawlers already extracted at index time. The JSON-LD was read during indexing, not during the user’s live query. By the time the agent answers, the structured data is already cached, parsed, and resolved into the agent’s view of what your product is.
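As a rough illustration of what "read during indexing" means in practice (a sketch, not any platform's actual pipeline), index-time JSON-LD extraction is a simple pass over stored HTML. The product name and price below are hypothetical:

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_ld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_ld = True

    def handle_endtag(self, tag):
        if tag == "script" and self._in_ld:
            self._in_ld = False
            raw = "".join(self._buf).strip()
            self._buf = []
            if raw:
                self.blocks.append(json.loads(raw))

    def handle_data(self, data):
        if self._in_ld:
            self._buf.append(data)

# Hypothetical stored page HTML, as a crawler would see it at index time.
html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product",
 "name": "Acme Roam Travel Stroller",
 "offers": {"@type": "Offer", "price": "349.00", "priceCurrency": "USD"}}
</script>
</head><body><h1>Acme Roam Travel Stroller</h1></body></html>"""

parser = JsonLdExtractor()
parser.feed(html)
print(parser.blocks[0]["offers"]["price"])  # the price the index stores ahead of any query
```

By the time a live shopping query arrives, this parsed record is what the agent consults, not the raw page.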

This is what the Searchviu test cannot measure. Their methodology hits a fresh URL during a live query. The index path, by definition, runs ahead of the query.

A small controlled experiment by Search Engine Land in 2025 suggests this matters: across three nearly identical pages — one with well-implemented schema, one with poor schema, and one with none — only the page with well-implemented schema appeared in an AI Overview, and it ranked higher in organic search as well. The authors are explicit that this is not absolute proof, and the sample size is small enough to make causal claims uncomfortable. But the pattern is consistent with how Google has long described its own use of structured data: schema helps Google understand what a page is about, and that understanding feeds every surface Google builds on top of its index, including AI Overviews.

Path three: the feed path

This is the path that decides shopping in 2026 — and it is the one brands underweight most.

Major AI shopping experiences do not crawl your product pages at all. They ingest a feed.

  • Google Merchant Center has been the canonical product feed for paid Google Shopping for over a decade. It now also feeds Google’s AI shopping experiences, including AI Overviews and Gemini-driven product recommendations.
  • OpenAI opened a merchant program in late 2025 at chatgpt.com/merchants with a dedicated product feed specification. Merchants submit a structured feed; OpenAI ingests, validates, and indexes it so ChatGPT can retrieve, rank, and display products. Shopify and Etsy catalogs are integrated by default.
  • Microsoft runs Bing Shopping infrastructure that feeds Copilot’s product responses, with structured product data as a central input.

Feed paths are not “alternative” to JSON-LD — they are powered by the same structured data discipline. Merchant Center entries require the same product fields a complete Product JSON-LD block defines: GTIN, brand, price, availability, condition, images, attributes. The fields that matter for AI shopping are substantially the same fields Schema.org has been standardizing since 2011. A catalog with thin JSON-LD almost always has thin feed data, because the underlying product data is thin in both places.
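The overlap is easy to see in code. A hypothetical sketch (the record, field names, and feed keys are illustrative, not any platform's actual spec) deriving both a Product JSON-LD block and a flat feed row from one canonical product record:

```python
# One canonical product record (all values hypothetical).
product = {
    "gtin": "00012345678905",
    "brand": "Acme",
    "name": "Acme Roam Travel Stroller",
    "price": "349.00",
    "currency": "USD",
    "availability": "in_stock",
    "condition": "new",
}

def to_json_ld(p):
    """Render the record as a Schema.org Product block for the page."""
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": p["name"],
        "brand": {"@type": "Brand", "name": p["brand"]},
        "gtin": p["gtin"],
        "offers": {
            "@type": "Offer",
            "price": p["price"],
            "priceCurrency": p["currency"],
            "availability": "https://schema.org/InStock",
            "itemCondition": "https://schema.org/NewCondition",
        },
    }

def to_feed_row(p):
    """Render the same record as a flat feed entry (Merchant Center-style keys)."""
    return {
        "gtin": p["gtin"],
        "brand": p["brand"],
        "title": p["name"],
        "price": f'{p["price"]} {p["currency"]}',
        "availability": p["availability"],
        "condition": p["condition"],
    }

ld = to_json_ld(product)
row = to_feed_row(product)
# Same underlying fields; two serializations of one discipline.
```

If the canonical record is thin, both outputs are thin, which is the point: feed quality and JSON-LD quality are downstream of the same product data.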

What changed, honestly

The Searchviu test does change something. It makes a few things more true than they were a year ago.

Visible content quality matters more. When an agent fetches a page directly, the visible HTML is what gets read. Pages where the description, attributes, and Q&A content live only in JSON-LD — hidden from a human reader, exposed only to a crawler — leave nothing for the direct-fetch path. The fix is not to drop JSON-LD; it is to make sure the same content appears in human-visible form on the page. Content that is invisible to a human is increasingly invisible to a live AI fetch as well.
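One way to enforce that mirroring is a build-time check that every JSON-LD field value also appears in the visible page text. A minimal sketch, with crude regex-based text extraction and a hypothetical sample page:

```python
import json
import re

def visible_text(html):
    """Crude visible-text extraction: drop script/style blocks, then strip tags."""
    html = re.sub(r"<(script|style)\b.*?</\1>", " ", html, flags=re.S | re.I)
    return re.sub(r"<[^>]+>", " ", html)

def unmirrored_fields(json_ld, html, fields=("name", "description")):
    """Return JSON-LD fields whose values never appear in the visible HTML."""
    text = visible_text(html)
    return [f for f in fields if f in json_ld and json_ld[f] not in text]

html = """<html><body><h1>Acme Roam Travel Stroller</h1>
<script type="application/ld+json">{"@type": "Product",
 "name": "Acme Roam Travel Stroller",
 "description": "Folds one-handed; fits airline overhead bins."}</script>
</body></html>"""

ld = json.loads(
    re.search(r'<script type="application/ld\+json">(.*?)</script>', html, re.S).group(1)
)
missing = unmirrored_fields(ld, html)
print(missing)  # ['description']: the description exists only in the JSON-LD
```

A check like this flags exactly the failure mode the direct-fetch path punishes: content a crawler can see but a human reader (and a live AI fetch) cannot.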

JSON-LD alone is not sufficient. A catalog whose only AI-search strategy is “we have JSON-LD on every page” is optimizing for one path out of three. The brands winning AI search in 2026 do all three: complete on-page structured data, complete visible content that mirrors the structured data, and a complete feed presence across the major shopping indexes.

The “rich result” framing is over. Google’s August 2023 guidance restricted FAQ rich results to “well-known authoritative government and health websites,” limited HowTo to desktop, and then fully removed HowTo from desktop on September 13, 2023. The schemas are still valid Schema.org types — Google explicitly said there is no need to remove them — but the era when adding FAQ markup earned a visible SERP enhancement is over. Schema’s job in 2026 is structural: it tells indexes what a page is, so the agents drawing from those indexes know what to do with it.

What did not change

A few things the Searchviu test does not refute, despite the headlines.

Schema is still the language of the indexes. Google, Bing, and the merchant feeds that power ChatGPT all parse Schema.org. The direct-fetch path skipping JSON-LD does not mean the index path skips JSON-LD. The two are different operations.

Identifiers still decide trust. GTINs, MPNs, brand consistency across feeds and on-page markup — these remain the credibility signals that make AI agents confident enough to recommend a product. None of that changed in October 2025.

Conversational content still matches conversational queries. The Q&A pairs and usage scenarios that match how shoppers actually phrase requests to AI agents do their work whether the data arrives via direct fetch, index, or feed. The route changes; the content requirement does not.

The practical takeaway

If a brand reads the Searchviu finding and concludes "JSON-LD does not matter," it will quietly disappear from the path that actually drives ecommerce visibility. The finding is real for direct fetch. The conclusion is wrong for the index and the feed.

The work that holds up across all three paths is the same work it has been for the past two years:

  1. Identifier coverage. Every product carries GTIN/MPN/brand, consistent across the website, the on-page JSON-LD, and the feed.
  2. Title structure. Brand + product type + key attribute + variant. In the title tag, in the JSON-LD name, and in the feed.
  3. Description density. Materials, dimensions, compatibility, use cases — written in human-visible HTML and mirrored in the structured data.
  4. Conversational fields. Q&A pairs and usage scenarios on the page and in the structured data.
  5. Availability precision. Real-time stock, handling time, shipping windows. In Schema.org and in the feed.
  6. Schema completeness. Valid Product, Offer, and where relevant AggregateRating JSON-LD on every product page.
  7. Feed presence. Google Merchant Center for AI Overviews and Gemini; OpenAI’s merchant program for ChatGPT Shopping; Microsoft Shopping for Copilot.

These compound. None of them is sufficient alone. JSON-LD without visible content fails the direct-fetch path. Visible content without JSON-LD weakens the index path. Both without a feed leave ecommerce’s largest AI surfaces blind.
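An audit along these lines reduces to a per-product completeness check. A toy sketch (the group names, required fields, and sample record are illustrative, not any vendor's actual scoring):

```python
# Checklist groups mapped to the record fields that must be present and non-empty.
REQUIRED = {
    "identifiers": ("gtin", "brand"),
    "title": ("name",),
    "description": ("description",),
    "availability": ("availability",),
    "schema": ("@type", "offers"),
}

def readiness(product):
    """Fraction of checklist groups fully satisfied by one product record."""
    passed = sum(
        all(product.get(k) for k in keys)
        for keys in REQUIRED.values()
    )
    return passed / len(REQUIRED)

# Hypothetical record that is complete except for its description:
p = {"@type": "Product", "gtin": "00012345678905", "brand": "Acme",
     "name": "Acme Roam Travel Stroller", "availability": "InStock",
     "offers": {"price": "349.00"}}
print(readiness(p))  # 0.8: four of five groups complete
```

Run across a catalog, a score like this surfaces which products are invisible on which path before an AI agent ever has the chance to skip them.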

Lumio’s AI Readiness Score measures the catalog-side inputs — identifier coverage, title quality, description density, conversational fields, availability precision, schema completeness — across the dimensions that hold up regardless of which path an AI agent takes. The enrichment engine writes the missing content into Shopify metafields, where Lumio’s theme extension renders it as JSON-LD on product pages and the same structured fields flow into the merchant feeds. One discipline; three paths covered.

The headline that JSON-LD is dead makes for a sharper post. The honest picture is more demanding. Schema markup is not a magic spell that earns visibility on its own. It is one of three overlapping requirements, and the work that satisfies it is the same work that satisfies the other two. The brands that win AI search are doing all of it, in concert. The ones that read one test and pulled the wrong lever will find out, slowly, what they gave up.