The instinct on most catalogs is to start fixing. The instinct on most catalogs is also wrong. A two-hour manual audit, done in the right order, surfaces the gaps in roughly the order an operator should fix them, and it almost always exposes one or two issues that bulk-fixing would have wasted effort on before anyone had named them.
This guide is a manual auditing methodology for a Shopify catalog. The work is sequential: each pass builds on what the previous pass found. The audit fits comfortably in one focused afternoon for a catalog under a few thousand SKUs, and the artifacts are copy-pasteable into a follow-up plan.
The walk-through covers six passes, in order: crawl access, structured data, catalog hygiene, feeds, surface checks, and the gap report.
Each pass should land in 15–30 minutes if the catalog is not pathological. The full audit takes 2–3 hours. The output is a prioritized punch list, not a set of fixes — fixing happens after.
Before you start
Three things to have at hand:
- Admin access to the Shopify store (a staff account with view permission is enough; you do not need theme editor access for the audit).
- A Google Search Console property verified for the storefront domain. Optional but useful — surfaces crawl errors and rich-results status the manual passes cannot reach.
- A second browser profile or incognito window for verifying live pages without admin-session cookies. Some structured-data checks behave differently when viewed under admin context.
A scratch markdown file open in a separate window for taking notes saves time on the writeup at the end. Each pass ends with a short list of findings; the gap report is the merge.
Pass 1 — Crawl access
The first question is whether AI surfaces can fetch the catalog at
all. A catalog blocking the major AI crawlers in robots.txt has
a visibility problem at the discovery layer, and nothing
downstream matters until that is resolved.
Fetch the live robots.txt:
curl -s https://yourstore.com/robots.txt
Look for any of these patterns:
User-agent: GPTBot
Disallow: /
User-agent: ClaudeBot
Disallow: /
User-agent: PerplexityBot
Disallow: /
User-agent: anthropic-ai
Disallow: /
If any major AI crawler is blocked, decide whether that was
intentional. Some catalogs block deliberately — IP protection on
sensitive content, ongoing licensing negotiations, contractual
obligations to partners — and those decisions stand. Other catalogs end up blocking inadvertently after copy-pasting a robots.txt from a publisher template or a competitor's site. The audit's job is to surface the rule and let the operator decide; the action depends on which case applies.
Also check for the inverse — User-agent: * with Disallow: /
or large Disallow: blocks that catch product paths.
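Checking the live file by eye works for one store; for repeat audits the same logic can be scripted with the standard library's robots.txt parser. A minimal sketch: the agent list mirrors the patterns above, and the sample rules and store URL are placeholders to swap for the real storefront.

```python
from urllib import robotparser

AI_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "anthropic-ai"]

def check_robots(robots_txt: str, base: str, paths: list[str]) -> dict:
    """Return {agent: {path: allowed}} for each AI crawler."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {
        agent: {path: rp.can_fetch(agent, base + path) for path in paths}
        for agent in AI_AGENTS
    }

# Placeholder robots.txt that blocks GPTBot outright:
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin
"""
report = check_robots(sample, "https://yourstore.com", ["/", "/products/example"])
for agent, paths in report.items():
    for path, allowed in paths.items():
        print(f"{agent:16} {path:22} {'allowed' if allowed else 'BLOCKED'}")
```

To audit the live file, fetch it first (the curl command above, or `rp.set_url(...)` plus `rp.read()`) and feed the text in.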
Finding template: “Discovery — robots.txt blocks GPTBot and
ClaudeBot. No documented reason. Recommend removing both rules.”
Pass 2 — Structured data
The second pass is whether the catalog’s structured data is parseable and complete. Pick three to five products spanning the catalog’s variety — bestseller, mid-tier, recent addition, a variant-heavy product, an item from a thin category. The sample matters: a clean bestseller can mask broken variants.
For each sample product:
- View source on the live product page and find the `<script type="application/ld+json">` block.
- Paste the block into the Schema Markup Validator. Note any syntax errors or unrecognized properties.
- Run the URL through the Rich Results Test. Note any required-property warnings.
- Inspect the structured data manually for completeness. At minimum, look for `name`, `description`, `image`, `offers` (with `price`, `priceCurrency`, `availability`), `brand`, `sku`, and at least one identifier (`gtin13`, `gtin12`, `gtin8`, or `mpn`).
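When the sample grows past a handful of products, the completeness check can be scripted against saved page source. A sketch using only the standard library: it extracts every JSON-LD block, flags duplicate Product blocks, and reports missing required properties. The property lists mirror the checklist above; the sample page is invented.

```python
import json
from html.parser import HTMLParser

REQUIRED = ["name", "description", "image", "offers", "brand", "sku"]
IDENTIFIERS = ["gtin13", "gtin12", "gtin8", "mpn"]

class JSONLDExtractor(HTMLParser):
    """Collect the parsed contents of every application/ld+json script."""
    def __init__(self):
        super().__init__()
        self.blocks, self._buf, self._in_jsonld = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            text = "".join(self._buf).strip()
            if text:
                self.blocks.append(json.loads(text))
            self._buf, self._in_jsonld = [], False

def audit_product_jsonld(html: str) -> list[str]:
    extractor = JSONLDExtractor()
    extractor.feed(html)
    products = [b for b in extractor.blocks if b.get("@type") == "Product"]
    findings = []
    if not products:
        findings.append("no Product block found")
    if len(products) > 1:
        findings.append(f"{len(products)} Product blocks (possible theme/app duplication)")
    for p in products:
        missing = [k for k in REQUIRED if k not in p]
        if missing:
            findings.append("missing: " + ", ".join(missing))
        if not any(k in p for k in IDENTIFIERS):
            findings.append("no GTIN or MPN identifier")
    return findings

page = """<html><head><script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product", "name": "Wool Sweater",
 "description": "Merino crew neck", "image": "https://example.com/s.jpg",
 "offers": {"@type": "Offer", "price": "120", "priceCurrency": "USD"},
 "brand": {"@type": "Brand", "name": "Acme"}, "sku": "SW-01"}
</script></head><body></body></html>"""
print(audit_product_jsonld(page))  # → ['no GTIN or MPN identifier']
```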
Common findings on Shopify catalogs:
- Theme-default schema missing `gtin`/`mpn` on every product
- `priceValidUntil` set to a date in the past (stale-signal generator)
- `availability` rendered as a free-text string (“In stock!”) rather than a Schema.org enum (https://schema.org/InStock)
- Variant products rendered as a single `Product` with no `hasVariant` array (collapsing the variants into the index entry)
- A second JSON-LD block injected by an SEO app duplicating the theme’s block (the surface sees two `Product` blocks and may pick the worse one)
See Product schema for Shopify for the property-by-property reference.
Finding template: “Structured data — 3/5 sampled products
missing GTIN. priceValidUntil set to 2024-12-31 across the
catalog (likely a theme template). Variant products render as a
single Product with no hasVariant.”
Pass 3 — Catalog hygiene
Once structured data is validated, the next question is whether the content the structured data references is itself good. This pass is about titles, descriptions, images, and attribute density.
For the same sample products, evaluate each on:
Titles. Does the title follow a structured pattern (brand + product type + defining attribute + variant)? Or is it marketing copy (“The Sweater You’ll Reach For”)? Marketing titles surface on marketing-intent queries; structured titles surface on shopping-intent queries. AI surfaces prefer the latter for the shopping context. See Writing product titles for AI agents.
Descriptions. Is the description specific enough that an embedding of it would land somewhere distinctive in vector space? A description that reads “Premium quality. Crafted with care. Order today.” embeds to the same generic location as a thousand other descriptions. A description that names materials, dimensions, use cases, and constraints embeds to a specific location that matches specific buyer queries.
Images. Each product needs at least one image; primary
images need to be high-resolution, well-lit, and ideally on a
neutral background. Check alt text — Shopify’s image alt
defaults to the product title, which is usable, but specific alt
text (e.g., “navy blue wool runner from the side”) is better for
both accessibility and image-based retrieval.
Attributes. What attributes are present and which are missing? For apparel: material, fit, care instructions, season. For electronics: model number, certifications, technical specs. For home goods: dimensions, materials, weight. Each category has its own attribute stack — see the category-specific guides for the per-vertical reference.
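Some of this pass can be pre-screened mechanically before the manual read. A rough heuristic sketch: the buzzword list and per-category attribute stacks below are illustrative assumptions to tune per catalog, not a canonical taxonomy.

```python
# Flag thin or generic descriptions and missing category attributes.
# BUZZWORDS and ATTRIBUTE_STACKS are illustrative, not authoritative.
BUZZWORDS = {"premium", "quality", "crafted", "amazing", "perfect", "best"}
ATTRIBUTE_STACKS = {
    "apparel": ["material", "fit", "care", "season"],
    "electronics": ["model", "certification", "spec"],
    "home": ["dimensions", "material", "weight"],
}

def hygiene_notes(description: str, category: str) -> list[str]:
    words = description.lower().split()
    notes = []
    if len(words) < 60:
        notes.append(f"thin description ({len(words)} words)")
    if words and sum(w.strip(".,!") in BUZZWORDS for w in words) / len(words) > 0.05:
        notes.append("buzzword-heavy copy")
    text = description.lower()
    missing = [a for a in ATTRIBUTE_STACKS.get(category, []) if a not in text]
    if missing:
        notes.append("no mention of: " + ", ".join(missing))
    return notes

print(hygiene_notes("Premium quality. Crafted with care. Order today.", "apparel"))
# → ['thin description (7 words)', 'buzzword-heavy copy',
#    'no mention of: material, fit, season']
```

Keyword presence is a crude proxy for attribute density, so treat the output as a triage list for the manual read, not a verdict.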
Finding template: “Catalog hygiene — titles are 70% marketing-led, 30% structured. Descriptions average 90 words; attribute density low (no materials, no use cases). Images: alts default to titles, primary images mostly OK.”
Pass 4 — Feeds
If the catalog has a Google Merchant Center feed, audit it. GMC’s Diagnostics report surfaces disapproved items, attribute warnings, and policy issues. The specific things to check:
- Disapprovals. Any disapproved items? Cluster by reason to see if it is a template issue (one missing attribute across the catalog) or product-specific.
- Warnings. GMC warns on missing-but-recommended attributes. Note which attributes are warned about most frequently.
- Feed freshness. When was the feed last updated? A feed more than 24 hours stale on a Shopify storefront is suspicious — Shopify’s GMC apps push hourly by default.
- Attribute coverage. Use the GMC attribute coverage report to see how many products have `gtin`, `brand`, `mpn`, etc.
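The disapproval clustering is easy to script once the Diagnostics report is exported. A sketch under the assumption that the export has an "Issue" column; the filename, column names, and issue strings below are placeholders to match against the actual export.

```python
import csv
from collections import Counter

def cluster_disapprovals(rows: list[dict]) -> Counter:
    """Count disapproved items per issue reason."""
    return Counter(row["Issue"] for row in rows)

# With a real export, load it first:
# with open("gmc_item_issues.csv", newline="") as f:
#     rows = list(csv.DictReader(f))
sample = [
    {"Item ID": "sku-1", "Issue": "Missing value [gtin]"},
    {"Item ID": "sku-2", "Issue": "Missing value [gtin]"},
    {"Item ID": "sku-3", "Issue": "Invalid value [availability]"},
]
for issue, n in cluster_disapprovals(sample).most_common():
    print(f"{n:4}  {issue}")
```

A single dominant reason points at a template-level fix; a long tail of distinct reasons points at product-level work.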
If the catalog has a Microsoft Merchant Center feed, run the same checks. If the catalog has feeds to TikTok Shop, Pinterest, Meta, or others, check those too.
Finding template: “Feeds — GMC has 47 disapprovals, all for missing GTIN. 312 warnings for missing brand. Feed updated 2 hours ago; freshness is fine. No Microsoft Merchant Center feed exists.”
Pass 5 — Surface checks
The most important pass. Test live queries against the actual AI surfaces. This is what an audit that stops at the technical layers will miss: a catalog that passes every technical check can still be invisible because the structured-data work has not propagated to the surface index yet.
Pick five queries that should plausibly return the catalog’s products:
- A branded query (e.g., “Acme Outfitters wool sweater”)
- A category + brand query (“Acme wool sweater for women”)
- A pure category query (“wool sweater for winter”)
- A constrained category query (“wool sweater under $200 for cold weather”)
- A use-case query (“warm sweater for hiking in cold rain”)
Run each query on:
- ChatGPT with shopping mode (if available in the region)
- Perplexity
- Google (check both the AI Overview and the Shopping carousel)
- Claude (claude.ai)
- Gemini (gemini.google.com)
Record for each query × surface combination: did the catalog appear? If yes, in what position? If no, what catalogs appeared in its place? Knowing the competitive set is half of knowing what to fix.
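One way to keep those notes consistent is a small scaffold that renders the hand-recorded matrix as a markdown table for the gap report. A sketch: the query labels are shorthand for the five queries above, results are entered manually after each check, and unchecked cells render as "?".

```python
QUERIES = ["branded", "category+brand", "category", "constrained", "use-case"]
SURFACES = ["ChatGPT", "Perplexity", "Google", "Claude", "Gemini"]

def render_matrix(results: dict[tuple[str, str], str]) -> str:
    """Render {(query, surface): note} as a markdown table."""
    header = "| query | " + " | ".join(SURFACES) + " |"
    sep = "|" + "---|" * (len(SURFACES) + 1)
    rows = [
        "| " + q + " | "
        + " | ".join(results.get((q, s), "?") for s in SURFACES) + " |"
        for q in QUERIES
    ]
    return "\n".join([header, sep] + rows)

results = {("branded", "ChatGPT"): "#1", ("branded", "Perplexity"): "absent"}
print(render_matrix(results))
```

The "?" cells double as a to-do list for checks not yet run.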
The branded query is the floor — if the catalog does not appear for its own brand name on a major surface, the issue is at the discovery or indexing layer (likely Pass 1 or Pass 2). Category queries are the ceiling — appearing in these against established competition takes time.
Finding template: “Surface checks — branded query returns the catalog on ChatGPT and Google but not on Perplexity (likely a schema validity issue affecting Perplexity specifically). Category queries return three named competitors; our catalog does not appear. Use-case query returns no specific products on any surface.”
Pass 6 — Gap report
The last pass is the merge. Take findings from each of the five preceding passes and group them by priority:
Now (this week).
- Anything blocking discovery (robots.txt rules excluding AI bots, sitemap issues)
- Structured data that is invalid (not just incomplete — actually failing to parse)
- Stale `priceValidUntil` across the catalog
- Major GMC disapprovals
Soon (this month).
- Missing GTINs that can be sourced
- Variant collapse (products with variants rendering as one product)
- Description-density gaps on the top 20% of products by revenue
Eventually (this quarter).
- Long-tail description rewrites
- Image alt-text upgrades on legacy products
- Voice-rule tuning if not yet set
The structure is intentional: the “now” tier is for issues that are suppressing the entire catalog. The “soon” tier is for issues that are suppressing meaningful subsets. The “eventually” tier is incremental improvement.
Where this audit breaks down
Three known limitations to flag in the writeup:
Tiny catalogs (under 50 SKUs). The sample-of-five from Pass 2 covers most of the catalog. The methodology still works but the gap report is closer to a per-product punch list.
Catalogs with content gated behind authentication. If a catalog requires a login to view products (some B2B catalogs do), AI surfaces cannot crawl past the login wall. The audit reveals the structural issue but cannot reach the products themselves.
Catalogs with very recent migrations. A catalog migrated to Shopify within the last 30 days may still be in transition — old URLs still indexed, new URLs not yet crawled. Some Pass 5 findings reflect the migration, not the steady-state.
What to do after the audit
The gap report is the input to fixing. Most “now” items can be addressed in a single dedicated sprint. The “soon” tier benefits from automation — bulk schema fixes, GTIN sourcing via a service, description enrichment via a tool. The “eventually” tier is ongoing.
The most useful follow-up reads:
- The 6 dimensions of AI readiness — the framework the manual audit is informally evaluating.
- Product schema for Shopify — the technical reference for Pass 2 fixes.
- Validating structured data — the deeper treatment of the validation tools used in Pass 2.
- Google Merchant Center setup — feeds work for Pass 4.