Zeodyn Score Explained: What It Means and How to Improve
AI agents are already shopping. Google's AI Mode, ChatGPT shopping, and Amazon's Buy for Me are live, and every major platform is building more. If you're new to agent commerce, the shift is simple: AI agents now discover, evaluate, and purchase products autonomously. The question is no longer whether they will interact with your online store — it is whether they can.
The Zeodyn Score™ is a composite metric from 1 to 100 that measures how ready your website is for AI agent commerce. It assesses whether AI agents can discover your products, understand your catalogue, evaluate your offers, transact programmatically, and trust your infrastructure. This guide explains exactly how the score works, what each band means, where most sites fall short, and what you can do to improve.
The five score bands
Every Zeodyn Score falls into one of five bands. Each band represents a meaningful threshold in how AI agents can interact with your commerce capabilities.
Agent-Ready (90–100)
Your site is fully accessible to AI agents across all dimensions. Agents can discover your products, read structured data, evaluate offers, initiate transactions through supported protocols, and verify your legitimacy — all programmatically. Sites in this band are positioned to capture the full value of agentic commerce as it scales.
Strong (70–89)
Your site supports AI agent commerce well, with most interactions likely to succeed. Improvements are still possible — typically one or two dimensions hold the score back — but the fundamentals are solid. Sites here are ahead of the vast majority of e-commerce businesses.
Developing (50–69)
AI agents can discover your site but struggle with evaluation, comparison, or transactions. This is where most technically competent e-commerce sites land today. The data is partially there, but gaps in structured markup, protocol support, or commerce signals prevent agents from completing their tasks reliably.
Limited (25–49)
Significant gaps prevent meaningful agent interactions. AI agents might find your site but cannot extract the product data, pricing, or trust signals they need to recommend or transact. Sites in this band are effectively invisible to agent-driven commerce.
Not Ready (1–24)
AI agents cannot meaningfully interact with your commerce capabilities. This typically means the site actively blocks AI crawlers, lacks machine-readable data entirely, or has critical infrastructure failures like missing HTTPS. Immediate action is needed.
The six scoring dimensions
The Zeodyn Score is built on the Agent Commerce Stack™ framework, which evaluates your site across six dimensions. Each dimension answers a specific question about your AI commerce readiness.
1. Discovery & Access
Can AI agents find and access your commerce capabilities?
This dimension checks whether AI agents are permitted to crawl your site, whether your content is discoverable via sitemaps and well-known endpoints, and whether your robots.txt policy allows the major AI user agents (GPTBot, ClaudeBot, Googlebot, Amazonbot, PerplexityBot, and others) to access your pages. A site that blocks all AI agents at the front door scores poorly here, regardless of how good its product data is.
Weight: High
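As a rough illustration of this check, Python's standard library can evaluate a robots.txt policy against specific AI user agents. The robots.txt content below is invented for the example; the scanner's actual checks are more extensive:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks one AI crawler but allows the rest.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def agent_allowed(robots_txt: str, user_agent: str,
                  url: str = "https://example.com/products/") -> bool:
    """Return True if `user_agent` may fetch `url` under this robots.txt."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

for ua in ("GPTBot", "ClaudeBot", "PerplexityBot"):
    print(ua, agent_allowed(ROBOTS_TXT, ua))
```

A blanket `User-agent: * / Disallow: /` policy would fail this check for every agent at once, which is exactly the pattern that triggers the Discovery & Access fail gate described later.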
2. Structured Data
Can AI agents understand your products in machine-readable form?
This is about JSON-LD schema markup — specifically, whether your pages contain Schema.org Product and Offer types with the fields AI agents need: name, description, price, availability, images, brand, SKU, and GTIN identifiers. Pages without structured data force agents to scrape and interpret raw HTML, which is unreliable and slow.
Weight: Very high
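To make "machine-readable" concrete, here is a rough sketch of the kind of extraction an agent performs on raw HTML. The page fragment, product values, and helper names are invented for illustration; real agents are more thorough:

```python
import json
import re

# Raw HTML as an agent sees it (illustrative product page fragment).
HTML = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product",
 "name": "Trail Runner 2", "sku": "TR2-001",
 "brand": {"@type": "Brand", "name": "Acme"},
 "offers": {"@type": "Offer", "price": "89.95", "priceCurrency": "EUR",
            "availability": "https://schema.org/InStock"}}
</script>
</head><body>...</body></html>
"""

JSONLD_RE = re.compile(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_products(html: str) -> list[dict]:
    """Pull every JSON-LD block out of the HTML and keep the Product nodes."""
    products = []
    for block in JSONLD_RE.findall(html):
        try:
            node = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed blocks are skipped, as an agent would skip them
        nodes = node if isinstance(node, list) else [node]
        products += [n for n in nodes if n.get("@type") == "Product"]
    return products

products = extract_products(HTML)
print(products[0]["name"], products[0]["offers"]["price"])
```

When a page carries no JSON-LD at all, this extraction returns nothing, and the agent is forced back to guessing at raw HTML.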
3. Commerce Data
Can AI agents trust your operational data for transactional decisions?
Beyond basic product information, agents need operational signals to make purchasing decisions: real-time pricing, stock availability, shipping information, return policies, and payment methods. This dimension assesses whether that transactional data is present and machine-readable.
Weight: High
4. Protocol Support
Can AI agents programmatically transact with your commerce infrastructure?
This dimension evaluates support for agent commerce protocols — UCP (Universal Commerce Protocol), ACP (Agentic Commerce Protocol), MCP (Model Context Protocol), and OpenAPI specifications — as well as platform-level integrations (Shopify, WooCommerce, BigCommerce). Sites with protocol support give agents a structured, reliable way to browse, negotiate, and purchase. Without protocols, agents must rely on fragile HTML scraping.
Weight: High
5. Security & Trust
Can AI agents verify your legitimacy and operate safely?
Agents need to confirm that a site is legitimate before recommending it or processing transactions. This dimension checks HTTPS enforcement, security headers (HSTS, CSP, X-Content-Type-Options), privacy policies, contact information, and business verification signals. A site without HTTPS is fundamentally untrusted.
Weight: Moderate
6. Technical Performance
Can AI agents parse your pages quickly and efficiently?
AI agents are not patient browsers. If your pages take seconds to load, rely entirely on client-side JavaScript rendering, or return excessively large payloads, agents will time out or move on. This dimension measures response times, server-side rendering, page weight, and content efficiency.
Weight: Moderate
How the score is calculated
Geometric aggregation — not a simple average
The Zeodyn Score uses a weighted geometric mean to combine the six dimension scores into a single composite. This is the same mathematical approach used by the UN Human Development Index.
Why does this matter? With a simple arithmetic average, a site could score 100 in five dimensions and 10 in one, and still end up with a respectable average of 85. But in reality, that single weak dimension would completely block agent commerce. An agent that cannot discover your site (dimension 1) will never reach your beautifully structured product data (dimension 2).
Geometric aggregation prevents this compensatory effect. Weakness in any single dimension pulls the entire score down disproportionately. A chain is only as strong as its weakest link, and the same is true for AI agent commerce readiness.
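The effect is easy to verify. The sketch below uses equal weights purely for illustration (the actual weights are proprietary) and shows how the geometric mean punishes the weak dimension that an arithmetic average of 85 would hide:

```python
import math

def weighted_geometric_mean(scores, weights):
    """exp(sum(w_i * log(s_i)) / sum(w_i)): one weak score drags the result down."""
    total = sum(weights)
    return math.exp(sum(w * math.log(s) for s, w in zip(scores, weights)) / total)

# Five strong dimensions and one broken one. Equal weights are used here
# purely for illustration; Zeodyn's actual numeric weights are proprietary.
scores = [100, 100, 100, 100, 100, 10]
weights = [1, 1, 1, 1, 1, 1]

arithmetic = sum(scores) / len(scores)
geometric = weighted_geometric_mean(scores, weights)
print(f"arithmetic: {arithmetic:.1f}")  # 85.0
print(f"geometric:  {geometric:.1f}")   # roughly 68
```

The arithmetic mean rewards the five perfect scores; the geometric mean refuses to let them compensate for the broken one.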
The dimensions carry different qualitative weights — Structured Data carries Very high importance, Discovery & Access, Commerce Data, and Protocol Support carry High importance, while Security & Trust and Technical Performance carry Moderate importance. The exact numeric weights are proprietary, but the qualitative labels are published with every scan result so you know where to focus.
Fail gates — hard floors on critical requirements
Some requirements are so fundamental that failing them caps your dimension score regardless of how well you perform on other sub-checks. These are called fail gates.
Current fail gates include:
- No HTTPS — caps the Security & Trust dimension. Without encrypted connections, no agent can safely transact.
- Blocking all AI agents via robots.txt — caps the Discovery & Access dimension. If agents cannot crawl your site, nothing else matters.
- Active AI blocking with no programmatic alternative — caps Protocol Support. If you block agents and offer no API or protocol endpoint, agents have no way to interact.
- No detectable pricing — caps Commerce Data. Without price information, agents cannot make transactional decisions.
- No Product or Offer schema — caps Structured Data. Without any schema markup, agents cannot parse your catalogue.
- Pure client-side SPA with no server-rendered content — caps Technical Performance. If the initial HTML response contains no meaningful content, most agent crawlers will see an empty page.
When a fail gate is triggered, it is flagged in your scan results alongside the reason. This makes it immediately clear which critical issues need addressing first.
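The capping mechanic can be sketched as follows. The cap value of 20 is an assumed placeholder (Zeodyn does not publish the exact caps), and the dimension names are reused from the framework above:

```python
def apply_fail_gates(dimension_scores, triggered_gates, cap=20):
    """Cap every gated dimension. The cap value is an assumed placeholder;
    the real caps are not published."""
    capped = dict(dimension_scores)
    for dimension, reason in triggered_gates.items():
        if capped.get(dimension, 0) > cap:
            capped[dimension] = cap
            print(f"Fail gate on {dimension}: {reason} (capped at {cap})")
    return capped

scores = {"Security & Trust": 90, "Structured Data": 75}
gates = {"Security & Trust": "No HTTPS"}
print(apply_fail_gates(scores, gates))
```

Because the capped dimension then feeds into the geometric aggregation, a single triggered gate depresses the overall composite, not just its own dimension.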
Real-world examples
To illustrate how the scoring works in practice, here are Zeodyn Scores™ from scans of major e-commerce sites (scores as of March 2026; results may change as sites evolve). These scores reflect what the scanner observes at the website layer.
shopify.com — 45 (Limited)
Shopify's own marketing site scores 45 — the highest in this group, but still Limited. This might seem surprising for the company co-developing UCP, but it makes sense: shopify.com is a marketing site, not a storefront. It has Corporation schema, strong OG tags, and Twitter Cards, but no Product or Offer schema — triggering a Structured Data fail gate that caps the dimension despite those other signals. Technical Performance is solid (87) and all AI crawlers are allowed. Protocol Support (28) and Commerce Data (34) drag the score — no UCP manifest on its own domain, and limited transactional signals. Shopify merchants will benefit from UCP when it ships. shopify.com itself is not a commerce destination for AI agents. (For a full breakdown of Shopify's strengths and gaps, see our Shopify AI agent readiness guide.)
walmart.com — 35 (Limited)
Walmart scores 35 — strong homepage technical foundations (SSR, gzip, HTTP/2, JSON-LD Organisation schema) but zero product pages discoverable through standard web protocols. The product catalogue is rendered entirely client-side via JavaScript. The four discovery layers — JSON-LD, sitemap, HTML links, and OG meta — all return empty: the homepage JSON-LD contains only Organisation and WebSite schemas, the sitemap is bot-blocked, and not a single anchor tag points to a product page path. From an AI agent's perspective, using standard HTTP requests without JavaScript execution, the product catalogue is invisible. Protocol Support (10) is nearly absent — no UCP, no ACP, no MCP. Commerce Data (37) is weak without any discoverable product pages. Walmart's improvement path is clear: make product pages discoverable through standard protocols (add Product schema to JSON-LD, include product URLs in the sitemap, add standard HTML links to product pages).
nike.com — 25 (Limited)
Nike scores 25, just above the Not Ready threshold. The site has some structured data signals — enough to reach 40 in Structured Data — but Commerce Data (20) and Protocol Support (8) are weak. There is no UCP, no ACP, and no MCP endpoint. The most striking gap is Technical Performance at just 30: the site is JavaScript-heavy, and agent crawlers see slow response times and limited server-rendered content. Security & Trust (40) also lags, with inconsistent security headers. Nike's brand strength does not translate into machine-readable commerce readiness.
etsy.com — 9 (Not Ready)
Despite being an early ACP adopter, Etsy's website scores just 9 on a standard crawl. Here's the twist: Etsy's robots.txt actually allows all six AI agents. But the server returns HTTP 403 responses, blocking access at the infrastructure level. On top of that, the page is a client-side SPA with almost no server-rendered content (Technical Performance: 30). Structured Data scores just 11 — no JSON-LD at all, zero schema markup. Three fail gates fire simultaneously: Structured Data (no Product or Offer schema), Commerce Data (no detectable pricing), and Technical Performance (pure SPA) — the only benchmark site to trigger all three. The ACP integration works at the API layer (accessed via ChatGPT), not at the website layer. That gap between API capability and website readiness is exactly what the Zeodyn Score™ measures.
zara.com — 6 (Not Ready)
Zara is the flagship brand of Inditex, the world’s largest fashion group. It operates in over 90 markets with thousands of product lines refreshed weekly. Yet its website scores 6. The entire storefront is a client-side single-page application — the initial HTML response contains no meaningful content. No JSON-LD. No Product or Offer schema. No pricing in markup. No sitemap. Commerce Data scores 1 out of 100. An AI agent visiting zara.com receives a JavaScript shell and nothing else. Technical Performance scores 72 (fast server response, at least), but speed without substance gives agents nothing to parse. The geometric aggregation penalises exactly this: a fast-loading page with zero machine-readable commerce data.
Every site in this list has strong commerce operations. None scores above 45. Having great products is not the same as being AI-agent ready. The Zeodyn Score™ measures whether agents can access your commerce capabilities — not whether those capabilities exist.
Common failure points
Across the sites we have scanned, clear patterns emerge. These are the most frequent reasons sites score poorly:
- Missing JSON-LD structured data — The single most common issue. Without Schema.org Product and Offer markup, AI agents cannot reliably extract product information. Many sites rely on microdata or have no structured data at all.
- Blocking AI bots in robots.txt — Many sites added blanket blocks for GPTBot, ClaudeBot, and other AI user agents during the 2023–2024 AI scraping concerns. These blocks now prevent legitimate agent commerce interactions.
- No agent commerce protocols — UCP, ACP, and MCP are new, and adoption is still early. Most sites have no /.well-known/ucp endpoint, no ACP integration, and no OpenAPI specification. This dimension drags down many otherwise strong sites.
- Missing HTTPS or poor security headers — Less common on established sites, but still appears on smaller merchants and older platforms. Missing HTTPS triggers a fail gate.
- Slow response times and client-side rendering — Single-page applications that render entirely in JavaScript are invisible to most agent crawlers. Similarly, pages that take more than a few seconds to respond will be skipped.
Five quick wins to improve your score
If you have scanned your site and want to improve, these are the highest-impact actions you can take today, roughly ordered by effort and return.
1. Add JSON-LD Product schema to your product pages
This is the single most impactful change for most sites. Add a <script type="application/ld+json"> block to each product page with Schema.org Product markup including name, description, price, currency, availability, image, and brand. If you are on Shopify, WooCommerce, or BigCommerce, your platform likely has plugins or built-in support for this.
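A minimal block might look like this. All values are placeholders; consult Schema.org's Product documentation for the full vocabulary:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Classic Leather Wallet",
  "description": "Hand-stitched full-grain leather wallet.",
  "image": "https://example.com/images/wallet.jpg",
  "brand": { "@type": "Brand", "name": "Example Co" },
  "sku": "WAL-001",
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "url": "https://example.com/products/classic-leather-wallet"
  }
}
</script>
```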
2. Review your robots.txt AI agent policy
Check whether your robots.txt blocks GPTBot, ClaudeBot, Googlebot, or other AI user agents. If you added blanket blocks, consider selectively allowing access — particularly if you want AI agents to discover and recommend your products. You can block training-specific crawlers while still permitting commerce-oriented agents.
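One possible policy, blocking a training-focused crawler while allowing agents that act on behalf of users. The split below is illustrative only; check each vendor's current documentation for which user agents do what:

```text
# Block OpenAI's training crawler.
User-agent: GPTBot
Disallow: /

# Allow agents acting on a user's behalf.
User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://example.com/sitemap.xml
```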
3. Publish a UCP discovery endpoint
Create a /.well-known/ucp file on your domain that describes your commerce capabilities. Even a minimal endpoint signals to AI agents that you are open for agentic commerce. Google's UCP documentation provides the schema.
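As a purely illustrative sketch, such a file might advertise your capabilities and point agents at your API. The field names below are invented for the example; use the schema from Google's UCP documentation, not this fragment:

```json
{
  "version": "1.0",
  "name": "Example Store",
  "capabilities": ["catalog", "checkout"],
  "api": "https://example.com/api/openapi.json"
}
```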
4. Ensure HTTPS is enforced everywhere
If any part of your site serves content over HTTP, fix it. HTTPS is a fail gate — without it, your Security & Trust dimension is capped and the geometric aggregation drags your entire score down. Ensure your server redirects all HTTP requests to HTTPS and sends an HSTS header.
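With nginx, for example, the redirect and HSTS header might look like this (server names and certificate paths are placeholders for your own setup):

```nginx
# Redirect every HTTP request to HTTPS.
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Tell browsers and agents to use HTTPS for all future requests.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```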
5. Add operational commerce data to your markup
Beyond basic product information, include availability status, shipping estimates, return policy links, and accepted payment methods in your structured data. This strengthens your Commerce Data dimension and gives agents the transactional confidence they need to recommend your site.
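An example Offer enriched with shipping and return information, using the standard Schema.org OfferShippingDetails and MerchantReturnPolicy types (all values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Offer",
  "price": "49.00",
  "priceCurrency": "USD",
  "availability": "https://schema.org/InStock",
  "shippingDetails": {
    "@type": "OfferShippingDetails",
    "shippingRate": {
      "@type": "MonetaryAmount",
      "value": "4.95",
      "currency": "USD"
    },
    "shippingDestination": {
      "@type": "DefinedRegion",
      "addressCountry": "US"
    }
  },
  "hasMerchantReturnPolicy": {
    "@type": "MerchantReturnPolicy",
    "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
    "merchantReturnDays": 30
  }
}
</script>
```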
Multi-page scanning — beyond the homepage
Your homepage is your front door. Your product pages are where commerce happens. A homepage can look strong — good structured data, fast load times, correct security headers — while product pages are blocked, empty, or lack the schema markup AI agents need.
Multi-page scanning addresses this by discovering product pages via sitemap and structured data, then assessing them alongside your homepage. Site infrastructure (Discovery & Access, Protocols, Security) is measured from the homepage. Commerce dimensions (Structured Data, Commerce Data) are sourced from discovered product pages, where these signals naturally live. Technical Performance is averaged across all pages.
Product pages are where commerce data and structured data naturally live, so multi-page scanning gives a more complete picture of your agent commerce readiness.
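The sourcing rules above can be sketched in code. The best-product-page and simple-average rules here are an illustrative reading of those rules, not Zeodyn's exact aggregation:

```python
INFRA_DIMS = ("Discovery & Access", "Protocol Support", "Security & Trust")
COMMERCE_DIMS = ("Structured Data", "Commerce Data")

def combine_scores(homepage, product_pages):
    """Source each dimension from the page type where its signals live."""
    # Site infrastructure is measured from the homepage.
    combined = {dim: homepage[dim] for dim in INFRA_DIMS}
    # Commerce dimensions come from product pages (assumed rule: take the
    # best one; fall back to the homepage when none were discovered).
    for dim in COMMERCE_DIMS:
        pages = product_pages or [homepage]
        combined[dim] = max(page[dim] for page in pages)
    # Technical Performance is averaged across every page scanned.
    perf = [homepage["Technical Performance"]] + \
           [p["Technical Performance"] for p in product_pages]
    combined["Technical Performance"] = sum(perf) / len(perf)
    return combined

homepage = {"Discovery & Access": 80, "Protocol Support": 30,
            "Security & Trust": 70, "Structured Data": 40,
            "Commerce Data": 35, "Technical Performance": 90}
product_page = {"Structured Data": 85, "Commerce Data": 70,
                "Technical Performance": 70}
print(combine_scores(homepage, [product_page]))
```

In this invented example, the product page lifts Structured Data from 40 to 85, which is exactly why a homepage-only scan can understate (or overstate) real readiness.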
Multi-page scanning is available on Pro (2 pages: homepage + 1 product page) and Growth (up to 5 pages: homepage + up to 4 product pages). Free scans assess the homepage only — giving you a baseline score.
Dive deeper
The Zeodyn Score is built on the Agent Commerce Stack™ v1.0 framework, which runs 54 individual sub-checks across the six dimensions. For the full technical methodology — including how sub-checks are scored, how scoring curves are applied, and how the framework was developed — see the methodology page.
Every scan produces a detailed breakdown showing your score in each dimension, which sub-checks passed or failed, which fail gates were triggered, and prioritised recommendations for improvement. You can scan any publicly accessible URL for free.
Get your Zeodyn Score
AI agent commerce is not a future trend — it is happening now. Google, OpenAI, Amazon, and every major platform are building agent shopping experiences. The businesses that prepare their infrastructure will capture this new channel. Those that do not will be invisible.
Find out where you stand. Scan your site and get your Zeodyn Score.
Is your business ready for AI agents?
Find out in under a minute with the free Zeodyn Scanner™.
Scan Your Site