Siteimprove in AI search.

Where the brand currently appears across ChatGPT, Microsoft Copilot, Google Gemini, Perplexity and Claude — and the shortest path to fixing it.

Prepared for
Jen Jones · CMO, Siteimprove
Prepared by
Onur Büyüktezgel
Scope
Homepage · AEO page · AI sentiment · Bot crawlability · Brand perception
Date
May 9, 2026
01 · Executive summary

Siteimprove has rebranded around AEO and agentic content intelligence — and the third-party signals that matter are already in the index. The mechanics of AI citation have not caught up. AI engines select sources via two paths: grounded retrieval (the engine searches the web in the moment and cites what it finds) and ungrounded memory (the engine recommends from training data without searching). The grounded path is blocked at the firewall. The ungrounded path is anchored on a pre-2026 product description. Both paths fail at the same time, which is why an AEO vendor — Forrester Wave Leader Q4 2025, Gartner AEO Market Guide Representative Vendor 2026, G2 Spring 2026 Leader across all four product categories — shows up at 4% own-domain citation share when buyers ask AI engines to build accessibility shortlists.

94% of B2B buyers now use generative AI in their purchasing process, and 73% specifically use AI to build vendor shortlists. At 4% citation share, Siteimprove is structurally absent from the pre-contact phase of those journeys — the phase where 95% of vendor decisions are effectively made before the seller is ever contacted.

19
search-engine crawlers returning HTTP 403 — Bing, MSNBot and Yandex blocked at the WAF, including on robots.txt itself
0 / 37
branches in the AI's brand model that mention AEO, agentic, or Siteimprove.ai — the model still anchors on the pre-2026 positioning
4%
share of citations across AI engines that point to siteimprove.com — the rest goes to TestParty, BrowserStack, Reddit and review aggregators

What's broken — three layers, all small fixes

The visibility gap is not one problem but three working together. Retrieval: Bingbot, MSNBot and Yandex all return HTTP 403 at the WAF, including on /robots.txt itself. RFC 9309 §2.3.1.3 technically permits crawling when robots.txt returns a 4xx, but Bing's documented handling of a persistently forbidden robots.txt is conservative: it stops crawling. ChatGPT and Microsoft Copilot are both Bing-fed; they cannot reach fresh content, so they fall back to training memory. Memory: a token-level perception scan finds zero mentions of AEO, agentic, or Siteimprove.ai across 37 alternative-sentence branches. The model still describes the company as a Digital Certainty Index for QA, SEO and Accessibility. That is the 2022 product. On-site: the homepage carries Organization + WebSite schema but no SoftwareApplication, Product, or FAQPage; the AEO product page at /platform/seo/aeo-visibility/ carries only template-level schema; a duplicate label appears under all four homepage product cards (the four pillars lose distinctness in the index); and the seven existing competitor comparison pages need calibration, schema, and three missing competitors (Deque, Conductor, TPGi) before AI engines will treat them as structured artifacts.

What's already working — don't rebuild it

The category authority is real and visibly weighted by AI engines. Forrester named Siteimprove a Wave Leader in Q4 2025 in the AEO category specifically. Gartner named it a Representative Vendor in the AEO Market Guide 2026. G2's Spring 2026 Leader designations cover Digital Governance, Digital Accessibility, Digital Analytics and SEO Tools — all four platform pillars. IAAP and W3C affiliations are present. AI bots are fully allowed at the server: GPTBot, ClaudeBot, Claude-SearchBot, Google-Extended, OAI-SearchBot and PerplexityBot all return 200 OK. The retrieval surface for AI is open. Enterprise customer references (Vodafone, BT, Cuisinart, and the case studies in the customer hub) are cited by AI engines today. The credibility substrate is in place; what's missing is the freshness layer catching up to it.

On the four-Ps model of AI presence — Presence, Positioning, Perception, Permanence — Siteimprove is strong on Presence (the brand is in training data and on review sites), weak on Positioning (the entity model anchors on the 2022 product), and at risk on Permanence (without fresh signals, current authority decays as models update on stale content). The work in §08 closes Positioning and stabilises Permanence; Presence is already there.

What to do — a 90-day plan

The full action list in §08 is light on dev resourcing and heavy on internal coordination. The 30-day foundation phase: unblock Bingbot at the WAF (1–2 days of web ops), ship JSON-LD schema across the homepage and platform pages (half a day of dev), fix the duplicate homepage label (five minutes), publish a curated llms.txt (under an hour), and refresh the thin Wikidata entity Q28127172 (two hours). The 60-day phase: extend the comparison hub by three competitor pages plus calibration of the existing seven (2–3 weeks of content), publish a definitive "How Siteimprove uses AI" page that the model can re-anchor on (two weeks), and publish a pricing range page that displaces TestParty's currently-indexed $15K–$50K+ claim (one week). The 90-day phase: sustained Reddit and G2 review-platform engagement to displace the stale third-party comparison content the model currently retrieves.

AI engines are presently re-deciding category authority, which makes the timing asymmetry structural. Domains that publish fresh authoritative signals during this 12-to-18-month window get cemented as the AI-cited reference; domains that wait must displace hardened citations later. Cost of action now ≈ one cross-functional pod for a quarter. Cost of equivalent action in 18 months ≈ the same effort plus the displacement work to remove competitor citations that have settled into training data and review-platform indexes.

What this needs to move

The 30-day foundation needs 1–2 days of web-ops time to investigate and remove the WAF rule blocking Bing, half a day of front-end dev to ship JSON-LD schema, and editorial sign-off for the duplicate homepage label and the llms.txt. The 60-day phase needs content-team capacity for three new competitor comparison pages plus a recalibration of the existing seven, and editorial sign-off on the "How Siteimprove uses AI" page and the pricing range page. The 90-day phase needs sustained engagement on Reddit and G2. The single highest-friction approval is the pricing page; the rest are operational.

How we'll know it's working

Standard analytics will not capture this recovery. AI-driven referrals appear as direct or branded organic traffic in HubSpot and GSC; the engine that triggered the visit is not in the referrer chain. Three measurement layers run alongside the work: own-domain citation share on a fixed prompt set, tracked weekly via Siteimprove's own AI Visibility dashboard; self-reported attribution on demo-request forms with AI engines as named options (this captures up to 15× more AI-driven conversions than analytics alone); and branded search volume in Google Search Console as a proxy for AI-driven awareness lift. The M3 falsification target is own-domain citation share moving from 4% to 12%+ on the target prompt set.
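The weekly citation-share metric is simple to compute once prompt results are logged. A minimal sketch, assuming a flat list of (prompt, engine, cited_domain) records; the record shape, function name, and sample domains are illustrative, not part of any tracking product:

```python
from collections import Counter

def citation_share(records, own_domain="siteimprove.com"):
    """Share of citations pointing at own_domain, overall and per engine.

    records: iterable of (prompt, engine, cited_domain) tuples, one row
    per citation an engine returned for a prompt in the fixed set.
    """
    total = Counter()
    own = Counter()
    for _prompt, engine, domain in records:
        total[engine] += 1
        if domain.endswith(own_domain):
            own[engine] += 1
    per_engine = {e: own[e] / total[e] for e in total}
    overall = sum(own.values()) / sum(total.values())
    return overall, per_engine

# Toy run: 1 of 4 citations is own-domain, so overall share is 0.25
rows = [
    ("best accessibility platforms", "chatgpt", "testparty.ai"),
    ("best accessibility platforms", "chatgpt", "siteimprove.com"),
    ("siteimprove alternatives", "perplexity", "browserstack.com"),
    ("siteimprove alternatives", "gemini", "reddit.com"),
]
overall, by_engine = citation_share(rows)
```

Tracked against a fixed prompt set, the same computation yields both the headline 4% baseline and the per-engine breakdown that shows where the Bing-fed engines lag.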

Strategic finding worth surfacing

Beyond the mechanical gaps, every audited page exhibits a title/H1 split (§06.e): the <title> indexes against the category keyword while the <h1> reaches for an emotional value proposition, fragmenting the retrieval chunk that AI engines reward. This is not a strategic-retreat finding — the new AEO positioning is analyst-validated. It is page-level signal coherence, fixed in a week of editorial work on the top eight pages, ranked by inbound link concentration.

The chain — cause, mechanism, outcome, business impact

Bingbot is blocked at the WAF/origin (including on robots.txt itself) → Bing, which treats a persistently forbidden robots.txt conservatively, stops crawling, and the index of siteimprove.com goes stale → ChatGPT and Copilot, both Bing-fed, cannot retrieve fresh content during live search → they fall back to training-era memory → that memory still anchors on Quality Assurance, SEO and Accessibility, not on AEO or agentic content intelligence → the visible share of AI citations stays at 4% on own-domain. The business consequence: 94% of B2B buyers now use generative AI in their purchasing process (Forrester Buyers' Journey Survey 2025, n=4,000+), and a March 2026 analysis of 680 million AI citations found that 73% use AI tools specifically to build shortlists. At 4% own-domain citation share, Siteimprove is structurally underweighted at the top of every AI-mediated buying journey. Buyers using AI to build vendor shortlists never see Siteimprove during discovery; they enter the funnel later through branded search or direct navigation, with a 2022-era reference frame already locked in. Each step in the chain is a small fix. Together they explain the share.

02 · Audit scope and method

Three URLs reviewed in detail, plus three independent measurement passes across the public AI answer layer.

The lens is answer-engine optimization, not classic search rankings. Findings prioritise whether AI engines can find, parse and accurately represent Siteimprove — not whether the site ranks for a given keyword. Audit conducted April 28 – May 8, 2026.

siteimprove.com / (homepage)
Rendered HTML, head metadata, JSON-LD schema, four-card product section content QA, render-without-JS check.
siteimprove.com/platform/seo/aeo-visibility/
AEO product page — canonical URL in the platform navigation. Information architecture, answerability, FAQ surface, internal linking from the homepage.
siteimprove.com/why-siteimprove/competitor-comparison/
First-party competitor comparison hub — 7 individual pages (vs Level Access, Silktide, Acquia, BrightEdge, SEMrush, Matomo, GA4). Depth, schema, calibration, indexability.
/robots.txt, /sitemap.xml, /llms.txt
Crawler allowlists for OAI-SearchBot, Claude-SearchBot, PerplexityBot, GPTBot, Google-Extended.
Bot crawlability test
147 bots tested against siteimprove.com on May 8, 2026. Health score 85%. 22 server-blocked, 0 robots.txt-blocked.
Brand perception scan
Token-level confidence scan of the model's stored representation of Siteimprove, plus 37 alternative-sentence branches.
AI answer-layer sweep
~30 prompts across ChatGPT, Microsoft Copilot, Google Gemini, Perplexity and Claude. Citations, recurring claims, accuracy, recency.
Queries tested
"Siteimprove alternatives," "best accessibility platforms 2026," "how does Siteimprove compare to [competitor]," "does Siteimprove use AI."

Note on sample size: the 30-prompt sweep is directional, not statistical. It surfaces patterns that the bot test and the brand perception scan then explain mechanically. The three passes were designed to corroborate, not to substitute for one another.

03 · Signals already working

Nine signals AI engines visibly weight, none of which need work. The audit assumes these stay in place.

AI bots fully allowed
GPTBot, ClaudeBot, Claude-SearchBot, Google-Extended, OAI-SearchBot, PerplexityBot, ChatGPT-User — all 200 OK in the bot test. The retrieval surface for AI is open.
Metadata is in place
Open Graph and Twitter card tags fire correctly across pages.
Semantic navigation
Clean HTML structure. Crawlers can read the page when they're allowed in.
IAAP & W3C affiliations
Strong third-party authority signals. Visibly weighted.
G2 Spring 2026 Leader badges
Leader across Digital Governance, Digital Accessibility, Digital Analytics, and SEO Tools. Indexed widely.
Enterprise customer logos
Vodafone, BT, Cuisinart and others — public case studies, cited by AI engines.
Recent Gartner Peer Insights
Positive, current reviews. AI engines treat these as primary.
Gartner AEO Market Guide 2026
Recognized as a Representative Vendor. The category-fit signal is already in the index.
EAA Resource Center
Real topical authority. The right asset to wire to llms.txt.
04 · Finding 01

Bing crawlers blocked at the server

For a vendor whose AEO product depends on AI visibility, this is the single most consequential finding in the audit. The block also extends to /robots.txt itself, which has standard-defined consequences.

A bot crawlability test on May 8 returned HTTP 403 for 22 of 147 bots — 19 of them are search-engine crawlers. The block is at the WAF or origin level, not in robots.txt. Direct curl with a Bingbot user-agent confirms 403 on both the homepage and on /robots.txt. With a GPTBot or ClaudeBot user-agent, the same URLs return 200.

// siteimprove.com — bot test results, May 8, 2026
GPTBot                  200    AI bots all allowed
ClaudeBot               200
Claude-SearchBot        200
Google-Extended         200
OAI-SearchBot           200
PerplexityBot           200

// 19 search-engine crawlers returning 403:
Bingbot Desktop         403    Bing variants — all blocked
Bingbot Mobile          403
BingPreview Desktop     403
BingPreview Mobile      403
MSNBot                  403
MSNBot-Media            403
YandexBot               403    plus Yandex variants
YandexImages            403
YandexVideo             403
… and 10 more search crawlers

// Direct curl confirms the block at the origin:
$ curl -I -A "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" \
       https://www.siteimprove.com/
HTTP/2 403

$ curl -I -A "Mozilla/5.0 (compatible; bingbot/2.0; …)" \
       https://www.siteimprove.com/robots.txt
HTTP/2 403     ← critical: Bingbot can't even read robots.txt

Why this matters

ChatGPT search runs on Bing's index. Microsoft Copilot runs on Bing's index. If Bingbot can't crawl siteimprove.com, Bing's index goes stale — and the two AI engines that read Bing have nothing fresh to retrieve.

The robots.txt 403 makes the situation worse. RFC 9309 §2.3.1.3 ("Unavailable Status") actually permits a crawler to access any resource when robots.txt returns a 4xx; it is the unreachable case (§2.3.1.4, server errors) that mandates assuming complete disallow. In practice, though, Bing documents conservative handling: a robots.txt that persistently returns 403 is read as a site that does not want to be crawled, and Bingbot can't even check what's allowed because the rules file itself is forbidden. There's also a redirect-chain anomaly: under a browser user-agent, /robots.txt 301-redirects to /robots.txt/ (with trailing slash) and then to /Util/Errors/Error405/, an internal error endpoint that 301s to itself. Whatever else this means operationally, it isn't a configuration any crawler should have to interpret.
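The status-code handling can be summarised mechanically. A minimal sketch of the decision table, with a flag for the conservative engine behaviour described above; the function name and the `conservative` flag are illustrative, not part of any crawler's published API:

```python
def robots_txt_policy(status: int, conservative: bool = False) -> str:
    """Crawl policy implied by the HTTP status of /robots.txt under RFC 9309.

    2xx -> parse the file and obey its rules
    3xx -> follow redirects (the RFC requires following at least five hops)
    4xx -> "unavailable": the RFC permits crawling everything (sec. 2.3.1.3),
           but conservative engines treat a persistent 403 as disallow-all
    5xx -> "unreachable": assume complete disallow (sec. 2.3.1.4)
    """
    if 200 <= status < 300:
        return "parse-and-obey"
    if 300 <= status < 400:
        return "follow-redirect"
    if 400 <= status < 500:
        if status == 403 and conservative:
            return "assume-disallow-all"  # observed Bing-style handling
        return "crawl-all"               # RFC default for "unavailable"
    return "assume-disallow-all"         # 5xx / unreachable
```

Under the RFC default a 403 would not block crawling at all, which is why the conservative branch matters here: the observed outcome, a stale Bing index, matches disallow-all handling rather than the permissive default.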

This is the most likely mechanical explanation for why an AEO vendor — recognised by Forrester as a Leader in Q4 2025, recognised by Gartner in the AEO Market Guide 2026, with multiple G2 leader badges, with active product launches in March and April 2026 — shows up at 4% own-domain citation share. The site is producing fresh content. It's not reaching the index that ChatGPT and Copilot actually read.

What to do

This is a 30-minute conversation with web ops, not a content project. Step one: pull the WAF / CDN rule list and identify what's matching Bing's IP ranges or user-agent strings. Step two: verify in Bing Webmaster Tools that crawl errors are present (they should be). Step three: allowlist Bing IP ranges and re-test with a bot crawlability tool — confirm both the homepage and /robots.txt return 200 to a Bingbot user-agent. Step four: fix the /robots.txt redirect chain so it serves directly without going through /Util/Errors/Error405/. Step five: submit the sitemap via IndexNow to accelerate re-indexing.
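Step five can be scripted. IndexNow accepts a JSON POST carrying the host, a verification key served from the site, and the URL list. A minimal sketch of building that request; the key value and URL list are placeholders, and the key file must actually be published at the stated location before submission:

```python
import json
import urllib.request

def indexnow_request(host, key, urls):
    """Build an IndexNow submission request (api.indexnow.org protocol)."""
    payload = {
        "host": host,
        "key": key,                                   # placeholder key
        "keyLocation": f"https://{host}/{key}.txt",   # key file must exist here
        "urlList": urls,
    }
    return urllib.request.Request(
        "https://api.indexnow.org/indexnow",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )

req = indexnow_request(
    "www.siteimprove.com",
    "0123456789abcdef",  # placeholder: generate and publish a real key
    [
        "https://www.siteimprove.com/",
        "https://www.siteimprove.com/platform/seo/aeo-visibility/",
    ],
)
# urllib.request.urlopen(req) would submit; omitted to keep the sketch offline
```

A single submission covers Bing and the other IndexNow-participating engines, which is what makes it the fastest path to re-indexing once the 403 is lifted.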

Caveat: the test was run from a single IP. WAFs sometimes block by region or ASN. Recommend re-testing from a second region before stating the block as universal — but the pattern (every Bing/MSN variant blocked, no AI bots blocked, robots.txt clean) makes a per-IP false positive unlikely.

05 · Finding 02

Stale citations and stale brand memory

Two layers of the same problem. The retrieval layer reaches third-party comparisons. The training layer remembers the 2022 product.

Citation share by source — "best accessibility platforms 2026"

Approximate share of source-domain citations across ~30 prompts on ChatGPT, Copilot, Gemini, Perplexity and Claude. Aggregated April–May 2026. Directional sample.

TestParty               24%
BrowserStack            18%
Sourceforge             14%
Webability.io           12%
A11y Pulse               9%
Askem                    8%
AccessibilityChecker     7%
siteimprove.com          4%
Other                    4%

N = 30 prompts · 5 engines · own-domain share = 4% · 92% of citations come from competitor pages, third-party aggregators, or independent reviewers

Sources currently shaping the AI narrative

A representative sample of pages AI engines consistently cite — and the framing each one repeats.

Source                 Year  Recurring claim                                                  Cited by
TestParty              2026  Enterprise pricing $15K–$50K+; legacy architecture.              GPT · PPLX
BrowserStack           2025  Weak CI/CD integration; complex onboarding.                      CPLT · GEM
Askem                  2026  Two- to four-week onboarding cycle.                              GPT · CLD
AccessibilityChecker   2024  "Doesn't use AI or machine learning." Factually wrong in 2026.   GPT · PPLX · GEM
Sourceforge            2025  Legacy stack; UX described as dated.                             CPLT · GPT
A11y Pulse             2025  "Best for compliance, not innovation."                           PPLX · CLD
Webability.io          2026  Pricing opaque; sales-driven funnel.                             GPT · GEM

Brand perception drift in training data

A token-level perception scan asked the model to describe Siteimprove, then expanded along 37 alternative branches. The result is consistent across every branch.

// brand perception scan — 37 generated branches, May 8, 2026
Recurring entities anchored:
  Digital Certainty Index (DCI)        — high confidence
  Quality Assurance / SEO / Accessibility — high confidence
  Compliance / WCAG                    — high confidence

Confidence on current positioning terms:
  "Analytics"                  19.8%   ← among the four named pillars
  "offers" (as primary verb)   11.63%
  "aimed" (purpose statement)  10.67%

Branches mentioning AEO, agentic, or Siteimprove.ai:  0 / 37

The model still describes Siteimprove as "a Digital Certainty Index for QA, SEO and Accessibility." Not one of the 37 branches mentions AEO, agentic content intelligence, or the Siteimprove.ai unified platform. That's the 2022 product, not the 2026 one. Under the chain in §01, this is exactly what would be expected: training-era memory plus blocked retrieval equals stale answers.

Why it sticks

Two reinforcing problems. The first layer is retrieval: the only ranking comparisons are written by competitors, and AI engines pull from whatever ranks. The second is memory: the AI's stored representation of the brand pre-dates the AEO and agentic positioning entirely. Closing the first needs first-party comparison content the AI can find. Closing the second needs Bingbot unblocked, fresh authoritative pages indexed, and one definitive source on "How Siteimprove uses AI" that the model can re-anchor on across updates. Both layers are work. Neither is a rebuild.

06 · Finding 03

On-site technical issues

Four on-site gaps. None is the highest-leverage item in the report. All are easy to ship and visibly weighted by AI engines or by the credibility argument an AEO vendor has to make.

a. Schema on the homepage and AEO product page

Basic markup is in place; the platform-pillar gap is the real issue, especially on the AEO product page.

The homepage carries two JSON-LD blocks: WebSite with a SearchAction, and Organization with name, description, URL and a sameAs graph linking LinkedIn, Wikipedia, Facebook and X. That's a real foundation — it gives Google Knowledge Graph an entity anchor. But it stops there. The four platform pillars (Accessibility, Analytics, SEO/AEO, Content) carry no SoftwareApplication or Product markup. There is no FAQPage on the homepage or the AEO product page. The AEO product page itself, sitting at /platform/seo/aeo-visibility/, carries only the template-level WebSite/SearchAction — no Organization, no SoftwareApplication, no Product, no FAQPage.

// JSON-LD coverage across audited pages, May 9, 2026 (verified via curl)

                                         Organization  WebSite  SoftwareApp  Product  FAQPage
homepage                                 yes           yes      no           no       no
/platform/seo/aeo-visibility/            no            yes      no           no       no
/why-siteimprove/competitor-comparison/  no            yes      no           no       no
…/siteimprove-vs-silktide/               no            yes      no           no       no

Two fixes. First: extend the homepage schema with SoftwareApplication markup for each of the four pillars (or one combined SoftwareApplication with sub-products), and add FAQPage wherever there's standing FAQ content. Second: ship full schema on the AEO product page — at minimum Organization, SoftwareApplication, and FAQPage. Direct lift on Gemini and Copilot via Knowledge Graph and Bing inheritance. Controlled tests show ChatGPT and Perplexity treating JSON-LD as flat text, so don't expect direct citation lift there — but for a vendor selling AEO, shipping an empty knowledge-graph signal on the product page is a story to avoid.
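What the missing markup could look like, emitted here as Python for concision; all names, descriptions and FAQ text are placeholders to be replaced with real page copy, and the exact property set should be tuned to the pages:

```python
import json

def pillar_schema(name, description, url):
    """JSON-LD for one platform pillar as a SoftwareApplication node."""
    return {
        "@context": "https://schema.org",
        "@type": "SoftwareApplication",
        "name": name,
        "description": description,
        "url": url,
        "applicationCategory": "BusinessApplication",
        "operatingSystem": "Web",
    }

def faq_schema(pairs):
    """FAQPage markup built from (question, answer) pairs of standing FAQ copy."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

# Placeholder copy; swap in the real pillar description and FAQ answers
block = pillar_schema(
    "Siteimprove AEO Visibility",
    "Placeholder description of the AEO product.",
    "https://www.siteimprove.com/platform/seo/aeo-visibility/",
)
script_tag = f'<script type="application/ld+json">{json.dumps(block)}</script>'
```

Each block ships as one `<script type="application/ld+json">` tag in the page head, alongside the existing Organization and WebSite nodes rather than replacing them.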

b. No llms.txt file

There is no current evidence that llms.txt is read by major AI vendors. Studies across 500+ sites show no measurable correlation with AI citations. John Mueller publicly stated Google does not use it. The argument for shipping it is not algorithmic — it's brand credibility.

$ curl -I https://siteimprove.com/llms.txt
HTTP/2 404
date: Fri, 08 May 2026 14:22:11 GMT
content-type: text/html; charset=UTF-8

Anthropic ships an llms.txt. Cloudflare, Vercel and Coinbase ship them. For an AEO vendor not to ship one is the kind of detail competitors will surface in a sales conversation. The fix is short: a curated map pointing to platform pages, the accessibility statement, the EAA Resource Center, case studies, and the glossary. Under one hour of work. Sell the optics, not the algorithm.
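The format itself is short: an H1 with the site name, a one-line blockquote summary, then H2 sections of annotated links. A sketch of what a curated file could look like; only the AEO URL below is taken from this audit, the other paths are placeholders to be replaced with the real ones:

```text
# Siteimprove

> Agentic content intelligence platform: accessibility, analytics,
> SEO/AEO and content quality for enterprise digital teams.

## Platform
- [AEO Visibility](https://www.siteimprove.com/platform/seo/aeo-visibility/): AI answer-engine visibility product

## Resources
- [EAA Resource Center](https://www.siteimprove.com/placeholder-path/): European Accessibility Act guidance
- [Glossary](https://www.siteimprove.com/placeholder-path/): definitions for accessibility, SEO and AEO terms
```

Served as plain text at /llms.txt, it costs nothing to maintain and closes the optics gap.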

c. Duplicate label under all four product cards

"ACCESSIBILITY AGENTS HELP YOU REVIEW BEFORE YOU PUBLISH" appears beneath the accessibility, analytics, SEO, and content cards. AI crawlers ingest the same label four times and lose pillar distinctness. Verified via Google's own indexed snippet of the page.

// homepage — product cards section, rendered HTML
<div class="card">Accessibility</div>
  └─ "Accessibility agents help you review before you publish"
<div class="card">Analytics</div>
  └─ "Accessibility agents help you review before you publish"  ← duplicate
<div class="card">SEO</div>
  └─ "Accessibility agents help you review before you publish"  ← duplicate
<div class="card">Content</div>
  └─ "Accessibility agents help you review before you publish"  ← duplicate

Five-minute content fix. Outsized impact on how cleanly the four pillars get parsed — a cheap signal repair while the brand-perception drift in §05 is being addressed.

d. The comparison hub exists but is under-leveraged

Siteimprove already has a 7-page first-party comparison hub. It's substantive in places. It's also missing the schema, the calibration, and three of the competitors that AI engines actually cite.

The hub at /why-siteimprove/competitor-comparison/ covers Level Access, Silktide, Acquia, BrightEdge, SEMrush, Matomo, and GA4 — seven dedicated pages, all 200 OK. The Silktide page (sampled) carries a Forrester Wave Leader quote, a feature-by-feature Yes/No table across seven categories, two customer testimonials, and a concrete differentiation argument. That isn't thin content. The reason these pages don't show up in AI answers is not their absence — it's a stack of solvable issues:

  1. The Bing block keeps them out of the index ChatGPT and Copilot read. Until §04 is resolved, no comparison content will reach those engines regardless of quality.
  2. The pages carry only template-level WebSite/SearchAction schema. No Product, no Article, no Review or ItemList markup (schema.org has no dedicated comparison type; Product plus Review or ItemList is the closest structured expression). AI engines parsing the page see "another page" rather than "a structured comparison artifact."
  3. The Yes/No tables are uniformly tilted toward Siteimprove. Where both vendors offer the same capability, the cells read "Siteimprove Yes / [Competitor] Yes," but every category where there's a difference resolves in Siteimprove's favour. AI engines prefer comparison content that explicitly names where the competitor wins (engines learn to trust sources that calibrate honestly, which means honest framing is itself a citation-rate lever); the current pattern reads as marketing rather than honest comparison.
  4. Three competitors that get cited heavily in AI answers are missing entirely. Deque (axe DevTools), Conductor, and TPGi do not have dedicated comparison pages. The existing Acquia page covers Acquia generally but doesn't surface for "Monsido alternatives" specifically — Acquia rebranded Monsido and the legacy positioning still drives queries.

The recommendation in §08 is therefore "extend and improve," not "build." Adding three competitor pages, shipping Product plus Review/ItemList schema across all of them, and recalibrating the Yes/No tables to honestly name competitor strengths is two to three weeks of work, not the four to six weeks a greenfield hub would take.

e. Title and H1 disagree across the highest-traffic pages

A page-level signal-coherence problem, not a positioning problem. The category is analyst-validated (Forrester Wave Q4 2025; Gartner AEO Market Guide 2026). The pages themselves still hedge.

Every page sampled in the audit puts one phrase in the <title> and a different phrase in the H1. AI engines favour chunks where the query terms, the named source, and the supporting statistic concentrate together; when title and H1 push different value propositions on the same page, the retrieval surface fragments into two adjacent half-chunks rather than one strong one. Verified via direct HTML inspection on May 9, 2026.

// title vs. H1 across audited pages — verbatim

siteimprove.com/
  title  Agentic Content Intelligence — Siteimprove
  H1     Where accessibility meets performance

siteimprove.com/platform/
  title  Agentic Content Intelligence Platform — Siteimprove
  H1     Where accessibility meets performance

siteimprove.com/accessibility/
  title  Accessibility Digital Governance
  H1     From risk to reach with accessibility made simple

siteimprove.com/platform/seo/aeo-visibility/
  title  AEO Visibility — Siteimprove
  H1     [generic platform-pillar value prop, see §06a — no SoftwareApplication schema]

The pattern is consistent. The title indexes against the category keyword, the H1 reaches for an emotional value proposition, the lead paragraph splits the difference, and the schema (where present) anchors to the brand entity rather than the page topic. Each individual choice is defensible; together they produce a page that retrieves weakly on any one query.

Two clarifications matter, because the obvious read of this finding is "the new positioning is wrong," and that read is wrong. First, the AEO and agentic-content category is real and externally validated: Forrester's Wave designated Siteimprove a Leader in Q4 2025 specifically in this category, Gartner named the company a Representative Vendor in the AEO Market Guide 2026, and G2's Spring 2026 Leader badges cover the platform's four pillars. The category exists. Second, the finding here is page-level, not strategic — the editorial choice to translate "Agentic Content Intelligence" into "Where accessibility meets performance" at the H1 level is what splits the retrieval chunk. The fix is editorial coherence within the existing positioning, not a retreat from it.

The remediation is one week of editorial work. Pick the top eight pages by inbound link concentration. For each page, pick one query it should win. Rewrite the title, the H1, and the first 50 words of body copy so all three concentrate on that query, with one extractable statistic and one named source per page (the §08 ChatGPT quick-wins items 05 and 06 already cover the answer-first rewrite of the first 50 words — this is the same work, extended to title/H1 alignment). The Bing-block fix in §04 gets crawlers back into the index; this fix makes what they index actually retrievable on category-specific queries instead of brand-only ones.
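The coherence check itself is scriptable. A minimal sketch that extracts the title and first H1 from rendered HTML and measures their word overlap; the overlap heuristic is an illustrative editorial aid, not a known engine parameter:

```python
from html.parser import HTMLParser

class TitleH1Parser(HTMLParser):
    """Collect the <title> text and the first <h1> text from a page."""
    def __init__(self):
        super().__init__()
        self._in = None        # tag currently being captured
        self._h1_done = False  # only the first <h1> counts
        self.title = ""
        self.h1 = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title" or (tag == "h1" and not self._h1_done):
            self._in = tag

    def handle_endtag(self, tag):
        if tag == self._in:
            if tag == "h1":
                self._h1_done = True
            self._in = None

    def handle_data(self, data):
        if self._in == "title":
            self.title += data
        elif self._in == "h1":
            self.h1 += data

def keyword_overlap(a: str, b: str) -> float:
    """Jaccard overlap of lowercase word sets; 0.0 means fully split signals."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

html = ("<html><head><title>AEO Visibility — Siteimprove</title></head>"
        "<body><h1>Where accessibility meets performance</h1></body></html>")
p = TitleH1Parser()
p.feed(html)
score = keyword_overlap(p.title, p.h1)  # low score flags a title/H1 split
```

Run across the top eight pages, a zero or near-zero score is exactly the pattern the verbatim samples above show, and the same script re-run after the rewrite confirms the fix shipped.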

What this is not

Not a recommendation to drop "Agentic Content Intelligence" as the homepage positioning. The analyst signals supporting the new category are already in the index (see §03) and the AEO product is a real differentiator. This finding is purely about within-page coherence — making sure each page concentrates its retrieval signal on one query rather than two adjacent ones.

07 · AI engine landscape

Underneath the brand names, AI search runs on a handful of indexes. Investing in one signal often pays off across multiple engines. ChatGPT and Copilot both read Bing. Gemini and AI Overviews both read Google. Claude reads Brave Search. Perplexity runs its own crawler. As of February 2026, ChatGPT also runs an advertising layer that competitors can pay into.

Two layers, two fixes

AI citations come from two different mechanisms, and the audit findings split cleanly between them. Grounded mentions happen when an engine searches the web in the moment, finds your content, and cites it — influenced by what's currently in the index, current schema, and content freshness. The Bing block in §04 is a grounded-retrieval problem. Ungrounded mentions happen when the engine recommends a brand from its training data without searching — influenced by historical brand presence, third-party mentions, review platforms, and community discussions. The brand perception drift in §05 is an ungrounded-memory problem. Most of the audit's recommendations split along this line: unblocking Bingbot, shipping schema, and server-side rendering fix grounded retrieval; Wikidata, Reddit and G2 seeding, original research, and the "How Siteimprove uses AI" page repair ungrounded memory. Both layers need work. Investing in only one leaves half the citation gap unaddressed.

ChatGPT

Most-cited engine in board conversations. The search index is Bing, so investing in Bing-friendly signals pays double once Bingbot is unblocked. This is the engine the WAF block hits hardest.

Index
Bing + OAI-SearchBot
Weights
Schema, server-rendered HTML, FAQ structure, recency
Ad layer
Live since Feb 9, 2026. Self-serve since May 5. CPC bidding. $50K min spend.
Priority
Highest. See §08 for ChatGPT-specific quick wins.

Copilot

Index
Bing
Weights
Schema, knowledge-graph entries
Note
Same Bing dependency as ChatGPT. Unblocking Bingbot fixes both.

Gemini · AI Overviews

Index
Google
Weights
Entity grounding, Knowledge Graph, fresh first-party content
Note
Wikidata entry feeds Knowledge Graph directly. See §08 action 06.

Perplexity

Index
Own crawler (PerplexityBot)
Weights
Citation density, recency, original research

Claude

Index
Brave Search · Claude-SearchBot
Weights
Authority, clean structure, primary sources over aggregators
Note
Anthropic split crawlers in Feb 2026 — ClaudeBot (training), Claude-User (live fetches), Claude-SearchBot (search indexing). All three currently 200 OK on siteimprove.com.
The advertising layer is new — and worth naming

OpenAI's ChatGPT ads pilot launched February 9, 2026. By March 26 it crossed $100M annualized. On May 5 it opened self-serve at a $50K minimum with CPC bidding. The advertiser list already includes Target, Ford, Adobe and others. For an AEO vendor, this means AI search visibility is now partly a paid auction in addition to an organic earn. The Siteimprove.ai Search dashboard's "share of voice" metric will need to disambiguate paid from organic citations or it will systematically undercount competitor risk. This is a 2026 product-roadmap conversation as much as an audit finding.

What the ranking-factor research actually shows for B2B SaaS

The most extensive public study to date on LLM recommendation signals (OppAlerts, March 2026, covering 145 industries and 34,092 domains across more than 105,000 ChatGPT prompts) is worth reading carefully because its headline numbers are misleading when applied to any specific category. The all-industry table shows search engine signals and backlink authority as the strongest universal predictors, with Spearman correlations clustered between 0.20 and 0.24, while Reddit engagement, Wikipedia citations, Wikidata entries, Common Crawl coverage, and homepage keyword relevance all sit between 0.07 and 0.12. Those universal numbers average across very different industry patterns.

The per-industry breakdowns tell a different story for B2B SaaS categories that map closest to Siteimprove's positioning. Across SaaS verticals where the data is robust, Wikidata entity strength and SERP breadth consistently outperform the universal average by a factor of three to five. ERP software shows Wikidata at ρ=0.655. Healthcare practice-management software shows Wikidata at ρ=0.696. CRM software shows the closely related Wikipedia Citations signal at ρ=0.577. Customer support and contact-center software shows SE Outbound Links (a measure of SERP breadth that counts appearances across the linking surface of Google search results, not just rank) at ρ=0.547. Marketing automation shows Wikidata at ρ=0.383. These are dominant-tier correlations, not the modest universal ρ=0.12 figure that the all-industry headline would suggest.

Two caveats matter for interpretation. The first is that these are correlation measurements, not causal. R-squared values across all signals are modest, with the strongest universal predictor explaining only 5.8% of the variance in LLM recommendation scores, which means the measured factors leave most of what's happening unexplained. The recommendations the study supports are defensible bets, not deterministic wins. The second is that the study does not measure schema markup, llms.txt, content recency, citation patterns, or earned-media-vs-owned distribution as separate signals. The audit's recommendations in those areas rest on different evidence and are not refuted by what this study does or does not find. What changes in this audit as a result of the OppAlerts work: the Wikidata recommendation gains stronger justification through the B2B SaaS-specific correlations, and a new SERP-breadth recommendation joins the action list because Google search visibility across the full query surface looks more directly predictive of ChatGPT recommendations than the audit previously treated it.

08Priorities, sequencing, and 90-day outlook

Priorities, sequencing, and 90-day outlook

Unblock Bing, ship schema, build the comparison hub. Everything else is incremental.

Priority 01

Unblock Bingbot at the WAF/origin

Investigate the WAF rule blocking Bing, MSN and Yandex crawlers. Allowlist Bing IP ranges. Submit sitemap via IndexNow. Re-test with a bot crawlability tool.

1–2 days · web ops
Priority 02

Ship JSON-LD schema

Organization, SoftwareApplication, FAQ on the homepage. Product schema on each platform sub-page.

Half day · dev
Priority 03

Extend and improve the existing comparison hub

Seven pages already exist. Add Deque, Conductor, TPGi. Ship Product/ComparisonReview schema. Recalibrate the Yes/No tables to honestly name where competitors win.

2–3 weeks · content
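Priority 02's minimum schema set can be sketched as a single JSON-LD graph — the names, URLs, and FAQ copy below are illustrative placeholders, not confirmed page content:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://www.siteimprove.com/#org",
      "name": "Siteimprove",
      "url": "https://www.siteimprove.com/"
    },
    {
      "@type": "SoftwareApplication",
      "name": "Siteimprove.ai",
      "applicationCategory": "BusinessApplication",
      "operatingSystem": "Web",
      "provider": { "@id": "https://www.siteimprove.com/#org" }
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "What is answer engine optimization (AEO)?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Placeholder answer copy — replace with the page's actual FAQ text."
          }
        }
      ]
    }
  ]
}
```

Using `@id` to link `SoftwareApplication.provider` back to the Organization node keeps the entities connected in the Knowledge Graph rather than parsed as unrelated blobs.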

Full action list, in priority order

The three above plus seven more. All ten fit comfortably inside one quarter.

  1. Unblock Bingbot at the WAF/origin level

    Pull WAF rule list, identify what's matching Bing's IPs or user-agents, allowlist Bing IP ranges, submit sitemap via IndexNow. Recovery in Bing typically 7–14 days. Until this is done, every other AEO investment for ChatGPT and Copilot is throttled.

    1–2 days
  2. Ship JSON-LD on the homepage and key product pages

    Organization, SoftwareApplication, FAQ at minimum. Product schema on each platform sub-page. Closes the credibility gap given what Siteimprove sells. Direct lift on Gemini and Copilot via Knowledge Graph and Bing inheritance.

    ½ day
  3. Extend the existing comparison hub

    Seven pages already live at /why-siteimprove/competitor-comparison/ covering Level Access, Silktide, Acquia, BrightEdge, SEMrush, Matomo, GA4. Three high-leverage adds: Deque (axe DevTools), Conductor, TPGi. Ship Product and ComparisonReview schema across all ten pages. Recalibrate every Yes/No table to honestly name where the competitor wins (the current pattern reads as marketing, not comparison). Worth a separate audit: why pages that exist aren't getting cited — the Bing block in §04 explains most of it, schema explains the rest.

    2–3 wk
  4. Build Google SERP breadth across the full query surface, not just brand queries

    The play is to invest in topical content that captures Google SERP appearances across the full set of queries a buyer might run while evaluating accessibility, AEO, content governance, and analytics tooling. The OppAlerts ranking-factor evidence supports this directly. Across B2B SaaS verticals, a signal called SE Outbound Links — which counts how many distinct Google search result pages a domain appears in, weighted by the linking surface of those SERPs — correlates with ChatGPT recommendations at ρ=0.45 to ρ=0.67 in categories like customer-support software, MarTech, B2B marketing data providers, and consumer banking. The mechanism this implies is that Google SERP breadth across many adjacent query types (not just brand queries, but informational queries, comparison queries, and problem-statement queries) has direct predictive power for which domains ChatGPT recommends. Concretely, this means building out glossary pages for accessibility, AEO, and digital governance terms; problem-statement landing pages such as "how to audit a website for WCAG 2.2 compliance," "how to measure AI search visibility," and "how to detect content drift across a 10,000-page site"; and pages targeting comparison and alternatives queries beyond the named competitors. This is a content investment that compounds across both Google and AI search rather than serving only one surface. Measurement is straightforward: track distinct SERP appearances across a fixed query set monthly, alongside the citation-share tracking already in plan.

    8–12 wk
  5. Publish "How Siteimprove uses AI" — as a knowledge-graph repair

    Treat this not as a sentiment fix but as the authoritative source the AI's brand model can re-anchor on. Concrete model and capability detail. Named architecture. Public docs links. The 2024 "no AI/ML" claim is still indexed and still cited; the brand perception scan shows the model still describes Siteimprove as the 2022 product. This page is what overwrites both.

    2 wk
  6. Fix the duplicate label on the homepage

    Five-minute content fix with outsized impact on how clearly the four pillars get parsed by AI crawlers.

    5 min
  7. Update the Wikidata entity (Q28127172) — the highest-impact ungrounded-memory lever

    Wikidata is one of the most efficient ungrounded-memory interventions available. It feeds Google Knowledge Graph directly, has fewer conflict-of-interest restrictions than Wikipedia, and shows up as a primary source across model updates. The OppAlerts March 2026 ranking-factor research makes the case substantially stronger than the original framing suggested. While the universal Wikidata correlation with ChatGPT recommendations is modest at ρ=0.120, the per-industry data for B2B SaaS verticals runs three to five times higher. ERP software shows Wikidata at ρ=0.655, healthcare practice-management software at ρ=0.696, healthcare IT at ρ=0.575, and marketing automation at ρ=0.383, with CRM software's closely related Wikipedia Citations signal at ρ=0.577. Siteimprove's category sits at the intersection of digital governance, accessibility, AEO, and analytics, and does not appear directly in the study, but maps closest to these high-Wikidata B2B SaaS patterns. The current Wikidata entry is thin (instance-of statements only, no current CEO, no Forrester Wave citation, no product references). A two-hour edit adds current CEO Nayaki Nayyar, Forrester Wave Q4 2025 Leader status, the four platform pillars, the Siteimprove.ai unified-platform positioning, and current customer references with sources. For Wikipedia itself, the safer path is to encourage independent journalists and analysts to source updates, since direct edits get reverted as promotional.

    2 hr
  8. Ship llms.txt as credibility, not as ranking

    Curated map pointing to platform pages, accessibility statement, EAA Resource Center, case studies, glossary. Low effort, signals seriousness about AEO. Frame internally as brand theater for a category Siteimprove sells into — not as an algorithmic signal.

    < 1 hr
  9. Seed Reddit and review platforms with current customer voices

    Industry citation-pattern data is unambiguous. Muck Rack's December 2025 analysis of more than one million AI citations found that 82% come from earned media (review platforms, forums, third-party mentions) and 94% from non-paid sources overall — a University of Toronto controlled study confirmed the bias is structural, not platform-specific. The current 4% own-domain share reflects exactly this distribution, and the highest-leverage way to shift it is to add weight to the earned-media half. Reddit citations appear consistently in AI answers about accessibility tooling. The relevant subreddits are r/accessibility, r/SEO, r/webdev, r/marketing, and r/bigseo. The format that performs is AMAs, technical Q&A, and real customer threads, not promotional posts. G2 review velocity matters in parallel: G2 is the most-cited software review platform across ChatGPT, Perplexity, and Google AI Overviews (Radix 2026). The two together also work as a defensive moat against the "$15K–$50K+" pricing claim and the "no AI/ML" claim that are currently being cited — fresh customer voices on neutral platforms displace older third-party framing.

    ongoing
  10. Publish a credible pricing page (or pricing range)

    TestParty's "$15K–$50K+" claim is now indexed and being repeated by ChatGPT and Perplexity. A vague rebuttal won't displace a specific number. Even an honest "starts at $X for mid-market, custom enterprise" page gives AI engines a first-party number to anchor on.

    1 wk
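Action 01's WAF investigation usually ends in an allowlist rule, and the safe way to allowlist is forward-confirmed reverse DNS rather than raw IP ranges: resolve the IP to a hostname, check the hostname sits under the suffix Microsoft publishes for its crawlers, then resolve it forward and confirm the round trip. A sketch of that check — the suffix constant is an assumption to verify against Bing's current verification docs:

```python
import socket

# Hostname suffix Microsoft documents for genuine Bing crawlers.
# ASSUMPTION: verify against Bing's current crawler-verification docs.
BING_HOST_SUFFIXES = (".search.msn.com",)

def is_bing_hostname(hostname: str) -> bool:
    """Pure check: does a reverse-DNS hostname sit under a Bing crawler domain?"""
    return hostname.rstrip(".").endswith(BING_HOST_SUFFIXES)

def verify_bingbot(ip: str) -> bool:
    """Forward-confirmed reverse DNS: IP -> hostname -> IP must round-trip,
    and the hostname must match a documented Bing suffix. Network call."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)   # reverse lookup
        if not is_bing_hostname(hostname):
            return False
        return socket.gethostbyname(hostname) == ip  # forward confirmation
    except (socket.herror, socket.gaierror):
        return False
```

The suffix check alone is spoofable (anyone can claim a Bingbot user-agent); the forward confirmation is what makes the allowlist rule defensible to security review.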

ChatGPT-specific quick wins

ChatGPT search runs on Bing's index, so most of these double up as Copilot wins. Items 02–04 are gated on the Bing unblock in item 01 (§04) — they need it before they can compound.

01Unblock Bingbot at the WAF
Prerequisite for items 02–04. See action list above.
02Verify in Bing Webmaster Tools
Confirm crawl errors, submit sitemap, use IndexNow API to accelerate re-indexation.
03Allow OAI-SearchBot
Verify it's explicitly allowed in robots.txt (it currently is — keep it that way).
04Server-side render the homepage
So OAI-SearchBot reads content without executing JS.
05Add FAQPage schema
AEO page and top three platform pages.
06Rewrite the first 50 words
Answer-first format on key pages.
07Refresh the Wikidata entity
Feeds Knowledge Graph directly. Less COI-restricted than Wikipedia.
08Seed five high-signal Reddit answers
r/accessibility, r/SEO, r/webdev, r/marketing, r/bigseo.
09Publish original research
"State of AEO 2026" or similar, with a visible date stamp.
10Weekly prompt tests
Log citation presence, sentiment, accuracy in ChatGPT. Use Siteimprove.ai's own AI Visibility dashboard — eat the dog food.
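Quick win 02's IndexNow submission is a single JSON POST to the IndexNow endpoint. A sketch of the payload and the call — the key value and key-file location are placeholders, and the network call should run only after the WAF unblock:

```python
import json
from urllib import request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_indexnow_payload(host, key, urls, key_location=None):
    """Body for an IndexNow bulk submission. `key` is the site-verification
    key; `key_location` points at the hosted key file when it is not
    at the domain root."""
    payload = {"host": host, "key": key, "urlList": list(urls)}
    if key_location:
        payload["keyLocation"] = key_location
    return payload

def submit(payload):
    """POST the payload (network call -- 200/202 means accepted)."""
    req = request.Request(
        INDEXNOW_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    return request.urlopen(req)
```

Submitting the sitemap URLs this way accelerates Bing re-indexation after the unblock instead of waiting for the normal recrawl cycle.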

90-day outlook

M1Foundation
  • Bingbot unblocked at WAF; sitemap submitted via IndexNow
  • Schema shipped across homepage and platform pages
  • Homepage label bug fixed
  • llms.txt live
  • Wikidata entity refreshed
  • Weekly prompt-test cadence running on Siteimprove.ai dashboard
M2Narrative repair
  • Comparison hub extended — Deque, Conductor, TPGi added; existing 7 pages re-calibrated and re-schema'd
  • "How Siteimprove uses AI" page live
  • SERP-breadth content build underway — glossary pages and problem-statement landing pages live for top 20 query types
  • Reddit and G2 review-platform seeding underway
  • Bing index recovery measurable in Bing Webmaster Tools
  • Pricing page (or range) published
M3Compounding
  • Own-domain citation share moves from 4% → 12%+ on target prompt set
  • Brand perception scan: 25%+ of branches mention AEO/agentic/Siteimprove.ai
  • Customer voices visible across Reddit and G2
  • Outdated "no AI/ML" narrative replaced in ChatGPT and Perplexity citations

What to measure (and why standard analytics will miss most of it)

This work will look invisible in default analytics dashboards. AI-driven referrals show up as direct traffic or branded organic; the AI engine that triggered the visit is not in the referrer chain. Vault GTM Research has measured a 90% gap between what attribution software credits and what buyers self-report, meaning standard analytics captures roughly one in ten AI-driven conversions. The quality offset is also significant: a March 2026 synthesis of six independent studies (Loganix, drawing on Averi's 680-million-citation analysis) found AI search traffic converts at 14.2% versus Google organic at 2.8%, a 5.1x advantage. The implication is that volume metrics will not show the win here; quality metrics will. The reporting layer to set up alongside the work above:
  • Self-reported attribution on demo-request forms — "How did you first hear about Siteimprove?" with AI engines as named options
  • Branded search volume trends in Google Search Console, a clean proxy for AI-driven awareness lift
  • Own-domain AI citation share on a fixed prompt set, tracked weekly
  • Conversion rate by acquisition channel (rather than session counts, which will mislead)
  • Revenue per visitor by source
Siteimprove.ai's own AI Visibility dashboard handles the citation-share tracking natively. The rest sits in HubSpot, GSC and the analytics layer Siteimprove already operates.
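The weekly citation-share metric reduces to a small calculation over the prompt-test log. A sketch, assuming the log is a flat list of cited URLs across the fixed prompt set (the log format itself is hypothetical):

```python
from collections import Counter
from urllib.parse import urlparse

def citation_share(cited_urls, own_domain="siteimprove.com"):
    """Own-domain share of citations plus the top competing domains.
    `cited_urls`: every URL cited by the AI engines across one run
    of the fixed prompt set (hypothetical log format)."""
    domains = [urlparse(u).netloc.removeprefix("www.") for u in cited_urls]
    counts = Counter(domains)
    total = sum(counts.values()) or 1          # avoid div-by-zero on empty runs
    share = counts.get(own_domain, 0) / total
    return share, counts.most_common(3)
```

Tracked weekly against the same prompt set, this is the number the M3 milestone (4% → 12%+) is scored on; the competitor tail it returns is the early-warning signal for who is absorbing the lost share.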

Falsification check

If by month 3 the own-domain citation share doesn't exceed 12% on the target prompt set, the comparison hub strategy isn't working — escalate hub depth and Reddit seeding. If the brand perception scan still anchors on DCI/QA/SEO with zero AEO/agentic mentions, the "How Siteimprove uses AI" page didn't catch — escalate to a dedicated AI capabilities microsite plus Wikipedia source-building. If Bing index recovery isn't measurable within 30 days of the unblock, escalate to Bing's webmaster support.

Closing

The AI bots are allowed in. The category authority is real: Forrester Wave Leader Q4 2025, Gartner AEO Market Guide Representative Vendor 2026, G2 Leader across four categories. The product positioning is current. The infrastructure underneath has not caught up. Bingbot is being turned away at the door (including on robots.txt itself, which has standard-defined consequences), the homepage's schema covers the brand but not the platform pillars, the seven existing competitor comparison pages aren't reaching the index that ChatGPT and Copilot actually read, and the AI's stored memory of the brand is three years out of date. Each gap is a small fix; together they explain the share.

A 12-to-18-month window is open while AI engines re-decide who they cite. Acting in the next quarter is meaningfully cheaper than acting in the next year.

What this needs to move

The work is light on dev resourcing and heavy on internal coordination. The 30-day foundation phase needs a half day of web ops time to investigate and remove the WAF rule blocking Bing, half a day of front-end dev time to ship JSON-LD schema across the homepage and AEO product page, and editorial sign-off to fix the duplicate homepage label. The 60-day phase needs content team capacity for three new competitor comparison pages plus a recalibration of the existing seven, and editorial sign-off to publish the "How Siteimprove uses AI" page and the pricing page. The 90-day phase needs sustained capacity on Reddit and G2 review-platform engagement. The single highest-friction approval is likely the pricing page; the rest are operational.

Happy to go deeper on the comparison-hub structure, the WAF investigation, or the prompt-test methodology. Any of the three warrants a follow-up call.

Audit by
Onur Büyüktezgel
Independent SEO & AEO consultant
Prepared for
Jen Jones · CMO, Siteimprove
May 9, 2026 · Internal use
Siteimprove · AEO/SEO Audit · May 2026
End of main report · Appendix A follows
AAppendix · State of AEO 2026

State of AEO 2026 — research outline

Priority 09 in §08 names "publish original research" as a high-leverage move in one line. This appendix elaborates that line into a working brief — what the report would publish, why each chapter earns citations, what the production effort looks like, and how to distribute the asset for maximum AI-engine citation lift.

Strategic case

§05 shows that 96% of citations on the target prompt set come from third-party sources, and the brand-perception scan shows the AI's stored memory anchored on Siteimprove's 2022 product. Both layers fail because Siteimprove publishes commentary while competitors publish data. A single well-cited annual benchmark report compounds harder than 50 blog posts: it earns the third-party citations §05 identifies as the missing growth lever, and it provides the evidence layer underneath the AEO-and-agentic positioning that §06.e and §07 both depend on. The Forrester Wave and Gartner Market Guide recognitions are upstream of this asset; this report is the work that converts category recognition into citation density.

What the report would publish — seven chapters

Each chapter anchored on a single hero finding that's quotable and extractable. The hero finding is what journalists copy-paste; the supporting findings are what AI engines pull as evidence. Every chart is designed to be embeddable with attribution back to a deep-linked anchor on the report hub.

01The 2026 AEO baseline
Hero: median enterprise own-domain citation share across ChatGPT, Copilot, Gemini, Perplexity and Claude. Supporting: distribution by source class (third-party reviewers, aggregators, community, own-domain), per-engine variance, year-over-year shift. The corollary to §05's 4% figure for Siteimprove specifically, scaled to industry.
02The industry citation leaderboard
Hero: which industries achieve the highest own-domain AI citation share. Supporting: per-industry breakdown (Finance, Healthcare, Higher Education, Government, Retail, Manufacturing, SaaS), the third-party sources dominating each industry, "best in class" company profiles. Trade-press catnip — HigherEdDive, RetailDive, FinTech Magazine pick up the per-industry slice.
03Where the AI's memory is stale
Hero: median age of training-era brand claims that AI engines still repeat. Supporting: the mechanism by which a 2022 product description survives in 2026 AI answers, how often grounded retrieval overrides ungrounded memory, examples of brand-perception drift Siteimprove can credibly source from its scan-corpus customers. Directly extends the §05 perception-scan methodology.
04The crawler-blocking problem
Hero: share of enterprise sites that block Bingbot or AI-search crawlers at the WAF or origin level. Supporting: the most-common WAF rule patterns that cause it, the consequences for AI visibility, recovery timelines. The §04 finding scaled to industry — a story-driving chapter for the press hook and a defensible reason for AEO buyers to scan their own infrastructure.
05The schema gap
Hero: share of enterprise product pages with no SoftwareApplication or Product schema. Supporting: schema coverage by industry, the correlation between schema density and AI citation share (per the Knowledge-Graph mechanism in §07), specific schema types that earn lift.
06Cross-pillar correlations
Hero: pages in the top accessibility-and-content-quality quartile earn X% more AI citations than the bottom quartile, controlling for backlinks. Supporting: the correlation patterns only Siteimprove can measure because it scans across all four pillars (accessibility, analytics, SEO/AEO, content). The chapter that empirically justifies the Siteimprove.ai unified-platform positioning — competitors with single-axis tools cannot replicate it.
07The 2026–2027 forecast
Hero: predicted citation-share shift as Bing index stalls, Brave Search grows, and ChatGPT advertising matures. Supporting: EAA enforcement one-year retrospective, ADA Title II April 2026 deadline status, ChatGPT advertising auction maturity, brands most likely to lose visibility in the next 12 months and why. Legal-trade and analyst citation density — Law360, Bloomberg Law, JD Supra, and the analyst notes Siteimprove already participates in.
Methodology chapter
The methodology section is the most-read appendix in any benchmark report — it is what academics, journalists, and analysts evaluate before deciding whether to cite. Treat as a chapter, not a footnote: corpus size, anonymization protocol (k-anonymity thresholds, no per-customer attribution), sampling design, statistical-significance thresholds, replication instructions.

The linkable asset stack

The report is the bait; the citation machinery is the surface around it. Six concurrent assets, ordered by leverage.

  1. The hub page at /state-of-aeo-2026/

    Deep-anchor link per chart, per finding, per industry. Built so a journalist citing a single stat can deep-link to its exact source. The hub is where every press mention and academic citation eventually points.

    infra
  2. The ungated press PDF — no lead-gen gate

    Direct download for journalists and academic researchers. Lead-gen gating fragments academic citations and is the largest single mistake B2B SaaS makes with research reports. Marketing's lead-capture goal should be served by a separate "executive summary deck" gated asset, not the report itself.

    design
  3. Seven per-industry mini-reports at /state-of-aeo-2026/[industry]/

    Templated landing pages that pull only that industry's slice. Double as SEO money pages for industry queries Siteimprove already targets in §03's solutions hub. Each one becomes a citation magnet for its trade press.

    2 wk
  4. Embeddable charts with attribution

    Each major chart has an iframe embed code that includes attribution back to the hub. This is the mechanism by which WebAIM's Million accumulates thousands of citations year after year — embed velocity accumulates across publishers and academic-paper appendices.

    infra
  5. Press kit and four-week analyst pre-briefing window

    Five pre-written quotable stats, a CMO-level briefing deck, two named expert sources from inside Siteimprove available for reporter calls. Gartner, Forrester and IDC analysts on the AEO and DXP coverage list get a four-week pre-publication window in exchange for inclusion in their next research note. The Forrester and Gartner recognitions named in §03 are the relationships that make this work.

    2 wk
  6. Quarterly refresh on one dimension

    The report's data is updated once a quarter on a single chapter (e.g., Chapter 02 industry leaderboard refresh in Q3, Chapter 04 crawler-blocking refresh in Q4). Keeps the asset current in AI-engine retrieval indices and earns net-new citations across the year, not just at launch. Costs roughly one-fifth the effort of the annual flagship.

    quarterly
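Asset 04's embed mechanism can be sketched in a few lines — the chart URL, anchor slug, dimensions, and title are assumptions, not a built endpoint:

```html
<!-- Illustrative embed snippet; URL paths and slugs are placeholders. -->
<iframe
  src="https://www.siteimprove.com/state-of-aeo-2026/embed/citation-share"
  width="640" height="420" loading="lazy"
  title="Median enterprise own-domain AI citation share, 2026">
</iframe>
<p>
  Source:
  <a href="https://www.siteimprove.com/state-of-aeo-2026/#citation-share">
    Siteimprove, State of AEO 2026
  </a>
</p>
```

The attribution link outside the iframe is the part that compounds: it survives even when the iframe is stripped by a publisher's CMS, and it deep-links to the exact finding per the hub-page design in asset 01.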

Production scope

Pod size
5–6 people · 1 data engineer · 1 senior analyst/author · 1 designer · 1 PR lead · 1 product marketer · 0.5 legal review
Duration
12 weeks kickoff to publication
Cost shape
Mostly internal time. External design and PR placement are the discretionary spend buckets.
Cadence
Annual flagship in Q2 (anchored to EAA anniversary) · quarterly micro-refreshes thereafter
Dependencies
Legal sign-off on scan-corpus anonymization (k-anonymity thresholds, no per-customer attribution). Engineering work on a repeatable extraction pipeline so year-two costs ~half of year-one.
Maps to §08
Priority 09 in the action list. Production would begin inside the M2 window; the M3 milestone "citation share moves 4% → 12%+" cannot be hit without this asset producing earned citations.

Anti-patterns to avoid

  1. The self-serving leaderboard. If the data shows Siteimprove customers outperform on a given metric, footnote it transparently. If it doesn't, do not engineer the chart to make them look better. One detected case of methodology cherry-picking destroys the asset for a decade.
  2. The vendor-PDF aesthetic. The reference points are McKinsey Global Institute, Cloudflare Radar, Datadog State of DevOps — not gated SaaS ebooks. Design and writing budget reflect this.
  3. Burying the methodology. See the methodology chapter above. Treat it as a chapter, not a footnote.
  4. Skipping the embed code. If charts cannot be embedded with attribution, citation count drops by an order of magnitude. The compounding mechanism breaks.
  5. One-shot release with no refresh cadence. A single annual report decays in AI retrieval indices. Quarterly micro-refreshes are what keep the asset current.
The decision being requested

Greenlight the 12-week build of the 2026 flagship, targeted at June 2026 publication to anchor on the EAA enforcement anniversary. Approve the cross-functional pod and the methodology/legal review track. Name the executive sponsor today so analyst pre-briefings can be booked four weeks ahead of release. If full scope is too ambitious for this quarter, the alternative is Chapter 01 alone as a standalone "2026 AEO Citation Benchmark" in eight weeks, with the remaining chapters released quarterly through 2027 — same compounding outcome, distributed effort.