Most SEO guides on the web were written between 2015 and 2021 and never quite caught up to what actually happened next. Google's algorithm moved to entity-based understanding. AI search engines — ChatGPT browsing, Perplexity, Claude, Gemini, AI Overviews — started handling a growing share of queries. Content production became something a well-briefed language model can do at ten times the speed any human team could manage. None of this killed SEO. It widened the discipline.
This guide is the operating manual we give new clients at Axccelerate. It's written for marketing leads, founders, and in-house SEO operators who need to move organic from “the thing we probably should do” to an infrastructure line item in the growth stack — funded like engineering, measured in pipeline, compounding every month.
01. Why SEO still matters in the AI-search era
A common line from founders in 2026: “Isn't SEO dead now that ChatGPT and Perplexity are answering questions directly?” It's the wrong question. Organic search has never been a single channel — it's the bundle of surfaces that route people to content based on intent. The surfaces have multiplied, not disappeared.
Google's core search still drives the majority of high-commercial-intent traffic on the web. AI Overviews, AI Mode, and AI-generated summary cards are built on top of indexed web content — the same content SEO optimises for. ChatGPT browsing and Perplexity fetch live web results and cite the sources. Claude and Gemini embed web answers in their responses. Every one of these systems consumes the same underlying substrate: crawlable, well-structured, authoritative content with clear entity relationships.
SEO didn't die. It quietly became the discipline of being the source everyone else cites — whether the citer is a human, a human assisted by AI, or an AI system answering on behalf of a human.
The economics haven't changed, either. Organic traffic is still the only acquisition channel that compounds — paid media meters reset the day you cut spend. And the distribution of clicks on Google has always been winner-take-most (see the chart below): position #1 takes about 40% of clicks, position #5 takes under 6% (Advanced Web Ranking's ongoing CTR study). Those numbers haven't moved meaningfully in a decade. What's changed is that ranking #1 is no longer enough if the AI Overview above you answers the user's question without a click.
The practical response is to broaden what you're optimising for. The next sections break that into four working pillars — technical, on-page, off-page, and AI-visibility — with explicit tactics for each.
02. How SEO actually works in 2026
Strip away the jargon and modern SEO has four pillars. Three of them are decades old; the fourth is new.
Technical SEO
The plumbing. Can search engines crawl every page worth indexing? Do they render without JavaScript errors? Are Core Web Vitals (LCP, INP, CLS) inside Google's thresholds on mobile? Is structured data (schema.org JSON-LD) clean, consistent, and covering the types that matter for your business? Is your information architecture shallow enough that authority flows to commercial pages? Technical SEO is the substrate — if it's broken, nothing else matters.
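To make the structured-data point concrete, here is a minimal Organization JSON-LD sketch. The names and URLs are placeholders, and the right types for any given page (Article, Product, FAQPage) depend on the business:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-co",
    "https://en.wikipedia.org/wiki/Example_Co"
  ]
}
```

The `sameAs` array is what ties your domain to the entity records (LinkedIn, Wikipedia, Wikidata) that search engines and LLMs use to disambiguate your brand.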
On-page SEO
Content written to answer specific searcher intent and cover the semantically-related entities Google expects. The old mental model was “keywords in titles, keywords in headings, keyword density.” The modern one is topic coverage: if the user's query is about “AI lead scoring for hospitality,” the ranking page needs to discuss scoring methods, hospitality-specific signals, integration patterns, common vendors, and measurement — because Google's models expect those entities to co-occur when the topic is covered well.
Off-page SEO
Authority signals that originate outside your domain. Backlinks from relevant, authoritative sites remain the clearest external vote on your content. Digital PR, expert commentary, podcast appearances, and brand mentions that get picked up in press all feed the graph. In 2026 the same signals — citations, mentions, entity associations — also drive whether an AI system sees your brand as the right source to name.
AI-visibility (the new fourth pillar)
A new discipline that sits next to the traditional three. It's the practice of making your brand, products, and content retrievable and citable by large language models. We'll cover the specifics in Section 06. The chart below shows approximate weight across the four pillars — directional, not absolute, and it shifts with query intent.
Technical SEO
- Core Web Vitals (LCP, INP, CLS)
- Crawlability & indexation
- Mobile-first rendering
- Structured data (JSON-LD)
- Information architecture
- HTTPS, canonicalisation, hreflang

On-page SEO
- Search-intent match
- Entity coverage (topic + semantically-related entities)
- Internal linking graph
- Title + meta + heading hierarchy
- Content freshness & depth
- E-E-A-T signals (author, sources, citations)

Off-page SEO
- Authoritative backlinks
- Brand mentions & digital PR
- Entity associations (Wikidata, Wikipedia)
- Reviews & UGC signals
- Referral traffic quality

AI-visibility
- Citation-ready content structure
- Entity-first writing for LLM retrieval
- llms.txt + content licensing signals
- Answer-engine optimisation (AEO)
- Brand presence in LLM training data
03. Technical foundations that never change
Before a single line of new copy ships, the site needs to be technically healthy. Most “SEO isn't working” problems we're brought in to fix turn out to be crawl budget wasted on duplicate URLs, slow LCP, or a JavaScript app that renders content after Googlebot has already moved on.
The short list
- Indexation control. Canonical tags on every indexable URL, noindex on anything duplicative or thin, sitemap accurately reflects the set of canonical pages.
- Core Web Vitals under threshold. LCP ≤ 2.5s on mobile, INP ≤ 200ms, CLS ≤ 0.1 (per Google's published CWV thresholds on web.dev). Measured in the field (Chrome User Experience Report), not in the lab.
- Render-first content. Critical content visible in the initial HTML for client-side apps. Static generation (SSG, like the site you're reading) or server-side rendering (SSR) is preferred over pure client-side rendering (CSR) for indexable pages.
- Structured data. Organization, WebSite, BreadcrumbList site-wide; Article / Product / FAQPage / LocalBusiness per page type.
- Information architecture. Category → topic → detail, with no more than 3 clicks from homepage to any commercial page. Orphan pages (zero internal links) are almost always a mistake.
- Mobile-first rendering. Google crawls with a mobile user-agent; if mobile and desktop serve different content, the mobile version is what ranks.
Everything in that list is observable. Run a crawl with a tool like Screaming Frog or Sitebulb, pull CWV data from Search Console, check structured data in Rich Results Test — you'll have a prioritised fix list inside a day. Fix that before touching content strategy.
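Because everything on the list is observable, parts of it can be checked mechanically. A minimal Python sketch (stdlib only; the thresholds are Google's published "good" values, and the HTML snippet is hypothetical) that classifies CWV field data and flags canonical/noindex signals on a page:

```python
from html.parser import HTMLParser

# Google's published "good" CWV thresholds, measured at the 75th percentile
# of field data: LCP in seconds, INP in milliseconds, CLS unitless.
CWV_GOOD = {"LCP_s": 2.5, "INP_ms": 200, "CLS": 0.1}

def cwv_status(field_p75: dict) -> dict:
    """True per metric when the field value is inside the 'good' threshold."""
    return {m: field_p75[m] <= limit for m, limit in CWV_GOOD.items()}

class IndexationCheck(HTMLParser):
    """Scan a page's HTML for a canonical link and a robots noindex."""
    def __init__(self):
        super().__init__()
        self.canonical = None
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")
        if tag == "meta" and a.get("name") == "robots":
            self.noindex = self.noindex or "noindex" in (a.get("content") or "")

html = '<link rel="canonical" href="https://example.com/a"><meta name="robots" content="noindex">'
check = IndexationCheck()
check.feed(html)
print(check.canonical, check.noindex)                          # https://example.com/a True
print(cwv_status({"LCP_s": 2.1, "INP_ms": 350, "CLS": 0.05}))  # INP fails, the rest pass
```

A crawler like Screaming Frog does this at site scale; the value of a tiny script like this is wiring the same checks into CI so regressions never ship.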
04. On-page: writing for entity search
The keyword is not the target. The entity behind the keyword is the target.
Google's natural-language understanding has been entity-based since the Knowledge Graph and Hummingbird, and was deepened by BERT, MUM, and subsequent model updates. When someone searches “luxury villa rentals Bali,” the ranking system isn't matching the string — it's identifying the entities (luxury villa, Bali, rental intent, location-specific) and finding pages that cover those entities and their expected related entities deeply.
What this means in practice
- Research topics, not keywords. Build topic clusters — a pillar page on the parent topic, supporting pages on specific entities underneath it. Example: pillar “AI lead scoring,” supporting pages on feature engineering, threshold calibration, industry-specific signals, vendor comparisons, implementation patterns.
- Cover semantically-related entities. If you're writing about SEO, a strong page will naturally touch on Core Web Vitals, backlinks, schema, technical audits, rankings — because those are the entities a reader (and a ranking model) expects. Keyword-density tools are useful as a sanity check; comprehensive topic coverage is the actual target.
- Match search intent precisely. Informational queries need guides and explainers. Commercial queries need comparisons and vendor pages. Transactional queries need direct-offer pages with clear CTAs. Serving the wrong format for the query is the fastest way to waste a #1 ranking.
- Internal linking is leverage. Every supporting page should link up to its pillar with descriptive anchor text, and down to relevant siblings. Internal linking redistributes authority and is something you fully control — unlike backlinks.
- E-E-A-T signals. Real author bylines, linked author pages with credentials, cited sources, dates, fact-box call-outs. Google's models look for these explicitly on YMYL (Your-Money-Your-Life) queries — finance, health, legal, education — but they help everywhere (see Google's own Search Quality Evaluator Guidelines).
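The pillar-and-supporting structure can be audited mechanically as new pages ship. A small Python sketch (the URLs and cluster map are hypothetical) that flags supporting pages missing their upward link to the pillar:

```python
# A topic cluster modelled as a pillar URL plus supporting pages,
# each mapped to its list of outbound internal links.
cluster = {
    "pillar": "/ai-lead-scoring",
    "supporting": {
        "/ai-lead-scoring/feature-engineering": ["/ai-lead-scoring"],
        "/ai-lead-scoring/threshold-calibration": [],
        "/ai-lead-scoring/vendor-comparison": ["/ai-lead-scoring",
                                               "/ai-lead-scoring/feature-engineering"],
    },
}

def missing_pillar_links(cluster: dict) -> list:
    """Supporting pages with no internal link up to the pillar."""
    pillar = cluster["pillar"]
    return [page for page, links in cluster["supporting"].items()
            if pillar not in links]

print(missing_pillar_links(cluster))  # ['/ai-lead-scoring/threshold-calibration']
```

Run on a full crawl export, the same check surfaces orphan pages too: any URL whose inbound internal link count is zero.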
Content production at volume — without dropping quality
Writing topic clusters at meaningful scale used to need a full content team. It doesn't any more. Our typical production pipeline looks like this: editorial strategy defines the topic cluster and publishes a brief; an LLM drafts to the brief within specified constraints (voice, length, entity coverage, factual sources); a human editor tightens the draft, adds judgement, verifies claims, and ships. A single editor + AI pipeline produces 4–6 publishable pieces per week — up from the 1–2 a single writer shipped unaided.
AI isn't replacing the editorial hand; it's compressing the repetitive parts. First drafts, schema generation, meta descriptions, internal-link suggestions, summaries — all handled by the pipeline. Research, strategy, point-of-view, and final quality stay human.
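The brief-draft-edit loop can be sketched as a pipeline with a pluggable drafting step. In the sketch below, `draft_fn` is a placeholder for any LLM call, not a real API; the mechanical QA (entity coverage, length budget) runs before a human editor ever sees the draft:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Brief:
    topic: str
    required_entities: List[str]  # entities the draft must cover
    max_words: int

def produce(brief: Brief, draft_fn: Callable[[Brief], str]) -> Tuple[str, List[str]]:
    """Draft via any LLM callable, then run mechanical QA before human edit."""
    draft = draft_fn(brief)
    issues = []
    lower = draft.lower()
    for entity in brief.required_entities:
        if entity.lower() not in lower:
            issues.append(f"missing entity: {entity}")
    if len(draft.split()) > brief.max_words:
        issues.append("over length budget")
    return draft, issues

# Stub drafting step standing in for a real model call.
def stub_draft(brief: Brief) -> str:
    return f"A short draft about {brief.topic} covering threshold calibration."

draft, issues = produce(Brief("AI lead scoring", ["threshold calibration", "vendors"], 500),
                        stub_draft)
print(issues)  # ['missing entity: vendors']
```

Anything that fails QA goes back for a redraft automatically; the editor only spends time on drafts that already meet the brief's mechanical constraints.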
05. Off-page: authority AI systems can see
External signals still matter — arguably more, because AI systems explicitly weight the prominence and reputation of the brand they're citing. A page with identical on-page quality ranks differently if one domain has 500 relevant references across credible sites and the other has 20.
Where signal comes from now
- Editorial backlinks. Still the dominant signal. Links from topically relevant sites that editors actually chose to include. One link from a relevant industry publication is worth a hundred from low-quality directories.
- Digital PR. Commentary, data stories, expert quotes that get picked up in tier-1 press. These build authority AND entity associations — both of which feed classical search and AI retrieval.
- Brand mentions (linked or not). Co-occurrence of your brand name with topical entities across the web builds the “is X a legitimate authority on Y” signal that ranking systems use.
- Structured entity identity. Wikipedia, Wikidata, Google Knowledge Panel, Crunchbase, LinkedIn Company Page — the structured-data sources LLMs are trained on. If your brand exists in these, you show up in AI answers.
- Review signals. Aggregate review scores and written reviews on G2, Capterra, Trustpilot, industry-specific review sites. Weighted heavily for commercial-intent queries.
The outreach model has changed: tactics that worked in 2018 — guest posts, link exchanges, bulk outreach — produce almost nothing in 2026 and often hurt. What works is genuinely useful proactive outreach: original research, expert-contributed commentary, data studies journalists actually want to cite. It's slower, but the links you earn that way are the ones that still matter in 18 months.
06. Being referenceable inside ChatGPT, Claude, Perplexity, and AI Overviews
AI search is not a fad surface. Users are asking research questions, product-comparison questions, and vendor-evaluation questions inside LLM-powered interfaces in growing volume. The practice of being the source those systems cite — answer engine optimisation (AEO), or AI-visibility — is now a first-class workstream.
How LLMs find and cite content
Two paths. The first is training — content crawled during the model's training cutoff that became part of its knowledge. The second is retrieval — for live-web-enabled systems (ChatGPT browsing, Perplexity, Gemini with grounding, Claude with search), the model fetches fresh pages at answer-time and cites them.
Both paths reward the same things: content that's structured to be easy to excerpt, entity associations that make your brand recognisable, and authority signals that make you a preferred citation.
Practical moves
- Write citation-ready content. Clear H-tagged questions, short direct answers right under the heading, data points that can be quoted verbatim, explicit attributions. If an LLM could grab a 60-word excerpt that cleanly answers the question, you're cite-ready.
- Maintain entity consistency. Your brand, your products, and your point of view should appear with consistent naming across your site, third-party directories, Wikipedia/Wikidata, and press coverage. Inconsistency confuses the entity graph LLMs build.
- Ship an llms.txt. The llms.txt standard (like robots.txt for AI crawlers) lets you signal which content is canonical, how you prefer it licensed and cited, and what's off-limits. Early adoption, growing relevance.
- Own the SERP for your brand terms. Branded queries are often the first step before someone asks an LLM for recommendations. A clean branded SERP (Knowledge Panel, site-wide sitelinks, positive review mentions) translates directly into stronger LLM answers about you.
- Digital PR for training-set placement. Commentary published in sites that are heavily represented in LLM training data (tier-1 trade press, major outlets, structured data sources) helps the model “know about you” even without live retrieval.
- Measure AI visibility. Track how ChatGPT, Perplexity, Claude, and AI Overviews answer queries about your category. Are you cited? Where? Over time? Tools like Profound, Bluefish, or manual monthly checks by a RevOps operator are adequate to start.
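The 60-word excerpt test is easy to automate as a content QA step. A sketch (the headings and answers are made up) that flags question sections whose opening answer is too long to quote cleanly:

```python
def cite_ready(sections, max_words=60):
    """For each (heading, first_paragraph) pair, True when the opening
    answer is short enough for an LLM to excerpt verbatim."""
    return {heading: len(answer.split()) <= max_words
            for heading, answer in sections}

sections = [
    ("What is answer-engine optimisation?",
     "Answer-engine optimisation is the practice of structuring content "
     "so AI systems can retrieve and cite it."),
    ("How does it work?", "word " * 120),  # a 120-word ramble
]
print(cite_ready(sections))  # first True, second False
```

A crude heuristic, deliberately: the point is to catch pages that bury the answer three paragraphs down, which is the most common citation-readiness failure we see.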
The meta-principle: being cited by an AI is an earned outcome, not a hack. It follows from the same work — clear, authoritative, well-structured content — that ranks you in traditional search. The workstream isn't separate; it's an extension.
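As a concrete reference, here is a minimal llms.txt sketched per the emerging spec: an H1 name, a one-line blockquote summary, then H2 sections of annotated links. Every URL and description below is a placeholder, and the spec itself is still evolving:

```text
# Axccelerate
> AI-accelerated SEO and growth engineering for B2B teams. (illustrative summary line)

## Guides
- [SEO operating manual](https://example.com/guides/seo): the full guide
- [AI-visibility checklist](https://example.com/guides/aeo): AEO tactics

## Optional
- [Blog archive](https://example.com/blog): older posts, lower priority
```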
07. Building an SEO system, not running a campaign
The biggest shift in modern SEO isn't a tactic. It's the mental model. A campaign has a start date, an end date, and a deliverable list. A system runs continuously and compounds.
The compounding part is what most organisations miss. Every technically-sound page you publish keeps earning traffic for years. Every authoritative backlink keeps voting for your domain. Every entity association keeps reinforcing your presence in both classical and AI search. Treating SEO like a project — sprint, ship, stop — loses the compounding entirely.
What a modern SEO system looks like
- A content engine. A standing editorial calendar with a weekly publication cadence, an AI-accelerated production pipeline, and clear topic-cluster targets that align with commercial intent.
- A technical-health watch. Weekly monitoring of Core Web Vitals, crawl stats, index coverage. Regressions get triaged before they affect rankings.
- An authority layer. Ongoing digital PR, expert commentary, data studies, relationship-building with journalists and podcasters in your space.
- An AI-visibility monitor. Monthly checks on how LLMs answer category queries, iterating content structure when you're missing from citations.
- Programmatic scale (where applicable). Thousands of pages generated from structured data — locations, inventory, categories, use cases — with AI content QA layered on top to prevent thin-content penalties.
- Automated internal linking. Tooling that suggests or auto-creates internal links as new content ships, so the internal link graph stays dense as the content library grows.
- A measurement layer. Organic sessions tied to pipeline, attribution traced end-to-end, dashboards that reflect system health weekly — not quarterly.
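The programmatic-scale item deserves one concrete illustration. In the sketch below the records, the page template, and the word-count threshold are all made up; the point is the QA gate, where thin pages get held for review rather than shipped:

```python
def build_pages(records, min_words=40):
    """Render pages from structured records; hold thin ones for review."""
    published, held_for_review = [], []
    for rec in records:
        body = f"{rec['name']} in {rec['city']}: {rec['description']}"
        if len(body.split()) >= min_words:
            published.append(body)
        else:
            held_for_review.append(rec["name"])  # thin page: block it
    return published, held_for_review

records = [
    {"name": "Villa Uma", "city": "Canggu",
     "description": "three bedrooms, rice-field views, chef service, and a "
                    "private pool, with local-area and booking detail below."},
    {"name": "Villa Kecil", "city": "Ubud", "description": "two bedrooms."},
]
published, held = build_pages(records, min_words=15)
print(held)  # ['Villa Kecil']
```

In production the gate is richer than a word count (duplicate-content similarity, entity coverage, unique data per page), but the architecture is the same: generation and publication are separate steps with QA in between.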
This is where AI earns its keep. Content drafts, schema generation, internal-link recommendations, page summaries, metadata, QA for thin content — all handled by the system. Humans focus on strategy, editorial judgement, and the relationships that drive authority.
The workflow cadence, at the level of rhythm, looks like this:
- Audit. Technical health, current indexation, content inventory, competitor gap — baseline the system before prescribing anything.
- Architect. Topic clusters, entity map, technical fixes prioritised by impact, content roadmap tied to commercial-intent keywords.
- Build. Technical fixes shipped. Content produced via AI-accelerated pipelines, human-edited, published with internal linking baked in.
- Promote. Digital PR, targeted backlink outreach, entity associations on Wikidata/Wikipedia, distribution in the feeds AI systems learn from.
- Measure. Ranking + organic traffic is a lagging proxy. Attribution-traced pipeline contribution is the real number. Iterate monthly.
08. Measuring what compounds
If you can't tie organic traffic to pipeline, you can't justify the investment — and you can't tune the system. The first time we see an “our SEO isn't working” complaint, our usual follow-up question is “what's your organic-to-pipeline attribution?” About 70% of the time, the answer is either “we track organic sessions in GA4” or “we don't.” Session counts aren't an outcome.
The metrics that matter
- Qualified organic sessions. Sessions from organic that match your ideal-customer profile (pattern-match on firmographic data, engagement signals, and fit score). Blind session counts inflate easily; qualified sessions don't.
- Organic-assisted conversions and attributed revenue. Multi-touch attribution that gives organic fractional credit across every touchpoint in a converting journey. Last-click undercounts SEO massively.
- Pipeline influenced by organic. Every open opportunity with any organic touch in its history, summed by deal value. For closed business, deal count × ACV gives you the revenue number that gets a CFO's attention.
- Ranking depth for commercial-intent queries. Leading indicator. Useful for catching slipping positions before they show up in session declines.
- AI-citation presence. Are you cited by LLMs for category queries? Trend over time. New metric, worth adding.
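The influenced and attributed views can both be computed straight from a CRM export. A sketch with made-up deals (the touch lists and ACVs are illustrative, and linear credit is just one multi-touch model among several):

```python
deals = [
    {"name": "Acme",    "acv": 40_000, "touches": ["paid", "organic", "sales"]},
    {"name": "Globex",  "acv": 25_000, "touches": ["paid", "sales"]},
    {"name": "Initech", "acv": 60_000, "touches": ["organic"]},
]

def organic_influenced(deals):
    """Pipeline influenced: full ACV of any deal with an organic touch."""
    return sum(d["acv"] for d in deals if "organic" in d["touches"])

def organic_attributed(deals):
    """Linear multi-touch: organic gets 1/n credit per touch on each deal."""
    return sum(d["acv"] * d["touches"].count("organic") / len(d["touches"])
               for d in deals)

print(organic_influenced(deals))             # 100000
print(round(organic_attributed(deals), 2))   # 73333.33
```

Influenced always reads higher than attributed; reporting both keeps the CFO conversation honest about how much credit organic is actually claiming.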
The chart below shows the economic logic over 18 months — illustrative, but the shape is consistent across our engagements: organic starts more expensive per visit (you're paying for technical work, content production, and authority building before traffic arrives), crosses under paid between month 4 and month 6, and keeps dropping. Paid media stays roughly flat per visit forever, because the auction doesn't give discounts for loyalty.
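That crossover shape can be reproduced with a toy model. Every number below is illustrative rather than client data, but the mechanics match: a fixed monthly organic spend divided by compounding visits eventually undercuts a flat auction price.

```python
PAID_CPC = 2.50            # flat paid cost per visit (illustrative)
ORGANIC_MONTHLY = 10_000   # monthly content + technical spend (illustrative)

def organic_cost_per_visit(month, start_visits=1800, growth=1.25):
    """Cost per organic visit when traffic compounds 25% month over month."""
    visits = start_visits * growth ** (month - 1)
    return ORGANIC_MONTHLY / visits

# First month where organic undercuts the flat paid price.
crossover = next(m for m in range(1, 25) if organic_cost_per_visit(m) < PAID_CPC)
print(crossover)  # → 5
```

Change the growth rate or spend and the crossover month moves, but the paid line never bends: the auction resets every month.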
09. What to look for in an SEO partner
Whether you're hiring in-house, using an agency, or running a hybrid, the same questions separate modern operators from 2019-vintage retainer shops.
Questions worth asking
- Who on your team actually does the technical work? The answer should be an engineer, not an account manager who will “brief the dev team.” Technical SEO at scale is engineering, not project management.
- How do you use AI in production? Specific answers only. “We use ChatGPT sometimes” is a red flag. A strong answer names the pipeline, the QA process, and where human judgement stays in the loop.
- How do you measure contribution to pipeline? If the answer is “we send you a GA4 report,” keep looking. A serious partner will explain their attribution model, how it ties to your CRM, and show you a real client example.
- What's your position on AI-search visibility? They should have a coherent view on answer-engine optimisation, not blank stares.
- Can I see models you've shipped? If they claim AI capability, they should be able to show you a model in production, explain how it's fit to client data, and walk through how it's monitored.
- What's your approach when something doesn't work? Everyone has lost rankings at some point. You want a partner who diagnoses and iterates, not one who explains around it.
That's how we run engagements at Axccelerate — engineers in the scoping call, AI pipelines in production, attribution built into InsightAX, and a standing iteration cadence. If that's the shape of operator you're looking for, the next step is a 30-minute scoping call and a scoped proposal within two working days.
- Google Search Central documentation. The canonical reference for how Google crawls, indexes, and ranks.
- web.dev — Core Web Vitals. Google's published CWV thresholds, measurement methodology, and field data guidance.
- Search Quality Evaluator Guidelines (PDF). Google's own rater playbook. The E-E-A-T chapter is where the framework comes from.
- Schema.org — structured data reference. Every JSON-LD type and property you'd use for rich results and entity markup.
- llms.txt standard. The emerging spec for signalling canonical content and citation preferences to LLM crawlers.
- Advanced Web Ranking — CTR study. Ongoing industry dataset on organic CTR by SERP position, device, and intent.