How search engines work

Search engines do three things: crawl the web, index what they find, and rank pages in response to queries. Every SEO tactic touches at least one of these stages.

Stage 1: Crawling

Search engines use bots (Googlebot, Bingbot, etc.) to discover pages. Bots follow links from known pages to new ones. Pages a bot can't reach (orphaned by poor site architecture, blocked by robots.txt) never get crawled, and pages it can fetch but is told to exclude (noindex tags) never enter the index.
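
A quick way to test the robots.txt case is Python's standard-library robots.txt parser. A minimal sketch, assuming a placeholder domain (www.example.com) and an invented path:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the site's robots.txt (placeholder URL).
    rp = RobotFileParser()
    rp.set_url("https://www.example.com/robots.txt")
    rp.read()

    # Can a given bot fetch a given page? False means it is blocked from crawling.
    print(rp.can_fetch("Googlebot", "https://www.example.com/private/report"))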

Your job in SEO: make sure every page you want ranked is discoverable via links from already-indexed pages.
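
One way to audit discoverability is an orphan-page check: walk your site's internal link graph from the homepage and compare what you can reach against the pages you expect to rank. A minimal sketch over a toy in-memory link graph; in practice the graph would come from your own crawl or a sitemap export, and all the paths here are invented:

    from collections import deque

    # Toy internal link graph: page -> pages it links to (invented paths).
    links = {
        "/": ["/blog", "/products"],
        "/blog": ["/blog/post-1"],
        "/products": [],
        "/blog/post-1": [],
    }
    expected = {"/", "/blog", "/blog/post-1", "/products", "/landing-page"}

    # Breadth-first search from the homepage over internal links.
    seen = {"/"}
    queue = deque(["/"])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)

    # Pages you expect to rank but that no crawl path reaches are orphans.
    print(expected - seen)  # {'/landing-page'}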

Stage 2: Indexing

Once a page is crawled, the search engine parses its content, extracts key signals (headings, keywords, links, schema markup), and stores that representation in its index. An unindexed page cannot rank, period.
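
To make "extracts key signals" concrete, here is a minimal sketch of signal extraction using Python's standard-library HTML parser. Real indexers are vastly more sophisticated; the sample HTML is invented:

    from html.parser import HTMLParser

    class SignalExtractor(HTMLParser):
        """Collect a few of the on-page signals an indexer might record."""

        def __init__(self):
            super().__init__()
            self.headings, self.links = [], []
            self._in_heading = False

        def handle_starttag(self, tag, attrs):
            if tag in ("h1", "h2", "h3"):
                self._in_heading = True
            elif tag == "a":
                self.links += [value for name, value in attrs if name == "href"]

        def handle_endtag(self, tag):
            if tag in ("h1", "h2", "h3"):
                self._in_heading = False

        def handle_data(self, data):
            if self._in_heading and data.strip():
                self.headings.append(data.strip())

    extractor = SignalExtractor()
    extractor.feed('<h1>SEO Basics</h1><a href="/guide">Read the guide</a>')
    print(extractor.headings, extractor.links)  # ['SEO Basics'] ['/guide']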

Watch-outs: duplicate content, thin content, canonical conflicts, and server errors can all keep a page out of the index.
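
Canonical conflicts in particular are easy to check mechanically: compare the canonical URL a page declares against the URL it was fetched from. A minimal sketch, assuming you already have the page's HTML as a string; the regex is deliberately naive (it assumes rel comes before href), and a real audit should use an HTML parser:

    import re

    CANONICAL = re.compile(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
        re.IGNORECASE,
    )

    def canonical_mismatch(fetched_url: str, html: str) -> bool:
        """True if the page points its canonical at a different URL."""
        match = CANONICAL.search(html)
        # No canonical tag is not a conflict; a different target URL is.
        return bool(match) and match.group(1).rstrip("/") != fetched_url.rstrip("/")

    html = '<link rel="canonical" href="https://www.example.com/page-b">'
    print(canonical_mismatch("https://www.example.com/page-a", html))  # True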

Stage 3: Ranking

When a user searches, the engine queries its index and returns a ranked list. Ranking combines hundreds of signals: relevance, quality (E-E-A-T), user behavior, speed, freshness, backlinks, and more.
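
No one outside the engines knows the real formula, but "combines hundreds of signals" is often modeled as a weighted combination of per-signal scores. A deliberately toy sketch; both the weights and the signal scores below are invented:

    # Invented weights for a handful of illustrative signals (scores in 0.0-1.0).
    WEIGHTS = {
        "relevance": 0.40,
        "quality": 0.25,
        "backlinks": 0.20,
        "freshness": 0.10,
        "speed": 0.05,
    }

    def rank_score(signals: dict) -> float:
        """Combine per-signal scores into a single ranking score."""
        return sum(weight * signals.get(name, 0.0) for name, weight in WEIGHTS.items())

    page_a = {"relevance": 0.9, "quality": 0.7, "backlinks": 0.3, "freshness": 0.8, "speed": 0.9}
    page_b = {"relevance": 0.6, "quality": 0.9, "backlinks": 0.9, "freshness": 0.2, "speed": 0.5}

    # Higher combined score ranks first: page A (0.72) beats page B (0.69) here.
    for name, page in sorted({"A": page_a, "B": page_b}.items(), key=lambda kv: -rank_score(kv[1])):
        print(name, round(rank_score(page), 2))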

The serving layer

Beyond the traditional 10 blue links, modern SERPs include featured snippets, People Also Ask, knowledge panels, AI Overviews, image carousels, and local packs. Each slot has different qualifying signals.
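
Structured data is one of the better-documented qualifying signals: marking a page up with schema.org JSON-LD makes it eligible (never guaranteed) for certain rich results. A minimal sketch that emits FAQ markup as the script tag a page would embed; the question and answer text are invented:

    import json

    # schema.org FAQPage markup, serialized as a JSON-LD <script> block.
    faq = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [{
            "@type": "Question",
            "name": "What is crawling?",
            "acceptedAnswer": {"@type": "Answer", "text": "How search engine bots discover pages."},
        }],
    }
    print(f'<script type="application/ld+json">{json.dumps(faq)}</script>')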

What this means for you

  1. If a page isn't crawled, nothing else matters.
  2. If it's crawled but not indexed, you have an indexing problem.
  3. If indexed but not ranking, it's a relevance, quality, or authority problem.

Diagnose in this order.
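
The three checks string together into a triage script. A minimal sketch using only the Python standard library; the URL is a placeholder, the noindex test is a crude substring match, and step 3 remains the manual relevance/quality/authority review that no API exposes:

    import urllib.error
    import urllib.request
    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser

    def diagnose(url: str) -> str:
        parsed = urlparse(url)

        # 1. Crawling: robots.txt must allow the fetch at all.
        rp = RobotFileParser()
        rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
        rp.read()
        if not rp.can_fetch("Googlebot", url):
            return "crawling problem: blocked by robots.txt"

        # 2. Indexing: the page must return 200 and not declare noindex.
        try:
            with urllib.request.urlopen(url) as response:
                body = response.read(65536).decode("utf-8", "ignore")
        except urllib.error.HTTPError as err:
            return f"indexing problem: HTTP {err.code}"
        if "noindex" in body.lower():
            return "indexing problem: possible noindex directive"

        # 3. Ranking: nothing mechanical is blocking; review content and links.
        return "crawlable and indexable: review relevance, quality, and authority"

    print(diagnose("https://www.example.com/"))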