Crawling + indexability

The first two questions of technical SEO: can search engines find my pages? Can they index them? Until those answers are both "yes," no other SEO work matters.

Crawling

Search engine bots (Googlebot, Bingbot) discover pages by following links. A page is "crawlable" if a bot can reach it without being blocked.

Blockers to crawling

Common reasons a bot can't reach a page:

  - robots.txt Disallow rules covering the URL or its directory
  - No internal links pointing to the page (orphan pages)
  - Links that only appear after JavaScript the bot doesn't render
  - Login walls or other authentication requirements
  - Server errors (5xx) or timeouts when the bot requests the page
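robots.txt rules are the most common crawl blocker, and you can test them programmatically with Python's standard-library urllib.robotparser. A minimal sketch — the robots.txt content and URLs here are hypothetical, not any real site's policy:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for illustration only
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /search
"""

def is_crawlable(url: str, user_agent: str = "Googlebot") -> bool:
    """Return True if these robots.txt rules allow `user_agent` to fetch `url`."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(user_agent, url)

print(is_crawlable("https://example.com/products/widget"))  # allowed
print(is_crawlable("https://example.com/admin/settings"))   # blocked by Disallow
```

In practice you would point `RobotFileParser.set_url()` at the live /robots.txt and call `read()` instead of parsing a string.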

Indexability

A page is "indexable" if, once crawled, the search engine adds it to its index (the database used to serve results).

Blockers to indexing

Common reasons a crawled page stays out of the index:

  - A noindex directive in a meta robots tag or an X-Robots-Tag response header
  - A canonical tag pointing at a different URL
  - Duplicate or near-duplicate content that Google consolidates under another page
  - Thin or low-quality content Google chooses not to index
  - Note: a page blocked in robots.txt can't be crawled, so Google never sees its noindex tag

Debugging: is my page indexed?

  1. Search site:yourdomain.com/specific-url in Google. If it appears, it's indexed. If not, keep debugging.
  2. Search Console → URL Inspection → enter the URL → check coverage status
  3. If "Discovered - currently not indexed" → Google knows the URL but hasn't crawled it yet. Wait, or request indexing.
  4. If "Crawled - currently not indexed" → Google crawled the page but chose not to index it. Usually a quality or duplication issue.
  5. If "Indexed, not submitted in sitemap" → works, but add to your sitemap.
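Step 5 is easy to check in bulk: parse the sitemap and test whether a URL is listed. A minimal sketch with the standard library — the sitemap fragment and URLs are hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical sitemap fragment for illustration
SITEMAP = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/pricing</loc></url>
</urlset>"""

def in_sitemap(url: str, sitemap_xml: str) -> bool:
    """True if `url` appears as a <loc> entry in the sitemap."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(sitemap_xml)
    locs = {loc.text.strip() for loc in root.findall("sm:url/sm:loc", ns)}
    return url in locs

print(in_sitemap("https://example.com/pricing", SITEMAP))  # listed
print(in_sitemap("https://example.com/blog", SITEMAP))     # missing: add it
```

For a live site, fetch /sitemap.xml (and any sitemap index files it references) and feed the XML through the same function.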

Crawl budget

Google allocates a finite crawl budget per site: roughly, how many pages per day it will crawl. For sites under a few thousand pages, this is rarely a concern. For large sites (10k+ pages), wasted crawl budget (redirect chains, duplicate URLs, infinite faceted-navigation spaces) means important pages get crawled less often.
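The ground truth for crawl budget is your server's access log: count what Googlebot actually fetches. A minimal sketch over combined-log-format lines — the log entries and the `googlebot_hits` helper are made up for illustration (real pipelines should also verify Googlebot by reverse DNS, since the user-agent string can be spoofed):

```python
import re
from collections import Counter

# Fabricated sample log lines in combined log format
LOG_LINES = [
    '66.249.66.1 - - [10/May/2024:06:12:01 +0000] "GET /pricing HTTP/1.1" 200 5120 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [10/May/2024:06:12:05 +0000] "GET /search?q=a HTTP/1.1" 200 890 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [10/May/2024:06:12:09 +0000] "GET /search?q=b HTTP/1.1" 200 870 "-" "Googlebot/2.1"',
    '203.0.113.5 - - [10/May/2024:06:12:11 +0000] "GET /pricing HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
]

REQUEST = re.compile(r'"GET (?P<path>\S+) HTTP')

def googlebot_hits(lines):
    """Count Googlebot requests per path (query strings stripped)."""
    counts = Counter()
    for line in lines:
        if "Googlebot" not in line:
            continue
        m = REQUEST.search(line)
        if m:
            counts[m.group("path").split("?")[0]] += 1
    return counts

print(googlebot_hits(LOG_LINES))
```

If a low-value path like /search dominates the counts while key pages are rarely fetched, that's crawl budget being wasted.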

Optimizing crawl budget:

  - Fix redirect chains and update internal links that point at redirected URLs
  - Block infinite URL spaces (faceted navigation, calendars, session parameters) in robots.txt
  - Keep the sitemap limited to canonical, indexable, 200-status URLs
  - Remove or consolidate thin and duplicate pages
  - Improve server response times so Googlebot can fetch more per visit

Tools

  - Google Search Console: URL Inspection, the Page indexing report, and Crawl Stats
  - A desktop crawler such as Screaming Frog to see the site the way a bot does
  - Server log analysis to see what Googlebot actually requests

Quick wins

  - Confirm robots.txt isn't blocking anything important
  - Check templates for stray noindex tags (a common leftover from staging)
  - Submit an up-to-date sitemap in Search Console
  - Request indexing for key pages that are discovered but not yet crawled