JavaScript SEO

Single-page apps (React, Vue, Angular) and JS-heavy sites can work for SEO, with effort. Googlebot can render JavaScript, but it's slower, pickier, and more error-prone than serving HTML. Understand the pipeline or ship broken SEO.

How Google processes JavaScript

  1. Crawl: Googlebot fetches the HTML
  2. Initial index: indexes whatever is in the initial HTML
  3. Render queue: queues the page for JS rendering (seconds to days later)
  4. Render: uses a headless Chromium to execute JS
  5. Re-index: indexes the rendered DOM

Key issue: the render step is delayed and uses more resources than plain HTML crawling. Google processes static HTML sites faster and more thoroughly than JS-rendered ones.

Rendering strategies

1. Client-side rendering (CSR)

Browser runs JS to build the page. Initial HTML is a skeleton (often an empty <div id="root">). Googlebot must run JS to see content.

Issues: render delay, fragile, content invisible until JS succeeds. Not recommended for content SEO.
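A typical CSR payload looks like the sketch below (file names illustrative): until the render step executes the bundle, this shell is all Googlebot has to index.

```html
<!-- Typical CSR shell: this is what Googlebot sees before rendering -->
<!doctype html>
<html>
<head><title>Loading…</title></head>
<body>
  <div id="root"></div>               <!-- empty until JS executes -->
  <script src="/bundle.js"></script>  <!-- all content lives in here -->
</body>
</html>
```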

2. Server-side rendering (SSR)

Server generates full HTML on each request. Browser and Googlebot both see complete content immediately.

Pros: best for SEO; the full content arrives in the first HTML response, so nothing waits on the render queue. Cons: higher server cost, and slower time to first byte than serving static files.

Frameworks: Next.js, Nuxt, and SvelteKit all support SSR natively.

3. Static site generation (SSG)

HTML pre-generated at build time. Deployed as static files.

Pros: fastest possible delivery, cheapest hosting, best SEO. Cons: requires rebuilds on content changes.

Frameworks: Astro, Next.js (SSG mode), Hugo, Jekyll.

4. Incremental static regeneration (ISR)

Hybrid: pages are served as static files but can be regenerated on demand or on a schedule. Combines SSG speed with some dynamic flexibility.

Next.js does this well.
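In the Next.js App Router, ISR is opt-in per route via a `revalidate` export. A sketch (the file path and fetch URL are illustrative):

```jsx
// app/blog/[slug]/page.js -- illustrative Next.js App Router page
export const revalidate = 60; // re-generate this page at most every 60 seconds

export default async function Page({ params }) {
  const res = await fetch(`https://example.com/api/posts/${params.slug}`);
  const post = await res.json();
  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.body}</p>
    </article>
  );
}
```

Between regenerations, every visitor (and Googlebot) gets the cached static HTML.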

5. Dynamic rendering

Serve plain HTML to bots, JS-rendered content to users. Historically a workaround; now considered a last resort. Can be viewed as cloaking if not implemented carefully.
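Implementations usually branch on the User-Agent header. A sketch, assuming an abbreviated bot list and hypothetical `getPrerenderedHtml`/`getSpaShell` helpers:

```javascript
// Dynamic-rendering sketch: route known crawlers to a pre-rendered snapshot.
// The bot pattern is abbreviated; getPrerenderedHtml/getSpaShell are hypothetical.
const BOT_PATTERN = /googlebot|bingbot|duckduckbot|baiduspider/i;

function isBot(userAgent) {
  return BOT_PATTERN.test(userAgent || "");
}

// In an Express-style handler (sketch):
// app.get("*", (req, res) => {
//   if (isBot(req.get("user-agent"))) {
//     res.send(getPrerenderedHtml(req.path)); // static snapshot for crawlers
//   } else {
//     res.send(getSpaShell()); // normal JS app for users
//   }
// });

console.log(isBot("Mozilla/5.0 (compatible; Googlebot/2.1)")); // true
console.log(isBot("Mozilla/5.0 (Macintosh; Intel Mac OS X)")); // false
```

The cloaking risk lies in the snapshot diverging from what users see, which is why the content served to bots must stay equivalent.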

Common JavaScript SEO mistakes

Content in JavaScript files

Content only available after JS runs. If JS fails, content is gone.

Fix: SSR or SSG. If CSR is the only option, server-render at least the critical content (title, h1, meta, hero text, nav).

Links that require JS to navigate

<!-- Bad (Google can't follow): -->
<div onclick="navigate('/page')">Link</div>

<!-- Good: -->
<a href="/page">Link</a>

Hashbang URLs

Hashbang URLs (/#!/page) are an ancient SPA pattern and never index cleanly. Always use clean URLs via the HTML5 History API.
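When migrating off hashbangs, redirect old URLs client-side so existing links still resolve. A sketch, with `hashbangToPath` as a small illustrative helper:

```javascript
// Migration sketch: redirect legacy hashbang URLs to clean paths.
// hashbangToPath is a pure helper; the redirect itself runs in the browser.
function hashbangToPath(hash) {
  return hash.startsWith("#!") ? hash.slice(2) : null;
}

// In the browser (sketch):
// const path = hashbangToPath(window.location.hash);
// if (path) window.location.replace(path); // /#!/page -> /page

console.log(hashbangToPath("#!/page")); // "/page"
```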

Lazy-loaded content out of viewport

If content lazy-loads on scroll, Googlebot may not see it: it renders the page with a very tall viewport instead of scrolling like a user, so scroll-event listeners never fire. Use IntersectionObserver-based lazy loading, which does trigger when elements fall inside that tall viewport, or pre-render content that is critical for ranking.
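A browser-side sketch of the IntersectionObserver approach, assuming images that stash their real URL in a `data-src` attribute:

```javascript
// IntersectionObserver-based lazy loading (browser sketch).
// Fires when an element enters the viewport -- including Googlebot's tall
// rendering viewport -- unlike scroll-event listeners, which never fire for it.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // swap in the real image
      obs.unobserve(img);
    }
  }
});

document.querySelectorAll("img[data-src]").forEach((img) => observer.observe(img));

// For images alone, native lazy loading is simpler and crawler-safe:
// <img src="/photo.jpg" loading="lazy" alt="...">
```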

Broken meta tags

Title and meta description set via JS after initial load. Googlebot often indexes the pre-JS values (empty or placeholder).

Fix: set meta tags server-side or in the initial HTML. Supplement with JS updates if needed for client-side UX.
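A server-side sketch of rendering the critical head tags per page (the `renderHead`/`escapeHtml` helpers and the page object are illustrative):

```javascript
// Server-side <head> rendering sketch; helpers and page shape are illustrative.
function escapeHtml(s) {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function renderHead(page) {
  return [
    `<title>${escapeHtml(page.title)}</title>`,
    `<meta name="description" content="${escapeHtml(page.description)}">`,
    `<link rel="canonical" href="${page.canonical}">`,
  ].join("\n");
}

console.log(renderHead({
  title: "JavaScript SEO",
  description: "How Google renders JS.",
  canonical: "https://example.com/js-seo",
}));
```

Because these tags are in the initial HTML, they are indexed even if the render step is delayed or fails.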

Infinite scroll without pagination

Old posts are only reachable by scrolling. Googlebot doesn't scroll.

Fix: paginate under the hood (/blog/page/2) so Googlebot can follow. Or offer an "Older posts" link.
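The fallback can sit right under the infinite-scroll container (paths illustrative):

```html
<!-- Crawlable fallback beneath an infinite-scroll list -->
<section id="posts">
  <!-- posts injected here as the user scrolls -->
</section>
<nav>
  <a href="/blog/page/2">Older posts</a>  <!-- a plain link Googlebot can follow -->
</nav>
```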

Testing JavaScript SEO

Use the URL Inspection tool in Google Search Console to view the rendered HTML Google actually indexed, and the Rich Results Test for a quick live render. Also load the page with JavaScript disabled in your browser: whatever disappears is content Google can only see after the render step.

When full SSG is the answer

If your site is primarily content (blog, documentation, marketing), go static. Fewer moving parts, better SEO, faster. Use an SSG framework, deploy to a CDN, and don't look back.