API Overview

Use this page to pick the right CRW route quickly. The endpoint pages are the real usage guides; this page is the shortest map from your task to the right API surface.

Hosted base URL: https://fastcrw.com/api
Self-host default: http://localhost:3000
Best first route: POST /v1/scrape

Authentication

  • Hosted API: always send Authorization: Bearer YOUR_API_KEY
  • Self-hosted API: auth is only required when auth.api_keys is configured
  • /health is always public
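The auth rules above can be sketched as a small Python helper. The function name and structure are illustrative, not part of CRW; only the Bearer header format comes from this page.

```python
def build_headers(api_key=None):
    """Build request headers for a CRW API call.

    Pass api_key for the hosted API (always required), or for a
    self-hosted instance with auth.api_keys configured. Omit it for
    an unauthenticated self-hosted instance or for GET /health.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:
        # Hosted API: always send Authorization: Bearer YOUR_API_KEY
        headers["Authorization"] = f"Bearer {api_key}"
    return headers
```

The conditional keeps one code path working against both the hosted base URL and an open self-hosted instance.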

Pick the right endpoint

:::cards
::card{icon="code" title="Scrape" href="#scraping" description="One known URL in, clean content out. This is the default starting point."}
::card{icon="search" title="Search" href="#search" description="Hosted discovery-first workflow when you do not know the URLs yet."}
::card{icon="map" title="Map" href="#map" description="Discover URLs first without paying for full scraping."}
::card{icon="globe" title="Crawl" href="#crawling" description="Run an async multi-page job when one page is not enough."}
::card{icon="zap" title="Extract" href="#extract" description="Return structured JSON when downstream systems want fields, not prose."}
::card{icon="plug" title="MCP" href="#mcp" description="Expose CRW as tools to Claude, Codex, Cursor, and other MCP hosts."}
:::

Route summary

| Route | Use this when | Notes |
| --- | --- | --- |
| POST /v1/scrape | You already know the page URL | Core hosted and self-hosted route |
| POST /v1/search | You need discovery across the web | Hosted/cloud path |
| POST /v1/map | You want discovery before content extraction | Returns links only |
| POST /v1/crawl | You need many pages, asynchronously | Poll with GET /v1/crawl/{id} |
| GET /v1/crawl/{id} | You need crawl progress and results | Returns status plus completed data |
| DELETE /v1/crawl/{id} | You want to cancel an active crawl | Hosted and self-hosted |
| POST /mcp | You are using MCP over HTTP | Prefer the MCP page for setup |
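The async crawl flow in the table (submit with POST /v1/crawl, then poll GET /v1/crawl/{id}) can be sketched as a generic poll loop. The transport is injected as a callable so the shape is clear without a live server; the "status" field and its values are assumptions, not documented on this page.

```python
import time

def poll_crawl(fetch_status, crawl_id, interval=2.0, max_polls=30):
    """Poll GET /v1/crawl/{id} until the job leaves an active state.

    fetch_status is any callable that performs the HTTP GET and returns
    the decoded JSON body. The 'status' field name and the active-state
    values below are illustrative assumptions.
    """
    for _ in range(max_polls):
        job = fetch_status(crawl_id)
        if job.get("status") not in ("pending", "scraping"):  # assumed states
            return job  # completed, failed, or cancelled
        time.sleep(interval)
    raise TimeoutError(f"crawl {crawl_id} still active after {max_polls} polls")
```

DELETE /v1/crawl/{id} fits the same shape: cancel the job, then one final poll confirms the terminal state.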

Start with this request

```shell
curl -X POST https://fastcrw.com/api/v1/scrape \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","formats":["markdown"]}'
```

If that request works, continue into Scrape and only then branch into search, map, crawl, or extraction.

Shared reference pages

What to read next