Web scraping infrastructure for everyone
A powerful dashboard for non-technical users and a complete API & SDK for developers. Turn any URL into structured data—no code required, or fully programmable.
```typescript
import { Jetscrape } from "@jetscrape/sdk";

const jet = new Jetscrape({ apiKey: process.env.JETSCRAPE_KEY });

const result = await jet.scrape("https://example.com", {
  format: "markdown",
  waitFor: "networkidle",
  extract: { title: "string", price: "number", inStock: "boolean" },
});

console.log(result.data);
// { title: "Product X", price: 29.99, inStock: true }
```
Everything you need to extract data at scale
One API to replace your scraping stack. No proxy management, no browser orchestration, no HTML parsing.
Structured Extraction
Define a schema and get type-safe JSON back. Zod-compatible validation built in.
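The schema shape used in these snippets ({ title: "string", price: "number", inStock: "boolean" }) is plain data, so the validation step is easy to picture. A minimal local sketch of that check, written here for illustration only (this validator is not part of the SDK):

```typescript
// Illustrative sketch: check an extraction result against a
// string-keyed schema like { title: "string", price: "number" }.
type Schema = Record<string, "string" | "number" | "boolean">;

function validate(data: Record<string, unknown>, schema: Schema): boolean {
  // Every schema key must exist in the data with the declared type.
  return Object.entries(schema).every(
    ([key, type]) => typeof data[key] === type
  );
}

const schema: Schema = { title: "string", price: "number", inStock: "boolean" };

console.log(validate({ title: "Product X", price: 29.99, inStock: true }, schema)); // true
console.log(validate({ title: "Product X", price: "29.99", inStock: true }, schema)); // false
```

In practice the SDK returns the validated object directly; a Zod-compatible schema can replace the string map above for richer constraints.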
Headless Browsers
Auto-managed Chromium fleet. JavaScript rendering, infinite scroll, stealth mode.
Clean Markdown
LLM-ready output. Preserves tables, code blocks, headings, and semantic structure.
AI Crawling
No selectors needed. AI automatically identifies and extracts the right elements from any page layout.
UI-to-API Parity
Every operation in the dashboard has an API and SDK equivalent. Build visually, deploy programmatically.
AI Operator
An intelligent agent that guides you through scraping workflows, suggests configurations, and troubleshoots issues.
Crawling Workflows
Chain multi-step crawls: extract parameters from one page, build dynamic URLs with custom headers, and feed them into the next request automatically.
Unblocking & Anti-Bot
Bypass Turnstile, reCAPTCHA, and anti-bot systems automatically. Full browser fingerprint control with stealth mode.
Batch Crawling
Crawl entire sitemaps or domains. Parallel workers with deduplication and rate limiting at scale.
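As a mental model for "parallel workers with deduplication," here is a small self-contained sketch using a stubbed fetcher instead of real HTTP. The hosted service does all of this server-side; this is purely illustrative:

```typescript
// Sketch: deduplicate a URL list, then drain it with a fixed
// number of concurrent workers sharing one queue.
async function batchCrawl(
  urls: string[],
  fetcher: (url: string) => Promise<string>,
  concurrency = 4
): Promise<Map<string, string>> {
  const queue = [...new Set(urls)]; // deduplicate up front
  const results = new Map<string, string>();

  async function worker() {
    let url: string | undefined;
    while ((url = queue.shift()) !== undefined) {
      results.set(url, await fetcher(url));
    }
  }

  // Launch the worker pool and wait for the queue to drain.
  await Promise.all(Array.from({ length: concurrency }, worker));
  return results;
}

batchCrawl(
  ["https://a.test", "https://b.test", "https://a.test"], // duplicate dropped
  async (url) => `<html>${url}</html>`
).then((pages) => {
  console.log(pages.size); // 2
});
```

Capping the worker count is the simplest form of rate limiting; a production version would also throttle per-domain.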
Three lines of code. Any website, structured.
Initialize the client
Install the SDK and authenticate with your API key. Supports Node.js, Python, Go, and cURL.
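For the Node.js SDK, setup looks roughly like this, assuming the package name and environment variable from the snippet above:

```shell
npm install @jetscrape/sdk
export JETSCRAPE_KEY="your-api-key"   # from the dashboard
```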
Send a scrape request
Pass a URL and specify the output format. We handle proxies, rendering, and anti-bot bypass.
```typescript
const result = await jet.scrape(url, {
  format: "markdown"
})
```
Get structured data
Receive clean, validated JSON or Markdown. Ready for LLMs, databases, or downstream processing.
```json
{
  "title": "...",
  "content": "...",
  "links": [...]
}
```
Built for production workloads
Every component is designed for reliability at scale. No breaking changes, no surprise rate limits.
AI finds selectors for you
Describe what you need in plain language. Our AI analyzes the page structure, identifies the right elements, and builds extraction rules automatically — no CSS selectors or XPath required.
View AI crawling docs

```typescript
const result = await jet.scrape("https://store.example.com", {
  aiExtract: true,
  prompt: "Find all products with name, price, stock",
});

// AI auto-detects selectors — no CSS/XPath needed
console.log(result.data);
// [{ name: "Widget", price: 29.99, stock: true }, ...]
```
Your scraping co-pilot
An intelligent agent that helps you design workflows, troubleshoot failures, suggest optimal configurations, and automate repetitive scraping tasks end-to-end.
Meet the operator

```typescript
const agent = jet.operator({
  goal: "Monitor competitor prices daily",
  urls: ["https://competitor-a.com", "https://competitor-b.com"],
});

const plan = await agent.plan();
// Agent suggests: schedule, selectors, alerts
await agent.execute(plan);
```
Multi-step crawling pipelines
Chain requests into complex workflows: crawl a page, extract parameters, build dynamic URLs with custom headers, and feed them into the next step. Handle login flows, pagination, and deep crawls.
Build a workflow

```typescript
// Multi-step crawl: login → list → detail
const flow = jet.workflow([
  {
    url: "https://app.example.com/login",
    actions: [{ fill: "#email", value: env.EMAIL }],
  },
  {
    url: "https://app.example.com/items",
    extract: { links: "a.item-link[href]" },
  },
  {
    url: "{{each links}}", // dynamic URLs
    headers: { "X-Token": "{{cookies.session}}" },
    extract: { title: "string", price: "number" },
  },
]);
```
Bypass any anti-bot system
Automatically solve Turnstile, reCAPTCHA, and WAF challenges. Full browser fingerprint control, stealth mode, residential proxies, and device emulation — all transparent to your code.
View unblocking docs

```typescript
const result = await jet.scrape(url, {
  unblock: true,
  browser: {
    stealth: true,
    solveTurnstile: true,
    fingerprint: "desktop-chrome",
    viewport: { w: 1920, h: 1080 },
  },
  proxy: "residential-us",
});
```
UI-first, API-ready
Build and test scraping workflows visually in the dashboard. When ready, export to API calls or SDK code with one click. Every UI action has a programmatic equivalent.
Explore the SDK// Export any dashboard workflow as code const workflow = await jet.workflows.export( "wf_price_monitor" ); // Run via SDK — identical to clicking "Run" in UI await jet.workflows.run(workflow.id, { schedule: "0 9 * * *", webhook: "https://api.you.com/hook", });
A single endpoint. Infinite flexibility.
RESTful by default. SDKs for TypeScript, Python, and Go included.
POST /v1/scrape
Send a URL and receive structured data back. Customize output format, rendering behavior, extraction schema, and proxy region.
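A sketch of the request in cURL; the hostname and exact body fields here are assumptions based on the SDK options shown elsewhere on this page:

```shell
# Illustrative request; field names mirror the SDK options
# (format, extract, proxy) and may differ in the real API.
curl -X POST https://api.jetscrape.com/v1/scrape \
  -H "Authorization: Bearer $JETSCRAPE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com/product/142",
    "format": "markdown",
    "extract": { "price": "number", "inStock": "boolean" },
    "proxy": "us-east-1"
  }'
```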
```json
{
  "success": true,
  "data": {
    "url": "https://example.com/product/142",
    "title": "Wireless Keyboard Pro",
    "markdown": "# Wireless Keyboard Pro\n\n...",
    "extract": {
      "price": 79.99,
      "currency": "USD",
      "inStock": true,
      "rating": 4.8
    },
    "metadata": {
      "statusCode": 200,
      "latency": 47,
      "proxy": "us-east-1"
    }
  }
}
```

Predictable, usage-based pricing
Start free. Scale with transparent pricing. No hidden fees, no per-seat charges.
Free
- HTML, JSON & Markdown output
- Playground access
- 5 concurrent requests
- Community support

Growth
- Everything in Free
- Full API access
- 20 concurrent requests
- 15 max crawls per minute
- Email support

Pro
- Everything in Growth
- 50 concurrent requests
- 30 max crawls per minute
- Priority support
- Webhooks & streaming

Enterprise
- Everything in Pro
- Dedicated proxies
- Custom SLA (99.99%)
- Custom rate limits
- Dedicated support engineer
Start scraping in 30 seconds
Free tier included. No credit card required. Get your API key and start extracting structured data immediately.