# Scrayle Web Scraper

Scrayle.com gives AI agents full web access through 35 tools spanning scraping, browser automation, AI-powered browsing, workflow orchestration, and more.

## Quick Start

```bash
# Connect this server (installs CLI if needed)
npx -y @smithery/cli@latest mcp add scrayle/web-scraper

# Browse available tools
npx -y @smithery/cli@latest tool list scrayle/web-scraper

# Get full schema for a tool
npx -y @smithery/cli@latest tool get scrayle/web-scraper scrape-url

# Call a tool (arguments are passed as a JSON object)
npx -y @smithery/cli@latest tool call scrayle/web-scraper scrape-url '{}'
```
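The `'{}'` at the end of the `tool call` command is the tool's argument object. Once arguments contain URLs, quotes, or nested data, building that JSON programmatically avoids shell-quoting mistakes. A minimal sketch; note the `url` field name is an assumption, not confirmed schema, so inspect the real schema with `tool get` first:

```python
import json
import shlex

# Hypothetical arguments for scrape-url; the "url" field name is an
# assumption -- check `tool get scrayle/web-scraper scrape-url` for
# the actual input schema.
args = {"url": "https://example.com"}

# json.dumps produces the argument string; shlex.quote makes it safe
# to splice into a shell command line.
payload = shlex.quote(json.dumps(args))
command = (
    "npx -y @smithery/cli@latest tool call "
    f"scrayle/web-scraper scrape-url {payload}"
)
print(command)
```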

## Direct MCP Connection

Endpoint: `https://web-scraper--scrayle.run.tools`

**Optional config:**
- `SCRAYLE_API_KEY` (header) — Your Scrayle API key. Get one at https://scrayle.com
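Clients that connect directly exchange JSON-RPC messages with the endpoint, per the MCP protocol. For orientation, a `tools/call` request invoking `scrape-url` would look roughly like this (the `url` argument name is an assumption; the method and params shape follow the MCP specification):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "scrape-url",
    "arguments": { "url": "https://example.com" }
  }
}
```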

## Tools (35)

- `scrape-url` — Scrape a single web page and extract its content. Returns markdown, HTML, links, and metadata. Use this for extracting …
- `batch-scrape` — Scrape multiple web pages at once. Ideal for scraping lists of URLs like search results, product listings, or article c…
- `crawl-website` — Crawl a website starting from a URL. Follows links to discover and scrape multiple pages. Use this when you need to ext…
- `cancel-crawl` — Cancel a running crawl job by its ID.
- `start-browser-session` — Launch a cloud browser session and connect to it. Returns a task ID to use with browser-navigate, browser-click, browse…
- `browser-navigate` — Navigate the browser to a URL. Returns the page title and URL after navigation.
- `browser-click` — Click an element on the page by CSS selector. Waits for the element to be visible before clicking.
- `browser-type` — Type text into an input field by CSS selector. Optionally clear the field first and press Enter after typing.
- `browser-screenshot` — Take a screenshot of the current page. Returns a base64 PNG image. Use this to see what the page looks like.
- `browser-get-content` — Get the visible text content of the page or a specific element. Use this to read what is on the page.
- `browser-evaluate` — Execute JavaScript in the browser page. Returns the result. Use for custom interactions, data extraction, or DOM manipu…
- `browser-select` — Select an option from a dropdown/select element by value or label.
- `browser-scroll` — Scroll the page up or down.
- `browser-wait` — Wait for an element to appear on the page, or for the page to finish loading.
- `close-browser-session` — Close an active cloud browser session and disconnect.
- `list-browser-profiles` — List your saved browser profiles. Profiles store cookies, localStorage, and fingerprint data for persistent browser ses…
- `create-workflow` — Create a new workflow definition. A workflow is a sequence of browser automation steps (navigate, click, type, extract …
- `list-workflows` — List your saved workflow definitions. Returns workflow IDs, names, descriptions, and run counts. Use the workflow ID wi…
- `run-workflow` — Execute a Scrayle workflow by its ID. Workflows are pre-built automation sequences that navigate websites, extract data…
- `get-workflow-status` — Check the status of a running or completed workflow by its task ID.
- `list-workflow-runs` — List your recent workflow runs with their status and timing.
- `get-workflow` — Get the full definition of a workflow including all its nodes, edges, and input parameters. Use this to inspect or revi…
- `update-workflow` — Update an existing workflow definition. You can change the name, description, nodes (steps), or edges (connections). Us…
- `search-marketplace` — Search the Scrayle Marketplace for pre-built scraping workflows and automation actors. Find tools for social media scra…
- `get-marketplace-actor` — Get detailed information about a marketplace actor/template including its README, input schema, pricing, and reviews.
- `run-marketplace-actor` — Run a marketplace actor with the given input parameters. This executes a pre-built automation workflow created by the c…
- `solve-captcha` — Solve a CAPTCHA challenge. Supports ReCaptcha v2, hCaptcha, Cloudflare Turnstile, and other types. Returns the solution…
- `kv-create-namespace` — Create a new key-value store namespace. You need a namespace before you can store keys.
- `kv-get` — Get a value from a key-value store namespace.
- `kv-set` — Set a value in a key-value store namespace. The value is stored as a string; objects will be JSON-stringified.
- `kv-list-namespaces` — List your KV store namespaces.
- `create-dataset` — Create a new dataset. Datasets store structured tabular data from scraping runs.
- `list-datasets` — List your datasets. Datasets store structured tabular data from scraping runs.
- `get-dataset-items` — Get items from a dataset. Returns structured data rows.
- `add-dataset-items` — Add items to a dataset. Each item is a flat JSON object representing one row of data.
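The `kv-set` bullet above says values are stored as strings, with objects JSON-stringified, which implies callers must `json.loads` what `kv-get` returns. A local sketch of that round-trip, with the namespace mocked as a plain dict (no Scrayle API involved):

```python
import json

# In-memory stand-in for one KV namespace; the real store lives server-side.
namespace: dict[str, str] = {}

def kv_set(key: str, value) -> None:
    # Mirror the documented behavior: non-string values are JSON-stringified.
    namespace[key] = value if isinstance(value, str) else json.dumps(value)

def kv_get(key: str) -> str:
    return namespace[key]

kv_set("plan", {"pages": 10, "depth": 2})
stored = kv_get("plan")        # a string, not a dict
restored = json.loads(stored)  # the caller parses it back into an object
print(type(stored).__name__, restored)
```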

```bash
# Get full input/output schema for a tool
npx -y @smithery/cli@latest tool get scrayle/web-scraper <tool-name>
```

## Resources

- `docs://scrayle.com/api`

## Prompts (2)

- `scrape-website` (url, data) — Extract data from a website. Provide a URL and optionally what data you want.
- `automate-browser` (task, url) — Perform a browser automation task like filling forms, clicking buttons, or navigating websites.
