ScrapeOps Quickstart Guide: Proxy APIs, Monitoring & Scheduling
Welcome to the ScrapeOps Documentation pages. Here you will find info on how to integrate and use our products.
- ScrapeOps Proxy API Aggregator
- ScrapeOps Residential Proxy Aggregator
- ScrapeOps Monitoring
- ScrapeOps Server Manager & Scheduler
- ScrapeOps n8n Node
- ScrapeOps MCP Server
- ScrapeOps Parser API
- ScrapeOps Data APIs
- ScrapeOps AI Scraper Builder
Explore the ScrapeOps Dashboard: Interactive Demo
ScrapeOps Proxy API Aggregator
ScrapeOps Proxy API Aggregator is an easy-to-use proxy that gives you access to the best-performing Proxy APIs via a single endpoint. We take care of finding the best proxies, so you can focus on the data.
To use the ScrapeOps Proxy API Aggregator, you first need an API key which you can get by signing up for a free account here.
Getting Started
To make requests, send the URL you want to scrape to the ScrapeOps Proxy API endpoint https://proxy.scrapeops.io/v1/, adding your API key and target URL to the request via the api_key and url query parameters:
import requests

response = requests.get(
    'https://proxy.scrapeops.io/v1/',
    params={
        'api_key': 'YOUR_API_KEY',
        'url': 'http://httpbin.org/anything',
    },
)

print(response.text)  # Returns the HTML content
With the ScrapeOps Proxy API Aggregator, you are only charged for successful requests (200 and 404 status codes).
To learn how to use the ScrapeOps Proxy API Aggregator and customise it to your requirements, check out the QuickStart Guide.
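Because only 200 and 404 responses are billed, retrying failed requests costs nothing extra. Here is a minimal retry sketch; the fetch_with_retries helper and its injectable session parameter are illustrative, not part of any ScrapeOps SDK:

```python
def fetch_with_retries(target_url, api_key, max_retries=3, session=None):
    """Fetch a page via the ScrapeOps Proxy API, retrying unbilled failures.

    Only 200 and 404 responses count as billable successes, so retrying
    any other status code does not consume extra credits.
    """
    if session is None:
        # Imported lazily so the helper can also be exercised with a stub session.
        import requests
        session = requests.Session()

    response = None
    for _ in range(max_retries):
        response = session.get(
            'https://proxy.scrapeops.io/v1/',
            params={'api_key': api_key, 'url': target_url},
        )
        if response.status_code in (200, 404):
            break  # billable success, stop retrying
    return response
```

In production you would likely add a backoff delay between attempts and a cap on total runtime.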
ScrapeOps Residential Proxy Aggregator
ScrapeOps Residential Proxy Aggregator is an easy-to-use proxy that gives you access to the best-performing Residential Proxy providers via a single proxy port. We take care of finding the best proxies, so you can focus on the data.
To use the ScrapeOps Residential Proxy Aggregator, you first need an API key which you can get by signing up for a free account here.
Getting Started
To make requests, set your scraper's proxy to the ScrapeOps Residential Proxy Port: http://scrapeops:YOUR_API_KEY@residential-proxy.scrapeops.io:8181
The username for the proxy is scrapeops and the password is your API key.
import requests

proxies = {
    'http': 'http://scrapeops:YOUR_API_KEY@residential-proxy.scrapeops.io:8181',
    'https': 'http://scrapeops:YOUR_API_KEY@residential-proxy.scrapeops.io:8181',
}

response = requests.get('https://httpbin.org/ip', proxies=proxies)
print(response.text)
Here are the individual connection details:
- Proxy: residential-proxy.scrapeops.io
- Port: 8181
- Username: scrapeops
- Password: YOUR_API_KEY
With the ScrapeOps Residential Proxy Aggregator, you are charged for the bandwidth you consume.
To learn how to use the ScrapeOps Residential Proxy Aggregator and customise it to your requirements, check out the QuickStart Guide.
ScrapeOps Monitoring
ScrapeOps Monitoring is a monitoring tool purpose-built for web scraping. With a simple 30-second install of one of our SDKs, your scraper's performance & error stats will be automatically aggregated and shipped to your ScrapeOps dashboard.
Features & Functionality
ScrapeOps Monitoring gives you the following features & functionality:
- Scrapy Job Stats & Visualisation
  - Individual Job Progress Stats
  - Compare Jobs versus Historical Jobs
  - Job Stats Tracked
    - Pages Scraped & Missed
    - Items Parsed & Missed
    - Item Field Coverage
    - Runtimes
    - Response Status Codes
    - Success Rates & Average Latencies
    - Errors & Warnings
    - Bandwidth
- Health Checks & Alerts
  - Custom Spider & Job Health Checks
  - Out of the Box Alerts - Slack (More coming soon!)
  - Daily Scraping Reports
Getting Started
To use ScrapeOps Monitoring, you first need to create a free account and get your free API key.
Currently, ScrapeOps integrates with both Python Requests and Python Scrapy scrapers.
More ScrapeOps Monitoring integrations are on the way.
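For the Scrapy integration, setup amounts to installing the scrapeops-scrapy SDK (pip install scrapeops-scrapy) and registering it in your project settings. A sketch, based on the Python Scrapy SDK; exact extension paths and priority values may differ between SDK versions:

```python
# settings.py -- sketch of a ScrapeOps Monitoring integration for Scrapy

SCRAPEOPS_API_KEY = 'YOUR_API_KEY'

# Register the ScrapeOps monitor as a Scrapy extension so job stats
# are aggregated and shipped to your dashboard.
EXTENSIONS = {
    'scrapeops_scrapy.extension.ScrapeOpsMonitor': 500,
}

# Swap in the ScrapeOps retry middleware in place of Scrapy's default
# so retries are tracked in your monitoring stats.
DOWNLOADER_MIDDLEWARES = {
    'scrapeops_scrapy.middleware.retry.RetryMiddleware': 550,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
}
```

Once these settings are in place, every job the project runs reports its stats automatically; no per-spider changes are needed.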
ScrapeOps Server Manager & Scheduler
ScrapeOps Server Manager & Job Scheduler is an easy-to-use server integration that enables you to deploy, manage and schedule your scrapers from the ScrapeOps dashboard.
There are two options to integrate ScrapeOps with your servers:
- Via SSH (Recommended)
- Via Scrapyd Server HTTP Endpoints (Only Applicable to Python Scrapy)
Features & Functionality
ScrapeOps Server Manager & Job Scheduler gives you the following features & functionality:
- SSH Server Management
  - Integrate With Any SSH-Capable Server
  - Deploy Scrapers Directly From GitHub to Your Servers
  - Schedule Periodic Jobs
- Scrapyd Cluster Management
  - Integrate With Scrapyd Servers
  - Schedule Periodic Jobs
  - All Scrapyd JSON API Endpoints Supported
  - Secure Your Scrapyd Server with BasicAuth, HTTPS or Whitelisted IPs
To learn how to integrate ScrapeOps with your servers, check out this guide.
ScrapeOps n8n Node
ScrapeOps n8n Node is a powerful integration that brings all of ScrapeOps' web scraping capabilities into the n8n workflow automation platform. Perfect for no-code builders and developers who want to integrate web scraping into their automation workflows.
Features & Functionality
The ScrapeOps n8n node provides access to:
- Proxy API Integration
  - Smart proxy rotation and management
  - Anti-bot bypass capabilities
  - JavaScript rendering support
  - Geo-targeting options
- Parser API Access
  - E-commerce parsers (Amazon, eBay, Walmart)
  - Job site parsers (Indeed)
  - Real estate parsers (Redfin)
  - Returns structured JSON data
- Data API Direct Access
  - Amazon Product API
  - Amazon Search API
  - More APIs coming soon
Getting Started
To use the ScrapeOps n8n node:
- Install the node from the n8n community nodes
- Get your API key from ScrapeOps
- Configure credentials in n8n
- Start building powerful scraping workflows
Learn more in our comprehensive n8n documentation.
ScrapeOps MCP Server
ScrapeOps MCP Server exposes the ScrapeOps Proxy API to MCP-compatible IDEs (Cursor, Claude Desktop, VS Code, Windsurf). It gives AI agents first-class scraping and browsing tools with proxy support, anti-bot bypass, JavaScript rendering, screenshots, and LLM-powered extraction.
Features & Functionality
The ScrapeOps MCP Server provides:
- Web Browsing Tools
  - Browse any URL with proxy support and geo-targeting
  - Anti-bot bypass (Cloudflare, DataDome, PerimeterX)
  - JavaScript rendering for dynamic sites
  - Screenshots (base64)
- Data Extraction
  - Structured extraction (auto or LLM schema-based)
  - Link discovery and normalization
  - Optimize-request logic to auto-test and reuse best-performing proxies
Getting Started
Run the MCP server with npx:
env SCRAPEOPS_API_KEY=YOUR_API_KEY npx -y @scrapeops/mcp
Then configure your IDE (Cursor, Claude Desktop, VS Code, or Windsurf) to connect to the MCP server:
{
  "mcpServers": {
    "@scrapeops/mcp": {
      "command": "npx",
      "args": ["-y", "@scrapeops/mcp"],
      "env": { "SCRAPEOPS_API_KEY": "YOUR_API_KEY" }
    }
  }
}
Learn more in our comprehensive MCP Server documentation.
ScrapeOps AI Scraper Builder
ScrapeOps AI Scraper Builder automatically generates production-ready web scrapers from any e-commerce product page URL using AI. Simply provide URLs, choose your language, and get a working scraper that outputs structured JSON data.
Supported Page Types
The AI Scraper Builder currently supports:
- Product Details: Individual product pages (name, price, images, reviews, specifications)
- Product Search: Search results pages (product listings, pagination, search metadata)
- Product Category: Category/browse pages (product listings, subcategories, filters)
Getting Started
To use the AI Scraper Builder, you first need to create a free account and get your free API key.
Then navigate to the AI Assistant → Scraper Generator in the dashboard:
- Enter up to 5 URLs from the same website
- Choose your language (Python or Node.js) and library
- Click Generate and get production-ready scraper code
Learn more in our comprehensive AI Scraper Builder documentation.
ScrapeOps Parser API
ScrapeOps Parser API takes raw HTML pages and parses them into structured JSON format. Simply send us the HTML page for supported websites, and we'll extract the data for you.
Supported Parsers
We currently support parsing for:
- Ecommerce: Amazon, Walmart, eBay, Target
- Real Estate: Redfin, Zillow
- Job Portals: Indeed
- Search Engines: Google (Maps, Patents, Scholar), Bing, Yandex
Getting Started
To use the Parser API, you first need to create a free account and get your free API key.
Send HTML to the Parser API endpoint and receive structured JSON:
import requests

# First get the HTML from the target page
html = requests.get('https://www.amazon.com/dp/B08WM3LMJF').text

# Send the HTML to the Parser API
response = requests.post(
    'https://parser.scrapeops.io/v2/amazon?api_key=YOUR_API_KEY',
    data={'html': html},
)

print(response.json())
Learn more in our comprehensive Parser API documentation.
ScrapeOps Data APIs
ScrapeOps Data APIs are a collection of APIs that allow you to access structured JSON data from popular websites using dedicated endpoints. No need to build or maintain your own parsers; just call the API and get the data.
Available APIs
The following Data APIs are available:
- Amazon: Product API, Product Search API
- Redfin: Sale Search, Rent Search, Sale Detail, Rent Detail, Building Detail, State Search, Agent Search, Agent Profile
- Indeed: Job Search, Job Detail, Company Search, Top Companies, Company Snapshot, Company About, Company Reviews, Company Jobs
- eBay: Product API, Search API, Category API, Store API, Feedback API
- Walmart: Product API, Product Search, Category, Review, Shop, Browse APIs
Getting Started
To use the Data APIs, you first need to create a free account and get your free API key.
Query the API by passing in the product ID or URL:
import requests

response = requests.get(
    'https://proxy.scrapeops.io/v1/structured-data/amazon/product',
    params={
        'api_key': 'YOUR_API_KEY',
        'asin': 'B0BNLTS1T3',
        'country': 'us',
    },
)

data = response.json()
print(data['title'])  # "Apple AirPods Pro (2nd Generation)"
print(data['price'])  # "$249.00"
Learn more in our comprehensive Data APIs documentation.