
Setting Up ScrapeOps n8n Node

This guide will walk you through installing and configuring the ScrapeOps node in your n8n instance.

Prerequisites

Before you begin, ensure you have:

  • n8n installed and running (self-hosted or cloud)
  • Admin access to install community nodes
  • A ScrapeOps account (free tier available)

Installation

Step 1: Install the Node

Method 1: Install from the n8n UI

If you're using n8n Cloud or an up-to-date self-hosted instance, you can install the ScrapeOps node directly from the UI:

  1. Sign in to n8n, open the editor, and click + in the top right to open the Nodes panel.
  2. Search for "ScrapeOps" in the search bar, look for the package marked with the verified badge (☑), and select Install.
  3. The ScrapeOps node will be installed and appear in your node palette automatically.

If your n8n instance doesn't support installing verified community nodes from the UI, follow n8n's community nodes guide and install the package manually:

npm install @scrapeops/n8n-nodes-scrapeops

Restart your n8n instance after installation. The ScrapeOps node will then appear in your node palette.


Method 2: Install via n8n Settings (Self-Hosted)

For self-hosted n8n instances:

  1. Open your n8n instance
  2. Navigate to Settings → Community Nodes
  3. Click Install a community node
  4. Enter the package name: @scrapeops/n8n-nodes-scrapeops
  5. Click Install
  6. Restart your n8n instance when prompted

Method 3: Manual Installation (Self-Hosted)

If you're self-hosting n8n, you can install the node via command line:

# Navigate to the custom nodes directory in your n8n user folder
cd ~/.n8n/nodes

# Install the ScrapeOps node
npm install @scrapeops/n8n-nodes-scrapeops

# Restart n8n
n8n start

Method 4: Docker Installation

For Docker users, add the node to your docker-compose.yml:

version: '3.8'
services:
  n8n:
    image: n8nio/n8n
    environment:
      - N8N_COMMUNITY_NODES_ENABLED=true
      - NODE_FUNCTION_ALLOW_EXTERNAL=n8n-nodes-scrapeops
    volumes:
      - ~/.n8n:/home/node/.n8n

Then install the node:

docker exec -it <container_name> npm install @scrapeops/n8n-nodes-scrapeops
docker restart <container_name>

Step 2: Set Up Credentials

Getting Your ScrapeOps API Key

To use the ScrapeOps node, you'll need a ScrapeOps API key, which you can get by signing up for a free account at ScrapeOps.

Your API key must be included with every request via the api_key parameter; otherwise the API returns a 403 Forbidden status code.

Steps to get your API key:

  1. Sign up for a free account at ScrapeOps
  2. Verify your email address (required to activate your API key)
  3. Visit your dashboard at ScrapeOps Dashboard
  4. Copy your API key from the dashboard
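
Once you have the key, you can sanity-check it before configuring n8n by calling the Proxy API directly. A minimal sketch using Node 18+'s built-in fetch; the endpoint and the api_key/url query parameters follow the ScrapeOps Proxy API docs, and YOUR_API_KEY is a placeholder:

// Verify a ScrapeOps API key with a direct Proxy API request.
// A 403 response usually means the key is missing, mistyped, or not yet
// activated (email verification required).
const params = new URLSearchParams({
  api_key: 'YOUR_API_KEY',      // placeholder: paste the key from your dashboard
  url: 'http://httpbin.org/ip', // simple, unblocked test target
});

const response = await fetch(`https://proxy.scrapeops.io/v1/?${params}`);
console.log(response.status);       // expect 200; 403 indicates a key problem
console.log(await response.text()); // httpbin echoes the proxy's IP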


Email Verification Required

Important: You must confirm your email address to activate your API key. Check your inbox for a verification email from ScrapeOps.

Free Tier Limits

The free tier includes:

  • Proxy API: 1,000 requests/month
  • Parser API: 500 parses/month
  • Data API: 100 requests/month

You can monitor your usage in the ScrapeOps Dashboard.

Configure Credentials in n8n

  1. In n8n, go to Credentials → Add Credential.
  2. Search for "ScrapeOps API" and enter your API key.
  3. Save and test the credentials.

Note: Make sure to confirm your email to activate the API key.


Step 3: Add the Node to a Workflow

  1. Create a new workflow in n8n.
  2. Find the ScrapeOps node in the node palette and click Add to Workflow.
  3. Select an API (Proxy, Parser, or Data) and configure its parameters.


How to Use the ScrapeOps Node

The node supports three APIs, each with tailored parameters. Outputs are JSON for easy chaining into other nodes.

Core Parameters

  • API: Choose Proxy API, Parser API, or Data API (required)
  • All APIs require ScrapeOps credentials

Proxy API Features and Fields

Route GET/POST requests through proxies to scrape blocked sites.

Basic Parameters:

  • URL: Target URL to scrape (required)
  • Method: GET or POST (default: GET)
  • Return Type: Default (raw response) or JSON

Advanced Options (Collection):

| Option | Type | Description | Default | Example Values |
| --- | --- | --- | --- | --- |
| Follow Redirects | Boolean | Follow HTTP redirects | true | true, false |
| Keep Headers | Boolean | Use your custom headers | false | true, false |
| Initial Status Code | Boolean | Return initial status code | false | true, false |
| Final Status Code | Boolean | Return final status code | false | true, false |
| Optimize Request | Boolean | Auto-optimize settings | false | true, false |
| Max Request Cost | Number | Max credits to use (with optimize) | 0 | 10, 50, 100 |
| Render JavaScript | Boolean | Enable headless browser | false | true, false |
| Wait Time | Number | Wait before capture (ms) | 0 | 3000, 5000 |
| Wait For | String | CSS selector to wait for | - | .product-title, #content |
| Scroll | Number | Scroll pixels before capture | 0 | 1000, 2000 |
| Screenshot | Boolean | Return base64 screenshot | false | true, false |
| Device Type | String | Device emulation | desktop | desktop, mobile |
| Premium Proxies | String | Premium level | level_1 | level_1, level_2 |
| Residential Proxies | Boolean | Use residential IPs | false | true, false |
| Mobile Proxies | Boolean | Use mobile IPs | false | true, false |
| Session Number | Number | Sticky session ID | 0 | 12345, 67890 |
| Country | String | Geo-targeting country | - | us, gb, de, fr, ca, au, jp, in |
| Bypass | String | Anti-bot bypass level | - | cloudflare_level_1, cloudflare_level_2, cloudflare_level_3, datadome, perimeterx, incapsula, generic_level_1 to generic_level_4 |

Full Documentation: Proxy API Aggregator

Important Points:

  • For POST, input data comes from upstream nodes
  • Optimize Request: Let ScrapeOps auto-tune for cost/success
  • Error Handling: Node includes suggestions for common issues like blocks
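
In the node these options are form fields, but it helps to see how they map onto the underlying request. A hedged sketch of the same call made directly; the query-parameter names (render_js, wait_for, country, residential) are assumptions based on the ScrapeOps Proxy API docs:

// Proxy API request with advanced options: JavaScript rendering,
// a CSS selector to wait for, geo-targeting, and residential IPs.
const params = new URLSearchParams({
  api_key: 'YOUR_API_KEY',        // placeholder
  url: 'https://example.com/products',
  render_js: 'true',              // "Render JavaScript" in the node
  wait_for: '.product-title',     // "Wait For" in the node
  country: 'us',                  // "Country" in the node
  residential: 'true',            // "Residential Proxies" in the node
});

const response = await fetch(`https://proxy.scrapeops.io/v1/?${params}`);
const html = await response.text(); // fully rendered HTML, ready to parse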

Parser API Features and Fields

Parse HTML into structured JSON for supported domains.

Parameters:

  • Domain: Amazon, eBay, Walmart, Indeed, Redfin
  • Page Type: Varies by domain (e.g., Product, Search for Amazon)
  • URL: Page URL (required)
  • HTML Content: Raw HTML to parse (required)

Full Documentation: Parser API

Important Points:

  • Outputs structured data like products, reviews, or jobs
  • No custom rules needed - uses ScrapeOps' pre-built parsers
  • Combine with Proxy API for fetching + parsing in one flow
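
To make the fetch-plus-parse combination concrete, here is a rough sketch of the two calls the nodes make on your behalf. The parser endpoint path and the url/html body fields are assumptions based on the ScrapeOps Parser API docs; YOUR_API_KEY and the product URL are placeholders:

// Fetch a page through the Proxy API, then parse it into structured JSON
// with the Parser API. Endpoint paths and field names are assumptions from
// the ScrapeOps docs, not verified signatures.
const apiKey = 'YOUR_API_KEY';
const pageUrl = 'https://www.amazon.com/dp/EXAMPLE'; // hypothetical product page

// Step 1: fetch the raw HTML via the Proxy API.
const proxyParams = new URLSearchParams({ api_key: apiKey, url: pageUrl });
const proxyRes = await fetch(`https://proxy.scrapeops.io/v1/?${proxyParams}`);
const html = await proxyRes.text();

// Step 2: send the HTML to the Parser API for structured extraction.
const parserRes = await fetch(
  `https://parser.scrapeops.io/v1/amazon?api_key=${apiKey}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: pageUrl, html }),
  },
);
console.log(await parserRes.json()); // e.g. product title, price, reviews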

Data API Features and Fields

Access pre-scraped datasets, focused on Amazon.

Parameters:

  • Domain: Amazon (more coming soon)
  • Amazon API Type: Product or Search
  • Input Type: ASIN/URL for Product; Query/URL for Search

Amazon API Options:

  • Country: e.g., US, UK
  • TLD: e.g., .com, .co.uk

Full Documentation: Data APIs

Important Points:

  • Returns JSON datasets without scraping yourself
  • Ideal for quick queries or large-scale data pulls

Output Handling: All APIs return JSON. Use n8n's Set or Function nodes to transform data.
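
For example, a Function/Code node placed directly after the ScrapeOps node can flatten a field out of the response for downstream nodes. A minimal sketch using the items array available in n8n Function/Code nodes, assuming the upstream node returned the httpbin.org/ip test response used in the verification step below:

// n8n Function/Code node: reshape each ScrapeOps item for downstream use.
// Assumes httpbin.org/ip-style JSON from the upstream ScrapeOps node.
return items.map((item) => ({
  json: {
    proxyIp: item.json.origin, // flatten the one field downstream nodes need
  },
}));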


Verifying Installation

To verify everything is working correctly:

  1. Create a new workflow and add a ScrapeOps node
  2. Configure it with a simple test:
    • API Type: Proxy API
    • URL: http://httpbin.org/ip
    • Method: GET
  3. Execute the node; you should see the proxy's IP address in the response
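
If everything is configured correctly, the response body should resemble httpbin's standard payload, e.g. {"origin": "203.0.113.42"}, where the origin IP is one of ScrapeOps' proxy IPs rather than your own address (the IP shown here is a documentation placeholder).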

Monitoring Usage

  • Check your usage in the ScrapeOps Dashboard
  • Set up usage alerts to avoid exceeding limits
  • Usage resets monthly

Common Setup Issues

Node Not Appearing

Problem: ScrapeOps node doesn't show up after installation

Solution:

  1. Ensure n8n was restarted after installation
  2. Check that community nodes are enabled
  3. Verify the installation with: npm list @scrapeops/n8n-nodes-scrapeops

Authentication Failures

Problem: "Invalid API Key" error

Solution:

  1. Verify API key is copied correctly (no extra spaces)
  2. Check if API key is active in ScrapeOps dashboard
  3. Ensure you're using the correct credential in the node

Connection Timeouts

Problem: Requests timing out

Solution:

  1. Check your firewall settings
  2. Verify n8n can make external HTTP requests
  3. Test with a simple URL first (like httpbin.org)

Next Steps

Now that you have ScrapeOps configured:

  1. Learn about the Proxy API for general web scraping
  2. Explore the Parser API for extracting structured data
  3. Discover the Data API for direct data access
  4. Check out practical examples to get started quickly

Ready to start scraping? Continue to learn about the Proxy API!