Proxy Aggregator Quick Start
ScrapeOps Proxy Aggregator is an easy-to-use proxy that gives you access to the best-performing proxies via a single endpoint. We take care of finding the best proxies, so you can focus on the data.
Authorisation - API Key
To use the ScrapeOps proxy, you first need an API key which you can get by signing up for a free account here.
Your API key must be included with every request using the `api_key` query parameter, otherwise the API will return a `403 Forbidden Access` status code.
To make requests you need to send the URL you want to scrape to the ScrapeOps Proxy endpoint `https://proxy.scrapeops.io/v1/`, adding your API key and URL to the request using the `api_key` and `url` query parameters:
The ScrapeOps Proxy also supports `POST` requests. POST request documentation is available here.
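As a rough sketch, a POST request through the proxy might look like the following. The endpoint and query parameters mirror the GET example; the target URL, payload, and exact body handling are placeholders/assumptions, so check the POST documentation for the details:

```python
import requests

PROXY_ENDPOINT = 'https://proxy.scrapeops.io/v1/'

def post_via_proxy(api_key: str, target_url: str, payload: dict) -> requests.Response:
    # The api_key and target url go in the query string, exactly as for GET;
    # the JSON payload is forwarded on to the target site (assumption).
    return requests.post(
        PROXY_ENDPOINT,
        params={'api_key': api_key, 'url': target_url},
        json=payload,
    )

# e.g. post_via_proxy('YOUR_API_KEY', 'https://httpbin.org/post', {'foo': 'bar'})
```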
The following is some example Python code to use with the Proxy API:

import requests

response = requests.get(
    'https://proxy.scrapeops.io/v1/',
    params={'api_key': 'YOUR_API_KEY', 'url': 'https://httpbin.org/ip'},
)
print('Body: ', response.content)
ScrapeOps will take care of the proxy selection and rotation for you so you just need to send us the URL you want to scrape.
After receiving a response from one of our proxy providers, the ScrapeOps Proxy API will respond with the raw HTML content of the target URL along with a response code:
The ScrapeOps Proxy API will return a `200` status code when it successfully got a response from the website that also passed response validation, or a `404` status code if the website responds with a `404` status code. Both of these status codes are considered successful requests.
The following is a list of possible status codes:

|Status Code|Billed?|Description|
|---|---|---|
|`200`|Yes|Successful response.|
|`404`|Yes|Page requested does not exist.|
|`400`|No|Bad request. Either your …|
| |No|You have consumed all your credits. Either turn off your scraper, or upgrade to a larger plan.|
|`403`|No|Either no …|
|`429`|No|Exceeded your concurrency limit.|
|`500`|No|After retrying for up to 2 minutes, the API was unable to receive a successful response.|
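Since only 200 and 404 count as successful requests, a scraper can branch on the response code. A minimal sketch, assuming 429 is the concurrency-limit code and 500 the retry-exhausted code:

```python
# 200 and 404 are billed, successful responses; everything else signals a
# problem with the request, the account, or the upstream site.
SUCCESS_CODES = {200, 404}
RETRYABLE_CODES = {429, 500}  # assumed: concurrency limit / retries exhausted

def classify(status_code: int) -> str:
    """Map a ScrapeOps Proxy status code to an action for your scraper."""
    if status_code in SUCCESS_CODES:
        return 'success'
    if status_code in RETRYABLE_CODES:
        return 'retry'
    return 'fail'  # e.g. bad request, out of credits, missing api_key
```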
To enable other API functionality when using the Proxy API endpoint, you need to add the appropriate query parameters to the ScrapeOps Proxy URL. For example, to enable JavaScript rendering, add `render_js=true` to the request:
The API will accept the following parameters:

|Parameter|Description|
|---|---|
|`residential`|Request using residential proxy pools. Example: …|
|`country`|Make requests from a specific country. Example: …|
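One way to attach these flags is a small helper that merges extra functionality parameters into the base query string. This is a sketch, not part of the official client; the lowercase-string conversion of booleans is an assumption based on the `render_js=true` example above:

```python
def build_proxy_params(api_key: str, target_url: str, **features) -> dict:
    # Base parameters required on every request.
    params = {'api_key': api_key, 'url': target_url}
    # Extra functionality flags (e.g. render_js=True) become query parameters;
    # booleans are sent as lowercase strings, matching render_js=true above.
    for name, value in features.items():
        params[name] = str(value).lower() if isinstance(value, bool) else value
    return params

# build_proxy_params('YOUR_API_KEY', 'https://example.com', render_js=True)
# -> {'api_key': 'YOUR_API_KEY', 'url': 'https://example.com', 'render_js': 'true'}
```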
The ScrapeOps Proxy keeps retrying a request for up to 2 minutes before returning a failed response to you.
To use the Proxy correctly, you should set the timeout on your request to at least 2 minutes, so you aren't charged for a successful request that you timed out on your end before the Proxy API responded.
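A sketch of a request with the recommended client-side timeout (120 seconds matches the proxy's 2-minute retry window; the target URL is a placeholder):

```python
import requests

def fetch_via_proxy(api_key: str, target_url: str, timeout: int = 120) -> requests.Response:
    # timeout >= 120s so the client doesn't give up before the proxy
    # finishes its 2-minute retry window.
    return requests.get(
        'https://proxy.scrapeops.io/v1/',
        params={'api_key': api_key, 'url': target_url},
        timeout=timeout,
    )

# e.g. fetch_via_proxy('YOUR_API_KEY', 'https://httpbin.org/ip')
```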
You can monitor your scraping performance using the Proxy Dashboard.
You can programmatically monitor your ScrapeOps Proxy API credit consumption and concurrency usage using the usage endpoint.