Golang Colly Code Examples
The following are code examples showing how to integrate the ScrapeOps Proxy Aggregator with your Go Colly scrapers.
Authorisation - API Key
To use the ScrapeOps proxy, you first need an API key, which you can get by signing up for a free account here.
Your API key must be included with every request using the api_key query parameter, otherwise the API will return a 403 Forbidden status code.
Basic Request
The following is some example Go Colly code that sends a URL to the ScrapeOps Proxy endpoint https://proxy.scrapeops.io/v1/:
package main

import (
	"bytes"
	"log"
	"net/url"

	"github.com/gocolly/colly"
)

func main() {
	// Instantiate default collector
	c := colly.NewCollector(colly.AllowURLRevisit())

	// Print the Response
	c.OnResponse(func(r *colly.Response) {
		log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1))
	})

	// On Error Print Error
	c.OnError(func(_ *colly.Response, err error) {
		log.Println("Something went wrong:", err)
	})

	// Create Proxy API URL
	u, err := url.Parse("https://proxy.scrapeops.io/v1/")
	if err != nil {
		log.Fatal(err)
	}

	// Add Query Parameters
	q := u.Query()
	q.Set("api_key", "YOUR_API_KEY")
	q.Set("url", "https://httpbin.org/ip")
	u.RawQuery = q.Encode()

	// Request Page (visit the full proxy URL, not just the query string)
	c.Visit(u.String())
}
ScrapeOps will take care of the proxy selection and rotation for you so you just need to send us the URL you want to scrape.
Response Format
After receiving a response from one of our proxy providers, the ScrapeOps Proxy API Aggregator will respond with the raw HTML content of the target URL along with a response code:
<html>
<head>
...
</head>
<body>
...
</body>
</html>
The ScrapeOps Proxy API Aggregator will return a 200 status code when it successfully got a response from the website that also passed response validation, or a 404 status code if the website itself responds with a 404. Both of these status codes are considered successful requests.
Here is the full list of status codes the Proxy API returns.
Advanced Functionality
To enable other API functionality when using the Proxy API endpoint you need to add the appropriate query parameters to the ScrapeOps Proxy URL.
For example, if you want to enable JavaScript rendering with a request, then add render_js=true to the request:
package main

import (
	"bytes"
	"log"
	"net/url"

	"github.com/gocolly/colly"
)

func main() {
	// Instantiate default collector
	c := colly.NewCollector(colly.AllowURLRevisit())

	// Print the Response
	c.OnResponse(func(r *colly.Response) {
		log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1))
	})

	// On Error Print Error
	c.OnError(func(_ *colly.Response, err error) {
		log.Println("Something went wrong:", err)
	})

	// Create Proxy API URL
	u, err := url.Parse("https://proxy.scrapeops.io/v1/")
	if err != nil {
		log.Fatal(err)
	}

	// Add Query Parameters
	q := u.Query()
	q.Set("api_key", "YOUR_API_KEY")
	q.Set("url", "https://httpbin.org/ip")
	q.Set("render_js", "true")
	u.RawQuery = q.Encode()

	// Request Page (visit the full proxy URL, not just the query string)
	c.Visit(u.String())
}
Check out this guide to see the full list of advanced functionality available.
Timeout
The ScrapeOps proxy keeps retrying a request for up to 2 minutes before returning a failed response to you.
To use the proxy correctly, you should set the timeout on your request to at least 2 minutes, so that you aren't charged for a successful request that timed out on your end before the Proxy API responded.