Golang Colly: Use Random Fake User-Agents When Scraping
To use fake user-agents with Go Colly, you just need to set a User-Agent header every time a new request is sent, using the OnRequest() callback.
package main

import (
    "bytes"
    "log"

    "github.com/gocolly/colly"
)

func main() {
    // Instantiate default collector
    c := colly.NewCollector(colly.AllowURLRevisit())

    // Set Fake User Agent
    c.OnRequest(func(r *colly.Request) {
        r.Headers.Set("User-Agent", "Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148")
    })

    // Print the Response
    c.OnResponse(func(r *colly.Response) {
        log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1))
    })

    // Fetch httpbin.org/headers five times
    for i := 0; i < 5; i++ {
        c.Visit("http://httpbin.org/headers")
    }
}
One of the most common reasons for getting blocked whilst web scraping is using bad user-agents.
However, integrating random fake user-agents into your Go Colly web scrapers is very easy.
So in this guide, we will go through:
- What Are Fake User-Agents?
- How To Set A Fake User Agent In Go Colly
- How To Rotate Through Random User-Agents
- How To Manage Thousands of Fake User-Agents
- Why Use Fake Browser Headers
- ScrapeOps Fake Browser Headers API
First, let's quickly go over some of the very basics.
What Are Fake User-Agents?
User Agents are strings that let the website you are scraping identify the application, operating system (OSX/Windows/Linux), browser (Chrome/Firefox/Internet Explorer), etc. of the user sending a request to their website. They are sent to the server as part of the request headers.
Here is an example User agent sent when you visit a website with a Chrome browser:
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36'
When scraping a website, you also need to set a user-agent on every request, otherwise the website may block your requests because it can tell you aren't a real user.
For example, when you make a request with Go Colly it sends the following user-agent with the request.
"User-Agent": "colly - https://github.com/gocolly/colly",
This user-agent clearly identifies your requests as being made by the Go Colly library, so the website can easily block you from scraping the site.
That is why we need to manage the user-agents Go Colly sends with our requests.
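You can quickly check what your scraper is actually sending by visiting httpbin.org/headers, which echoes back the request headers it receives. Here is a minimal sketch that prints those headers, including Colly's default User-Agent (no fake user-agent is set yet):

package main

import (
    "log"

    "github.com/gocolly/colly"
)

func main() {
    // Default collector, no User-Agent set, so Colly's default is sent
    c := colly.NewCollector()

    // httpbin.org/headers echoes back the headers it received as JSON
    c.OnResponse(func(r *colly.Response) {
        log.Printf("%s", r.Body)
    })

    c.Visit("http://httpbin.org/headers")
}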
How To Set A Fake User-Agent In Go Colly
Setting Go Colly to use a fake user-agent is very easy. We just need to set a User-Agent header every time a new request is sent, using the OnRequest() callback.
package main

import (
    "bytes"
    "log"

    "github.com/gocolly/colly"
)

func main() {
    // Instantiate default collector
    c := colly.NewCollector(colly.AllowURLRevisit())

    // Set Fake User Agent
    c.OnRequest(func(r *colly.Request) {
        r.Headers.Set("User-Agent", "Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148")
    })

    // Print the Response
    c.OnResponse(func(r *colly.Response) {
        log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1))
    })

    // Fetch httpbin.org/headers five times
    for i := 0; i < 5; i++ {
        c.Visit("http://httpbin.org/headers")
    }
}
From here our scraper will use this user-agent for every request.
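If you only ever need a single static user-agent, Colly also provides a UserAgent() collector option, so you can set it once when creating the collector instead of using the OnRequest() callback. As a sketch, the collector setup in the example above could be replaced with:

// Set a static fake user-agent at collector creation time
c := colly.NewCollector(
    colly.AllowURLRevisit(),
    colly.UserAgent("Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148"),
)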
However, if you are scraping at scale then using the same user-agent for every request isn't best practice as it makes it easier for the website to detect you as a scraper.
To solve this problem we will need to configure our Go Colly scraper to use a random user-agent with every request.
How To Rotate Through Random User-Agents
With Go Colly, rotating through user-agents is also pretty straightforward. We just need a slice of user-agents and have our scraper pick a random one for every request.
package main

import (
    "bytes"
    "log"
    "math/rand"

    "github.com/gocolly/colly"
)

// RandomString picks a random user-agent from the list
func RandomString(userAgentList []string) string {
    randomIndex := rand.Intn(len(userAgentList))
    return userAgentList[randomIndex]
}

func main() {
    // Instantiate default collector
    c := colly.NewCollector(colly.AllowURLRevisit())

    userAgentList := []string{
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36",
        "Mozilla/5.0 (iPhone; CPU iPhone OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Mobile/15E148 Safari/604.1",
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36 Edg/87.0.664.75",
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18363",
    }

    // Set Random Fake User Agent
    c.OnRequest(func(r *colly.Request) {
        r.Headers.Set("User-Agent", RandomString(userAgentList))
    })

    // Print the Response
    c.OnResponse(func(r *colly.Response) {
        log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1))
    })

    // Fetch httpbin.org/headers five times
    for i := 0; i < 5; i++ {
        c.Visit("http://httpbin.org/headers")
    }
}
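One thing to note: on Go versions before 1.20, math/rand's global source is seeded with a fixed value, so every run of the scraper would pick the same sequence of "random" user-agents. If you are on an older Go version, seed it once at startup (Go 1.20+ seeds the global source automatically):

// Only needed on Go < 1.20 (also add "time" to the imports)
rand.Seed(time.Now().UnixNano())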
This works, but it has drawbacks: we would need to build and keep an up-to-date list of user-agents ourselves.
An alternative approach is to use the RandomUserAgent Extension as it generates the list of user-agents for you.
To use the RandomUserAgent Extension we just need to import it and add extensions.RandomUserAgent(c)
to our code.
package main

import (
    "bytes"
    "log"

    "github.com/gocolly/colly"
    "github.com/gocolly/colly/extensions"
)

func main() {
    // Instantiate default collector
    c := colly.NewCollector(colly.AllowURLRevisit())

    // Add Random User Agents
    extensions.RandomUserAgent(c)

    // Print the Response
    c.OnResponse(func(r *colly.Response) {
        log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1))
    })

    // Fetch httpbin.org/headers five times
    for i := 0; i < 5; i++ {
        c.Visit("http://httpbin.org/headers")
    }
}
The RandomUserAgent Extension works, however, if you look at its source code you will see that the pool of user-agents it can generate from is pretty small.
How To Manage Thousands of Fake User-Agents
A better approach to generating a large list of user-agents would be to use a free user-agent API like ScrapeOps Fake User-Agent API to download an up-to-date user-agent list when your scraper starts up and then pick a random user-agent for each request.
To use the ScrapeOps Fake User-Agents API you just need to send a request to the API endpoint to retrieve a list of user-agents.
http://headers.scrapeops.io/v1/user-agents?api_key=YOUR_API_KEY
To use the ScrapeOps Fake User-Agent API, you first need an API key which you can get by signing up for a free account here.
Example response from the API:
{
  "result": [
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.5 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.53 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Windows; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.114 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.1.2 Safari/603.3.8",
    "Mozilla/5.0 (Windows NT 10.0; Windows; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.114 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.53 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Safari/605.1.15",
    "Mozilla/5.0 (Windows NT 10.0; Windows; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.114 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.53 Safari/537.36"
  ]
}
To integrate the Fake User-Agent API you should configure your scraper to retrieve a batch of the most up-to-date user-agents when the scraper starts and then configure your scraper to pick a random user-agent from this list for each request.
Here is an example Go Colly scraper integration:
package main

import (
    "bytes"
    "encoding/json"
    "log"
    "math/rand"
    "net/http"
    "time"

    "github.com/gocolly/colly"
)

// FakeUserAgentResponse maps the JSON response from the ScrapeOps API
type FakeUserAgentResponse struct {
    Result []string `json:"result"`
}

// RandomString picks a random user-agent from the list
func RandomString(userAgentList []string) string {
    randomIndex := rand.Intn(len(userAgentList))
    return userAgentList[randomIndex]
}

// GetUserAgentList downloads a list of user-agents from the ScrapeOps Fake User-Agent API
func GetUserAgentList() []string {
    // ScrapeOps User-Agent API Endpoint
    scrapeopsAPIKey := "YOUR_API_KEY"
    scrapeopsAPIEndpoint := "http://headers.scrapeops.io/v1/user-agents?api_key=" + scrapeopsAPIKey

    req, _ := http.NewRequest("GET", scrapeopsAPIEndpoint, nil)
    client := &http.Client{
        Timeout: 10 * time.Second,
    }

    // Make Request
    resp, err := client.Do(req)
    if err == nil && resp.StatusCode == 200 {
        defer resp.Body.Close()

        // Convert Body To JSON
        var fakeUserAgentResponse FakeUserAgentResponse
        json.NewDecoder(resp.Body).Decode(&fakeUserAgentResponse)
        return fakeUserAgentResponse.Result
    }

    var emptySlice []string
    return emptySlice
}

func main() {
    // Instantiate default collector
    c := colly.NewCollector(colly.AllowURLRevisit())

    // Get Fake User Agents From API
    userAgentList := GetUserAgentList()

    // Set Random Fake User Agent
    c.OnRequest(func(r *colly.Request) {
        r.Headers.Set("User-Agent", RandomString(userAgentList))
    })

    // Print the Response
    c.OnResponse(func(r *colly.Response) {
        log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1))
    })

    // Fetch httpbin.org/headers five times
    for i := 0; i < 5; i++ {
        c.Visit("http://httpbin.org/headers")
    }
}
Here our Go Colly scraper will download a list of user-agents from ScrapeOps when it starts up and will use a random user-agent from that list for each request.
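Note that GetUserAgentList() returns an empty slice if the API request fails, and rand.Intn(0) will panic, so in production you may want to guard the random pick with a fallback user-agent. A minimal sketch (the fallback string is just an example value):

// RandomString picks a random user-agent, falling back to a default
// value if the list could not be downloaded
func RandomString(userAgentList []string) string {
    if len(userAgentList) == 0 {
        // Example fallback user-agent used when the API call failed
        return "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.114 Safari/537.36"
    }
    return userAgentList[rand.Intn(len(userAgentList))]
}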
Why Use Fake Browser Headers
For simple websites, setting an up-to-date user-agent should allow you to scrape pretty reliably.
However, a lot of popular websites are increasingly using sophisticated anti-bot technologies to try and prevent developers from scraping data from their websites.
These anti-bot solutions not only look at your request's user-agent when analysing the request, but also at the other headers a real browser normally sends.
By using a full set of browser headers you make your requests look more like real user requests, and as a result they are harder to detect.
Here are example headers when using a Chrome browser on a MacOS machine:
sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="99", "Google Chrome";v="99"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "macOS"
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.83 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en-US;q=0.9,en;q=0.8
As we can see, real browsers don't just send a User-Agent string; they also send a number of other headers that are used to identify the browser and customize the request.
So to improve the reliability of our scrapers we should also include these headers when making requests.
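For example, you could set a full set of fake browser headers (like the Chrome on MacOS headers above) directly in the OnRequest() callback. Here is a minimal sketch (Accept-Encoding is left out so Go's HTTP transport keeps handling gzip decompression automatically):

// Send a full set of fake Chrome-on-MacOS browser headers with every request
c.OnRequest(func(r *colly.Request) {
    r.Headers.Set("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.83 Safari/537.36")
    r.Headers.Set("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9")
    r.Headers.Set("Accept-Language", "en-GB,en-US;q=0.9,en;q=0.8")
    r.Headers.Set("Upgrade-Insecure-Requests", "1")
    r.Headers.Set("Sec-Fetch-Site", "none")
    r.Headers.Set("Sec-Fetch-Mode", "navigate")
    r.Headers.Set("Sec-Fetch-User", "?1")
    r.Headers.Set("Sec-Fetch-Dest", "document")
    r.Headers.Set("Sec-Ch-Ua", `" Not A;Brand";v="99", "Chromium";v="99", "Google Chrome";v="99"`)
    r.Headers.Set("Sec-Ch-Ua-Mobile", "?0")
    r.Headers.Set("Sec-Ch-Ua-Platform", `"macOS"`)
})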
You could build a list of fake browser headers yourself, or you could use the ScrapeOps Fake Browser Headers API to get an up-to-date list every time your scraper starts up.
ScrapeOps Fake Browser Headers API
The ScrapeOps Fake Browser Headers API is a free API that returns a list of optimized fake browser headers that you can use in your web scrapers to avoid blocks/bans and improve the reliability of your scrapers.
API Endpoint:
http://headers.scrapeops.io/v1/browser-headers?api_key=YOUR_API_KEY
Response:
{
  "result": [
    {
      "upgrade-insecure-requests": "1",
      "user-agent": "Mozilla/5.0 (Windows NT 10.0; Windows; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.114 Safari/537.36",
      "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
      "sec-ch-ua": "\".Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"103\", \"Chromium\";v=\"103\"",
      "sec-ch-ua-mobile": "?0",
      "sec-ch-ua-platform": "\"Windows\"",
      "sec-fetch-site": "none",
      "sec-fetch-mod": "",
      "sec-fetch-user": "?1",
      "accept-encoding": "gzip, deflate, br",
      "accept-language": "bg-BG,bg;q=0.9,en-US;q=0.8,en;q=0.7"
    },
    {
      "upgrade-insecure-requests": "1",
      "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.53 Safari/537.36",
      "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
      "sec-ch-ua": "\".Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"103\", \"Chromium\";v=\"103\"",
      "sec-ch-ua-mobile": "?0",
      "sec-ch-ua-platform": "\"Linux\"",
      "sec-fetch-site": "none",
      "sec-fetch-mod": "",
      "sec-fetch-user": "?1",
      "accept-encoding": "gzip, deflate, br",
      "accept-language": "fr-CH,fr;q=0.9,en-US;q=0.8,en;q=0.7"
    }
  ]
}
To use the ScrapeOps Fake Browser Headers API, you first need an API key which you can get by signing up for a free account here.
To integrate the Fake Browser Headers API you should configure your scraper to retrieve a batch of the most up-to-date headers when the scraper starts and then configure your scraper to pick a random header from this list for each request.
Here is an example Go Colly scraper integration:
package main

import (
    "bytes"
    "encoding/json"
    "log"
    "math/rand"
    "net/http"
    "time"

    "github.com/gocolly/colly"
)

// FakeBrowserHeadersResponse maps the JSON response from the ScrapeOps API
type FakeBrowserHeadersResponse struct {
    Result []map[string]string `json:"result"`
}

// RandomHeader picks a random header set from the list
func RandomHeader(headersList []map[string]string) map[string]string {
    randomIndex := rand.Intn(len(headersList))
    return headersList[randomIndex]
}

// GetHeadersList downloads a list of browser headers from the ScrapeOps Fake Browser Headers API
func GetHeadersList() []map[string]string {
    // ScrapeOps Browser Headers API Endpoint
    scrapeopsAPIKey := "YOUR_API_KEY"
    scrapeopsAPIEndpoint := "http://headers.scrapeops.io/v1/browser-headers?api_key=" + scrapeopsAPIKey

    req, _ := http.NewRequest("GET", scrapeopsAPIEndpoint, nil)
    client := &http.Client{
        Timeout: 10 * time.Second,
    }

    // Make Request
    resp, err := client.Do(req)
    if err == nil && resp.StatusCode == 200 {
        defer resp.Body.Close()

        // Convert Body To JSON
        var fakeBrowserHeadersResponse FakeBrowserHeadersResponse
        json.NewDecoder(resp.Body).Decode(&fakeBrowserHeadersResponse)
        return fakeBrowserHeadersResponse.Result
    }

    var emptySlice []map[string]string
    return emptySlice
}

func main() {
    // Instantiate default collector
    c := colly.NewCollector(colly.AllowURLRevisit())

    // Get Fake Browser Headers From API
    headersList := GetHeadersList()

    // Set Random Fake Browser Headers
    c.OnRequest(func(r *colly.Request) {
        randomHeader := RandomHeader(headersList)
        for key, value := range randomHeader {
            r.Headers.Set(key, value)
        }
    })

    // Print the Response
    c.OnResponse(func(r *colly.Response) {
        log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1))
    })

    // Fetch httpbin.org/headers five times
    for i := 0; i < 5; i++ {
        c.Visit("http://httpbin.org/headers")
    }
}
For more information, check out the Fake Browser Headers API documentation.
More Web Scraping Tutorials
So that's why you need to manage user-agents when scraping, and how you can do it with Go Colly.
If you would like to learn more about Web Scraping, then be sure to check out The Web Scraping Playbook.
Or check out one of our more in-depth guides: