Python hRequests: Web Scraping Guide

In this guide for The Python Web Scraping Playbook, we will look at how to set up your Python Hrequests scrapers to avoid getting blocked, retry failed requests and scale up with concurrency.

Python Hrequests is a popular HTTP client library among Python developers, so in this article we will run through all the best practices you need to know.

For this guide, we're going to focus on how to set up the HTTP client element of your Python Hrequests based scrapers, not how to parse the data from the HTML responses.

To keep things simple, we will be using BeautifulSoup to parse data from QuotesToScrape.
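
As a minimal sketch of what that combination looks like, the snippet below fetches the QuotesToScrape homepage with hrequests and pulls out each quote with BeautifulSoup (the quote, text and author CSS classes are taken from the site's markup):

import hrequests
from bs4 import BeautifulSoup

response = hrequests.get('http://quotes.toscrape.com/')
soup = BeautifulSoup(response.text, 'html.parser')

## Each quote on the page lives inside a <div class="quote"> element
for quote in soup.find_all('div', class_='quote'):
    text = quote.find('span', class_='text').get_text()
    author = quote.find('small', class_='author').get_text()
    print(text, '-', author)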

If you want to learn more about how to use BeautifulSoup or web scraping with Python in general then check out our BeautifulSoup Guide or our Python Beginners Web Scraping Guide.

Let's begin with the basics and work ourselves up to the more complex topics...

Need help scraping the web?

Then check out ScrapeOps, the complete toolkit for web scraping.


Making GET Requests

Making GET requests with Python Hrequests is very simple.

We just need to request the URL using hrequests.get(url):


import hrequests

response = hrequests.get('http://quotes.toscrape.com/')

print(response.text)

The following are the most commonly used attributes of the Response class:

  1. status_code: The HTTP status code of the response.
  2. text: The response content as a Unicode string.
  3. content: The response content in bytes.
  4. headers: A dictionary-like object containing the response headers.
  5. url: The URL of the response.
  6. encoding: The encoding of the response content.
  7. cookies: A RequestsCookieJar object containing the cookies sent by the server.
  8. history: A list of previous responses if there were redirects.
  9. ok: A boolean indicating whether the response was successful (status code between 200 and 399).
  10. reason: The reason phrase returned by the server (e.g., "OK", "Not Found").
  11. elapsed: The time elapsed between sending the request and receiving the response.
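
As a quick example, you can read several of these attributes straight off the response object after making a request (the values shown in the comments are illustrative, not guaranteed output):

import hrequests

response = hrequests.get('http://quotes.toscrape.com/')

print(response.status_code)                   # e.g. 200
print(response.ok)                            # True for status codes between 200 and 399
print(response.url)                           # final URL after any redirects
print(response.headers.get('Content-Type'))   # e.g. 'text/html; charset=utf-8'
print(response.encoding)                      # encoding of the response content
print(response.elapsed)                       # time between sending the request and receiving the response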

Making POST Requests

Making POST requests with Python Hrequests is also very simple.

To send JSON data in a POST request, we just need to call hrequests.post() with the URL and pass the data using the json parameter:


import hrequests

url = 'http://quotes.toscrape.com/'
data = {'key': 'value'}

# Send POST request with JSON data using the json parameter
response = hrequests.post(url, json=data)

# Print the response
print(response.json())

To send form data in a POST request, we just need to call hrequests.post() with the URL and pass the data using the data parameter:


import hrequests

url = 'https://jsonplaceholder.typicode.com/todos/1'
data = {'key': 'value'}

# Send POST request with form data using the data parameter
response = hrequests.post(url, data=data)

# Print the response
print(response.json())

For more details on how to send POST requests with Python Hrequests, check out our Python Hrequests Guide: How to Send POST Requests.


Using Fake User Agents With Python hRequests

User Agents are strings that let the website you are scraping identify the application, operating system (OSX/Windows/Linux), browser (Chrome/Firefox/Internet Explorer), etc. of the user sending a request to their website. They are sent to the server as part of the request headers.

Here is an example User agent sent when you visit a website with a Chrome browser:


'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36'

When scraping a website, you also need to set a user-agent on every request, otherwise the website may block your requests because it knows you aren't a real user.

In the case of most Python HTTP clients like Python Hrequests, when you send a request the default settings clearly identify that the request is being made with Python Hrequests in the user-agent string.


'User-Agent': 'python-hrequests/2.26.0',

This user-agent will clearly identify your requests are being made by the Python Hrequests library, so the website can easily block you from scraping the site.

That is why we need to manage the user-agents we use with Python Hrequests when we send requests.
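
One simple way to check what user-agent (and other headers) your scraper is actually sending is to request a header-echo service such as httpbin.org/headers, which returns the headers it received as JSON. This is just a quick debugging sketch, not part of the scraper itself:

import hrequests

## httpbin.org/headers echoes back the request headers it received as JSON
response = hrequests.get('https://httpbin.org/headers')
print(response.json()['headers'].get('User-Agent'))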

How To Set A Fake User-Agent In Python hRequests

Setting Python Hrequests to use a fake user-agent is very easy. We just need to define it in a headers dictionary and add it to the request using the headers parameter.

import hrequests

headers={"User-Agent": "Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148"}

r = hrequests.get('http://quotes.toscrape.com/', headers=headers)
print(r.text)

Link to the official documentation.

How To Rotate User-Agents

In the previous example, we only set a single user-agent. However, when scraping at scale you need to rotate your user-agents to make your requests harder for the website you are scraping to detect.

Luckily, rotating through user-agents is also pretty straightforward when using Python Hrequests. We just need to keep a list of user-agents in our scraper and use a random one with every request.

import hrequests
import random

user_agent_list = [
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36',
'Mozilla/5.0 (iPhone; CPU iPhone OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Mobile/15E148 Safari/604.1',
'Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 6.1)',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36 Edg/87.0.664.75',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18363',
]

headers={"User-Agent": user_agent_list[random.randint(0, len(user_agent_list)-1)]}

r = hrequests.get('http://quotes.toscrape.com/', headers=headers)
print(r.text)

This works but it has drawbacks as we would need to build & keep an up-to-date list of user-agents ourselves.

Another approach would be to use a user-agent database like ScrapeOps Free Fake User-Agent API that returns a list of up-to-date user-agents you can use in your scrapers.

Here is an example Python Hrequests scraper integration:


import hrequests
from random import randint

SCRAPEOPS_API_KEY = 'YOUR_API_KEY'

def get_user_agent_list():
    response = hrequests.get('http://headers.scrapeops.io/v1/user-agents?api_key=' + SCRAPEOPS_API_KEY)
    json_response = response.json()
    return json_response.get('result', [])

def get_random_user_agent(user_agent_list):
    random_index = randint(0, len(user_agent_list) - 1)
    return user_agent_list[random_index]

## Retrieve User-Agent List From ScrapeOps
user_agent_list = get_user_agent_list()

url_list = [
    'http://quotes.toscrape.com/',
    'http://quotes.toscrape.com/',
    'http://quotes.toscrape.com/',
]

for url in url_list:

    ## Add Random User-Agent To Headers
    headers = {'User-Agent': get_random_user_agent(user_agent_list)}

    ## Make Requests
    r = hrequests.get(url=url, headers=headers)
    print(r.text)


Using Proxies With Python hRequests

Using proxies with the Python Hrequests library allows you to spread your requests over multiple IP addresses making it harder for websites to detect & block your web scrapers.

Using a proxy with Python Hrequests is very straightforward. We simply need to create a proxies dictionary and pass it into the proxies parameter of our Python Hrequests request.


import hrequests

proxies = {
    'http': 'http://proxy.example.com:8080',
    'https': 'http://proxy.example.com:8081',
}

response = hrequests.get('http://quotes.toscrape.com/', proxies=proxies)

This method will work for all request methods Python Hrequests supports: GET, POST, PUT, DELETE, PATCH, HEAD.

However, the above example only uses a single proxy to make the requests. To avoid having your scrapers blocked you need to use large pools of proxies and rotate your requests through different proxies.

There are 3 common ways to integrate and rotate proxies in your scrapers:

Proxy Integration #1: Rotating Through Proxy IP List

Here a proxy provider will normally provide you with a list of proxy IP addresses that you will need to configure your scraper to rotate through and select a new IP address for every request.

The proxy list you receive will look something like this:


'http://Username:Password@85.237.57.198:20000',
'http://Username:Password@85.237.57.198:21000',
'http://Username:Password@85.237.57.198:22000',
'http://Username:Password@85.237.57.198:23000',

To integrate them into our scrapers we need to configure our code to pick a random proxy from this list every time we make a request.

In our Python Hrequests scraper we could do it like this:

import hrequests
from random import randint

proxy_list = [
    'http://Username:Password@85.237.57.198:20000',
    'http://Username:Password@85.237.57.198:21000',
    'http://Username:Password@85.237.57.198:22000',
    'http://Username:Password@85.237.57.198:23000',
]

proxy_index = randint(0, len(proxy_list) - 1)

proxies = {
    "http": proxy_list[proxy_index],
    "https": proxy_list[proxy_index],
}

r = hrequests.get(url='http://quotes.toscrape.com/', proxies=proxies)
print(r.text)

This is a simplistic example, as when scraping at scale we would also need to build a mechanism to monitor the performance of each individual IP address and remove it from the proxy rotation if it got banned or blocked.
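
A minimal sketch of such a mechanism is shown below. It keeps a failure count per proxy and drops any proxy from the rotation once it has failed too often. The proxy URLs are placeholders, and the ban status codes and MAX_FAILURES threshold are illustrative assumptions rather than provider-specific values:

import hrequests
import random

## Placeholder proxy URLs - swap in your own provider's credentials
proxy_pool = [
    'http://Username:Password@85.237.57.198:20000',
    'http://Username:Password@85.237.57.198:21000',
    'http://Username:Password@85.237.57.198:22000',
]

failure_counts = {proxy: 0 for proxy in proxy_pool}
MAX_FAILURES = 3  ## drop a proxy after this many failed requests (arbitrary threshold)

def get_active_proxy():
    active = [p for p in proxy_pool if failure_counts[p] < MAX_FAILURES]
    if not active:
        raise RuntimeError('No healthy proxies left in the pool')
    return random.choice(active)

def fetch(url):
    proxy = get_active_proxy()
    proxies = {'http': proxy, 'https': proxy}
    try:
        response = hrequests.get(url, proxies=proxies)
        if response.status_code in [403, 429]:
            ## Treat ban/rate-limit status codes as a failure for this proxy
            failure_counts[proxy] += 1
            return None
        return response
    except Exception:
        failure_counts[proxy] += 1
        return None

response = fetch('http://quotes.toscrape.com/')
if response is not None:
    print(response.status_code)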

Proxy Integration #2: Using Proxy Gateway

Increasingly, a lot of proxy providers aren't selling lists of proxy IP addresses anymore. Instead, they give you access to their proxy pools via a proxy gateway.

Here, you only have to integrate a single proxy into your Python Hrequests scraper and the proxy provider will manage the proxy rotation, selection, cleaning, etc. on their end for you.

This is the most common way to use residential and mobile proxies, and is becoming increasingly common when using datacenter proxies too.

Here is an example of how to integrate BrightData's residential proxy gateway into our Python Hrequests scraper:


import hrequests

proxies = {
    'http': 'http://zproxy.lum-superproxy.io:22225',
    'https': 'http://zproxy.lum-superproxy.io:22225',
}

url = 'http://quotes.toscrape.com/'

response = hrequests.get(url, proxies=proxies, auth=('USERNAME', 'PASSWORD'))


As you can see, it is much easier to integrate than using a proxy list as you don't have to worry about implementing all the proxy rotation logic.

Proxy Integration #3: Using Proxy API Endpoint

Recently, a lot of proxy providers have started offering smart proxy APIs that manage your proxy infrastructure by rotating proxies and headers for you, so you can focus on extracting the data you need.

Here you typically send the URL you want to scrape to their API endpoint, and they will return the HTML response.

Although every proxy API provider has a slightly different API integration, they are all very similar and are very easy to integrate with.

Here is an example of how to integrate with the ScrapeOps Proxy Manager:


import hrequests

payload = {'api_key': 'APIKEY', 'url': 'http://quotes.toscrape.com/'}
r = hrequests.get('https://proxy.scrapeops.io/v1/', params=payload)
print(r.text)

Here you simply send the URL you want to scrape to the ScrapeOps API endpoint in the URL query parameter, along with your API key in the api_key query parameter, and ScrapeOps will deal with finding the best proxy for that domain and return the HTML response to you.

You can get your own free API key with 1,000 free requests by signing up here.


Retrying Failed Requests With Python hRequests

When web scraping, some requests will inevitably fail either from connection issues or because the website blocks the requests.

To combat this, we need to configure our Python Hrequests scrapers to retry failed requests so they will be more reliable and extract all the target data.

One of the best methods of retrying failed requests with Python Hrequests is to build your own retry logic around your request functions.


import hrequests

NUM_RETRIES = 3
response = None

for _ in range(NUM_RETRIES):
    try:
        response = hrequests.get('http://quotes.toscrape.com/')
        if response.status_code in [200, 404]:
            ## Escape for loop if returns a successful response
            break
    except hrequests.exceptions.ClientException:
        pass

## Do something with successful response
if response is not None and response.status_code == 200:
    pass

The advantage of this approach is that you have a lot of control over what is a failed response.

Above we only look at the response code to see if we should retry the request. However, we could adapt this so that we also check the response body to make sure the HTML is valid.

Below we will add an additional check to make sure the HTML response doesn't contain a ban page.


import hrequests

NUM_RETRIES = 3
response = None

for _ in range(NUM_RETRIES):
    try:
        response = hrequests.get('http://quotes.toscrape.com/')
        if response.status_code in [200, 404]:
            if '<title>Robot or human?</title>' not in response.text:
                ## Escape for loop if we get a valid response that isn't a ban page
                break
    except hrequests.exceptions.ClientException:
        pass

## Do something with successful response
if response is not None and response.status_code == 200:
    pass
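
If you want to reuse this retry logic across your scraper, you can also wrap it in a small helper that waits a little longer after each failed attempt. The helper below is just a sketch: the exponential backoff delays and the broad except Exception are assumptions, not part of the original example.

import time
import hrequests

def get_with_retries(url, num_retries=3, backoff=1):
    """Retry a GET request, sleeping a little longer after each failed attempt."""
    for attempt in range(num_retries):
        try:
            response = hrequests.get(url)
            if response.status_code in [200, 404]:
                return response
        except Exception:
            pass  ## treat connection errors as a failed attempt
        time.sleep(backoff * (2 ** attempt))  ## wait 1s, 2s, 4s, ... between attempts
    return None

response = get_with_retries('http://quotes.toscrape.com/')
if response is not None and response.status_code == 200:
    ## Do something with successful response
    pass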


Scaling Your Python Hrequests Scrapers With Concurrent Threads

Another common bottleneck you will encounter when building web scrapers with Python Hrequests is that by default you can only send requests serially. So your scraper can be quite slow if the scraping job is large.

However, you can increase the speed of your scrapers by making concurrent requests.

The more concurrent threads you have, the more requests you can have active in parallel, and the faster you can scrape.

One of the best approaches to making concurrent requests with Python Hrequests is to use the ThreadPoolExecutor from Python's concurrent.futures package.

Here is an example:


import hrequests
from bs4 import BeautifulSoup
import concurrent.futures

NUM_THREADS = 5

## Example list of urls to scrape
list_of_urls = [
    'http://quotes.toscrape.com/page/1/',
    'http://quotes.toscrape.com/page/2/',
    'http://quotes.toscrape.com/page/3/',
    'http://quotes.toscrape.com/page/4/',
    'http://quotes.toscrape.com/page/5/',
]

output_data_list = []

def scrape_page(url):
    try:
        response = hrequests.get(url)
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, "html.parser")
            title = soup.find('h1').text

            ## add scraped data to "output_data_list" list
            output_data_list.append({
                'title': title,
            })

    except Exception as e:
        print('Error', e)


with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_THREADS) as executor:
    executor.map(scrape_page, list_of_urls)

print(output_data_list)

Here we:

  1. Define a list_of_urls we want to scrape.
  2. Create a function scrape_page(url) that takes a URL as input and appends the scraped title to output_data_list.
  3. Use ThreadPoolExecutor to create a pool of workers that pull from the list_of_urls and pass them into scrape_page(url).
  4. Run the script, which creates up to 5 workers (max_workers=NUM_THREADS) that concurrently pull URLs from the list_of_urls and pass them into scrape_page(url).

Using this approach we can significantly increase the speed at which we can make requests with Python Hrequests.
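
If you prefer not to share output_data_list between threads, a variant of the same script (a sketch, not part of the original example) can return the scraped data from scrape_page() and collect the results that executor.map() yields:

import concurrent.futures

import hrequests
from bs4 import BeautifulSoup

NUM_THREADS = 5

list_of_urls = [f'http://quotes.toscrape.com/page/{page}/' for page in range(1, 6)]

def scrape_page(url):
    response = hrequests.get(url)
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, "html.parser")
        return {'title': soup.find('h1').text}
    return None

with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_THREADS) as executor:
    ## executor.map returns results in the same order as list_of_urls
    output_data_list = [result for result in executor.map(scrape_page, list_of_urls) if result]

print(output_data_list)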


Rendering JS On Client-Side Rendered Pages

As Python Hrequests is an HTTP client, it only retrieves the HTML/JSON response the website's server initially returns. It can't render any Javascript on client-side rendered pages.

This can prevent your scraper from being able to see and extract all the data you need from the web page.

As a consequence, you will often need to use a headless browser if you want to scrape a Single Page Application built with frameworks such as React.js, Angular.js, jQuery or Vue.js.

If you need to scrape a JS rendered page, you can use headless browser libraries for Python like Selenium or Pyppeteer instead of Python Hrequests.
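
For example, here is a minimal Selenium sketch that loads the JavaScript-rendered version of QuotesToScrape. It assumes a recent Selenium 4 install, where Selenium Manager downloads the Chrome driver for you:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless=new')  ## run Chrome without a visible window

driver = webdriver.Chrome(options=options)
try:
    driver.get('http://quotes.toscrape.com/js/')  ## JS-rendered version of QuotesToScrape
    html = driver.page_source  ## HTML after the page's JavaScript has run
finally:
    driver.quit()

print(html[:500])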

Scraping JS Pages With Python Guides

Check out our guides to scraping JS rendered pages with Pyppeteer here.

Another option is to use a proxy service that manages the headless browser for you so you can scrape JS rendered pages using Python Hrequests HTTP requests.

The ScrapeOps Proxy Aggregator enables you to use a headless browser by adding render_js=true to your requests.


import hrequests

payload = {'api_key': 'APIKEY', 'url': 'http://quotes.toscrape.com/', 'render_js': 'true'}
r = hrequests.get('https://proxy.scrapeops.io/v1/', params=payload)
print(r.text)

You can get your own free API key with 1,000 free requests by signing up here.

For more information about ScrapeOps JS rendering functionality check out our headless browser docs here.


More Web Scraping Tutorials

So that's how you can set up your Python Hrequests scrapers to avoid getting blocked, retry failed requests and scale up with concurrency.

If you would like to learn more about Web Scraping, then be sure to check out The Web Scraping Playbook.

Or check out one of our more in-depth guides: