
Bright Data Residential Proxies: Web Scraping Guide

Many websites employ IP blocking and geolocation restrictions to protect themselves from web scraping. Residential proxies help you mimic regular user traffic and appear as a legitimate user to web servers, significantly reducing the risk of detection and IP bans.

In this guide, you'll learn how to set up and integrate residential proxies from Bright Data, one of the leading proxy service providers, into your web scraping projects.


TLDR: How to Integrate Bright Data Residential Proxies

To integrate Bright Data residential proxies into your Python web scraping project:

  • Sign up for a Bright Data account and complete the KYC process.
  • Obtain your credentials (username and password) from your dashboard.
  • Use the following code snippet to make requests through the proxy:
import requests

# Proxy configuration
username = "YOUR_USERNAME"
password = "YOUR_PASSWORD"
proxy_url = f"http://{username}:{password}@brd.superproxy.io:22225"

proxies = {
    "http": proxy_url,
    "https": proxy_url
}

# Make a request
url = "http://example.com"
response = requests.get(url, proxies=proxies)

print(f"Status Code: {response.status_code}")
print(f"Content: {response.text}")
  • This script configures the Bright Data residential proxy using your credentials and sends a GET request to the specified URL through the proxy.
  • It then prints the status code and content of the response, allowing you to verify the proxy's functionality and access the desired web content.

Understanding Residential Proxies

Residential proxies are IP addresses assigned to real residential devices, such as computers and smartphones, by Internet Service Providers (ISPs).

When you route your web scraping requests through these proxies, they appear to originate from legitimate users in a specific geographic location. This ability allows residential proxies to effectively bypass IP-based anti-scraping measures, making them ideal for scraping activities that require a high success rate and enhanced anonymity.

Additionally, they are useful for tasks like ad verification and accessing content that may be restricted based on geographic location.

Types of Residential Proxies

There are two main types of residential proxies: Rotating and Static.

Rotating Residential Proxies

Rotating residential proxies change their IP address periodically, either after a set time interval or after each request. This automatic rotation enhances anonymity and reduces the risk of getting blocked by target websites.

Pros:

  • IPs come from real residential devices, providing high authenticity.
  • Access to millions of IPs across various countries.
  • Genuine IPs help in bypassing geo-restrictions and anti-scraping measures.

Cons:

  • May lead to inconsistent session data since IPs change frequently, which can be problematic for tasks requiring session continuity (see the session sketch after this list).
  • Relatively slower and more expensive than data center proxies.
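
For tasks that do need session continuity, Bright Data (like most rotating residential providers) lets you pin a "sticky" IP by appending a session identifier to the proxy username. Below is a minimal sketch, assuming the standard brd.superproxy.io endpoint and placeholder credentials:

import random
import requests

username = "brd-customer-<customer_id>-zone-<zone_name>"  # placeholder credentials
password = "<zone_password>"

# Reusing the same session ID keeps the same residential IP;
# switching to a new ID forces a rotation.
session_id = random.randint(0, 1_000_000)
sticky = f"http://{username}-session-{session_id}:{password}@brd.superproxy.io:22225"
proxies = {"http": sticky, "https": sticky}

# Both requests should exit through the same IP while the session is alive.
for _ in range(2):
    print(requests.get("http://lumtest.com/myip.json", proxies=proxies).json())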

Static Residential Proxies

Static residential proxies, also known as ISP proxies, use a fixed IP address provided by an ISP. Unlike rotating proxies, they maintain the same IP address over time, offering enhanced stability. These proxies are usually dedicated to one user at a time.

Pros:

  • Fixed IPs provided by ISPs, ensuring stability and reliability.
  • Faster and more reliable than rotating residential proxies.
  • High-quality IPs enhance performance.

Cons:

  • Expensive due to the cost of obtaining and maintaining ISP-provided IPs.
  • Limited availability in terms of the number of countries covered.

Residential vs. Data Center Proxies

Understanding the differences between residential proxies and data center proxies is important for selecting the right one for your web scraping needs. Each type of proxy offers unique advantages and drawbacks that cater to different requirements.

Here's a comparison to help you make an informed decision:

| Feature | Residential Proxies | Data Center Proxies |
| --- | --- | --- |
| Source | Real residential devices (computers, smartphones) | Data centers (servers) |
| IP Authenticity | High (appears as legitimate users) | Lower (appears as servers) |
| Anonymity | High | Medium |
| Risk of IP Bans | Low | Higher |
| Speed | Generally slower | Generally faster |
| Cost | More expensive | Less expensive |
| IP Rotation | Available (rotating proxies) | Available |
| Stability | Can be less stable (rotating IPs) | Generally more stable |
| Best Used For | Scraping protected or geo-restricted content, ad verification | Large-scale scraping, tasks requiring high speed |
| Availability | Dependent on ISP partnerships | More widely available |

Use Cases

Residential proxies are highly valuable in various scenarios due to their ability to mimic legitimate user traffic and avoid detection.

Here are some key use cases:

Web Scraping and Data Collection

They allow scrapers to extract data from websites without being blocked, even from sites with geo-restrictions or robust anti-scraping measures.

By routing requests through a large set of real IP addresses, these proxies mimic legitimate user traffic, reducing the risk of being blocked or detected by anti-scraping measures.

SEO and SERP Analysis

For SEO professionals, residential proxies enable accurate analysis of search engine results pages (SERPs). Search engines provide different results based on the user's location.

Using residential proxies, you can view the SERPs as they appear to users in different regions, helping you develop more effective SEO strategies. The large pool of residential IPs ensures that your requests are not flagged or blocked by search engines.

Social Media Monitoring

When monitoring social media platforms for brand mentions, trends, or competitor activity, residential proxies help you avoid detection and IP bans.

By routing your monitoring requests through a variety of residential IPs, you can obtain comprehensive and unbiased data without risking account suspensions. This is essential for maintaining continuous access to social media data.

Ad Verification

Residential proxies are used to ensure that online ads are displayed correctly, in the intended locations, and reach the desired audience.

By using IP addresses from real residential devices, you can verify that ads are served accurately across different regions and devices. This helps maintain the integrity of ad campaigns and ensures they achieve their targeted reach.

Geo-Restricted Content Access

Residential proxies are beneficial for accessing content that is restricted based on geographic location. Whether it's streaming services, localized websites, or region-specific news, these proxies allow you to bypass geo-restrictions.

By routing your requests through residential IPs from the desired location, you can access content as if you were physically present in that region.


Why Use Bright Data Residential Proxies?

Bright Data's residential proxies offer several key features that can significantly enhance web scraping activities.

Here's an overview of these features along with examples of how they improve web scraping:

| Feature | Description | Web Scraping Application |
| --- | --- | --- |
| Large IP Pool | Access to 72 million+ residential IP addresses worldwide | Enables diverse IP usage for e-commerce price comparisons, avoiding IP-based blocking |
| Geo-targeting | Select specific countries, states, or cities for proxy connections in over 195 countries | Retrieve accurate, localized data for region-specific search results |
| Rotating IPs | Automatic or customizable IP rotation | Reduces detection risk during large-scale data scraping by rotating IPs per request |
| SOCKS5 Support | Support for SOCKS5 protocol for improved performance | Enhances speed and UDP handling when scraping streaming or video content |
| Concurrent Sessions | Run multiple scraping tasks simultaneously with unlimited concurrent sessions | Monitor multiple e-commerce sites concurrently, reducing overall scraping time |
| Customizable Rotation Rules | Flexible IP rotation based on time or request count | Adhere to website rate limits by changing IPs after a specific number of requests |
| Whitelisted IPs | Some residential IPs are pre-approved on certain websites | Increases success rates when scraping platforms with strict anti-bot measures |
| Browser Fingerprinting | Mimic real browser fingerprints with customizable headers and user agents | Reduces bot detection by emulating realistic browser characteristics |
| API Integration | Robust API for integration with scraping tools and frameworks | Streamline scraping workflows through programmatic proxy management |
| Ethical Sourcing | Residential IPs sourced ethically from consenting users with compliance to GDPR and CCPA | Ensures compliance with legal and ethical standards in research scraping |
| High Uptime | 99.99% uptime guarantee | Ensures continuous data collection without interruptions |
| High Success Rate | 99.9% success rate for requests | Maximizes data retrieval efficiency with minimal failures |
| Trustpilot Review | Rated 4.7/5 on Trustpilot | Trusted by a large user base for reliability and performance |
| Free Trial | Offers a free trial and flexible pay-as-you-go plans | Allows testing the service before committing to a plan |

Bright Data Residential Proxy Pricing

Bright Data offers a comprehensive pricing structure for their residential proxies, accommodating various usage needs and budgets.

Here's an overview of their pricing model:

Pricing Structure

  • Bandwidth Usage: Bright Data primarily charges based on the amount of bandwidth you use.
  • Concurrent Connections and IP Addresses: They also offer pricing options based on the number of concurrent connections and IP addresses you require.
  • Flexible Plans: Bright Data provides both monthly subscriptions and a Pay-As-You-Go plan, allowing users to pay only for what they use.

Pricing Table

| Plan Name | Plan Size (GB) | Cost per GB | Price |
| --- | --- | --- | --- |
| Pay as you go | 1 GB | $8.40 | $8.40 |
| Growth | 77 GB | $6.43 | $499/mo |
| Business | 176 GB | $5.67 | $999/mo |
| Professional | 377 GB | $5.29 | $1,999/mo |
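
If you want to sanity-check which tier is cheapest for your expected usage, a quick back-of-the-envelope comparison based on the table above works well (a sketch using only the numbers quoted in the table; overage pricing is not modeled):

usage_gb = 50  # expected monthly usage

# Pay-as-you-go is billed purely per GB.
print(f"Pay as you go: ${usage_gb * 8.40:.2f}")

# Committed plans are flat-priced up to the bandwidth they include.
plans = {"Growth": (499, 77), "Business": (999, 176), "Professional": (1999, 377)}
for name, (price, included_gb) in plans.items():
    fits = "covers this usage" if usage_gb <= included_gb else "usage exceeds included GB"
    print(f"{name}: ${price}/mo, {included_gb} GB included ({fits})")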

Comparison to Other Providers

Generally speaking, proxy plans priced around $2-3 per GB are considered cheap, while plans in the $6-8 per GB range sit at the expensive end.

Bright Data's pricing tends to be on the higher end of the spectrum compared to other residential proxy providers. Pay-as-you-go costs $8.40 per GB, and even their largest plan only brings the rate down to $5.29 per GB, placing them in the more expensive category.

Bright Data's pricing reflects their reputation for quality and reliability, but budget-conscious users might find more affordable options elsewhere.

For a comprehensive comparison of residential proxy providers, including pricing and features, you can refer to the ScrapeOps Proxy Comparison page.

This resource allows you to compare various providers side-by-side, helping you find the best fit for your specific needs and budget.


Setting Up Bright Data Residential Proxies

Setting up Bright Data residential proxies is a straightforward process that involves creating an account, purchasing proxies, configuring them, setting up authentication, and integrating them into your web scraping scripts.

Creating a Bright Data Account

To get started with Bright Data, follow these steps to create an account and set up your residential proxies:

  • Visit the Bright Data website and sign up using your Google account, GitHub, or work email.

  • Navigate to the "Proxies and Scraping" tab. Locate and click the "Get Started" button under "Residential proxies."

Proxies and Scraping Menu

  • You will be prompted to name your residential proxy zone. After naming it, you will be redirected to the dashboard where you can configure your proxy parameters.

Note: Bright Data offers a free trial to test their services before deciding whether to purchase a plan. However, you will need to add a billing method to access the trial.

Name Zone

  • If you need to change your plan, head to the billing page and click on "Change plan". Here, you can choose between a monthly plan and a pay-as-you-go option.

  • To upgrade from the free trial to your chosen plan, head to the Dashboard and click on "Upgrade plan." If you haven't added a billing method, you will be prompted to do so. Complete the purchase to upgrade your plan.

Upgrade Bright Data Plan

Configuring Proxies

In this section, we'll configure the residential proxy zone we created earlier. This includes setting up IP types (shared or dedicated) and geolocation targeting (Country, State, City, ASN, ZipCode).

Access Your Zones:

  • Navigate to the "Proxies and Scraping" tab.
  • Under 'My Zones' you will see all the zones created for your residential proxies.

Proxy Activation Zones

Configure Your Zone:

  • Click on the zone name (e.g., residential_proxy1).
  • Go to the Configuration tab.
  • Select the type of IP (shared or dedicated).
  • Drill down your geotargeting to your desired level (Country, State, City, ASN, or ZipCode).
  • Set advanced filters such as allowed ports and persistent IP addresses.

For more detailed instructions, check out this YouTube video.

Activate Proxy

Activate Your Configuration:

  • Click "Save and Activate." You will be prompted to complete a KYC process, which takes about 48 hours, or use quick testing by downloading their SSL certificate.
  • For this guide, we will skip the KYC by installing the SSL certificate:
  • Click the link to install the certificate and download it for your specific OS.
  • If you cannot access the download page, visit their official GitHub link, copy the certificate content, and save it as a .crt file.
  • Open the certificate file, double-click to install, select "Current User," choose "Automatically select the certificate store based on the type of certificate," and finish the installation. Note: Ensure that the certificate file is in the same directory as your project.

Install Certificate File

Complete Activation:

  • Return to the browser and click "Done."
  • Click on "Active Zone." Your zone should now be active and ready to use. Click Download Certificate

Active Zone Activated
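
With the certificate saved locally, you can also point Python's requests at it instead of disabling TLS verification entirely. A minimal sketch; the file name is an assumption, so use whatever path you saved the .crt file to:

import requests

proxies = {
    "http": "http://<username>:<password>@brd.superproxy.io:22225",
    "https": "http://<username>:<password>@brd.superproxy.io:22225",
}

# Trust Bright Data's CA certificate explicitly (hypothetical local path).
response = requests.get(
    "https://lumtest.com/myip.json",
    proxies=proxies,
    verify="brightdata.crt",
)
print(response.json())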

Proxy Integration

Once your proxies are configured and activated, you can gather the credentials and proxy server details needed for integration.

Take note of the details below and integrate them into your web scraping script:

  • host (usually brd.superproxy.io),
  • port (typically 22225),
  • username, and
  • password.

For example, if you are using Python with the requests library, you can configure the proxies as follows:


import requests
from bs4 import BeautifulSoup

host = "brd.superproxy.io:22225"
user_name = "YOUR USERNAME" # Username from your Zones Dashboard
pass_word = "YOUR PASSWORD" # Password from your Zones Dashboard
url = "http://example.com" # Target website

proxy = {
    "http": f"http://{user_name}:{pass_word}@{host}",
    "https": f"http://{user_name}:{pass_word}@{host}"
}

response = requests.get(url, proxies=proxy)
soup = BeautifulSoup(response.content, 'html.parser')
print(response.status_code)
print(soup.text)

# From here you can process the data as needed

In the code above:

  • We import the requests and BeautifulSoup libraries.
  • Initialize host, user_name, pass_word, and url variables (with placeholders for credentials).
  • Configure the proxy settings by creating a proxy dictionary for both HTTP and HTTPS setups.
  • Send an HTTP GET request using the proxy.
  • Parse the HTML response using BeautifulSoup.
  • Print the status code and parsed text content.

Authentication

To authenticate a request using Bright Data residential proxies, you need to use your credentials, which include your Username, Password, and Host name. These credentials can be found in the Access Parameters tab of the proxy product.

Here's how to authenticate a request using your username and password:

import pprint
import requests

host = 'brd.superproxy.io'
port = 22225
username = 'brd-customer-<customer_id>-zone-<zone_name>'
password = '<zone_password>'
proxy_url = f'http://{username}:{password}@{host}:{port}'

proxies = {
    'http': proxy_url,
    'https': proxy_url
}

url = "http://lumtest.com/myip.json"
response = requests.get(url, proxies=proxies)
pprint.pprint(response.json())

In the script above, you need to replace customer_id, zone_name, and zone_password with your actual credentials.


Basic Request Using Bright Data Residential Proxies

To make requests using Bright Data residential proxies, you need to configure your request to route through the proxy with the appropriate authentication. This involves setting up the proxy details, including the host, port, username, and password.

Here are the steps to make a basic request:

  1. Install Requests Library: Ensure you have the requests library installed in your Python environment.
  2. Set Up Proxy Details: Use the credentials provided by Bright Data to configure your proxy.
  3. Make the Request: Use the requests library to send a request through the proxy.

Code Example

Here’s an example of how to use Bright Data residential proxies with the requests library in Python:


import requests
from bs4 import BeautifulSoup

# Proxy details
host = "brd.superproxy.io"
port = "22225"
username = "USERNAME"
password = "PASSWORD"

url = "http://books.toscrape.com/"
proxy = {
    "http": f"http://{username}:{password}@{host}:{port}",
    "https": f"http://{username}:{password}@{host}:{port}"
}

response = requests.get(url, proxies=proxy)
soup = BeautifulSoup(response.content, 'html.parser')
links = soup.find_all('a')
for link in links:
    print(link.get('href'))

The results from this code show that we successfully routed our requests through residential proxies to scrape the required data. This is shown in the screenshot below:

Python requests Result


Country Geotargeting

Bright Data provides proxies from over 195 countries, allowing users to target specific regions and offering extensive global coverage. Country-level geotargeting enables users to access region-specific content, ensuring the data collected is relevant to the desired geographic location.

This can be crucial for market research, price comparison, and accessing localized content that may not be accessible outside of that country.

Top 10 Countries Supported by Bright Data Residential Proxies

Bright Data offers a vast network of residential proxies across the globe. While their service covers numerous countries, some regions have a particularly strong presence in terms of proxy availability and performance.

Below are the top 10 countries where Bright Data's residential proxies are most prominently supported, based on factors such as proxy pool size, connection stability, and overall reliability:

| Country | Number of Proxies |
| --- | --- |
| US | 3,885,615 |
| India | 3,449,687 |
| Brazil | 1,939,424 |
| Russia | 1,604,375 |
| UK | 1,279,723 |
| Germany | 1,106,830 |
| China | 1,058,117 |
| Spain | 426,139 |
| Japan | 405,606 |
| France | 384,827 |

To use country-specific proxies, we need to specify the desired country in our code. Here's an example that demonstrates how to use Bright Data's residential proxies by specifying the country as "US".

The results can be confirmed from the output showing the IP address and the country.


import requests

# Proxy details
host = "brd.superproxy.io"
port = "22225"
username = "USERNAME"
password = "PASSWORD"
country = "us"

proxies = {
    "http": f"http://{username}-country-{country}:{password}@{host}:{port}",
    "https": f"http://{username}-country-{country}:{password}@{host}:{port}"
}

response_ip = requests.get("http://httpbin.org/ip", proxies=proxies)
ip = response_ip.json()['origin']

# Get detailed info
response_info = requests.get(f"http://ipinfo.io/{ip}", proxies=proxies)
info = response_info.json()

print(f"IP: {info['ip']}")
print(f"Country: {info['country']}")

In this code sample:

  • We imported the requests library
  • Defined the host, port, username, and password for Bright Data
  • Specified the desired country code ("us" for United States)
  • Set up both HTTP and HTTPS proxy settings
  • Sent a request to httpbin.org/ip through the proxy
  • Extracted the returned IP address from the response and looked up its location details via ipinfo.io

US IP Scraping Results


City Geotargeting

Bright Data also allows users to use proxies from different cities, offering extensive city-level geotargeting options. This capability is advantageous for users needing precise location targeting for their data scraping activities.

This is especially useful for localized market research, competitive analysis, and accessing city-specific services or information that may not be available through broader regional proxies.

To specify city-level targeting using a residential proxy with Bright Data, you can add the city name to your username string when making a request.

Here's how you can target a specific city using python:


import requests

host = 'brd.superproxy.io'
port = 22225
country_name = "us"
city_name = "newyork"
username = f'brd-customer-<YOUR_CUSTOMER_ID>-zone-<YOUR_ZONE_NAME>-country-{country_name}-city-{city_name}'
password = 'YOUR_PASSWORD'
proxy_url = f'http://{username}:{password}@{host}:{port}'

proxies = {
    'http': proxy_url,
    'https': proxy_url
}

# Getting the IP address
response = requests.get("http://icanhazip.com", proxies=proxies)
ip = response.text.strip()

# Extracting the location details
location_url = f'http://ipinfo.io/widget/demo/{ip}'
response_ip = requests.get(location_url, proxies=proxies)
data = response_ip.json()

city = data['data']['city']
region = data['data']['region']
country = data['data']['country']
print(f"IP: {ip}")
print(f"City: {city}")
print(f"Region: {region}")
print(f"Country: {country}")

In the code above, we:

  • Set up a proxy connection using Bright Data's residential proxy service, specifying a target country and city.
  • Use the proxy to fetch our apparent IP address from icanhazip.com.
  • Query ipinfo.io with this IP to get location details.
  • Extract and print the city, region, and country associated with the IP.

Below is a sample output from running this script:

City Geo-targeting

Before you can use city-level targeting with Bright Data's residential proxies, you need to enable the necessary permissions. Here are the steps to do so:

  • Go to the control panel's My Proxies page

  • Click on the residential zone you want to enable city-level targeting for

  • Navigate to the Geolocation targeting section

  • Select "City" from the options

  • Save the zone configuration

City Level Geotargeting Configuration

Important notes:

  • The city code must be in lowercase and have no spaces. For example, use -city-sanfrancisco for San Francisco.

  • You need to specify the country before the city in the username string.

  • The format is: username-country-country_code-city-city_code

  • Use only two-letter country codes in lowercase for the country targeting.

Remember that the availability of IPs in specific cities may vary, so it's a good idea to check the number of available IPs for a specific geolocation before targeting.
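
Since all of these targeting flags live in the username string, a small helper keeps the formatting rules straight (lowercase codes, no spaces, country before city). A sketch using placeholder credentials:

def build_username(base: str, country: str = None, city: str = None) -> str:
    """Compose a Bright Data proxy username with optional geo-targeting.

    Format: <base>-country-<country_code>-city-<city_code>.
    """
    username = base
    if country:
        username += f"-country-{country.lower()}"
        if city:
            # City codes are lowercase with spaces removed, e.g. sanfrancisco.
            username += f"-city-{city.lower().replace(' ', '')}"
    return username

# e.g. brd-customer-<id>-zone-<zone>-country-us-city-sanfrancisco
print(build_username("brd-customer-<id>-zone-<zone>", "US", "San Francisco"))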


How To Use Static Proxies

Creating Proxy Zone

To use Bright Data's static proxies, also known as ISP proxies, navigate to the My Zones section and under ISP Proxies, click on "Get Started".

ISP Selection

Enter your proxy name and click on "Create Proxy".

Create Proxies

If you haven’t yet added a payment method, you’ll be prompted to add one at this point to verify your account.

Configuring your Proxy

Once you have created your proxy, you can configure your proxy settings by navigating to the "Configurations" tab. Choose the IP type, select the number of IPs you'd like, and download the list. Note that increasing the number of IPs will add additional costs.

IP Count and List

Making requests

Once you've downloaded the proxy list from Bright Data and saved it in a .txt file, you can use the following Python script to randomly select a proxy from the list and make a request.

This script assumes that each line in the .txt file contains a proxy in the format host:port:username:password.

Sample content of proxies.txt:

brd.superproxy.io:22225:brd-customer-hl_6658722a-zone-isp_proxy1-ip-163.253.252.42:qf08pa3pf3gh
brd.superproxy.io:22225:brd-customer-hl_6658722a-zone-isp_proxy2-ip-163.253.252.43:qf08pa3pf3gh
And here is the script:

import pprint
import requests
import random

def get_proxies_from_file(file_path):
    with open(file_path, 'r') as file:
        proxies = [line.strip() for line in file.readlines()]
    return proxies

def select_random_proxy(proxies):
    return random.choice(proxies)

def make_request_with_proxy(proxy):
    parts = proxy.split(':')
    host = parts[0]
    port = parts[1]
    username = parts[2]
    password = parts[3]

    proxy_url = f'http://{username}:{password}@{host}:{port}'

    proxies = {
        'http': proxy_url,
        'https': proxy_url
    }

    url = "http://lumtest.com/myip.json"
    response = requests.get(url, proxies=proxies)
    pprint.pprint(response.json())

def main():
    proxy_file = 'proxies.txt'  # Path to your .txt file with proxies
    proxies = get_proxies_from_file(proxy_file)

    if not proxies:
        print("No proxies found in the file.")
        return

    selected_proxy = select_random_proxy(proxies)
    print(f"Using proxy: {selected_proxy}")

    make_request_with_proxy(selected_proxy)

if __name__ == "__main__":
    main()

In the code above:

  • The script reads proxies from a .txt file into a list.
  • Randomly selects one proxy from the list.
  • Uses the selected proxy to make a request to http://lumtest.com/myip.json.
  • The main function orchestrates the process of reading proxies, selecting one, and making the request.

When you run the script, you should see a result similar to the screenshot below:

ISP Results


Error Codes

When using Bright Data residential proxies, it's essential to understand the error codes that might arise during scraping.

Here is a concise summary of the common errors, their meanings, and ways to avoid them.

403 Forbidden

Meaning: The server is refusing to fulfill the request.
Cause: Often due to missing session cookies, proxy detection, or access control settings.
Solution:

  • Use rotating residential proxies to avoid detection.
  • Employ ISP proxies for enhanced legitimacy.
  • Utilize tools like Bright Data’s Web Unlocker to alter fingerprints and user agents, mimicking genuine user behavior.

407 Proxy Authentication Required

Meaning: The client must first authenticate itself with the proxy.
Cause: Incorrect or missing proxy authentication credentials.
Solution:

  • Ensure the correct proxy credentials are used.
  • Verify that proxy settings include the necessary authentication details.
  • Utilize credential management systems to streamline updates and avoid discrepancies.

429 Too Many Requests

Meaning: The user has sent too many requests in a given amount of time ("rate limiting").
Cause: Exceeding the allowed request rate.
Solution:

  • Implement request throttling to stay within rate limits.
  • Use a proxy pool to distribute requests across multiple IPs, reducing the frequency per IP.

520 Web Server Returned an Unknown Error

Meaning: The server is returning an unknown error, possibly due to issues on the server side or network problems.
Cause: Network issues or misconfigured server settings.
Solution:

  • Retry the request after some time.
  • Use monitoring tools to track server performance and identify underlying issues.

1015 You are Being Rate Limited

Meaning: Access to the resource has been temporarily limited.
Cause: Similar to a 429 error, it indicates that rate limits have been exceeded.
Solution:

  • Reduce the request rate.
  • Spread out requests over time or use different IP addresses to avoid hitting rate limits.

444 No Response

Meaning: The server has closed the connection without sending any response.
Cause: Often due to server overload or intentional blocking of specific IP addresses.
Solution:

  • Rotate IP addresses to avoid blocks.
  • Use high-quality residential proxies to reduce the likelihood of being flagged.

Understanding these error codes and their resolutions can help maintain smooth and efficient scraping operations while using Bright Data’s residential proxies.
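
Many of these fixes can be folded into a small retry wrapper that backs off between attempts and fetches fresh proxy settings each time. A hedged sketch, not specific to Bright Data: make_proxies is any callable of yours that returns a requests-style proxies dict, and the backoff values are arbitrary:

import time
import requests

def fetch_with_retries(url, make_proxies, max_attempts=4):
    """Retry a proxied GET, backing off between attempts."""
    delay = 2
    for attempt in range(1, max_attempts + 1):
        try:
            # A fresh proxies dict per attempt allows IP rotation on failure.
            response = requests.get(url, proxies=make_proxies(), timeout=30)
        except requests.RequestException as e:
            print(f"Attempt {attempt}: request failed ({e})")
        else:
            if response.ok:
                return response
            print(f"Attempt {attempt}: HTTP {response.status_code}")
        time.sleep(delay)
        delay *= 2  # exponential backoff before the next attempt
    raise RuntimeError(f"Giving up on {url} after {max_attempts} attempts")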

For more details on proxy errors and solutions, you can visit the Bright Data FAQ on proxy errors here.


KYC (Know-Your-Customer) Verification

Bright Data requires all new users to undergo a comprehensive KYC (Know-Your-Customer) process before accessing their proxy services. This process ensures that customers' use cases are legal, ethical, and compliant with Bright Data's policies.

The KYC process includes:

  • Identity Verification: Live video identity verification.
  • Personal Review: A compliance officer personally reviews the customer's use case.
  • Ongoing Monitoring: Continuous monitoring to ensure compliance with declared use cases.

Bright Data strictly prohibits certain use cases, including:

  • Adult content
  • Gambling
  • Cryptocurrency activities

Their stringent compliance standards lead to the rejection or suspension of requests from customers whose intended uses do not align with Bright Data’s ethical guidelines.

For more details, you can refer to their KYC Policy. For a more detailed video demonstrating the KYC process, check here.


Implementing Bright Data Residential Proxies in Web Scraping

Bright Data residential proxies can be integrated with various libraries for web scraping.

Below, we'll explore how to use these proxies with different tools, using US geotargeting and rotating proxies as examples.

Python Requests

Integration with Python Requests is pretty straightforward. Here's how to set it up:

# import packages.
import requests
from bs4 import BeautifulSoup

# Define proxies to use.
proxies = {
    "http": "http://<username>:<password>@brd.superproxy.io:22225",
    "https": "http://<username>:<password>@brd.superproxy.io:22225"
}

# Define a link to the web page.
url = "http://books.toscrape.com/"

# Send a GET request to the website.
response = requests.get(url, proxies=proxies)

# Use BeautifulSoup to parse the HTML content of the website.
soup = BeautifulSoup(response.content, "html.parser")

# Find all the links on the website.
links = soup.find_all("a")

# Print all the links.
for link in links:
    print(link.get("href"))

Executing this block of code sends a request to the specified web page using the proxy IP address, then retrieves and returns a response containing all the links on that web page.

We can also rotate proxies using a custom method and an array of proxies. This approach offers more flexibility and control over the proxy rotation process.

  • Proxy rotation is essential for web scraping because it helps avoid detection and blocking by target websites. When a site receives numerous requests from a single IP address, it may flag this as potential bot activity.
  • This can trigger the website's defense mechanisms, leading to access restrictions or outright bans. By implementing proxy rotation, which essentially changes our IP address for each request, our scrapers can mimic natural user behavior, reducing the risk of being identified as automated tools and maintaining consistent access to the desired web content.

Implementing a custom rotation method with an array of proxies allows for more dynamic and efficient scraping, enabling better management of proxy resources and improving the overall effectiveness of our scraping process.

Here's a sample code to implement proxy rotation:

import requests
import random
from bs4 import BeautifulSoup

# Initialize our list of proxies from Bright Data
proxies = [
    "http://<username>:<password>@brd.superproxy.io:2010",
    "http://<username>:<password>@brd.superproxy.io:2020",
    "http://<username>:<password>@brd.superproxy.io:2030",
    "http://<username>:<password>@brd.superproxy.io:2040",
    "http://<username>:<password>@brd.superproxy.io:2050",
    "http://<username>:<password>@brd.superproxy.io:2060",
    "http://<username>:<password>@brd.superproxy.io:2070",
    "http://<username>:<password>@brd.superproxy.io:2080",
    "http://<username>:<password>@brd.superproxy.io:2090"
]

# Custom method to select a random proxy for each request
def get_proxy():
    # Choose a random proxy from the list
    proxy = random.choice(proxies)
    # Return a dictionary with the proxy for both http and https protocols
    return {'http': proxy, 'https': proxy}

# Send requests using rotated proxies
for i in range(10):
    # Set the URL to scrape
    url = 'http://example.com/'
    try:
        # Send a GET request with a randomly chosen proxy
        response = requests.get(url, proxies=get_proxy())

        # Use BeautifulSoup to parse the HTML content of the website.
        soup = BeautifulSoup(response.content, "html.parser")

        # Find all the links on the website.
        links = soup.find_all("a")

        # Print all the links.
        for link in links:
            print(link.get("href"))
    except requests.exceptions.RequestException as e:
        # Handle any exceptions that may occur during the request
        print(e)

This code initializes a list of proxies from Bright Data, randomly selects a proxy for each request, and attempts to scrape the specified URL.

It handles potential errors and reports the success or failure of each request. Remember to replace the example proxies with actual proxy servers and adjust the target URL as needed.

Python Selenium

To use Bright Data proxies with Selenium for browser automation, you need to install the undetected-chromedriver package. This can be done easily using pip:

pip install --upgrade undetected_chromedriver

Now, configure the WebDriver as follows:

import undetected_chromedriver as uc
import time

username = "YOUR_USERNAME"
password = "YOUR_PASSWORD"

def main():
    # Initialize Chrome options
    chrome_options = uc.ChromeOptions()
    # Route traffic through the Bright Data proxy. Note that Chrome itself
    # ignores credentials embedded in --proxy-server, so pair this with IP
    # whitelisting or a tool that handles proxy authentication if needed.
    chrome_options.add_argument(f'--proxy-server=http://{username}:{password}@brd.superproxy.io:22225')

    # chrome_options.add_argument('--headless')  # Uncomment this to run in headless mode
    chrome_options.add_argument('--disable-gpu')  # Disable GPU usage for compatibility
    chrome_options.add_argument('--no-sandbox')  # Disable sandboxing for compatibility

    # Initialize the undetected ChromeDriver
    driver = uc.Chrome(options=chrome_options)

    try:
        # Navigate to a webpage
        driver.get('https://example.com')

        # Wait for a few seconds to allow the page to load
        time.sleep(5)

        # Print the contents of the page
        print(driver.page_source)
    finally:
        driver.quit()

if __name__ == "__main__":
    main()

This code sets up an undetected ChromeDriver with proxy settings from Bright Data. It initializes the driver with specific options to enhance compatibility and avoid detection.

The script then navigates to a specified webpage, waits for it to load, and retrieves the page source.

Python Scrapy

Scrapy is an open-source Python framework designed for web crawling and scraping. It allows users to extract structured data from websites efficiently.

Known for its speed and extensibility, Scrapy is widely used for tasks such as data mining, monitoring, and automated testing. Let's see how we can integrate Bright Data's residential proxies with Scrapy.

  • First, let's create a new Scrapy project. Open your terminal and run the following command:
scrapy startproject <project_name>

This command will create a new folder with your project name, containing the basic Scrapy project structure.

Now, let's create a Scrapy spider that uses the Bright Data proxy. In your project folder (containing the scrapy.cfg file), create a new Python file for your spider.

Here's an example of how your code might look:

import scrapy

username = "YOUR_USERNAME"
password = "YOUR_PASSWORD"

class BrightdataScrapyExampleSpider(scrapy.Spider):
    name = "BrightDataScrapyExample"

    def start_requests(self):
        request = scrapy.Request(url="http://example.com", callback=self.parse)
        request.meta['proxy'] = f"http://{username}:{password}@brd.superproxy.io:22225"
        yield request

    def parse(self, response):
        print(response.body)

Let's break down the key parts of this code:

  • We create a basic Scrapy spider class.
  • In the start_requests method, we create a request to our target URL.
  • We set the proxy in the request's meta parameter. Replace the username and password placeholders with your actual Bright Data credentials.
  • The parse method is a simple example that just prints the response body. You'll want to replace this with your actual parsing logic.

To run your spider, use the following command in your terminal:

scrapy runspider <Pythonfilename.py>

Node.js Puppeteer

To integrate Bright Data residential proxies with Puppeteer for browser automation, follow these steps:

  • Ensure you have Puppeteer installed. You can do this using the command:
npm install puppeteer

  • Set up your Puppeteer configuration to use Bright Data proxies.
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch({
        headless: false,
        args: ['--proxy-server=brd.superproxy.io:22225']
    });

    const page = await browser.newPage();

    await page.authenticate({
        username: 'brd-customer-[customer_ID]-zone-[zone_name]',
        password: '[zone_password]'
    });

    await page.goto('http://lumtest.com/myip.json');
    await page.screenshot({path: 'example.png'});

    await browser.close();
})();

  • Save the script to a file, for example, scrape.js, and run it using Node.js.
node scrape.js

Node.js Playwright

To integrate Bright Data residential proxies with Playwright for browser automation, follow these steps:

  • Ensure you have Playwright installed. You can do this using the command:
npm install playwright
  • Set up your Playwright configuration to use Bright Data proxies.
const playwright = require('playwright');

const options = {
    proxy: {
        server: 'http://brd.superproxy.io:22225',
        username: 'brd-customer-[customer_ID]-zone-[zone_name]',
        password: '[zone_password]'
    }
};

(async () => {
    const browser = await playwright.chromium.launch(options);
    const page = await browser.newPage();

    await page.goto('http://lumtest.com/myip.json');

    const content = await page.content();
    console.log(content);

    await browser.close();
})();

  • Save the script to a file, for example, scrape.js, and run it using Node.js.
node scrape.js


Case Study - Scraping Amazon.es with Geo-Location Targeting

In this section, we'll demonstrate how to scrape product pages on Amazon.es using Bright Data's residential proxies to implement geo-location targeting.

We'll show how product prices or availability can change based on the IP address location, specifically comparing results from Spanish and Portuguese IPs.

Setup

First, we'll set up our environment by importing the required libraries and configuring our script:

import requests
from bs4 import BeautifulSoup
from dataclasses import dataclass
from typing import Dict

# Proxy configuration
PROXY_CONFIG = {
    "host": "brd.superproxy.io",
    "port": "22225",
    "username": "YOUR_USER_NAME",
    "password": "YOUR_PASSWORD"
}

# Product URL
PRODUCT_URL = 'https://www.amazon.es/Taurus-WC12T-termoel%C3%A9ctrica-Aislamiento-Temperatura/dp/B093GXXKRL/ref=lp_14565165031_1_2'

@dataclass
class ProductInfo:
    title: str
    price: str
    availability: str

def get_proxy_url(country: str) -> str:
    return f"http://{PROXY_CONFIG['username']}-country-{country}:{PROXY_CONFIG['password']}@{PROXY_CONFIG['host']}:{PROXY_CONFIG['port']}"

def get_headers() -> Dict[str, str]:
    return {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36',
        'Accept-Language': 'en-US,en;q=0.9',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': 'https://www.amazon.es/'
    }

def scrape_amazon(country: str) -> ProductInfo:
    proxy_url = get_proxy_url(country)
    proxies = {"http": proxy_url, "https": proxy_url}

    try:
        response = requests.get(PRODUCT_URL, proxies=proxies, headers=get_headers(), verify=False, timeout=30)
        response.raise_for_status()

        soup = BeautifulSoup(response.content, 'html.parser')

        title = soup.select_one('#productTitle').text.strip() if soup.select_one('#productTitle') else "Title not found"
        price = soup.select_one('.a-price-whole').text.strip() if soup.select_one('.a-price-whole') else "Price not found"
        availability = soup.select_one('#availability span').text.strip() if soup.select_one('#availability span') else "Availability not found"

        return ProductInfo(title, price, availability)

    except requests.RequestException as e:
        print(f"An error occurred while scraping with {country} IP: {str(e)}")
        return ProductInfo("Error", "Error", "Error")

def main():
    countries = {
        "SPAIN": "es",
        "PORTUGAL": "pt"
    }

    results = {country_code: scrape_amazon(country_code) for country_code in countries.values()}

    for country_name, country_code in countries.items():
        info = results[country_code]
        print(f"\n{country_name} IP Results:")
        print(f"Title: {info.title}")
        print(f"Price: {info.price}€")
        print(f"Availability: {info.availability}")

if __name__ == "__main__":
    main()

Results and Analysis

When we run this script, we might see output similar to this:

Capture Proxies

Let's dive into the details of how we used residential proxies to scrape product page information from Amazon and observe the differences based on the user's location (IP address).

Proxy Setup:

First, we configured Bright Data's residential proxy service to access the product page from different countries. This allows us to simulate requests from Spain and Portugal by changing the IP addresses used for the requests.

# Proxy configuration
PROXY_CONFIG = {
    "host": "brd.superproxy.io",
    "port": "22225",
    "username": "YOUR_USER_NAME",
    "password": "YOUR_PASSWORD"
}

Scraping Function:

The scrape_amazon() function takes a country code, builds the proxy URL, and makes a request to the PRODUCT_URL using that proxy. It then parses the HTML response with BeautifulSoup to extract the product title, price, and availability information.

@dataclass
class ProductInfo:
    title: str
    price: str
    availability: str

def get_proxy_url(country: str) -> str:
    return f"http://{PROXY_CONFIG['username']}-country-{country}:{PROXY_CONFIG['password']}@{PROXY_CONFIG['host']}:{PROXY_CONFIG['port']}"

def get_headers() -> Dict[str, str]:
    return {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36',
        'Accept-Language': 'en-US,en;q=0.9',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': 'https://www.amazon.es/'
    }

def scrape_amazon(country: str) -> ProductInfo:
    proxy_url = get_proxy_url(country)
    proxies = {"http": proxy_url, "https": proxy_url}

    try:
        response = requests.get(PRODUCT_URL, proxies=proxies, headers=get_headers(), verify=False, timeout=30)
        response.raise_for_status()

        soup = BeautifulSoup(response.content, 'html.parser')

        title = soup.select_one('#productTitle').text.strip() if soup.select_one('#productTitle') else "Title not found"
        price = soup.select_one('.a-price-whole').text.strip() if soup.select_one('.a-price-whole') else "Price not found"
        availability = soup.select_one('#availability span').text.strip() if soup.select_one('#availability span') else "Availability not found"

        return ProductInfo(title, price, availability)

    except requests.RequestException as e:
        print(f"An error occurred while scraping with {country} IP: {str(e)}")
        return ProductInfo("Error", "Error", "Error")

Country-Specific Requests:

In our script, we make two separate requests, one with a Spanish IP and another with a Portuguese IP, to see how the product details might vary.

def main():
    countries = {
        "SPAIN": "es",
        "PORTUGAL": "pt"
    }

    results = {country_code: scrape_amazon(country_code) for country_code in countries.values()}

    for country_name, country_code in countries.items():
        info = results[country_code]
        print(f"\n{country_name} IP Results:")
        print(f"Title: {info.title}")
        print(f"Price: {info.price}€")
        print(f"Availability: {info.availability}")

if __name__ == "__main__":
    main()

Looking at the results from our screenshot, we can observe that the product's price is slightly higher when accessed from a Portuguese IP (203€) compared to a Spanish IP (199€).

This showcases Amazon's dynamic pricing strategy, where prices may vary based on the customer's location.

This demonstration shows how using residential proxies can help uncover differences in pricing and availability on e-commerce platforms based on the user's location.

By leveraging Bright Data's network of residential IPs, businesses and researchers can gain valuable insights into regional pricing strategies and stock allocations, which can immensely aid in market research and competitive analysis.


Alternative: ScrapeOps Residential Proxy Aggregator

If you require a powerful and cost-effective solution for web scraping, the ScrapeOps Residential Proxy Aggregator offers a robust alternative to traditional proxy providers. This service aggregates proxies from multiple providers, giving you access to a vast pool of IP addresses with unparalleled flexibility and reliability.

For detailed documentation on setting up and using the ScrapeOps Residential Proxy Aggregator, visit the ScrapeOps Documentation. This guide provides comprehensive instructions on how to integrate and utilize the proxy service effectively.

Why Use ScrapeOps Residential Proxy Aggregator?

1. Competitive Pricing

ScrapeOps stands out for its cost-effectiveness. Our pricing is generally lower than that of many traditional proxy providers. This means you can achieve more efficient scraping without breaking the bank.

2. Flexible Plans

Unlike many proxy services that offer limited or inflexible plans, ScrapeOps provides a wide range of options, including smaller plans tailored to specific needs. This flexibility allows you to select a plan that best fits your project requirements and budget.

3. Enhanced Reliability

By aggregating proxies from multiple providers, ScrapeOps ensures greater reliability. Instead of relying on a single proxy provider, you gain access to a diverse set of proxies from a single port. This reduces the risk of downtime and connectivity issues, offering a more stable and consistent scraping experience.

Using ScrapeOps Residential Proxy Aggregator with Python Requests

Here's how you can use the ScrapeOps Residential Proxy Aggregator with Python's requests library:


import requests

api_key = 'YOUR_API_KEY'
target_url = 'http://httpbin.org/ip'
proxy_url = f'http://scrapeops:{api_key}@residential-proxy.scrapeops.io:8181'

proxies = {
    'http': proxy_url,
    'https': proxy_url,
}

response = requests.get(
    url=target_url,
    proxies=proxies,
    timeout=120,
)

print('Body:', response.content)


The code above sends a request through the ScrapeOps proxy port. ScrapeOps takes care of proxy selection and rotation for you, so you just need to send the URL you want to scrape.

You can check out the free trial with 500MB of free Bandwidth Credits Here. No credit card required.


Ethical Considerations

Bright Data emphasizes that their residential proxies come from IP addresses whose holders have voluntarily opted in to participate. This approach aims to ensure that the use of these IPs for data gathering is conducted transparently and with consent.

They've made significant efforts to establish themselves as a leader in ethical proxy provision. Their commitment is evident in their marketing and operational practices, which stress the importance of consent and transparency. This focus on ethical sourcing helps address concerns about privacy and unauthorized use of IP addresses.

User Responsibilities with Bright Data Residential Proxies

When using Bright Data's residential proxies for web scraping, users must be aware of several key responsibilities and policies:

  • Respect Website Terms of Service: Websites have terms of service that govern how their data can be accessed and used. Scraping activities should comply with these terms to avoid legal issues and potential bans. Bright Data's proxies provide anonymity but do not exempt users from adhering to the legal and ethical standards set by target websites.

  • Avoid Abuse: Users should not engage in activities that could be deemed abusive or harmful, such as excessive scraping that could overload a website's servers or malicious data extraction. Responsible use of proxies involves respecting the limits and intentions of the target sites (see the sketch after this list).

  • Transparency and Consent: While Bright Data ensures that their proxies are ethically sourced, users should also be transparent about their data collection activities when possible. Providing clear information to site owners about the nature and purpose of scraping can foster trust and collaboration.
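
As a concrete illustration of avoiding abuse, the sketch below checks robots.txt with Python's standard library before fetching and throttles between requests; the target site and delay are arbitrary examples:

import time
import urllib.robotparser

import requests

BASE = "http://books.toscrape.com/"

# Consult robots.txt before crawling.
parser = urllib.robotparser.RobotFileParser()
parser.set_url(BASE + "robots.txt")
parser.read()

for path in ("index.html", "catalogue/page-2.html"):
    url = BASE + path
    if parser.can_fetch("*", url):
        requests.get(url)
        time.sleep(1.5)  # throttle so we don't overload the server
    else:
        print(f"robots.txt disallows {url}")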

Importance of Scraping Responsibly

Scraping responsibly is crucial for several reasons. Respecting website terms of service ensures that you avoid legal repercussions and maintain access to valuable data sources.

Carefully handling personal data protects individuals' privacy and upholds ethical standards, preventing misuse of sensitive information. Complying with laws like GDPR is essential to avoid hefty fines and legal penalties, especially when dealing with data from EU citizens.

Additionally, responsible scraping practices prevent overloading website servers, maintaining their functionality for all users. Ultimately, ethical scraping helps build trust and preserve the scraper's and their organization's reputation.


Conclusion

Implementing residential proxies in your web scraping projects is crucial for maintaining reliable, efficient, and ethical data collection. By using residential proxies, you can overcome common scraping challenges such as IP bans, geo-restrictions, and anti-bot measures.

This leads to more successful scraping operations, access to a wider range of data, and the ability to scale your projects effectively while maintaining anonymity and complying with ethical standards.

If you'd like to know more about any of the tools or frameworks used in this article (Requests, BeautifulSoup, Scrapy, Selenium, Puppeteer, Playwright), you can find the details in their official documentation.


More Python Web Scraping Guides

Want to take your scraping skills to the next level?

Check out the Python Web Scraping Playbook or our additional guides.