Froxy Residential Proxies: Web Scraping Guide

As websites become more sophisticated in detecting and blocking automated requests, the need for reliable proxy solutions has never been greater. Froxy offers over 10 million real IP addresses across more than 200 locations worldwide. Their residential proxies provide anonymity and high-speed connections, helping you gather data efficiently without the risk of being blocked.

In this comprehensive guide, we'll walk you through everything you need to know about setting up and integrating Froxy residential proxies into your web scraping projects.

Need help scraping the web?

Then check out ScrapeOps, the complete toolkit for web scraping.


TLDR: How to Integrate Froxy Residential Proxies?

To quickly integrate Froxy residential proxies into your project, follow these steps:

  1. Install the requests library: pip install requests
  2. Use this Python script as a starting point:
import requests

proxy = "http://username:password@proxy.froxy.com:9000"
proxies = {"http": proxy, "https": proxy}

response = requests.get("http://example.com", proxies=proxies)
print(response.text)

This script sets up the Froxy proxy and makes a request to a website using it.

Replace username, password, and adjust the port if necessary.

You can now build upon this basic structure for your specific needs.

Dive into the next sections of this guide for detailed explanations.


Understanding Residential Proxies

Residential proxies are essential tools for anyone involved in web scraping, offering a layer of anonymity by masking your real IP address with one that belongs to a real user’s device.

These proxies function as intermediaries between you and the websites you're scraping, allowing you to send requests without revealing your actual location or identity.

When using a residential proxy, your traffic is routed through an IP address assigned by an Internet Service Provider (ISP) to a residential user, making it appear as though your requests are coming from a real person, rather than an automated script.

Types of Residential Proxies

There are two primary types of residential proxies: rotating and static.

1. Rotating Residential Proxies

These proxies automatically assign a new IP address from the proxy pool with each request or after a set period. This is particularly useful for large-scale scraping tasks because it minimizes the risk of getting blocked by a website.

However, since the IP addresses frequently change, you might run into issues with session persistence (e.g., logging into an account).

  • Pros: Ideal for high-volume scraping, reduces the chance of IP bans, suitable for geo-targeting multiple locations.
  • Cons: Lack of session continuity, might trigger captchas if too many different IPs are used in a short time.
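To see the rotation in action, you can fire a handful of requests at an IP-echo endpoint and watch the origin address change between them. Here's a minimal sketch; the proxy URL below is a placeholder, and Froxy's actual credential format is covered later in this guide.

import requests

# Placeholder credentials; substitute your provider's actual values
proxy = "http://username:password@proxy.froxy.com:9000"
proxies = {"http": proxy, "https": proxy}

# With a rotating pool, each request may exit through a different IP
for _ in range(5):
    response = requests.get("http://httpbin.org/ip", proxies=proxies)
    print(response.json()["origin"])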

2. Static Residential Proxies

Static residential proxies, on the other hand, provide a consistent IP address for the entire session.

This makes them ideal for situations where maintaining a consistent connection is crucial, such as managing accounts or accessing certain platforms that monitor IP changes.

  • Pros: Offers session persistence, reliable for tasks requiring continuous connections like account management.
  • Cons: Higher risk of IP blocks if overused, less flexibility for rotating between different IPs.

Residential vs. Data Center Proxies

While residential proxies are sourced from real devices connected to ISPs, data center proxies come from servers located in data centers.

These proxies are not tied to real residential users, which makes them easier to detect by websites employing anti-bot measures.

  • Residential Proxies:
    • Pros: Harder to detect, high trust score, less likely to be blocked, ideal for scraping.
    • Cons: More expensive, limited availability compared to data center proxies.
  • Data Center Proxies:
    • Pros: Cheaper, faster, easier to obtain in bulk.
    • Cons: Easier to block, lower trust score, may trigger security measures on websites.

Here's a comparison to help you understand how they differ:

Feature | Residential Proxies | Data Center Proxies
Source | Real ISP-provided IPs | Data center IPs
Detection Risk | Lower, harder to detect | Higher, easier to detect
Speed | Generally slower | Generally faster
Cost | Higher | Lower
Best For | Stealthy tasks, geo-blocked sites | Fast tasks, large-scale scraping

When to Use Residential Proxies

Residential proxies are beneficial across various scenarios where anonymity and undetectability are critical. Here are a few common use cases:

  • Web Scraping & Data Collection: Residential proxies help you avoid IP bans when scraping large amounts of data across websites.
  • SEO & SERP Analysis: Track search engine rankings and analyze competitors without location-based restrictions.
  • Social Media Monitoring: Monitor social platforms without triggering security blocks.
  • Ad Verification: Ensure ads are being correctly displayed across regions without being flagged for using data center proxies.
  • Geo-Restricted Content Access: Access content or services that are available only in specific countries or regions.

In these situations, residential proxies provide the stealth and flexibility needed to gather data efficiently while avoiding detection.


Why Use Froxy Residential Proxies?

Froxy stands out in the crowded proxy market with its exceptional features and commitment to ethical sourcing. Here's why you should consider Froxy for your residential proxy needs:

  • Extensive IP Pool: With over 10 million unique IPs across 200+ locations worldwide, Froxy offers unparalleled coverage. This vast network ensures you'll always have fresh IPs at your disposal, reducing the risk of detection during large-scale scraping operations.
  • Ethical Sourcing: Froxy prides itself on using only ethically sourced IPs. This means the underlying IP holders have consented to their addresses being used, ensuring legal compliance and peace of mind for your projects.
  • Global Coverage with Precise Targeting: Froxy allows you to target specific countries, cities, and even ISPs. This granular control is invaluable for tasks like localized SEO research or gathering region-specific pricing data.
  • High Reliability and Uptime: With a 99.99% uptime guarantee, Froxy ensures your scraping projects run smoothly without interruptions. This reliability is crucial for time-sensitive data collection tasks.
  • Flexible Authentication Options: Choose between username/password authentication or IP whitelisting for added security and ease of use.
  • Traffic Rollover: Unused traffic doesn't expire at the end of your billing cycle. Instead, it rolls over to the next period, ensuring you get the most value from your plan.
  • Excellent Support: With 24/7 customer support, you'll always have help when you need it, minimizing downtime for your projects.
  • Compatibility: Froxy proxies work seamlessly with all major scraping tools and programming languages, making integration a breeze.

Froxy Residential Proxy Pricing

Froxy offers a range of residential proxy plans with pricing based on bandwidth usage. They do not provide a Pay-As-You-Go option, instead offering pre-defined plans with varying amounts of monthly traffic.

Here's a breakdown of Froxy's pricing structure:

Plan Name | Traffic Size | Cost per GB | Price
Residential Mini | 1GB | $7.99 | $8/month
Residential Starter | 3GB | $6.99 | $21/month
Residential Basic | 7GB | $6.43 | $45/month
Residential Plus | 30GB | $5.00 | $150/month
Residential Pro | 50GB | $4.00 | $200/month
Residential Ultra | 1TB | $2.93 | $3000/month

Froxy's pricing compares to other residential proxy providers as follows:

  1. Smaller plans (1-7GB) are priced in the $6-8 per GB range, positioning them in the more expensive category for low-volume users.
  2. Larger plans (50GB-1TB) offer better value at $2-4 per GB, which is competitive for high-volume users.

This tiered pricing strategy encourages users to opt for larger plans for better cost-efficiency. Froxy also offers features like rollover traffic and varying numbers of ports based on the plan, potentially adding value for certain users.
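As a quick sanity check on those per-GB figures, a few lines of Python reproduce the table's effective rates (assuming 1TB = 1024GB; the smallest plans differ by a cent or two because their advertised monthly prices are rounded):

# Plan name -> (traffic in GB, monthly price in USD)
plans = {
    "Residential Mini": (1, 8),
    "Residential Starter": (3, 21),
    "Residential Basic": (7, 45),
    "Residential Plus": (30, 150),
    "Residential Pro": (50, 200),
    "Residential Ultra": (1024, 3000),
}

for name, (gb, price) in plans.items():
    print(f"{name}: ${price / gb:.2f}/GB")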

For a comprehensive comparison of Froxy's pricing against other residential proxy providers, you can use our Residential Proxy Comparison page.

This resource allows you to evaluate various providers side-by-side, helping you make an informed decision based on your specific needs and budget.


Setting Up Froxy Residential Proxies

Setting up Froxy residential proxies involves a few specific steps to ensure you can effectively utilize their service. Here's a step-by-step guide:

  1. To get started with Froxy residential proxies, visit the Froxy website.
  2. Click the "Get Started" button, which will direct you to the pricing page.
  3. From here, pick a package that suits your needs. For example, choose the "Residential Mini" package and hit the "Start Now" button.

Choose Package

  4. Next, sign up using Google or by entering your email and password.

Signup

It’s possible that after signing up, the page won’t refresh or redirect immediately. If this happens, simply click "I already have an account" on the sign-up page. This will take you directly to your dashboard.

  5. From there, go to the "My Subscriptions" section to buy a residential proxy package.

Buy Residential Proxy

To reduce the cost, you can either opt for the 3-day trial or use a promo code.

  6. When you're ready to pay, choose Cryptomus as your payment method if you want to pay in fiat currency via your bank card. Otherwise, use one of the alternatives: "Capitalist" or "Russian cards".

Choose Cryptomus

If you don’t have an account with the payment method you've chosen, visit the respective website and set one up. For example, for Cryptomus, you'll need to sign up by visiting Cryptomus.com.

  7. Once you’ve registered, you’ll be required to complete the KYC (Know Your Customer) process to unlock full functionality, including the ability to add funds.

  8. After successfully completing KYC, go to your Cryptomus dashboard. In the top-up section, choose to deposit funds using a bank card and enter the amount in USD that you wish to add to your balance.

Top Up

  9. Now, you can use your Cryptomus balance to pay for the residential proxy you chose on Froxy.

Authentication

When using Froxy residential proxies, you’ll need to authenticate your requests to ensure secure access to their proxy network.

The most common method is username and password authentication. This method is straightforward and easy to integrate into your code, especially for web scraping projects.

You’ll receive a username and password when you purchase a proxy plan from Froxy.

To authenticate your requests, simply include these credentials in the proxy URL format. Here's an example of how to set this up:

import requests

# Froxy proxy settings
proxy_host = "proxy.froxy.com"
proxy_port = "9000"
proxy_user = "your_username" # Replace with your Froxy username
proxy_pass = "your_password" # Replace with your Froxy password

# Construct the proxy URL
proxy_url = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

# Set up the proxy dictionary for both HTTP and HTTPS requests
proxies = {
    "http": proxy_url,
    "https": proxy_url
}

# Make a request through the Froxy proxy
try:
    response = requests.get("http://example.com", proxies=proxies)

    # Print the response content
    print(response.text)
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")

Make sure to replace your_username and your_password with your actual Froxy credentials.


Basic Request Using Froxy Residential Proxies

Making a request through Froxy residential proxies involves routing your traffic through one of their proxy servers. This ensures that your request appears to come from a residential IP, which is less likely to be blocked by websites compared to data center IPs.

Here’s how to set up a basic request using Froxy residential proxies with Python’s requests library:

  1. Set up the Froxy proxy URL: Use the provided Froxy proxy credentials (proxy host, port, username, and password) to create the proxy URL.
  2. Configure the requests library: Set up the requests library to route traffic through the Froxy proxy using the generated URL.
  3. Send the request: Once the proxy is configured, you can make any HTTP request (e.g., GET, POST) while routing the traffic through the Froxy proxy.
import requests

# Froxy proxy settings
proxy_host = "proxy.froxy.com"
proxy_port = "9000"
proxy_user = "your_username" # Replace with your Froxy username
proxy_pass = "your_password" # Replace with your Froxy password

# Construct the proxy URL
proxy_url = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

# Set up the proxy dictionary for both HTTP and HTTPS requests
proxies = {
    "http": proxy_url,
    "https": proxy_url
}

# Make a request through the Froxy proxy
try:
    response = requests.get("http://example.com", proxies=proxies)

    # Print the response content
    print(response.text)
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")

In the script above:

  • Proxy Setup: We define Froxy proxy settings, including the host, port, username, and password.
  • Proxy URL Construction: The proxy URL is built using the credentials provided by Froxy. This URL is passed to the proxies dictionary, which requests uses to route all traffic through the Froxy proxy.
  • Making the Request: We use requests.get() to send a GET request to the target website (example.com in this case). All requests are routed through Froxy's residential proxy.
  • Error Handling: Any issues with the request, such as connection timeouts or proxy errors, are caught and printed using the try-except block.

Country Geotargeting

Country geotargeting is the practice of delivering content, advertisements, or services to users based on their geographic location, specifically targeting individuals from specific countries.

This technique leverages the user's IP address or location data to determine where they are accessing the internet from, allowing businesses and service providers to tailor their offerings to meet the needs and preferences of different regional audiences.

Country-level geotargeting is crucial for accessing region-specific content, running localized SEO campaigns, or collecting geographically restricted data.

Top Countries Supported by Froxy Residential Proxies

Froxy supports geotargeting across more than 200 countries, allowing you to optimize your data collection processes by choosing proxies from specific regions.

Below is a table of the top countries where Froxy provides proxies, along with the number of available ISPs:

Country | Number of ISPs
USA | 21,216
Brazil | 14,729
United Kingdom | 7,318
India | 2,925
Japan | 3,879
Turkey | 1,495

Apart from country geotargeting, you can also use city-specific proxies. We'll cover those in the next section.

For now, pay attention to the country and how it gets set within our code.

  • For country-level geotargeting, we simply add our country to our password.
  • As you may or may not have noticed, our default password is wifi;;;;.
  • To target a specific country, our password becomes wifi;{country_code};;;.

You can take a look at the example below to see how this works.

import requests

# Froxy proxy settings
proxy_host = "proxy.froxy.com"
proxy_port = "9000"
proxy_user = "your-username" # Replace with your Froxy username

country = "pt"
proxy_pass = f"wifi;{country};;;" # Replace with your Froxy password

# Construct the proxy URL
proxy_url = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

# Set up the proxy dictionary for both HTTP and HTTPS requests
proxies = {
    "http": proxy_url,
    "https": proxy_url
}

# Make a request through the Froxy proxy
try:
    response = requests.get("http://lumtest.com/myip.json", proxies=proxies)

    # Print the response content
    print(response.text)
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")

The code above is almost identical to our basic request from earlier, with only a couple of small differences.

  • country = "pt" sets our country variable to pt, or Portugal.
  • Our password now includes our country: proxy_pass = f"wifi;{country};;;".

City Geotargeting

City-level geotargeting is helpful when you need precise local data, such as analyzing trends in a specific city or verifying ads in a particular metropolitan area.

Froxy provides proxies from various cities, ensuring that your requests appear to originate from specific urban locations.

Here’s a table of the top cities where Froxy offers proxy support:

City | Country
New York | USA
São Paulo | Brazil
London | United Kingdom
Mumbai | India
Tokyo | Japan
Istanbul | Turkey

For city level geotargeting, we modify our password much like we did for country geotargeting. This time though, instead of just picking a country, we need to set a country, region and city.

If any of these locations have spaces in them, for instance, Myrtle Beach or South Carolina, we replace our space with +. Myrtle Beach becomes myrtle+beach. South Carolina becomes south+carolina.
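If you build these location strings programmatically, a tiny helper keeps the formatting consistent. This is just a convenience sketch; slugify_location is our own hypothetical name, not part of Froxy's API.

def slugify_location(name):
    # Lowercase and swap spaces for '+' to match Froxy's password format
    return name.strip().lower().replace(" ", "+")

print(slugify_location("Myrtle Beach"))   # myrtle+beach
print(slugify_location("South Carolina")) # south+carolina

With that convention in place, here's the full city-targeting request: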

import requests

# Froxy proxy settings
proxy_host = "proxy.froxy.com"
proxy_port = "9000"
proxy_user = "your-username" # Replace with your Froxy username

country = "us"
region = "south+carolina"
city = "myrtle+beach"
proxy_pass = f"wifi;{country};;{region};{city}" # Replace with your Froxy password

# Construct the proxy URL
proxy_url = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

# Set up the proxy dictionary for both HTTP and HTTPS requests
proxies = {
    "http": proxy_url,
    "https": proxy_url
}

# Make a request through the Froxy proxy
try:
    response = requests.get("http://lumtest.com/myip.json", proxies=proxies)

    # Print the response content
    print(response.text)
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")

In this code, we use the following variables to control our exact location:

  • country = "us" sets our country to US.
  • region = "south+carolina" is used to tell Froxy that we'd like to appear inside the state of South Carolina.
  • city = "myrtle+beach" tells Froxy that we'd like to appear in the city of Myrtle Beach.

For city-level geotargeting, our password is always set to include our location information: f"wifi;{country};;{region};{city}".

  • First we have our base password (wifi), followed by ; and our country.
  • Next we use ;; to separate our country from our region.
  • Finally, we separate our region from our city with one more ;, ending the string with our city.

Error Codes

When using Froxy residential proxies for web scraping, it's important to be aware of potential error codes that can occur.

Below is a concise summary of common errors, their meanings, and suggested ways to avoid them:

Error Code | Meaning | Description | Ways to Avoid
400 | Bad Request | Indicates an invalid request format or a missing host in the URL. | Ensure the URL is correctly formatted and all required parameters are included.
403 | Access Denied | Access is restricted by the target website, often due to content limitations or IP blocking. | Check the target site’s content policy and consider using rotating proxies or changing IPs if banned.
407 | Proxy Authentication Error | Occurs due to invalid username or password during proxy authentication. | Double-check the proxy credentials (username and password) for accuracy and ensure they are correctly applied.
490 | Invalid Access Point | Indicates no available IPs for the selected country or group, or that the specified group does not exist. | Verify the selected country or group settings and ensure there are active IPs available for your plan.
500 | Internal Server Error | A server-side issue originating from Froxy, generally temporary. | Retry the request after a brief delay, as the issue may resolve itself.
502 | Bad Gateway | Indicates that the assigned IP is no longer available or there’s an issue with the proxy server. | Wait for a new IP assignment or switch to a different session ID to receive a new IP.
503 | Service Unavailable | A temporary proxy error or connection refusal from the target website. | Wait and retry the request later, as the target may be experiencing temporary issues.
522 | Connection Timeout | The proxy server did not receive a response from the target in the expected timeframe. | Retry the request, as network latency or delays may be the cause; consider implementing exponential backoff.
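For the transient errors above (500, 502, 503, 522), a simple retry loop with exponential backoff usually does the trick. Below is a minimal sketch; the retried status codes and backoff parameters are illustrative choices, not an official Froxy recommendation.

import time

import requests

def get_with_backoff(url, proxies, max_retries=4):
    """Retry transient proxy/target errors, doubling the delay each time."""
    delay = 1
    for attempt in range(max_retries):
        try:
            response = requests.get(url, proxies=proxies, timeout=30)
            # Anything other than a transient status is returned immediately
            if response.status_code not in (500, 502, 503, 522):
                return response
        except requests.exceptions.RequestException:
            pass  # Network-level failure; fall through and retry
        time.sleep(delay)
        delay *= 2  # 1s, 2s, 4s, 8s, ...
    raise RuntimeError(f"Request failed after {max_retries} attempts: {url}")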

Implementing Froxy Residential Proxies in Web Scraping

Now that you have an overview of Froxy’s extensive global network and proxy capabilities, it's time to integrate them into your web scraping projects.

Below, we'll explore how to set up Froxy proxies with popular web scraping tools such as Python’s requests, Selenium, Scrapy, and JavaScript-based libraries.

Python Requests

Integrating Froxy proxies with Python requests is simple and efficient. Below is a complete example:

import requests

# Froxy proxy settings
proxy_host = "proxy.froxy.com"
proxy_port = "9000"
proxy_user = "your_username"
proxy_pass = "your_password"

# Construct the proxy URL
proxy_url = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

# Set up the proxy dictionary
proxies = {
    "http": proxy_url,
    "https": proxy_url
}

# Make a request
try:
    response = requests.get("http://example.com", proxies=proxies)
    print(response.text)
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")

This script does the following:

  • Importing Requests: We use the requests library to send HTTP requests.
  • Proxy Setup: Froxy proxy credentials (host, port, username, and password) are used to build the proxy URL.
  • Proxy Configuration: The proxy dictionary is set for both HTTP and HTTPS requests.
  • Sending Requests: A GET request is made using the proxy settings, and the response or any exceptions are printed.

Python Selenium

SeleniumWire has always been a tried and true method for using authenticated proxies with Selenium. Selenium does not support proxy authentication out of the box. Even worse, SeleniumWire is now deprecated! All that said, it is still technically possible to integrate Froxy Residential Proxies via SeleniumWire, but we highly advise against it.

When you decide to use SeleniumWire, you are vulnerable to the following risks:

  • Security: Browsers are updated with security patches regularly. Without these patches, your browser will have security holes that have already been fixed in up-to-date browsers driven by Chromedriver or Geckodriver.

  • Dependency Issues: SeleniumWire is no longer maintained. In time, it may not be able to keep up with its dependencies as they get updated. Broken dependencies can be a source of unending headache for anyone in software development.

  • Compatibility: As the web itself gets updated, SeleniumWire doesn't. Regular browsers are updated all the time. Since SeleniumWire no longer receives updates, you may experience broken functionality and unexpected behavior.

As time goes on, the probability of all these problems increases. If you understand the risks but still wish to use SeleniumWire, you can view a guide on that here.

Depending on your time of reading, the code example below may or may not work. As mentioned above, we strongly recommend against using SeleniumWire because of its deprecation, but if you decide to do so anyway here you go. We are not responsible for any damage that this may cause to your machine or your privacy.

from seleniumwire import webdriver

USERNAME = "your-username"
PASSWORD = "wifi;;;;"
HOSTNAME = "proxy.froxy.com"
PORT = 9000

proxy_url = f"http://{USERNAME}:{PASSWORD}@{HOSTNAME}:{PORT}"


## Define Your Proxy Endpoints
proxy_options = {
    "proxy": {
        "http": proxy_url,
        "https": proxy_url,
        "no_proxy": "localhost:127.0.0.1"
    }
}

## Set Up Selenium Chrome driver
driver = webdriver.Chrome(seleniumwire_options=proxy_options)

## Send Request Using Proxy
driver.get('https://httpbin.org/ip')

  • We set up our url the same way we did with Python Requests: http://{USERNAME}:{PASSWORD}@{HOSTNAME}:{PORT}.
  • We assign this url to both the http and https protocols of our proxy settings.
  • driver = webdriver.Chrome(seleniumwire_options=proxy_options) tells webdriver to open Chrome with our custom seleniumwire_options.

Python Scrapy

Integrating Froxy proxies with Scrapy is pretty simple. There are many ways to do it, but perhaps the easiest is to write the proxy straight into your spider. (An alternative middleware-based approach is sketched after the breakdown below.)

In your spider file, simply set up your proxy similar to the way we did with requests. Your requests will then be forwarded through your Froxy proxy:

import scrapy

# Froxy proxy settings
proxy_host = "proxy.froxy.com"
proxy_port = "9000"
proxy_user = "your-username" # Replace with your Froxy username
proxy_pass = "wifi;;;;" # Replace with your Froxy password

# Construct the proxy URL
proxy_url = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

class ExampleSpider(scrapy.Spider):
    name = "froxy"

    def start_requests(self):
        request = scrapy.Request(url="https://httpbin.org/ip", callback=self.parse)
        request.meta['proxy'] = proxy_url
        yield request

    def parse(self, response):
        print(response.body)

This configuration does the following:

  • Proxy Configuration: Every request that sets meta['proxy'] to the Froxy URL is routed through the Froxy proxy.
  • Spider Implementation: The spider makes requests to the target website, and those requests are routed through the Froxy proxy automatically.
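Alternatively, if you'd rather not set the proxy on each request, a small custom downloader middleware can apply it project-wide. This is a sketch under the same credential assumptions as above; FroxyProxyMiddleware is our own name, not a built-in Scrapy or Froxy class.

# middlewares.py
class FroxyProxyMiddleware:
    # Same URL format as in the spider above
    proxy_url = "http://your-username:wifi;;;;@proxy.froxy.com:9000"

    def process_request(self, request, spider):
        # Route every outgoing request through the Froxy proxy
        request.meta['proxy'] = self.proxy_url

# settings.py -- enable the middleware for the whole project
# DOWNLOADER_MIDDLEWARES = {
#     "yourproject.middlewares.FroxyProxyMiddleware": 350,
# }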

NodeJs Puppeteer

Now, we're going to run the same setup for NodeJS Puppeteer. Similar to Scrapy, we need to create a new project first. Follow the steps below to get up and running in minutes.

Create a new folder.

mkdir puppeteer-froxy

cd into the new folder and create a new JavaScript project.

cd puppeteer-froxy
npm init --y

Next, we need to install Puppeteer.

npm install puppeteer

Next, from within your new JavaScript project, you can copy and paste the code below into a new JavaScript file.

const puppeteer = require('puppeteer');

const PORT = 9000;
const USERNAME = "your-username";
const PASSWORD = "wifi;;;;";
const HOSTNAME = "proxy.froxy.com";

(async () => {
    const browser = await puppeteer.launch({
        args: [`--proxy-server=${HOSTNAME}:${PORT}`]
    });

    const page = await browser.newPage();

    await page.authenticate({
        username: USERNAME,
        password: PASSWORD
    });

    await page.goto('http://lumtest.com/myip.json');
    await page.screenshot({path: 'puppeteer.png'});

    await browser.close();
})();

NodeJs Playwright

Integration with Playwright is almost identical to Puppeteer. Puppeteer and Playwright actually share a common origin in Chrome's DevTools Protocol.

The steps below should look familiar; however, the setup differs slightly at the end.

Create a new project folder.

mkdir playwright-froxy

cd into the new folder and initialize a JavaScript project.

cd playwright-froxy
npm init --y

Install Playwright.

npm install playwright
npx playwright install

Next, you can copy/paste the code below into a JavaScript file.

const playwright = require('playwright');

const PORT = 9000;
const USERNAME = "your-username";
const PASSWORD = "wifi;;;;";
const HOSTNAME = "proxy.froxy.com";

const options = {
    proxy: {
        server: `${HOSTNAME}:${PORT}`,
        username: USERNAME,
        password: PASSWORD
    }
};

(async () => {
    const browser = await playwright.chromium.launch(options);
    const page = await browser.newPage();

    await page.goto('http://lumtest.com/myip.json');

    await page.screenshot({ path: "playwright.png" });

    await browser.close();
})();

Case Study: Scraping Amazon.es Prices Using Froxy Proxy

Now, it's time to perform a little case study. We're going to scrape laptops from Amazon.es.

  1. First, we setup our proxy. This part should be pretty familiar to you at this point. We have our proxy_host, proxy_port, proxy_user, country, and proxy_pass.
  2. We then use all of these elements to construct our proxy_url: f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}".
  3. Once we've got our proxy_url, we use a dict and assign it to both our http and https protocols.
    • This tells requests to forward all of our network requests through this connection.
  4. We set a custom user agent as well: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36.
    • This tells Amazon that we're using a browser that's compatible with Chrome, WebKit, and Gecko.
    • It greatly increases our chances of being treated like a regular user.
import requests
from bs4 import BeautifulSoup

proxy_host = "proxy.froxy.com"
proxy_port = "9000"
proxy_user = "your-username"
country = "pt"
proxy_pass = f"wifi;{country};;;"

# Construct the proxy URL
proxy_url = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

proxies = {
    "http": proxy_url,
    "https": proxy_url
}

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
}

print("----------------------PT-------------------")

response = requests.get('https://www.amazon.es/s?k=port%C3%A1til', proxies=proxies, headers=headers)

soup = BeautifulSoup(response.text, "html.parser")

location = soup.select_one("span[id='glow-ingress-line2']")

print("Location:", location.text)

first_price_holder = soup.select_one("span[class='a-price']")
first_price = first_price_holder.select_one("span[class='a-offscreen']")

print("First Price:", first_price.text)


print("----------------------ES---------------------")

country = "es"
proxy_pass = f"wifi;{country};;;"

proxy_url = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

proxies = {
    "http": proxy_url,
    "https": proxy_url
}

response = requests.get('https://www.amazon.es/s?k=port%C3%A1til', proxies=proxies, headers=headers)

soup = BeautifulSoup(response.text, "html.parser")

location = soup.select_one("div[id='glow-ingress-block']")

print("Location:", location.text.strip())

first_price_holder = soup.select_one("span[class='a-price']")
first_price = first_price_holder.select_one("span[class='a-offscreen']")

print("First Price:", first_price.text)

After our initial proxy setup, we do the following:

  • Get the page from Amazon: response = requests.get('https://www.amazon.es/s?k=port%C3%A1til', proxies=proxies, headers=headers).
  • Next, we pull both our first_price and our location out of the resulting page and print them to the terminal.
  • Once we've used our Portuguese proxy, we reassign our country to use a Spanish one, country = "es" and then we reconstruct our password, proxy_pass = f"wifi;{country};;;".
  • With the changes in these variables, we now need to reset our proxy connection. We do this by simply repeating some of our setup code.
  • Finally, we make the same request through our new Spanish proxy.

You can view the output from this below.

----------------------PT-------------------
Location:
Portugal

First Price: 399,99 €
----------------------ES---------------------
Location: Entrega en Murcia 30009


Actualizar ubicación
First Price: 399,99 €

There are some things you should notice from our output above.

  • Location:
    • In our first run, the location Amazon picks up is Portugal. This is perfectly consistent with our Portuguese proxy.
    • In our second run (after resetting the proxy), our location comes up as Entrega en Murcia 30009, which is located in Spain.
    • You can view proof of that on Google Maps.
  • Price:
    • This variable came out the same on this run, 399,99 €. However, this is not always the case.
    • Amazon prices are typically dynamic and they're based on a whole range of factors such as location, supply and shipping.

We can use Froxy Residential Proxies to keep our anonymity and effectively geotarget pretty much anywhere.


Alternative: ScrapeOps Residential Proxy Aggregator

When scraping web content, reliable proxy services are essential to ensure uninterrupted access and avoid IP bans.

One powerful option is the ScrapeOps Residential Proxy Aggregator - a service that simplifies and strengthens your proxy setup by connecting you to a wide range of residential and mobile proxy providers through a single port.

Why Use ScrapeOps Residential Proxy Aggregator?

  1. Cheaper Pricing: Compared to other proxy services, ScrapeOps offers more competitive rates. You only pay for the data you use, and the pricing is tailored for businesses of all sizes, making it a cost-effective solution for small or large scraping projects.
  2. Flexible Plans: ScrapeOps provides more versatile plans, with smaller and more customizable packages available. This flexibility makes it suitable whether you're just starting out or managing large-scale scraping operations.
  3. Increased Reliability: With access to thousands of residential and mobile proxies from multiple providers via a single proxy port, ScrapeOps ensures higher reliability and uptime. You can switch providers seamlessly without reconfiguring your system or risk getting blocked.
  4. Wide Coverage: ScrapeOps gives you access to a diverse set of proxies from different regions and networks, making it easier to target specific geographic locations when scraping.

Using ScrapeOps Residential Proxy Aggregator with Python Requests

Here's a Python example using the requests library and ScrapeOps proxy service.

import requests
from bs4 import BeautifulSoup
from dotenv import load_dotenv
import os

load_dotenv()

username = 'scrapeops'
api_key = os.getenv("SCRAPEOPS_API_KEY")
proxy = 'residential-proxy.scrapeops.io'
port = 8181

proxies = {
    "http": f"http://{username}:{api_key}@{proxy}:{port}",
    "https": f"http://{username}:{api_key}@{proxy}:{port}"
}

response = requests.get('https://plainenglish.io/', proxies=proxies)

# Check if the request was successful
if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')

    # Find the first 10 items with class 'mob-col-100'
    items = soup.find_all(class_='mob-col-100', limit=10)
    for i, item in enumerate(items, 1):
        print(f"Item {i}: {item.get_text(strip=True)}")
else:
    print(f"Failed to retrieve content. Status code: {response.status_code}")

Code Breakdown:

load_dotenv()

We load environment variables using dotenv to securely manage sensitive information such as the ScrapeOps API key. This keeps our credentials hidden and manageable in a .env file.

username = 'scrapeops'
api_key = os.getenv("SCRAPEOPS_API_KEY") # API Key retrieved from .env file
proxy = 'residential-proxy.scrapeops.io'
port = 8181

# Set up the proxy configuration
proxies = {
    "http": f"http://{username}:{api_key}@{proxy}:{port}",
    "https": f"http://{username}:{api_key}@{proxy}:{port}"
}

We use ScrapeOps' residential proxy service by setting up a proxy connection. The username is set to 'scrapeops', and the API key, retrieved from the .env file, authenticates our requests.

Proxies are specified for both HTTP and HTTPS protocols to ensure full support for secure connections.

response = requests.get('https://plainenglish.io/', proxies=proxies)

Next, we send an HTTP GET request to the target website, Plain English, using the configured proxy. By routing the request through ScrapeOps’ residential proxy, we avoid the risk of getting blocked by the site’s anti-scraping mechanisms.

# Check if the request was successful
if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')

    # Find the first 10 items with class 'mob-col-100'
    items = soup.find_all(class_='mob-col-100', limit=10)
    for i, item in enumerate(items, 1):
        print(f"Item {i}: {item.get_text(strip=True)}")
else:
    print(f"Failed to retrieve content. Status code: {response.status_code}")

We check if the request was successful by verifying the HTTP status code. A 200 status means the request was successful.

Then, we use BeautifulSoup to parse the HTML content and search for elements with the class 'mob-col-100', which, in this case, are the items we want to extract. The limit=10 argument ensures that we extract only the first 10 items.

Finally, we print out the extracted items, allowing us to see the text content of the scraped elements.

ScrapeOps Proxy Aggregator

To dive deeper into the technical setup and API features of the ScrapeOps Residential Proxy Aggregator, you can visit the ScrapeOps Proxy Aggregator Overview.

How to Get Started

Start using ScrapeOps Residential Proxy Aggregator today with 500MB of free bandwidth as part of the free trial. Leverage this powerful tool for your web scraping needs, and scale your scraping projects with ease!


Ethical Considerations and Legal Guidelines

When using proxies for web scraping, it's critical to consider both the ethical and legal aspects involved. Here are some key points to keep in mind:

Ethically Sourced Proxies

Some proxy providers, like Bright Data, emphasize that their proxies are ethically sourced. This means that the underlying IP holders have given explicit consent for their IP addresses to be used for data collection purposes. Ensuring that your proxy provider operates with transparency and ethical sourcing is important.

While providers like Froxy may not highlight ethical sourcing as heavily as Bright Data, it’s essential to choose a provider that respects the rights of IP owners and avoids exploitation. Always inquire whether the proxies you are using are ethically sourced to avoid legal or moral issues.

Responsibilities When Using Froxy Proxies

When utilizing Froxy Residential Proxies for web scraping, you should be aware of both Froxy’s policies and your own responsibilities. Froxy’s terms of service should guide you on how the proxies can be used ethically and legally. Key responsibilities include:

  • Avoid scraping sensitive personal information: Do not collect personal data unless explicit consent has been given.
  • Respect target websites' Terms of Service: Many websites explicitly state that automated data scraping is forbidden. Violating these terms could lead to legal consequences, including being blocked or blacklisted.
  • Scraping rate limitations: Avoid overloading the target server with frequent requests, as this can harm the website’s performance or lead to a denial of service. A simple throttling sketch follows this list.
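On that last point, even a basic randomized delay between requests keeps your scraper from hammering the target. A minimal sketch (the URLs are placeholders):

import random
import time

import requests

# Hypothetical list of target pages
urls = ["https://example.com/page1", "https://example.com/page2"]

for url in urls:
    response = requests.get(url)
    print(url, response.status_code)
    # Sleep 1-3 seconds so requests don't arrive in a burst
    time.sleep(random.uniform(1.0, 3.0))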

Web scraping must always be done responsibly, respecting both ethical standards and legal regulations. Here are some guidelines:

  • Respect Website Terms of Service: Most websites have terms of service that outline acceptable usage, including whether scraping is allowed. Violating these terms can lead to penalties or bans.
  • Handling Personal Data: When scraping websites that contain user-generated content or personal data, be aware of data privacy laws such as the General Data Protection Regulation (GDPR) in the European Union. If you collect or store personal data, you are legally required to handle it carefully, with user consent and proper data protection measures.
  • Compliance with Local Laws: Ensure that your scraping activities comply with the laws in both your country and the country of the website being scraped. Data protection laws vary, and violating them can lead to legal actions.

In summary, while using Froxy or any other proxy provider, it is crucial to follow ethical practices, abide by website terms, handle data with care, and comply with applicable laws to ensure responsible and lawful scraping.


Conclusion

In this guide, we've explored the process of using residential proxies for web scraping, focusing specifically on the benefits, ethical considerations, and practical implementations.

Integrating residential proxies into your scraping projects will help you scale your data gathering more effectively. Whether you’re using Froxy or the ScrapeOps Residential Proxy Aggregator, residential proxies provide the tools you need to avoid IP bans, bypass rate limiting, and access geo-restricted content seamlessly.


More Python Web Scraping Guides

For more web scraping tutorials, check out the Python Web Scraping Playbook along with these useful guides: