

How to Scrape Trustpilot with Puppeteer

In today's world, pretty much everything is online. This is especially true when it comes to business. Trustpilot gives us the ability to rate and review different businesses. Because of this, Trustpilot is a great place to get a feel for any business you're unsure of.

In this tutorial, we're going to go over the finer points of scraping Trustpilot and generating some detailed reports.


TLDR - How to Scrape Trustpilot

On Trustpilot, all of our data actually comes embedded within the page in a JSON blob. If we can find the blob, all we need to do is parse the data.

The script below does exactly that for both a search results page and a business page.

const puppeteer = require("puppeteer");
const createCsvWriter = require("csv-writer").createObjectCsvWriter;
const csvParse = require("csv-parse");
const fs = require("fs");

const API_KEY = JSON.parse(fs.readFileSync("config.json")).api_key;

async function writeToCsv(data, outputFile) {
    if (!data || data.length === 0) {
        throw new Error("No data to write!");
    }
    const fileExists = fs.existsSync(outputFile);

    const headers = Object.keys(data[0]).map(key => ({id: key, title: key}));

    const csvWriter = createCsvWriter({
        path: outputFile,
        header: headers,
        append: fileExists
    });
    try {
        await csvWriter.writeRecords(data);
    } catch (e) {
        throw new Error("Failed to write to csv");
    }
}

async function readCsv(inputFile) {
    const results = [];
    const parser = fs.createReadStream(inputFile).pipe(csvParse.parse({
        columns: true,
        delimiter: ",",
        trim: true,
        skip_empty_lines: true
    }));

    for await (const record of parser) {
        results.push(record);
    }
    return results;
}

function range(start, end) {
    const array = [];
    for (let i = start; i < end; i++) {
        array.push(i);
    }
    return array;
}

function getScrapeOpsUrl(url, location="us") {
    const params = new URLSearchParams({
        api_key: API_KEY,
        url: url,
        country: location
    });
    return `https://proxy.scrapeops.io/v1/?${params.toString()}`;
}

async function scrapeSearchResults(browser, keyword, pageNumber, location="us", retries=3) {
    let tries = 0;
    let success = false;

    while (tries <= retries && !success) {
        const formattedKeyword = keyword.replaceAll(" ", "+");
        const page = await browser.newPage();
        try {
            const url = `https://www.trustpilot.com/search?query=${formattedKeyword}&page=${pageNumber+1}`;

            const proxyUrl = getScrapeOpsUrl(url, location);
            await page.goto(proxyUrl);

            console.log(`Successfully fetched: ${url}`);

            const script = await page.$("script[id='__NEXT_DATA__']");

            const innerHTML = await page.evaluate(element => element.innerHTML, script);
            const jsonData = JSON.parse(innerHTML);

            const businessUnits = jsonData.props.pageProps.businessUnits;

            for (const business of businessUnits) {
                let category = "n/a";
                if ("categories" in business && business.categories.length > 0) {
                    category = business.categories[0].categoryId;
                }

                let location = "n/a";
                if ("location" in business && "country" in business.location) {
                    location = business.location.country;
                }
                const trustpilotFormatted = business.contact.website.split("://")[1];

                const businessInfo = {
                    name: business.displayName.toLowerCase().replaceAll(" ", "").replaceAll("'", ""),
                    stars: business.stars,
                    rating: business.trustScore,
                    num_reviews: business.numberOfReviews,
                    website: business.contact.website,
                    trustpilot_url: `https://www.trustpilot.com/review/${trustpilotFormatted}`,
                    location: location,
                    category: category
                };

                await writeToCsv([businessInfo], `${keyword.replaceAll(" ", "-")}.csv`);
            }

            success = true;
        } catch (err) {
            console.log(`Error: ${err}, tries left ${retries - tries}`);
            tries++;
        } finally {
            await page.close();
        }
    }
}

async function startScrape(keyword, pages, location, concurrencyLimit, retries) {
    const pageList = range(0, pages);

    const browser = await puppeteer.launch();

    while (pageList.length > 0) {
        const currentBatch = pageList.splice(0, concurrencyLimit);
        const tasks = currentBatch.map(page => scrapeSearchResults(browser, keyword, page, location, retries));

        try {
            await Promise.all(tasks);
        } catch (err) {
            console.log(`Failed to process batch: ${err}`);
        }
    }

    await browser.close();
}

async function processBusiness(browser, row, location, retries = 3) {
    const url = row.trustpilot_url;
    let tries = 0;
    let success = false;

    while (tries <= retries && !success) {
        const page = await browser.newPage();

        try {
            await page.goto(getScrapeOpsUrl(url, location));

            const script = await page.$("script[id='__NEXT_DATA__']");
            const innerHTML = await page.evaluate(element => element.innerHTML, script);

            const jsonData = JSON.parse(innerHTML);
            const businessInfo = jsonData.props.pageProps;

            const reviews = businessInfo.reviews;

            for (const review of reviews) {
                const reviewData = {
                    name: review.consumer.displayName,
                    rating: review.rating,
                    text: review.text,
                    title: review.title,
                    date: review.dates.publishedDate
                };
                await writeToCsv([reviewData], `${row.name}.csv`);
            }

            success = true;
        } catch (err) {
            console.log(`Error: ${err}, tries left: ${retries-tries}`);
            tries++;
        } finally {
            await page.close();
        }
    }
}

async function processResults(csvFile, location, concurrencyLimit, retries) {
    const businesses = await readCsv(csvFile);
    const browser = await puppeteer.launch();

    while (businesses.length > 0) {
        const currentBatch = businesses.splice(0, concurrencyLimit);
        const tasks = currentBatch.map(business => processBusiness(browser, business, location, retries));

        try {
            await Promise.all(tasks);
        } catch (err) {
            console.log(`Failed to process batch: ${err}`);
        }
    }
    await browser.close();
}

async function main() {
    const keywords = ["online bank"];
    const concurrencyLimit = 5;
    const pages = 1;
    const location = "us";
    const retries = 3;
    const aggregateFiles = [];

    for (const keyword of keywords) {
        await startScrape(keyword, pages, location, concurrencyLimit, retries);
        aggregateFiles.push(`${keyword.replaceAll(" ", "-")}.csv`);
    }

    for (const file of aggregateFiles) {
        await processResults(file, location, concurrencyLimit, retries);
    }
}

main();

To customize this scraper, feel free to change any of the following constants:

  • keywords: This array contains the search keywords you want to use on Trustpilot to find businesses. Each keyword in the array will be used to perform a separate search and gather data on matching businesses.
  • concurrencyLimit: This number controls how many scraping tasks are run concurrently. A higher limit can speed up the scraping process but may increase the load on your system and the target website, potentially leading to rate limiting or bans.
  • pages: This value represents the number of pages of search results you want to scrape for each keyword. Each page typically contains a set number of business listings.
  • location: This string specifies the geographical location to use for the proxy service. It determines the country from which the scraping requests appear to originate. The location might affect the results due to regional differences in business listings and reviews.
  • retries: This value sets the number of retry attempts for each scraping task if it fails. More retries increase the chances of successful scraping despite intermittent errors or temporary issues, but they also prolong the total scraping time.

How To Architect Our Trustpilot Scraper

When we build our Trustpilot scraper, we actually need to build two scrapers.

  1. Our first scraper is a crawler. This crawler will look up search results, interpret them, and store them in a CSV file.
  2. Our second scraper will be our review scraper. The review scraper needs to look up specific businesses and save their reviews to a new CSV file.

For the best performance and stability, each of these scrapers will need the following:

  • Parsing: so we can pull proper information from a page.
  • Pagination: so we can pull up different pages and be more selective about our data.
  • Data Storage: to store our data in a safe, efficient and readable way.
  • Concurrency: to scrape multiple pages at once.
  • Proxy Integration: when scraping anything at scale, we often face the issue of getting blocked. Proxies give us redundant connections and reduce our likelihood of getting blocked by different websites.

Understanding How To Scrape Trustpilot

Step 1: How To Request Trustpilot Pages

When we perform a search using Trustpilot, our URL typically looks like this:

https://www.trustpilot.com/search?query=word1+word2

Go ahead and take a look at the screenshot below, which shows a search for the term online bank.

TrustPilot Search Results Page

The other type of page we request from Trustpilot is the business page. This is where we scrape our reviews from. The URL convention here is a bit unusual:

https://www.trustpilot.com/review/actual_website_domain_name

The example below is a screenshot for good-bank.de.

Since the site's domain name is good-bank.de, the Trustpilot URL would be:

https://www.trustpilot.com/review/good-bank.de

While this naming convention is unorthodox, we can base our URL system on this.

Good Bank Business Page
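Both conventions are easy to reproduce in code. The helpers below are hypothetical (they're not part of the final scraper); they just illustrate the two URL shapes we'll be building:

```javascript
// Hypothetical helpers illustrating Trustpilot's two URL conventions.
function buildSearchUrl(keyword) {
    // Spaces in the query become "+" characters.
    return `https://www.trustpilot.com/search?query=${keyword.replaceAll(" ", "+")}`;
}

function buildBusinessUrl(domain) {
    // The business page is keyed by the site's actual domain name.
    return `https://www.trustpilot.com/review/${domain}`;
}

console.log(buildSearchUrl("online bank"));
// https://www.trustpilot.com/search?query=online+bank
console.log(buildBusinessUrl("good-bank.de"));
// https://www.trustpilot.com/review/good-bank.de
```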


Step 2: How To Extract Data From Trustpilot Results and Pages

When pulling our data from Trustpilot, we get pretty lucky: the data is saved right in the page as a JSON blob.

This is extremely convenient because we don't have to go hunting for nested HTML/CSS elements. We only need to look for one tag: script. The script tag holds JavaScript, and the JavaScript holds our JSON.

Here is the JSON blob from good-bank.de.

JSON Blob for GoodBank Trustpilot Page

On both our search results and our business pages, all the information we want is saved in a script tag with an id of "__NEXT_DATA__". All we have to do is pull this JSON from the page and then parse it.
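To see why this is so convenient, here's a minimal sketch of the extraction step, run against a tiny hypothetical HTML snippet instead of a live page (the real scraper uses Puppeteer's page.$() and page.evaluate() for the same job):

```javascript
// A tiny, hypothetical stand-in for a Trustpilot page.
const sampleHtml = `<html><body>
<script id="__NEXT_DATA__" type="application/json">{"props":{"pageProps":{"businessUnits":[{"displayName":"Good Bank"}]}}}</script>
</body></html>`;

// Pull the contents of the __NEXT_DATA__ script tag and parse it as JSON.
function extractNextData(html) {
    const match = html.match(/<script id="__NEXT_DATA__"[^>]*>([\s\S]*?)<\/script>/);
    return match ? JSON.parse(match[1]) : null;
}

const data = extractNextData(sampleHtml);
console.log(data.props.pageProps.businessUnits[0].displayName);
// Good Bank
```

Once the blob is parsed, everything else is ordinary object access; no CSS selectors beyond that single script tag.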


Step 3: How To Control Pagination

To paginate our results, we can use the following format for our search URL:

https://www.trustpilot.com/search?query={formatted_keyword}&page={page_number+1}

So if we wanted page 1 of online banks, our URL would look like this:

https://www.trustpilot.com/search?query=online+bank&page=1

As we discussed previously, our URL for an individual business is set up like this:

https://www.trustpilot.com/review/actual_website_domain_name

With a system ready for our URLs, we're all set to extract our data.
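Since our scraper counts pages from zero, we add 1 when building the URL. A hypothetical helper makes the off-by-one explicit:

```javascript
// Hypothetical helper: pageNumber is zero-indexed in our scraper,
// but Trustpilot's page parameter starts at 1.
function buildPagedSearchUrl(keyword, pageNumber) {
    const formattedKeyword = keyword.replaceAll(" ", "+");
    return `https://www.trustpilot.com/search?query=${formattedKeyword}&page=${pageNumber + 1}`;
}

console.log(buildPagedSearchUrl("online bank", 0));
// https://www.trustpilot.com/search?query=online+bank&page=1
```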


Step 4: Geolocated Data

To handle geolocated data, we'll be using the ScrapeOps Proxy API.

  • If we want to be in Great Britain, we simply set our country parameter to "uk".
  • If we want to be in the US, we set this param to "us".

When we pass our country into the ScrapeOps API, ScrapeOps will actually route our requests through a server in that country, so even if the site checks our geolocation, our geolocation will show up correctly!
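In practice, this is just one extra query parameter on the proxied URL. Here's a sketch (with a placeholder API key, not a real one):

```javascript
// Sketch of attaching the country parameter to a proxied request.
// "YOUR-API-KEY" is a placeholder, not a real key.
function getScrapeOpsUrl(url, location = "us") {
    const params = new URLSearchParams({
        api_key: "YOUR-API-KEY",
        url: url,
        country: location
    });
    return `https://proxy.scrapeops.io/v1/?${params.toString()}`;
}

console.log(getScrapeOpsUrl("https://www.trustpilot.com/search?query=online+bank", "uk"));
```

URLSearchParams takes care of URL-encoding the target URL for us, so it can travel safely as a query parameter.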


Setting Up Our Trustpilot Scraper Project

Let's get started. You can run the following commands to get set up.

Create a New Project Folder

mkdir trustpilot-scraper

cd trustpilot-scraper

Create a New JavaScript Project

npm init -y

Install Our Dependencies

npm install puppeteer
npm install csv-writer
npm install csv-parse

(fs is built into NodeJS, so there's nothing extra to install for it.)

Now that we've installed our dependencies, it's time to get started on coding!


Build A Trustpilot Search Crawler

Step 1: Create Simple Search Data Parser

We'll start by building a parser for our search results. The goal here is pretty simple: fetch a page and pull information from it.

Along with parsing, we'll add some basic retry logic as well. With retries, if we fail to get our information on the first try, our parser will keep trying until it runs out of retries.

The script below is pretty basic, but this is the foundation to our entire project.

  • while we still have retries left and the operation hasn't succeeded, we get the page and find the script tag with the id, "__NEXT_DATA__".
  • From within this object, we pull all of our relevant information from the JSON blob and then print it to the terminal.
const puppeteer = require("puppeteer");
const createCsvWriter = require("csv-writer").createObjectCsvWriter;
const csvParse = require("csv-parse");
const fs = require("fs");

const API_KEY = JSON.parse(fs.readFileSync("config.json")).api_key;


async function scrapeSearchResults(browser, keyword, location="us", retries=3) {
let tries = 0;
let success = false;

while (tries <= retries && !success) {

const formattedKeyword = keyword.replaceAll(" ", "+");
const page = await browser.newPage();
try {
const url = `https://www.trustpilot.com/search?query=${formattedKeyword}`;

await page.goto(url);

console.log(`Successfully fetched: ${url}`);

const script = await page.$("script[id='__NEXT_DATA__']");

const innerHTML = await page.evaluate(element => element.innerHTML, script);
const jsonData = JSON.parse(innerHTML);

const businessUnits = jsonData.props.pageProps.businessUnits;


for (const business of businessUnits) {

let category = "n/a";
if ("categories" in business && business.categories.length > 0) {
category = business.categories[0].categoryId;
}

let location = "n/a";
if ("location" in business && "country" in business.location) {
location = business.location.country
}
const trustpilotFormatted = business.contact.website.split("://")[1];

const businessInfo = {
name: business.displayName.toLowerCase().replace(" ", "").replace("'", ""),
stars: business.stars,
rating: business.trustScore,
num_reviews: business.numberOfReviews,
website: business.contact.website,
trustpilot_url: `https://www.trustpilot.com/review/${trustpilotFormatted}`,
location: location,
category: category
};

console.log(businessInfo);
}


success = true;
} catch (err) {
console.log(`Error: ${err}, tries left ${retries - tries}`);
tries++;
} finally {
await page.close();
}
}
}


async function main() {
const keywords = ["online bank"];
const location = "us";
const retries = 3;

for (const keyword of keywords) {

const browser = await puppeteer.launch();

await scrapeSearchResults(browser, keyword, location, retries);

await browser.close();
}

}


main();

Step 2: Add Pagination

Now that we can pull information from a Trustpilot page, we need to be able to decide which page we want to scrape. We can do this by using pagination.

As discussed above, our paginated URL is laid out like this:

https://www.trustpilot.com/search?query={formatted_keyword}&page={page_number+1}

Here is our fully updated code. It hasn't really changed much yet.

const puppeteer = require("puppeteer");
const createCsvWriter = require("csv-writer").createObjectCsvWriter;
const csvParse = require("csv-parse");
const fs = require("fs");

const API_KEY = JSON.parse(fs.readFileSync("config.json")).api_key;

function range(start, end) {
const array = [];
for (let i=start; i<end; i++) {
array.push(i);
}
return array;
}


async function scrapeSearchResults(browser, keyword, pageNumber, location="us", retries=3) {
let tries = 0;
let success = false;

while (tries <= retries && !success) {

const formattedKeyword = keyword.replaceAll(" ", "+");
const page = await browser.newPage();
try {
const url = `https://www.trustpilot.com/search?query=${formattedKeyword}&page=${pageNumber+1}`;

await page.goto(url);

console.log(`Successfully fetched: ${url}`);

const script = await page.$("script[id='__NEXT_DATA__']");

const innerHTML = await page.evaluate(element => element.innerHTML, script);
const jsonData = JSON.parse(innerHTML);

const businessUnits = jsonData.props.pageProps.businessUnits;


for (const business of businessUnits) {

let category = "n/a";
if ("categories" in business && business.categories.length > 0) {
category = business.categories[0].categoryId;
}

let location = "n/a";
if ("location" in business && "country" in business.location) {
location = business.location.country
}
const trustpilotFormatted = business.contact.website.split("://")[1];

const businessInfo = {
name: business.displayName.toLowerCase().replace(" ", "").replace("'", ""),
stars: business.stars,
rating: business.trustScore,
num_reviews: business.numberOfReviews,
website: business.contact.website,
trustpilot_url: `https://www.trustpilot.com/review/${trustpilotFormatted}`,
location: location,
category: category
};

console.log(businessInfo);
}


success = true;
} catch (err) {
console.log(`Error: ${err}, tries left ${retries - tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function startScrape(keyword, pages, location, retries) {
const pageList = range(0, pages);

const browser = await puppeteer.launch()

for (const page of pageList) {
await scrapeSearchResults(browser, keyword, page, location, retries);
}

await browser.close();
}


async function main() {
const keywords = ["online bank"];
const concurrencyLimit = 5;
const pages = 1;
const location = "us";
const retries = 3;

for (const keyword of keywords) {
await startScrape(keyword, pages, location, retries);
}

}


main();

We also added a startScrape() function which gives us the ability to scrape multiple pages. Later on, we'll add concurrency to this function, but for now, we're just going to use a for loop as a placeholder.


Step 3: Storing the Scraped Data

Now that we can retrieve our data properly, it's time to start storing it.

To store our data, we use the writeToCsv() function. This function takes data (an array of JSON objects) and an outputFile.

  • If the outputFile exists, we append to it.
  • If it does not exist, we create it.

Take a look at the updated code.

const puppeteer = require("puppeteer");
const createCsvWriter = require("csv-writer").createObjectCsvWriter;
const csvParse = require("csv-parse");
const fs = require("fs");

const API_KEY = JSON.parse(fs.readFileSync("config.json")).api_key;

async function writeToCsv(data, outputFile) {
if (!data || data.length === 0) {
throw new Error("No data to write!");
}
const fileExists = fs.existsSync(outputFile);

const headers = Object.keys(data[0]).map(key => ({id: key, title: key}))

const csvWriter = createCsvWriter({
path: outputFile,
header: headers,
append: fileExists
});
try {
await csvWriter.writeRecords(data);
} catch (e) {
throw new Error("Failed to write to csv");
}
}


function range(start, end) {
const array = [];
for (let i=start; i<end; i++) {
array.push(i);
}
return array;
}


async function scrapeSearchResults(browser, keyword, pageNumber, location="us", retries=3) {
let tries = 0;
let success = false;

while (tries <= retries && !success) {

const formattedKeyword = keyword.replaceAll(" ", "+");
const page = await browser.newPage();
try {
const url = `https://www.trustpilot.com/search?query=${formattedKeyword}&page=${pageNumber+1}`;

await page.goto(url);

console.log(`Successfully fetched: ${url}`);

const script = await page.$("script[id='__NEXT_DATA__']");

const innerHTML = await page.evaluate(element => element.innerHTML, script);
const jsonData = JSON.parse(innerHTML);

const businessUnits = jsonData.props.pageProps.businessUnits;


for (const business of businessUnits) {

let category = "n/a";
if ("categories" in business && business.categories.length > 0) {
category = business.categories[0].categoryId;
}

let location = "n/a";
if ("location" in business && "country" in business.location) {
location = business.location.country
}
const trustpilotFormatted = business.contact.website.split("://")[1];

const businessInfo = {
name: business.displayName.toLowerCase().replace(" ", "").replace("'", ""),
stars: business.stars,
rating: business.trustScore,
num_reviews: business.numberOfReviews,
website: business.contact.website,
trustpilot_url: `https://www.trustpilot.com/review/${trustpilotFormatted}`,
location: location,
category: category
};

await writeToCsv([businessInfo], `${keyword.replace(" ", "-")}.csv`);
}


success = true;
} catch (err) {
console.log(`Error: ${err}, tries left ${retries - tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function startScrape(keyword, pages, location, retries) {
const pageList = range(0, pages);

const browser = await puppeteer.launch()

for (const page of pageList) {
await scrapeSearchResults(browser, keyword, page, location, retries);
}

await browser.close();
}


async function main() {
const keywords = ["online bank"];
const concurrencyLimit = 5;
const pages = 1;
const location = "us";
const retries = 3;

for (const keyword of keywords) {
await startScrape(keyword, pages, location, retries);
}

}


main();
  • Our businessInfo holds all the information that we scraped.
  • Once we've got our businessInfo, we simply write it to CSV with await writeToCsv([businessInfo], `${keyword.replace(" ", "-")}.csv`).
  • By calling this on every object inside the loop, each result is written to the file as soon as it's been processed.
  • In the event of a crash, this allows us to save absolutely every good piece of data that we were able to retrieve.

Step 4: Adding Concurrency

To maximize our efficiency, we need to add concurrency. Concurrency gives us the ability to process multiple pages at once. Here, we'll be using a combination of async programming and batching in order to process batches of multiple results simultaneously.

We use a concurrencyLimit to determine our batch size. While we still have pages to scrape, we splice() out a batch and process it. Once that batch has finished, we move onto the next one.

Our only major difference here is the startScrape() function. Here is what it looks like now:

async function startScrape(keyword, pages, location, concurrencyLimit, retries) {
    const pageList = range(0, pages);

    const browser = await puppeteer.launch();

    while (pageList.length > 0) {
        const currentBatch = pageList.splice(0, concurrencyLimit);
        const tasks = currentBatch.map(page => scrapeSearchResults(browser, keyword, page, location, retries));

        try {
            await Promise.all(tasks);
        } catch (err) {
            console.log(`Failed to process batch: ${err}`);
        }
    }

    await browser.close();
}

Here is the fully updated code.

const puppeteer = require("puppeteer");
const createCsvWriter = require("csv-writer").createObjectCsvWriter;
const csvParse = require("csv-parse");
const fs = require("fs");

const API_KEY = JSON.parse(fs.readFileSync("config.json")).api_key;

async function writeToCsv(data, outputFile) {
if (!data || data.length === 0) {
throw new Error("No data to write!");
}
const fileExists = fs.existsSync(outputFile);

const headers = Object.keys(data[0]).map(key => ({id: key, title: key}))

const csvWriter = createCsvWriter({
path: outputFile,
header: headers,
append: fileExists
});
try {
await csvWriter.writeRecords(data);
} catch (e) {
throw new Error("Failed to write to csv");
}
}


function range(start, end) {
const array = [];
for (let i=start; i<end; i++) {
array.push(i);
}
return array;
}


async function scrapeSearchResults(browser, keyword, pageNumber, location="us", retries=3) {
let tries = 0;
let success = false;

while (tries <= retries && !success) {

const formattedKeyword = keyword.replaceAll(" ", "+");
const page = await browser.newPage();
try {
const url = `https://www.trustpilot.com/search?query=${formattedKeyword}&page=${pageNumber+1}`;

await page.goto(url);

console.log(`Successfully fetched: ${url}`);

const script = await page.$("script[id='__NEXT_DATA__']");

const innerHTML = await page.evaluate(element => element.innerHTML, script);
const jsonData = JSON.parse(innerHTML);

const businessUnits = jsonData.props.pageProps.businessUnits;


for (const business of businessUnits) {

let category = "n/a";
if ("categories" in business && business.categories.length > 0) {
category = business.categories[0].categoryId;
}

let location = "n/a";
if ("location" in business && "country" in business.location) {
location = business.location.country
}
const trustpilotFormatted = business.contact.website.split("://")[1];

const businessInfo = {
name: business.displayName.toLowerCase().replace(" ", "").replace("'", ""),
stars: business.stars,
rating: business.trustScore,
num_reviews: business.numberOfReviews,
website: business.contact.website,
trustpilot_url: `https://www.trustpilot.com/review/${trustpilotFormatted}`,
location: location,
category: category
};

await writeToCsv([businessInfo], `${keyword.replace(" ", "-")}.csv`);
}


success = true;
} catch (err) {
console.log(`Error: ${err}, tries left ${retries - tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function startScrape(keyword, pages, location, concurrencyLimit, retries) {
const pageList = range(0, pages);

const browser = await puppeteer.launch()

while (pageList.length > 0) {
const currentBatch = pageList.splice(0, concurrencyLimit);
const tasks = currentBatch.map(page => scrapeSearchResults(browser, keyword, page, location, retries));

try {
await Promise.all(tasks);
} catch (err) {
console.log(`Failed to process batch: ${err}`);
}
}

await browser.close();
}


async function main() {
const keywords = ["online bank"];
const concurrencyLimit = 5;
const pages = 1;
const location = "us";
const retries = 3;

for (const keyword of keywords) {
await startScrape(keyword, pages, location, concurrencyLimit, retries);
}

}


main();

With concurrency, we can now scrape numerous pages all at the same time.


Step 5: Bypassing Anti-Bots

In the wild, scrapers are often caught and blocked by anti-bots. Anti-bots are software designed to find and block malicious traffic. While our scraper is not malicious, it is incredibly fast and at this point there is nothing human about it. These abnormalities will likely get us flagged and probably blocked.

To bypass these anti-bots, we need to use a proxy. The ScrapeOps Proxy rotates our IP address, so each request we make seems like it's coming from a different location. Instead of one bot making a ton of bizarre requests to the server, our scraper will look like many instances of normal client-side traffic.

This function does all of this for us:

function getScrapeOpsUrl(url, location="us") {
    const params = new URLSearchParams({
        api_key: API_KEY,
        url: url,
        country: location
    });
    return `https://proxy.scrapeops.io/v1/?${params.toString()}`;
}

getScrapeOpsUrl() takes in all of our parameters and uses simple string formatting to return the proxyUrl that we're going to be using.

In this example, our code barely changes at all, but it brings us to a production-ready level. Take a look at the full code example below.

const puppeteer = require("puppeteer");
const createCsvWriter = require("csv-writer").createObjectCsvWriter;
const csvParse = require("csv-parse");
const fs = require("fs");

const API_KEY = JSON.parse(fs.readFileSync("config.json")).api_key;

async function writeToCsv(data, outputFile) {
if (!data || data.length === 0) {
throw new Error("No data to write!");
}
const fileExists = fs.existsSync(outputFile);

const headers = Object.keys(data[0]).map(key => ({id: key, title: key}))

const csvWriter = createCsvWriter({
path: outputFile,
header: headers,
append: fileExists
});
try {
await csvWriter.writeRecords(data);
} catch (e) {
throw new Error("Failed to write to csv");
}
}


function range(start, end) {
const array = [];
for (let i=start; i<end; i++) {
array.push(i);
}
return array;
}

function getScrapeOpsUrl(url, location="us") {
const params = new URLSearchParams({
api_key: API_KEY,
url: url,
country: location
});
return `https://proxy.scrapeops.io/v1/?${params.toString()}`;
}

async function scrapeSearchResults(browser, keyword, pageNumber, location="us", retries=3) {
let tries = 0;
let success = false;

while (tries <= retries && !success) {

const formattedKeyword = keyword.replaceAll(" ", "+");
const page = await browser.newPage();
try {
const url = `https://www.trustpilot.com/search?query=${formattedKeyword}&page=${pageNumber+1}`;

const proxyUrl = getScrapeOpsUrl(url, location);
await page.goto(proxyUrl);

console.log(`Successfully fetched: ${url}`);

const script = await page.$("script[id='__NEXT_DATA__']");

const innerHTML = await page.evaluate(element => element.innerHTML, script);
const jsonData = JSON.parse(innerHTML);

const businessUnits = jsonData.props.pageProps.businessUnits;


for (const business of businessUnits) {

let category = "n/a";
if ("categories" in business && business.categories.length > 0) {
category = business.categories[0].categoryId;
}

let location = "n/a";
if ("location" in business && "country" in business.location) {
location = business.location.country
}
const trustpilotFormatted = business.contact.website.split("://")[1];

const businessInfo = {
name: business.displayName.toLowerCase().replace(" ", "").replace("'", ""),
stars: business.stars,
rating: business.trustScore,
num_reviews: business.numberOfReviews,
website: business.contact.website,
trustpilot_url: `https://www.trustpilot.com/review/${trustpilotFormatted}`,
location: location,
category: category
};

await writeToCsv([businessInfo], `${keyword.replace(" ", "-")}.csv`);
}


success = true;
} catch (err) {
console.log(`Error: ${err}, tries left ${retries - tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function startScrape(keyword, pages, location, concurrencyLimit, retries) {
const pageList = range(0, pages);

const browser = await puppeteer.launch()

while (pageList.length > 0) {
const currentBatch = pageList.splice(0, concurrencyLimit);
const tasks = currentBatch.map(page => scrapeSearchResults(browser, keyword, page, location, retries));

try {
await Promise.all(tasks);
} catch (err) {
console.log(`Failed to process batch: ${err}`);
}
}

await browser.close();
}


async function main() {
const keywords = ["online bank"];
const concurrencyLimit = 5;
const pages = 1;
const location = "us";
const retries = 3;

for (const keyword of keywords) {
await startScrape(keyword, pages, location, concurrencyLimit, retries);
}

}


main();

Step 6: Production Run

It's finally time to run our crawler in production. Take a look at our main() function. We're changing a few constants here.

async function main() {
    const keywords = ["online bank"];
    const concurrencyLimit = 5;
    const pages = 10;
    const location = "us";
    const retries = 3;

    for (const keyword of keywords) {
        await startScrape(keyword, pages, location, concurrencyLimit, retries);
    }
}

pages has been set to 10 and location has been set to "us". Now let's see how long it takes to process 10 pages of data.

Here are the results:

Crawler Performance

We processed 10 pages in roughly 9.3 seconds. All in all, that comes out to less than a second per page!


Build A Trustpilot Scraper

Our crawler gives us great results in a fast and efficient way. Now we need to pair our crawler with a scraper. This scraper will be pulling information about the individual businesses that we save in the report that our crawler generates.

Our scraper will do the following:

  1. Open the report we created
  2. Get the pages from that report
  3. Pull information from these pages
  4. Create an individual report for each of the businesses we've looked up

Along with this process, we'll once again utilize our basic steps from the crawler: parsing, storage, concurrency, and proxy integration.


Step 1: Create Simple Business Data Parser

Here, we'll just create a simple parsing function. Take a look below.

async function processBusiness(browser, row, location, retries = 3) {
    const url = row.trustpilot_url;
    let tries = 0;
    let success = false;

    while (tries <= retries && !success) {
        const page = await browser.newPage();

        try {
            // Fetch the page directly for now; we'll route this
            // through the proxy in a later step.
            await page.goto(url);

            const script = await page.$("script[id='__NEXT_DATA__']");
            const innerHTML = await page.evaluate(element => element.innerHTML, script);

            const jsonData = JSON.parse(innerHTML);
            const businessInfo = jsonData.props.pageProps;

            const reviews = businessInfo.reviews;

            for (const review of reviews) {
                const reviewData = {
                    name: review.consumer.displayName,
                    rating: review.rating,
                    text: review.text,
                    title: review.title,
                    date: review.dates.publishedDate
                };
                console.log(reviewData);
            }

            success = true;
        } catch (err) {
            console.log(`Error: ${err}, tries left: ${retries-tries}`);
            tries++;
        } finally {
            await page.close();
        }
    }
}
  • This function takes in a row from our CSV file and then fetches the trustpilot_url of the business.
  • Once we've got the page, we once again look for the script tag with the id of "__NEXT_DATA__" to find our JSON blob.
  • From within our JSON blob, we pull information from each review listed within the blob.
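To make the JSON-blob navigation concrete, here is a trimmed, hypothetical payload containing only the fields our parser touches (the real __NEXT_DATA__ blob is far larger):

```javascript
// A trimmed, made-up __NEXT_DATA__ payload with just the fields
// our parser reads.
const innerHTML = JSON.stringify({
props: {
pageProps: {
reviews: [
{
consumer: { displayName: "Jane D" },
rating: 5,
text: "Great service",
title: "Would use again",
dates: { publishedDate: "2024-01-15" }
}
]
}
}
});

// Exactly what processBusiness() does after reading the script tag:
const jsonData = JSON.parse(innerHTML);
for (const review of jsonData.props.pageProps.reviews) {
console.log(review.consumer.displayName, review.rating);
}
```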

Step 2: Loading URLs To Scrape

In order to use our processBusiness() function, we need to be able to read the rows from our CSV file. Now we're going to fully update our code.

In the example below, we also add a processResults() function. processResults() reads the CSV report from our crawler and then processes each business from the report.

const puppeteer = require("puppeteer");
const createCsvWriter = require("csv-writer").createObjectCsvWriter;
const csvParse = require("csv-parse");
const fs = require("fs");

const API_KEY = JSON.parse(fs.readFileSync("config.json")).api_key;

async function writeToCsv(data, outputFile) {
if (!data || data.length === 0) {
throw new Error("No data to write!");
}
const fileExists = fs.existsSync(outputFile);

const headers = Object.keys(data[0]).map(key => ({id: key, title: key}))

const csvWriter = createCsvWriter({
path: outputFile,
header: headers,
append: fileExists
});
try {
await csvWriter.writeRecords(data);
} catch (e) {
throw new Error("Failed to write to csv");
}
}

async function readCsv(inputFile) {
const results = [];
const parser = fs.createReadStream(inputFile).pipe(csvParse.parse({
columns: true,
delimiter: ",",
trim: true,
skip_empty_lines: true
}));

for await (const record of parser) {
results.push(record);
}
return results;
}

function range(start, end) {
const array = [];
for (let i=start; i<end; i++) {
array.push(i);
}
return array;
}

function getScrapeOpsUrl(url, location="us") {
const params = new URLSearchParams({
api_key: API_KEY,
url: url,
country: location
});
return `https://proxy.scrapeops.io/v1/?${params.toString()}`;
}

async function scrapeSearchResults(browser, keyword, pageNumber, location="us", retries=3) {
let tries = 0;
let success = false;

while (tries <= retries && !success) {

const formattedKeyword = keyword.replace(" ", "+");
const page = await browser.newPage();
try {
const url = `https://www.trustpilot.com/search?query=${formattedKeyword}&page=${pageNumber+1}`;

const proxyUrl = getScrapeOpsUrl(url, location);
await page.goto(proxyUrl);

console.log(`Successfully fetched: ${url}`);

const script = await page.$("script[id='__NEXT_DATA__']");

const innerHTML = await page.evaluate(element => element.innerHTML, script);
const jsonData = JSON.parse(innerHTML);

const businessUnits = jsonData.props.pageProps.businessUnits;


for (const business of businessUnits) {

let category = "n/a";
if ("categories" in business && business.categories.length > 0) {
category = business.categories[0].categoryId;
}

let location = "n/a";
if ("location" in business && "country" in business.location) {
location = business.location.country
}
const trustpilotFormatted = business.contact.website.split("://")[1];

const businessInfo = {
name: business.displayName.toLowerCase().replace(" ", "").replace("'", ""),
stars: business.stars,
rating: business.trustScore,
num_reviews: business.numberOfReviews,
website: business.contact.website,
trustpilot_url: `https://www.trustpilot.com/review/${trustpilotFormatted}`,
location: location,
category: category
};

await writeToCsv([businessInfo], `${keyword.replace(" ", "-")}.csv`);
}


success = true;
} catch (err) {
console.log(`Error: ${err}, tries left ${retries - tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function startScrape(keyword, pages, location, concurrencyLimit, retries) {
const pageList = range(0, pages);

const browser = await puppeteer.launch()

while (pageList.length > 0) {
const currentBatch = pageList.splice(0, concurrencyLimit);
const tasks = currentBatch.map(page => scrapeSearchResults(browser, keyword, page, location, retries));

try {
await Promise.all(tasks);
} catch (err) {
console.log(`Failed to process batch: ${err}`);
}
}

await browser.close();
}

async function processBusiness(browser, row, location, retries = 3) {
const url = row.trustpilot_url;
let tries = 0;
let success = false;


while (tries <= retries && !success) {
const page = await browser.newPage();

try {
await page.goto(url);

const script = await page.$("script[id='__NEXT_DATA__']");
const innerHTML = await page.evaluate(element => element.innerHTML, script);

const jsonData = JSON.parse(innerHTML);
const businessInfo = jsonData.props.pageProps;

const reviews = businessInfo.reviews;

for (const review of reviews) {
const reviewData = {
name: review.consumer.displayName,
rating: review.rating,
text: review.text,
title: review.title,
date: review.dates.publishedDate
}
console.log(reviewData);
}

success = true;
} catch (err) {
console.log(`Error: ${err}, tries left: ${retries-tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function processResults(csvFile, location, retries) {
const businesses = await readCsv(csvFile);
const browser = await puppeteer.launch();

for (const business of businesses) {
await processBusiness(browser, business, location, retries);
}
await browser.close();

}

async function main() {
const keywords = ["online bank"];
const concurrencyLimit = 5;
const pages = 1;
const location = "us";
const retries = 3;
const aggregateFiles = [];

for (const keyword of keywords) {
await startScrape(keyword, pages, location, concurrencyLimit, retries);
aggregateFiles.push(`${keyword.replace(" ", "-")}.csv`);
}

for (const file of aggregateFiles) {
await processResults(file, location, retries);
}
}


main();

In the example above, our processResults() function reads the rows from our CSV file and passes each of them into processBusiness().

processBusiness() then pulls our information and prints it to the terminal.


Step 3: Storing the Scraped Data

Once again, we've reached the point where we need to store our data.

Similar to the businessInfo object we used earlier, we now build a reviewData object and pass it into the writeToCsv() function.

const puppeteer = require("puppeteer");
const createCsvWriter = require("csv-writer").createObjectCsvWriter;
const csvParse = require("csv-parse");
const fs = require("fs");

const API_KEY = JSON.parse(fs.readFileSync("config.json")).api_key;

async function writeToCsv(data, outputFile) {
if (!data || data.length === 0) {
throw new Error("No data to write!");
}
const fileExists = fs.existsSync(outputFile);

const headers = Object.keys(data[0]).map(key => ({id: key, title: key}))

const csvWriter = createCsvWriter({
path: outputFile,
header: headers,
append: fileExists
});
try {
await csvWriter.writeRecords(data);
} catch (e) {
throw new Error("Failed to write to csv");
}
}

async function readCsv(inputFile) {
const results = [];
const parser = fs.createReadStream(inputFile).pipe(csvParse.parse({
columns: true,
delimiter: ",",
trim: true,
skip_empty_lines: true
}));

for await (const record of parser) {
results.push(record);
}
return results;
}

function range(start, end) {
const array = [];
for (let i=start; i<end; i++) {
array.push(i);
}
return array;
}

function getScrapeOpsUrl(url, location="us") {
const params = new URLSearchParams({
api_key: API_KEY,
url: url,
country: location
});
return `https://proxy.scrapeops.io/v1/?${params.toString()}`;
}

async function scrapeSearchResults(browser, keyword, pageNumber, location="us", retries=3) {
let tries = 0;
let success = false;

while (tries <= retries && !success) {

const formattedKeyword = keyword.replace(" ", "+");
const page = await browser.newPage();
try {
const url = `https://www.trustpilot.com/search?query=${formattedKeyword}&page=${pageNumber+1}`;

const proxyUrl = getScrapeOpsUrl(url, location);
await page.goto(proxyUrl);

console.log(`Successfully fetched: ${url}`);

const script = await page.$("script[id='__NEXT_DATA__']");

const innerHTML = await page.evaluate(element => element.innerHTML, script);
const jsonData = JSON.parse(innerHTML);

const businessUnits = jsonData.props.pageProps.businessUnits;


for (const business of businessUnits) {

let category = "n/a";
if ("categories" in business && business.categories.length > 0) {
category = business.categories[0].categoryId;
}

let location = "n/a";
if ("location" in business && "country" in business.location) {
location = business.location.country
}
const trustpilotFormatted = business.contact.website.split("://")[1];

const businessInfo = {
name: business.displayName.toLowerCase().replace(" ", "").replace("'", ""),
stars: business.stars,
rating: business.trustScore,
num_reviews: business.numberOfReviews,
website: business.contact.website,
trustpilot_url: `https://www.trustpilot.com/review/${trustpilotFormatted}`,
location: location,
category: category
};

await writeToCsv([businessInfo], `${keyword.replace(" ", "-")}.csv`);
}


success = true;
} catch (err) {
console.log(`Error: ${err}, tries left ${retries - tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function startScrape(keyword, pages, location, concurrencyLimit, retries) {
const pageList = range(0, pages);

const browser = await puppeteer.launch()

while (pageList.length > 0) {
const currentBatch = pageList.splice(0, concurrencyLimit);
const tasks = currentBatch.map(page => scrapeSearchResults(browser, keyword, page, location, retries));

try {
await Promise.all(tasks);
} catch (err) {
console.log(`Failed to process batch: ${err}`);
}
}

await browser.close();
}

async function processBusiness(browser, row, location, retries = 3) {
const url = row.trustpilot_url;
let tries = 0;
let success = false;


while (tries <= retries && !success) {
const page = await browser.newPage();

try {
await page.goto(url);

const script = await page.$("script[id='__NEXT_DATA__']");
const innerHTML = await page.evaluate(element => element.innerHTML, script);

const jsonData = JSON.parse(innerHTML);
const businessInfo = jsonData.props.pageProps;

const reviews = businessInfo.reviews;

for (const review of reviews) {
const reviewData = {
name: review.consumer.displayName,
rating: review.rating,
text: review.text,
title: review.title,
date: review.dates.publishedDate
}
await writeToCsv([reviewData], `${row.name}.csv`);
}

success = true;
} catch (err) {
console.log(`Error: ${err}, tries left: ${retries-tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function processResults(csvFile, location, retries) {
const businesses = await readCsv(csvFile);
const browser = await puppeteer.launch();

for (const business of businesses) {
await processBusiness(browser, business, location, retries);
}
await browser.close();

}

async function main() {
const keywords = ["online bank"];
const concurrencyLimit = 5;
const pages = 1;
const location = "us";
const retries = 3;
const aggregateFiles = [];

for (const keyword of keywords) {
await startScrape(keyword, pages, location, concurrencyLimit, retries);
aggregateFiles.push(`${keyword.replace(" ", "-")}.csv`);
}

for (const file of aggregateFiles) {
await processResults(file, location, retries);
}
}


main();

As we did before, we pass each object into writeToCsv() as soon as it has been processed. This lets us store our data efficiently and preserves as much data as possible in the event of a crash.


Step 4: Adding Concurrency

Once again, we need to add concurrency. This time, instead of scraping multiple search results pages at once, we'll be scraping multiple business pages at once.

Here is our processResults() function refactored for concurrency.

async function processResults(csvFile, location, concurrencyLimit, retries) {
const businesses = await readCsv(csvFile);
const browser = await puppeteer.launch();

while (businesses.length > 0) {
const currentBatch = businesses.splice(0, concurrencyLimit);
const tasks = currentBatch.map(business => processBusiness(browser, business, location, retries));

try {
await Promise.all(tasks);
} catch (err) {
console.log(`Failed to process batch: ${err}`);
}
}
await browser.close();

}

The rest of our code largely remains the same.
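The batching pattern itself (splice off a chunk, await Promise.all(), repeat) isn't specific to scraping. Here it is isolated into a generic helper so you can see the control flow on its own; the runBatched() name is our own, not part of the scraper.

```javascript
// Process items in batches of `concurrencyLimit`, mirroring the
// splice + Promise.all loop used by processResults().
async function runBatched(items, concurrencyLimit, task) {
const queue = [...items]; // copy so the caller's array is untouched
const results = [];
while (queue.length > 0) {
const currentBatch = queue.splice(0, concurrencyLimit);
results.push(...await Promise.all(currentBatch.map(task)));
}
return results;
}

// Example: run five async "tasks", at most two in flight at a time.
runBatched([1, 2, 3, 4, 5], 2, async n => n * 10)
.then(results => console.log(results)); // results: [10, 20, 30, 40, 50]
```

Note that a full batch must finish before the next one starts; that keeps the code simple at the cost of some idle time when one task in a batch is slow.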


Step 5: Bypassing Anti-Bots

To finish everything off, we once again need to route our requests through the proxy so we can get past anti-bots. Our final example really only has one relevant change.

    await page.goto(getScrapeOpsUrl(url, location));
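If you want to sanity-check what getScrapeOpsUrl() produces, you can run it on its own. The API key below is a placeholder, not a real key:

```javascript
// Placeholder key -- in the real script this comes from config.json.
const API_KEY = "YOUR-API-KEY";

function getScrapeOpsUrl(url, location = "us") {
const params = new URLSearchParams({
api_key: API_KEY,
url: url,
country: location
});
return `https://proxy.scrapeops.io/v1/?${params.toString()}`;
}

console.log(getScrapeOpsUrl("https://www.trustpilot.com/review/example.com"));
// Note how the target URL gets percent-encoded inside the query string,
// so the proxy receives it intact.
```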

Here is the fully updated code:

const puppeteer = require("puppeteer");
const createCsvWriter = require("csv-writer").createObjectCsvWriter;
const csvParse = require("csv-parse");
const fs = require("fs");

const API_KEY = JSON.parse(fs.readFileSync("config.json")).api_key;

async function writeToCsv(data, outputFile) {
if (!data || data.length === 0) {
throw new Error("No data to write!");
}
const fileExists = fs.existsSync(outputFile);

const headers = Object.keys(data[0]).map(key => ({id: key, title: key}))

const csvWriter = createCsvWriter({
path: outputFile,
header: headers,
append: fileExists
});
try {
await csvWriter.writeRecords(data);
} catch (e) {
throw new Error("Failed to write to csv");
}
}

async function readCsv(inputFile) {
const results = [];
const parser = fs.createReadStream(inputFile).pipe(csvParse.parse({
columns: true,
delimiter: ",",
trim: true,
skip_empty_lines: true
}));

for await (const record of parser) {
results.push(record);
}
return results;
}

function range(start, end) {
const array = [];
for (let i=start; i<end; i++) {
array.push(i);
}
return array;
}

function getScrapeOpsUrl(url, location="us") {
const params = new URLSearchParams({
api_key: API_KEY,
url: url,
country: location
});
return `https://proxy.scrapeops.io/v1/?${params.toString()}`;
}

async function scrapeSearchResults(browser, keyword, pageNumber, location="us", retries=3) {
let tries = 0;
let success = false;

while (tries <= retries && !success) {

const formattedKeyword = keyword.replace(" ", "+");
const page = await browser.newPage();
try {
const url = `https://www.trustpilot.com/search?query=${formattedKeyword}&page=${pageNumber+1}`;

const proxyUrl = getScrapeOpsUrl(url, location);
await page.goto(proxyUrl);

console.log(`Successfully fetched: ${url}`);

const script = await page.$("script[id='__NEXT_DATA__']");

const innerHTML = await page.evaluate(element => element.innerHTML, script);
const jsonData = JSON.parse(innerHTML);

const businessUnits = jsonData.props.pageProps.businessUnits;


for (const business of businessUnits) {

let category = "n/a";
if ("categories" in business && business.categories.length > 0) {
category = business.categories[0].categoryId;
}

let location = "n/a";
if ("location" in business && "country" in business.location) {
location = business.location.country
}
const trustpilotFormatted = business.contact.website.split("://")[1];

const businessInfo = {
name: business.displayName.toLowerCase().replace(" ", "").replace("'", ""),
stars: business.stars,
rating: business.trustScore,
num_reviews: business.numberOfReviews,
website: business.contact.website,
trustpilot_url: `https://www.trustpilot.com/review/${trustpilotFormatted}`,
location: location,
category: category
};

await writeToCsv([businessInfo], `${keyword.replace(" ", "-")}.csv`);
}


success = true;
} catch (err) {
console.log(`Error: ${err}, tries left ${retries - tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function startScrape(keyword, pages, location, concurrencyLimit, retries) {
const pageList = range(0, pages);

const browser = await puppeteer.launch()

while (pageList.length > 0) {
const currentBatch = pageList.splice(0, concurrencyLimit);
const tasks = currentBatch.map(page => scrapeSearchResults(browser, keyword, page, location, retries));

try {
await Promise.all(tasks);
} catch (err) {
console.log(`Failed to process batch: ${err}`);
}
}

await browser.close();
}

async function processBusiness(browser, row, location, retries = 3) {
const url = row.trustpilot_url;
let tries = 0;
let success = false;


while (tries <= retries && !success) {
const page = await browser.newPage();

try {
await page.goto(getScrapeOpsUrl(url, location));

const script = await page.$("script[id='__NEXT_DATA__']");
const innerHTML = await page.evaluate(element => element.innerHTML, script);

const jsonData = JSON.parse(innerHTML);
const businessInfo = jsonData.props.pageProps;

const reviews = businessInfo.reviews;

for (const review of reviews) {
const reviewData = {
name: review.consumer.displayName,
rating: review.rating,
text: review.text,
title: review.title,
date: review.dates.publishedDate
}
await writeToCsv([reviewData], `${row.name}.csv`);
}

success = true;
} catch (err) {
console.log(`Error: ${err}, tries left: ${retries-tries}`);
tries++;
} finally {
await page.close();
}
}
}

async function processResults(csvFile, location, concurrencyLimit, retries) {
const businesses = await readCsv(csvFile);
const browser = await puppeteer.launch();

while (businesses.length > 0) {
const currentBatch = businesses.splice(0, concurrencyLimit);
const tasks = currentBatch.map(business => processBusiness(browser, business, location, retries));

try {
await Promise.all(tasks);
} catch (err) {
console.log(`Failed to process batch: ${err}`);
}
}
await browser.close();

}

async function main() {
const keywords = ["online bank"];
const concurrencyLimit = 5;
const pages = 1;
const location = "us";
const retries = 3;
const aggregateFiles = [];

for (const keyword of keywords) {
await startScrape(keyword, pages, location, concurrencyLimit, retries);
aggregateFiles.push(`${keyword.replace(" ", "-")}.csv`);
}

for (const file of aggregateFiles) {
await processResults(file, location, concurrencyLimit, retries);
}
}


main();

Step 6: Production Run

Let's run both the crawler and the scraper together in production. Here is our updated main.

async function main() {
const keywords = ["online bank"];
const concurrencyLimit = 5;
const pages = 10;
const location = "us";
const retries = 3;
const aggregateFiles = [];

for (const keyword of keywords) {
await startScrape(keyword, pages, location, concurrencyLimit, retries);
aggregateFiles.push(`${keyword.replace(" ", "-")}.csv`);
}

for (const file of aggregateFiles) {
await processResults(file, location, concurrencyLimit, retries);
}
}

As before, I've set pages to 10 and location to "us". Here are the results.

Scraper Performance

It took just over 121 seconds (including the time it took to create our initial report) to generate a full report and process all the results (86 rows).

This comes out to a speed of about 1.41 seconds per business.
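That per-business figure is just total runtime divided by the row count, which is easy to verify:

```javascript
// 121 seconds to generate the report and process 86 businesses.
const totalSeconds = 121;
const businesses = 86;
const perBusiness = totalSeconds / businesses;
console.log(perBusiness.toFixed(2)); // prints "1.41"
```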


Legal and Ethical Considerations

When scraping any website, you need to always pay attention to the site's terms and conditions. You can view Trustpilot's consumer terms here.

You should also respect the site's robots.txt. You can view their robots.txt file here.

Always be careful about the information you extract and don't scrape private or confidential data.

  • If a website is hidden behind a login, that is generally considered private data.
  • If your data does not require a login, it is generally considered to be public data.

If you have questions about the legality of your scraping job, it is best to consult an attorney familiar with the laws and localities you're dealing with.


Conclusion

You now know how to build both a crawler and a scraper for Trustpilot. You know how to utilize parsing, pagination, data storage, concurrency, and proxy integration. You should also know how to deal with blobs of JSON data. Dealing with JSON is a very important skill not only in web scraping but in software development in general.

If you'd like to learn more about the tools used in this article, take a look at the links below:


More Puppeteer Web Scraping Guides

Now that you've got these new skills, it's time to practice them... Go build something! Here at ScrapeOps, we've got loads of resources for you to learn from.

If you're in the mood to learn more, check out The Puppeteer Web Scraping Playbook or take a look at the articles below.