
Who Is Really A Scraping Pro? Benchmarking The Fingerprinting Skills Of Scraping Pros

Today, with anti-bot systems using ever more advanced request fingerprinting techniques to detect and block scrapers, a crucial skill every scraping pro needs to develop is browser fortification.

Browser fortification is the ability to harden your requests so they don't leak any signs that they are coming from a scraper.

Developers can do this themselves or use fortified versions of Puppeteer, Playwright, or Selenium (which often still need extra fortification).

However, this can be a difficult and time-consuming process if you don't have prior experience.

As a result, most proxy providers now offer some form of smart proxy solution that claims to manage this browser fortification for you.

So in this article, we decided to put these scraping pros to the test...

Are they really experts at browser fortification?

Or do they make noob errors that no scraping professional should make?


TLDR Scoreboard

Pretty much every proxy provider claims to be the "Best Proxy Provider", so we decided to put them to the test.

Each scraping tool is a variation of the same basic idea: managed rotating proxies with user-agent and browser-fingerprint optimization to bypass anti-bot detection.

Premium Unlockers

Some of these proxy products, like Oxylabs Web Unblocker, Bright Data Web Unlocker, and Decodo Site Unblocker, are dedicated "Unlockers" that specialize in bypassing anti-bot systems on the most difficult websites and price themselves accordingly.

Smart APIs

Others, like Scrape.Do, ScraperAPI, and ScrapingBee, are more generic Smart Proxy APIs that offer lower-cost scraping solutions but also let users activate more advanced anti-bot bypassing functionality on individual requests.

Our analysis revealed a significant performance gap among the tested proxy providers in generating realistic and resilient browser fingerprints.

  • Top Performer: Scrapfly emerged as the definitive leader, delivering high-quality, diverse, and contextually accurate profiles.
  • Okay Performers: A middle tier of providers including Zenrows, Scrape.do, and Zyte API showed promise in some areas like hardware realism but failed systematically in others, particularly geo-specific spoofing.
  • Poor Performers: The remaining providers exhibited critical flaws, ranging from static, easily-detectable fingerprints and blatant automation flags to fundamental inconsistencies between browser layers.

Here are the overall results:

| Provider | Overall Score (0–100) | Pass/Warn/Fail (count) | Comments |
| --- | --- | --- | --- |
| ✅ Scrapfly | 88.96 | 12 / 2 / 0 | Leader by a wide margin. Excellent geo-aware data & hardware realism. |
| ⚠️ Zenrows | 60.39 | 8 / 2 / 4 | Strong fingerprint diversity but failed on all geo-specific tests (timezone/language). |
| ⚠️ Scrape.do | 59.09 | 7 / 3 / 4 | Good hardware/resolution diversity. Failed geo-spoofing and platform consistency. |
| ⚠️ Zyte API | 57.79 | 7 / 3 / 4 | Good hardware profiles but failed on location data and platform consistency. |
| ❌ ScraperAPI | 35.06 | 3 / 3 / 8 | Failed on entropy (static hash), geo data, and exposed HeadlessChrome in UA. |
| ❌ ScrapingBee | 23.37 | 3 / 0 / 11 | Failed due to static hash, 800x600 resolution, and CDP automation flags. |
| Bright Data Unlocker | 22.72 | 0 / 1 / 1 | Most tests N/A; browser failed to return JS-based device info. |
| ❌ Oxylabs Web Unblocker | 15.58 | 2 / 0 / 12 | Critical failures: static profile, platform mismatch, and exposed automation flags. |
| ❌ Decodo Site Unblocker | 15.58 | 2 / 0 / 12 | Critical failures: static profile, platform mismatch, and exposed automation flags. |
| ❌ Scrapingdog | 7.79 | 1 / 0 / 13 | Abysmal performance. Displayed massive contradictions between reported UA and JS properties. |

Header and Browser Fingerprint Testing Methodology

For this benchmarking, we sent requests with each proxy provider's headless browser mode enabled to Device and Browser Info to look at the sophistication of their header and browser fingerprinting.

The key question we are asking is:

Is the proxy provider leaking any information that would increase the chances of an anti-bot system detecting and blocking the request?

To do this, we focused on any leaks that could signal to the anti-bot system that the request is being made by an automated headless browser like Puppeteer, Playwright, or Selenium.

Here are the tests we conducted, with a sketch after the list showing how several of these signals can be probed:

1. Fingerprint Entropy Across Sessions

Test whether the browser fingerprint shows natural variation across multiple sessions.

  • Example: Identical JS fingerprint hashes, same WebGL/canvas values, or repeated hardware profiles across visits.
  • Why it matters: Real users vary; deterministic fingerprints are a strong indicator of automation.
2. Header Realism

Check whether HTTP headers match the structure and formatting of real modern browsers.

  • Example: Missing Accept-Encoding: br, gzip, malformed Accept headers, or impossible UA versions.
  • Why it matters: Incorrect headers are one of the fastest and simplest ways anti-bot systems identify bots.
3. Client Hints Coherence

Evaluate whether Client Hints (sec-ch-ua*) align with the User-Agent and operating system.

  • Example: UA claims Windows but sec-ch-ua-platform reports "Linux", or the CH brand list is empty.
  • Why it matters: Mismatched Client Hints are a highly reliable signal of an automated or spoofed browser.
4. TLS / JA3 Fingerprint Realism

Test whether the TLS fingerprint resembles a real Chrome/Firefox client rather than a script or backend library.

  • Example: JA3 matching cURL/Python/Node signatures, missing ALPN protocols, or UA/TLS contradictions.
  • Why it matters: Many anti-bot systems fingerprint TLS before any JS loads, so mismatched JA3 values trigger instant blocks.
5. Platform Consistency

Evaluate whether the OS in the User-Agent matches navigator.platform and other JS-exposed platform values.

  • Example: UA says macOS but JavaScript reports Linux x86_64.
  • Why it matters: Real browsers almost never contradict their platform; mismatches are a classic bot signal.
6. Device-Type Coherence

Test whether touch support, viewport size, and sensors align with the claimed device type (mobile vs. desktop).

  • Example: A mobile UA with maxTouchPoints=0, or an iPhone UA showing a 1920×1080 desktop viewport.
  • Why it matters: Device-type mismatches are one of the simplest heuristics anti-bot systems use to flag automation.
7. Hardware Realism

Check whether CPU cores, memory, and GPU renderer look like real consumer hardware.

  • Example: Every session reporting 32 cores, 8GB RAM, and a SwiftShader GPU.
  • Why it matters: Unrealistic hardware profiles strongly suggest virtualized or automated browser environments.
8. Timezone vs IP Geolocation

Evaluate whether the browser's timezone matches the location implied by the proxy IP.

  • Example: German IP reporting UTC or America/New_York.
  • Why it matters: Timezone mismatches reveal poor geo-spoofing and are widely used in risk scoring.
9. Language/Locale vs IP Region

Check whether browser language settings align with the IP's expected locale.

  • Example: All geos returning en-US regardless of country, or JS locale contradicting the Accept-Language header.
  • Why it matters: Locale mismatch is a simple yet strong indicator that the request is automated or spoofed.
10. Resolution & Pixel Density Realism

Test whether screen resolution and device pixel ratio resemble real user devices.

  • Example: Fixed 800×600 resolution, or repeated exotic sizes not seen on consumer hardware.
  • Why it matters: Bots often run in virtual machines or containers with unnatural screen sizes.
11. Viewport & Geometry Coherence

Evaluate whether window dimensions and screen geometry form a logically possible combination.

  • Example: Inner window width larger than the actual screen width.
  • Why it matters: Impossible geometry is a giveaway that the environment is headless or virtualized.
12. Fonts & Plugins Environment

Check whether the browser exposes realistic fonts and plugins for the claimed OS and device.

  • Example: A single font across all sessions, or empty plugin lists on macOS.
  • Why it matters: Normal devices have rich font/plugin environments; sparse lists are characteristic of automation.
13. Peripherals Presence

Test whether microphones, speakers, and webcams are exposed the way real devices normally do.

  • Example: All sessions reporting 0 microphones, 0 speakers, and 0 webcams.
  • Why it matters: Real devices, especially desktops and laptops, almost always expose some media peripherals.
14. Graphics Fingerprints (Canvas & WebGL)

Evaluate whether canvas and WebGL fingerprints are diverse and platform-appropriate.

  • Example: Identical WebGL renderer hashes across sessions, or a SwiftShader GPU on a claimed macOS device.
  • Why it matters: Graphics fingerprints are hard to spoof; unrealistic or repeated values reveal automation.
15. Automation Signals

Check whether the browser exposes direct automation flags or patched properties.

  • Example: navigator.webdriver=true, visible “CDP automation” flags, or inconsistent worker properties.
  • Why it matters: These are explicit—and often fatal—indicators that the environment is controlled by a bot framework.
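
To make these checks concrete, here is a minimal sketch of how several of the JS-layer signals above can be probed with Playwright's Python API (our choice for illustration; any browser driver works). The comments map each property to the test it supports, and the target URL is a placeholder.

    # Minimal sketch: collect the JS-layer properties referenced in the tests
    # above so they can be compared across sessions and against the HTTP headers.
    # Requires: pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    PROBE = """() => ({
        userAgent: navigator.userAgent,                       // test 15: HeadlessChrome leak
        platform: navigator.platform,                         // test 5: OS consistency
        webdriver: navigator.webdriver,                       // test 15: automation flag
        hardwareConcurrency: navigator.hardwareConcurrency,   // test 7: hardware realism
        deviceMemory: navigator.deviceMemory,                 // test 7: hardware realism
        languages: navigator.languages,                       // test 9: locale vs IP
        timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,  // test 8
        screen: [screen.width, screen.height],                // test 10: resolution realism
        inner: [innerWidth, innerHeight],                     // test 11: geometry coherence
        maxTouchPoints: navigator.maxTouchPoints,             // test 6: device-type coherence
    })"""

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")  # placeholder target
        print(page.evaluate(PROBE))
        browser.close()

Running this across several sessions and diffing the output is essentially what tests 1, 5–11, and 15 check in aggregate.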

These header and device fingerprint tests aren't conclusive on their own. But if a proxy provider consistently leaks numerous suspicious fingerprints, it is easy for an anti-bot system to detect and block these requests, even if the proxy IPs are rotating.

We sent requests to Device and Browser Info using each proxy provider's United States, German, Japanese, United Kingdom, and Russian IPs to see how they optimize their browsers for each geolocation and how the browser leaks differ by location.
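
The geo checks (tests 8 and 9) then reduce to comparing what the browser reports against what the exit IP implies. Below is a minimal sketch of that comparison logic; the expected values mirror those cited in the per-provider results, while the function and mapping names are ours, for illustration only.

    # Sketch: flag timezone/language mismatches for a given exit-IP country.
    # Expected values mirror those referenced in the test results below.
    EXPECTED = {
        "US": ("America/New_York", "en-US"),  # any plausible US timezone would pass
        "GB": ("Europe/London", "en-GB"),
        "DE": ("Europe/Berlin", "de-DE"),
        "JP": ("Asia/Tokyo", "ja-JP"),
        "RU": ("Europe/Moscow", "ru-RU"),
    }

    def geo_mismatches(country: str, timezone: str, language: str) -> list[str]:
        expected_tz, expected_lang = EXPECTED[country]
        issues = []
        if timezone != expected_tz:
            issues.append(f"timezone {timezone!r}, expected {expected_tz!r}")
        if not language.startswith(expected_lang):
            issues.append(f"language {language!r}, expected {expected_lang!r}")
        return issues

    # A German exit IP reporting a US browser profile is flagged twice:
    print(geo_mismatches("DE", "America/New_York", "en-US"))

Anti-bot systems run essentially this comparison server-side, which is why failing it is so costly.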

Testing Issues & Caveats

During testing we encountered some issues:


15-Test Comparison Matrix

The following is a summary breakdown of the test results for each proxy provider.

Providers are ordered from best to worst based on overall performance.

| Test \ Provider | Scrapfly | Zenrows | Scrape.do | Zyte API | ScraperAPI | ScrapingBee | Bright Data Unlocker | Oxylabs Web Unblocker | Decodo Site Unblocker | Scrapingdog | Weight |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1. Fingerprint Entropy | ✅ Excellent diversity across all sessions. | ✅ Good fingerprint diversity across sessions. | ✅ Good hash and profile variation. | ✅ Good fingerprint diversity across sessions. | ❌ Identical hash across all sessions. | ❌ Identical hash across all sessions. | N/A | ❌ Low diversity, very static profiles. | ❌ Very low diversity, static profiles. | ❌ Mostly identical hashes across sessions. | 3 |
| 2. Header Realism | ✅ Headers were clean and realistic. | ✅ Headers were clean and realistic. | ⚠️ Extra space in UA string (Chrome/141.. ). | ✅ Headers looked well-formed and complete. | ✅ Headers looked well-formed and complete. | ✅ Headers looked well-formed and complete. | ❌ Malformed Accept headers (no commas). | ✅ Headers looked well-formed and complete. | ✅ Headers looked well-formed and complete. | ❌ FF UA with Chrome CH; missing br. | 3 |
| 3. Client Hints Coherence | ⚠️ Brands serialization artifact; otherwise coherent. | ⚠️ Brands serialization artifact; otherwise coherent. | ⚠️ Brands serialization artifact; otherwise coherent. | ⚠️ Brands serialization artifact; otherwise coherent. | ⚠️ Brands serialization artifact; otherwise coherent. | ❌ Blank brands and sec-ch-ua missing. | ⚠️ Present for Chrome, not Safari; lacked context. | ❌ Brands/UA versions mismatched; HeadlessChrome. | ❌ Frozen brands value and version mismatch. | ❌ Impossible: Firefox UA with Chrome CH. | 2.5 |
| 4. TLS / JA3 Realism | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 3 |
| 5. Platform Consistency | ⚠️ Win32 platform for some Win64 UAs. | ⚠️ Win32 platform for some Win64 UAs. | ❌ Win32 platform for Win64 UAs. | ❌ Win32 platform for Win64 UAs. | ✅ UA Linux matched navigator.platform. | ✅ UA Linux matched navigator.platform. | N/A | ❌ macOS/Win UA with Linux x86_64 platform. | ❌ macOS/Win UA with Linux x86_64 platform. | ❌ Win UA vs. Linux platform/CH. | 3 |
| 6. Device-Type Coherence | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | N/A | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | 3 |
| 7. Hardware Realism | ✅ Realistic and varied CPU/GPU. | ✅ Excellent, diverse hardware profiles (M-series, Intel). | ✅ Realistic and varied CPU/GPU. | ✅ Realistic and varied CPU/GPU. | ⚠️ Frozen values (cores: 20, mem: 8). | ❌ Inconsistent worker hardware (4 vs 16 cores). | N/A | ❌ Always 32 cores, 8 mem. Unnatural. | ❌ Always 32 cores; unnatural. | ❌ Always 32 cores; unnatural. | 2.5 |
| 8. Timezone vs IP Geo | ✅ Correct timezone for IP geo. | ❌ America/New_York for all non-US geos. | ❌ US timezones for UK/DE/RU/JP geos. | ❌ America/New_York for all non-US geos. | ❌ UTC timezone for all geos. | ❌ UTC timezone for all geos. | N/A | ❌ UTC timezone for all geos. | ❌ UTC timezone for all geos. | ❌ America/Los_Angeles for all geos. | 3 |
| 9. Language/Locale vs IP | ✅ Language/locale matched IP geography. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | N/A | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | 2 |
| 10. Resolution & DPR | ✅ Realistic and varied resolutions. | ✅ Realistic and varied resolutions. | ✅ Realistic and varied resolutions. | ✅ Realistic and varied resolutions. | ⚠️ Always 1280x720, a frozen value. | ❌ Always 800x600, a known bot signature. | N/A | ❌ Always 1920x1080; a frozen value. | ❌ Always 1920x1080; a frozen value. | ❌ Always 1920x1080; a frozen value. | 2 |
| 11. Viewport/Geometry | ✅ Plausible and varied geometries. | ✅ Plausible and varied geometries. | ✅ Plausible and varied geometries. | ✅ Plausible and consistent geometries. | ⚠️ Frozen values, inner/screen were identical. | ❌ Impossible: innerWidth > screenWidth. | N/A | ❌ Frozen values, inner/screen were identical. | ❌ Frozen values, inner/screen were identical. | ❌ Frozen values, inner/screen were identical. | 2 |
| 12. Fonts & Plugins | ✅ Realistic and varied font lists. | ✅ Realistic and varied font lists. | ❌ Single suspicious font across all sessions. | ⚠️ Same font list for Win/macOS sessions. | ❌ Single suspicious font across all sessions. | ❌ Single suspicious font across all sessions. | N/A | ❌ Single suspicious font or empty. | ❌ Single suspicious font or empty. | ❌ Identical minimal font list. | 2 |
| 13. Peripherals Presence | ✅ Believable mix of peripherals present. | ❌ Always 0/0/0 peripherals. | ✅ Good presence of mic/speaker/webcam. | ❌ Always 0/0/0 peripherals. | ❌ Always 0/0/0 peripherals. | ❌ Always 0/0/0 peripherals. | N/A | ❌ Always 0/0/0 peripherals. | ❌ Always 0/0/0 peripherals. | ❌ Always 0/0/0 peripherals. | 3 |
| 14. Graphics Fingerprints | ✅ Excellent GPU diversity (Apple/AMD/Intel/Nvidia). | ✅ Excellent GPU diversity (Apple/Intel). | ⚠️ Good GPU diversity, but empty hashes. | ⚠️ Good GPU diversity, but empty hashes. | ❌ Uses SwiftShader software renderer. | ❌ Uses SwiftShader software renderer. | N/A | ❌ Uses llvmpipe or SwiftShader renderer. | ❌ Uses llvmpipe software renderer. | ❌ Uses SwiftShader software renderer. | 3 |
| 15. Automation Signals | ✅ No automation flags; workers consistent. | ❌ Worker properties returned NA, indicating patching. | ✅ No clear automation signals. | ✅ No clear automation signals. | ❌ HeadlessChrome visible in JS User-Agent. | ❌ CDP automation: true; inconsistent workers. | N/A | ❌ CDP: true, Playwright: true, inconsistent workers. | ❌ CDP: true, Playwright: true, inconsistent workers. | ❌ Deep inconsistencies across layers (UA/JS). | 3 |


Top Performer #1: Scrapfly

Scrapfly is a Smart Proxy API offering a wide range of features, including JavaScript rendering, geotargeting, comprehensive anti-bot bypassing, and CAPTCHA solving. It provides functionality comparable to other proxy APIs but is distinguished by its advanced browser fingerprinting capabilities.

In our analysis, Scrapfly was the definitive leader, ranking #1 out of 9 providers with an exceptional score of 88.96 / 100. It passed nearly all fingerprinting tests, demonstrating a level of sophistication and realism that no other provider matched.

  • ✅ Where Scrapfly performed well: Delivered highly diverse and unique fingerprints; correctly spoofed timezone and language to match IP geography; generated realistic hardware, GPU, and peripheral profiles; and showed no signs of automation.
  • ❌ Where Scrapfly fell short: Exhibited minor platform inconsistencies in some Windows-based sessions.

All tests were conducted using Scrapfly's JS Rendering mode (render_js=true) to assess its full capabilities. Each JS-rendered request consumes 5 API credits.
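
For reference, this is roughly what a test request looked like. The endpoint and the key/url/country parameter names are assumptions based on Scrapfly's public documentation; render_js=true is the mode stated above, and the target URL is a placeholder.

    # Sketch: a JS-rendered, geo-targeted request through Scrapfly.
    import requests

    resp = requests.get(
        "https://api.scrapfly.io/scrape",  # assumed endpoint
        params={
            "key": "YOUR_API_KEY",
            "url": "https://target.example",  # placeholder target
            "render_js": "true",              # JS rendering mode used in our tests
            "country": "de",                  # assumed geotargeting parameter
        },
    )
    print(resp.status_code)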

| Plan | Price / month | API Credits | CPM | CPM (JS Rendering ×5) |
| --- | --- | --- | --- | --- |
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Discovery | $30 | 200,000 | $150 | $750 |
| Pro | $100 | 1,000,000 | $100 | $500 |
| Startup | $250 | 2,500,000 | $100 | $500 |
| Enterprise | $500 | 5,500,000 | $91 | $455 |

The full pricing info can be viewed here.


Headers and Device Fingerprints

Scrapfly generated the most realistic and resilient browser fingerprints of all providers tested. Its profiles were highly diverse, context-aware, and free of the common automation signals that plagued competitors.

Each session produced a unique fingerprint hash, backed by varied and believable hardware configurations, screen resolutions, and font lists. Headers were modern and well-formed, with no inconsistencies between the HTTP and JavaScript layers.

Crucially, Scrapfly excelled at geo-specific spoofing. Timezones and languages were correctly configured to match the IP address's country, a test failed by most other providers.

The fingerprints were also "rich," reporting the presence of peripherals like microphones and webcams, which adds to their authenticity. The only observed weakness was a minor platform inconsistency in some Windows sessions.


Good

In our tests, Scrapfly consistently produced high-quality fingerprints that closely resembled those of real users.

  • Excellent Fingerprint Entropy: Every session returned a unique fingerprint hash, demonstrating strong randomization of device and browser properties. This avoids the static, easily blockable signatures seen in lower-tier providers.

  • Correct Geo-Spoofing: The browser's timezone and Accept-Language header were correctly aligned with the proxy's IP address geography. This was a critical test that most competitors failed.

    // Geo-Aware Properties
    Geo: US -> Timezone: America/Los_Angeles, Language Header: en-US
    Geo: UK -> Timezone: Europe/London, Language Header: en-GB
    Geo: DE -> Timezone: Europe/Berlin, Language Header: de-DE
    Geo: JP -> Timezone: Asia/Tokyo, Language Header: ja-JP
  • Excellent Hardware & GPU Diversity: The service generated a wide variety of convincing hardware profiles, including different CPU cores, memory configurations, and real-world consumer GPUs. This diversity is essential for blending in with organic user traffic.

    // Examples of GPU Renderers Observed
    - ANGLE (Intel Intel(R) Arc(tm) Graphics (MTL)...)
    - ANGLE (NVIDIA NVIDIA GeForce GTX 1650...)
    - ANGLE (Apple ANGLE Metal Renderer: Apple M1...)
    - ANGLE (NVIDIA NVIDIA GeForce GTX 1070 Ti...)
    - ANGLE (NVIDIA NVIDIA GeForce RTX 4070 SUPER...)
  • Realistic Peripherals: Sessions reported a non-zero count for microphones, speakers, and webcams (e.g., 1/1/1). This adds a layer of realism, as many automated environments report 0/0/0.

  • No Automation Signals: The tests revealed no automation flags. Key indicators like navigator.webdriver and CDP automation were consistently false, and properties queried within a web worker matched the main browser environment perfectly.


Bad

Scrapfly's performance was nearly flawless, with only minor inconsistencies observed.

⚠️ Platform Inconsistency

For some Windows-based sessions, there was a minor mismatch between the 64-bit User-Agent string and the 32-bit platform reported by the JavaScript navigator object. While this can occur in real-world scenarios, it represents a small point of potential detection.

  • Mismatch: A User-Agent advertising Win64; x64 was paired with navigator.platform: "Win32".

  • Occurrence: This was observed in several Windows sessions across different geographies.

    HTTP UA:    Mozilla/5.0 (Windows NT 10.0.0; Win64; x64) ... Chrome/142.0.0.0
    JS Platform: Win32
⚠️ Occasional Geo-Spoofing Inconsistency

While geo-spoofing was excellent overall, one session intended for Russia (RU) was configured with an incorrect timezone.

  • Timezone Mismatch: An IP from Russia was paired with the Europe/Lisbon timezone instead of a more appropriate one like Europe/Moscow. The Accept-Language and locale data were, however, correctly set. This appears to be an outlier rather than a systemic issue.

Verdict: ✅ Good

Scrapfly provided the most advanced and resilient browser fingerprints of all providers tested.

It was the only provider to successfully pass nearly all 15 tests, demonstrating a clear lead in fingerprinting quality and anti-bot evasion.

✅ What it gets right

  • High-entropy fingerprints with excellent diversity in hardware, GPU, and resolution.
  • Accurate geo-spoofing of timezone and language to match the IP address location.
  • Rich, realistic profiles that include peripherals (microphones/cameras).
  • Clean, modern HTTP headers with no automation flags or cross-layer inconsistencies.

❌ What holds it back

  • Minor and infrequent inconsistencies, such as the Win64 User-Agent with a Win32 platform string, were the only detectable flaws.

Bottom line: During our tests, Scrapfly consistently produced high-integrity browser profiles that are very difficult to distinguish from organic users. It excelled in areas where every other competitor failed, particularly in creating diverse and geo-aware fingerprints. It is a top-tier choice for scraping targets protected by advanced anti-bot systems.


#2: Zenrows

Zenrows offers a Smart Proxy API with features like JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It positions itself as a developer-friendly tool for overcoming modern detection systems.

In our analysis, Zenrows ranked #2 out of 9 providers, scoring 60.39 / 100. It performed well in several important fingerprinting categories but showed structural weaknesses in others—especially around geo-spoofing.

✅ Where Zenrows performed well

  • Strong fingerprint entropy with highly varied hardware, GPU, and resolution profiles.
  • Clean, modern HTTP headers with no obvious automation flags.

❌ Where Zenrows fell short

  • Failed to spoof timezone and language for non-US geographies.
  • Missing or incomplete fingerprint fields (no peripherals, empty canvas/WebGL, patched workers), reducing “naturalness.”

All tests were executed using JS Rendering (js_render=true) to measure Zenrows’ full fingerprinting capabilities. Each JS-rendered request consumes 5 API credits.
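
As with Scrapfly, here is a hedged sketch of the call shape, assuming Zenrows' documented GET endpoint and apikey/url parameter names (js_render=true is the mode stated above).

    # Sketch: a JS-rendered request through Zenrows (endpoint and parameter
    # names are assumptions based on Zenrows' public docs).
    import requests

    resp = requests.get(
        "https://api.zenrows.com/v1/",  # assumed endpoint
        params={
            "apikey": "YOUR_API_KEY",
            "url": "https://target.example",  # placeholder target
            "js_render": "true",
        },
    )
    print(resp.status_code)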

| Plan | Price / month | API Credits | CPM (Simple) | CPM (JS Rendering ×5) |
| --- | --- | --- | --- | --- |
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Developer | $69.99 | 250,000 | ~$280 | ~$1,400 |
| Startup | $129.99 | 1,000,000 | ~$130 | ~$650 |
| Business | $299.99 | 3,000,000 | ~$100 | ~$500 |

The full pricing info can be viewed here.


Headers and Device Fingerprints

Zenrows delivered browser fingerprints that were strong in some areas but critically flawed in others. Its profiles showed excellent diversity in hardware, screen resolution, and overall fingerprint hash, avoiding the static signatures that plagued lower-tier providers.

However, the system completely failed to align browser properties with the IP address's geography. All sessions, regardless of their origin (e.g., Japan, Germany, UK), were configured with a US timezone and language, creating a major and easily detectable inconsistency.

The fingerprints also lacked certain "richness" signals, such as the presence of peripherals (microphones, speakers, webcams), and showed evidence of browser patching in web workers. While no overt automation flags like webdriver=true were found, these gaps point to an environment that is not fully natural.


Good

Zenrows excelled at generating varied and realistic browser profiles, a key factor in avoiding basic fingerprint-based blocking.

  • Excellent Fingerprint Entropy: Each session produced a unique fingerprint hash, indicating strong randomization of browser and device properties.

  • Realistic Hardware & GPU Profiles: The service generated a believable mix of hardware and GPU profiles, including different CPU core counts, Intel and Apple M-series GPUs, and varied screen resolutions. This diversity is crucial for appearing like a real user population.

    // Example Session Profiles
    Session 1 (US): 12 Cores, Intel UHD Graphics, 1920x1080 resolution
    Session 2 (UK): 24 Cores, Intel UHD Graphics, 1920x1080 resolution
    Session 3 (RU): 4 Cores, Intel Iris(R) Xe, 1680x1050 resolution
    Session 4 (DE): 10 Cores, Apple M2 Pro, 1728x1117 resolution
  • Clean Headers: The HTTP headers were well-formed and included modern compression encodings like br and zstd, matching what is expected from up-to-date browsers.

  • No Overt Automation Flags: The tests did not find obvious automation signals like "Webdriver": "true" or "CDP automation": "true".


Bad

The positive aspects of Zenrows' fingerprints were offset by several significant and easily detectable flaws.

❌ Failed Geo-Spoofing

The browser's timezone and language failed to match the IP address's location in every non-US test. This is a critical failure, as it creates a clear mismatch that many anti-bot systems check for.

  • Timezone: All sessions originating from the UK, Germany, Russia, and Japan incorrectly reported the America/New_York timezone.

  • Language: The Accept-Language header and navigator.languages property were consistently set to en-US for all geographies.

    // Timezone & Language Mismatches
    Geo: JP -> Timezone: America/New_York, Language: en-US (Expected: Asia/Tokyo, ja-JP)
    Geo: UK -> Timezone: America/New_York, Language: en-US (Expected: Europe/London, en-GB)
    Geo: RU -> Timezone: America/New_York, Language: en-US (Expected: Europe/Moscow, ru-RU)
    Geo: DE -> Timezone: America/New_York, Language: en-US (Expected: Europe/Berlin, de-DE)
❌ Incomplete and Inconsistent Profiles

Several fingerprint attributes were static, empty, or indicated tampering, which can be used as signals for bot detection.

  • No Peripherals: All sessions reported 0 microphones, 0 speakers, and 0 webcams. This pattern is common in automated environments and lacks the richness of real user devices.
  • Patched Workers: For some sessions, key browser properties inside web workers returned NA. This suggests the environment was modified to hide or spoof values, which is itself a detectable behavior.
  • Empty Hashes: The Canvas and WebGL fingerprint fields were consistently empty. While not a direct automation flag, the absence of these hashes where they are expected is an anomaly.
⚠️ Platform Inconsistency

For Windows sessions, there was a minor but notable mismatch between the User-Agent string and the JavaScript-reported platform.

  • Platform Mismatch: Sessions using a 64-bit Windows User-Agent (Windows NT 10.0; Win64; x64) reported navigator.platform as a 32-bit value (Win32). While this can occur in real browsers due to legacy reasons, it represents a low-quality inconsistency.

    HTTP UA:     Mozilla/5.0 (Windows NT 10.0; Win64; x64) ... Chrome/138.0.0.0 Safari/537.36
    JS Platform: "Win32"

Verdict: ⚠️ Mixed

Zenrows delivers strong fingerprint diversity but falls short in environmental realism. In our tests, this meant it could evade basic fingerprint-only checks, but it risked detection by systems that validate timezone, language, and device integrity.

✅ What it got right

  • High-entropy fingerprints with varied hardware, GPU, and resolution profiles.
  • No obvious automation flags (webdriver, CDP traces, etc.).
  • Clean, modern HTTP headers.

❌ What holds it back

  • Geo-specific data was completely wrong: every non-US test still reported a US timezone and en-US language.
  • No peripherals (0 microphones/speakers/cameras) across all sessions—a strong signal of virtualized or automated environments.
  • Incomplete worker properties and empty canvas/WebGL hashes reduced the “naturalness” of the environment.
  • Minor platform inconsistencies (e.g., Win64 UA + navigator.platform = "Win32").

Bottom line: Zenrows was strong where randomness and diversity mattered, but weaker where coherence and environmental richness were required. It is competitive for tasks that rely on basic or mid-level fingerprinting checks, but high-integrity targets that validate location and device realism will likely detect its inconsistencies.



#3: Scrape.do

Scrape.do is a Smart Proxy API offering a suite of features including JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It competes in a similar space to other proxy APIs that provide all-in-one scraping solutions.

In our analysis, Scrape.do ranked #3 out of 9 providers, with an overall score of 59.09 / 100. It demonstrated solid performance in generating varied device profiles but was undermined by systematic failures in geo-specific spoofing and other key consistency checks.

✅ Where Scrape.do performed well

  • Good fingerprint entropy with varied hardware, GPU, and resolution profiles across sessions.
  • No overt automation flags were found in the browser environment.
  • Reported the presence of peripherals, avoiding a common bot-like signature.

❌ Where Scrape.do fell short

  • Completely failed to spoof the browser timezone and language to match the IP's geography.
  • Showed a critical platform mismatch between the User-Agent and JavaScript environment.
  • Used a single, static font across all sessions, creating a suspicious pattern.

All tests were performed using Scrape.do's JavaScript rendering mode (render=true). Each JS-rendered request consumes 5 API credits.
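
A hedged sketch of the call shape, assuming Scrape.do's documented endpoint and token parameter (render=true is the mode stated above).

    # Sketch: a JS-rendered request through Scrape.do (endpoint and `token`
    # parameter are assumptions based on Scrape.do's public docs).
    import requests

    resp = requests.get(
        "https://api.scrape.do/",  # assumed endpoint
        params={
            "token": "YOUR_API_TOKEN",
            "url": "https://target.example",  # placeholder target
            "render": "true",
        },
    )
    print(resp.status_code)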

| Plan | Price / month | API Credits | CPM | CPM (JS Rendering ×5) |
| --- | --- | --- | --- | --- |
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Hobby | $29.00 | 250,000 | ~$116 | ~$580 |
| Pro | $99.00 | 1,250,000 | ~$79 | ~$395 |
| Business | $249.00 | 3,500,000 | ~$71 | ~$355 |

The full pricing info can be viewed here.


Headers and Device Fingerprints

Scrape.do's fingerprints presented a mixed picture. The service successfully generated varied device profiles, showing good diversity in hardware, GPU models, and screen resolutions. This variation in fingerprint hashes helps avoid simple, static detection signatures. No obvious automation flags like webdriver=true were exposed in the JavaScript environment.

However, these strengths were counteracted by significant inconsistencies. The browser's timezone and language failed to align with the proxy's IP geography in every single non-US test. All sessions also presented a platform mismatch, where a 64-bit Windows User-Agent was paired with a 32-bit JavaScript platform.

Adding to these issues, Scrape.do used a single, suspicious font for every request, creating a static data point that could be used to identify its traffic. Though the core headers were well-formed, these cross-layer contradictions and static elements increased the overall detection risk.


Good

In our tests, Scrape.do was effective at generating diverse and seemingly non-automated profiles in several key areas.

  • Good Fingerprint Entropy: Each session produced a different fingerprint hash, indicating that core device and browser properties were being randomized.

  • Realistic Hardware and GPU Diversity: The service generated a wide variety of convincing hardware profiles. CPU cores ranged from 2 to 32, and the GPU renderers included a mix of realistic models.

    // Example GPU Renderers
    Session 1 (DE): NVIDIA GeForce GTX 1080
    Session 2 (US): NVIDIA GeForce GTX 1050 Ti
    Session 3 (UK): NVIDIA Quadro RTX 3000
    Session 4 (RU): NVIDIA GeForce RTX 2060
    Session 5 (JP): Intel(R) UHD Graphics 630
  • Varied Screen Resolutions: Sessions reported a good mix of common screen resolutions, such as 1440x900, 1280x720, 1366x768, and 3840x2160, helping to mimic a diverse user base.

  • Peripherals Present: Unlike several competitors, Scrape.do's browsers consistently reported the presence of peripherals (1 microphone, 1 speaker, and 1 webcam). This avoids the common bot signature of having 0 for all three.

  • No Clear Automation Flags: No clear automation flags like webdriver or CDP automation were set to true.


Bad

Despite its strengths in diversity, Scrape.do's fingerprints were marked by several critical and systematic failures that could easily lead to detection.

❌ Failed Geo-Spoofing

The browser's timezone and language did not match the IP address's location in every non-US test. This is a fundamental inconsistency checked by many sophisticated anti-bot systems.

  • Timezone: Sessions from Europe and Asia were assigned incorrect timezones from other parts of the world.

  • Language: The Accept-Language header and navigator.languages property were both consistently set to en-US for all geographies, regardless of the IP location.

    // Timezone & Language Mismatches
    Geo: DE -> Timezone: Asia/Jerusalem, Language: en-US (Expected: Europe/Berlin, de-DE)
    Geo: UK -> Timezone: America/Chicago, Language: en-US (Expected: Europe/London, en-GB)
    Geo: RU -> Timezone: Asia/Jerusalem, Language: en-US (Expected: Europe/Moscow, ru-RU)
    Geo: JP -> Timezone: America/New_York, Language: en-US (Expected: Asia/Tokyo, ja-JP)
❌ Platform Inconsistency

Every session reported a mismatched platform between the HTTP User-Agent and the JavaScript navigator object. This is a classic indicator of a spoofed or improperly configured environment.

  • Platform Mismatch: The User-Agent header identified the OS as 64-bit Windows (Win64; x64), but the navigator.platform property reported Win32. While this can occur organically in some cases, its consistent presence across all sessions is a strong anomaly.

    // HTTP User-Agent indicated a 64-bit OS
    Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ...

    // JavaScript environment reported a 32-bit platform
    "Platform (navigator)": "Win32"
❌ Static and Suspicious Font Profile

All sessions, regardless of the reported OS, hardware, or geography, returned an identical and unusual list of fonts. Real user devices report dozens or hundreds of fonts.

  • Single Font: The font list contained only one entry: "Univers CE 55 Medium". This static and minimal font profile is a strong signal of a templated browser environment.
  • Empty Hashes: In all tests, the Canvas and WebGL fingerprint fields were empty. The absence of these rich fingerprinting values further reduces the profile's "naturalness."
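
For context, canvas hashes are normally derived by drawing fixed content and hashing the exported pixels; tiny GPU, driver, and font differences make the hash vary between real devices. A minimal sketch of that derivation (ours, for illustration; the target URL is a placeholder):

    # Sketch: derive a canvas fingerprint hash the way fingerprinting scripts
    # typically do. An empty value here, as observed with Scrape.do, is an anomaly.
    import hashlib
    from playwright.sync_api import sync_playwright

    CANVAS_JS = """() => {
        const c = document.createElement('canvas');
        const ctx = c.getContext('2d');
        ctx.textBaseline = 'top';
        ctx.font = '14px Arial';
        ctx.fillText('fingerprint probe', 2, 2);  // rendering differs per GPU/driver/fonts
        return c.toDataURL();
    }"""

    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        page.goto("https://example.com")  # placeholder target
        print(hashlib.sha256(page.evaluate(CANVAS_JS).encode()).hexdigest()[:16])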

Verdict: ⚠️ Mixed

Scrape.do provided good device diversity, but critical inconsistencies in geo-location and platform data make it detectable. The service is capable of defeating basic fingerprinting checks but is likely to be flagged by more advanced anti-bot systems.

✅ What it gets right

  • Good randomization of hardware, GPU, and screen resolution profiles.
  • No obvious, high-severity automation flags were exposed.
  • Reported presence of peripherals, adding a layer of realism.

❌ What holds it back

  • Systematic geo-spoofing failure: Browser timezone and language did not match the IP address location in any non-US test.
  • Critical platform mismatch: A 64-bit Windows User-Agent was consistently paired with a Win32 JavaScript platform.
  • Unnatural font profile: All sessions reported a single, static font, a strong indicator of an automated template.

Bottom line: Scrape.do's performance was mixed. It showed promise with its varied device profiles but was ultimately let down by major, easily detectable inconsistencies. It may be suitable for simpler targets but would struggle against sites that validate location data or perform deep environmental consistency checks.



#4: Zyte API

Zyte API is a Smart Proxy API offering a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It is aimed at developers looking to manage complex web scraping challenges.

In our analysis, Zyte API ranked #4 out of 9 providers, with an overall score of 57.79 / 100. It demonstrated strong performance in generating diverse and realistic hardware profiles, but was consistently flawed in its geo-specific browser data, which created obvious detection vectors.

✅ Where Zyte API performed well

  • Excellent diversity in fingerprint hashes and hardware profiles (CPU/GPU).
  • Clean, well-formed HTTP headers with no overt automation signals.

❌ Where Zyte API fell short

  • Failed to align browser timezone and language with the IP address's geography.
  • Showed platform inconsistencies (Win64 User-Agent with Win32 platform).
  • Reported static or incomplete data for peripherals and fonts.

All tests were conducted using Zyte API's JS Rendering mode. Zyte uses a tiered pricing system where costs vary by the target website's difficulty. Rendered requests are significantly more expensive than standard requests. The full pricing info can be viewed here.
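
For reference, Zyte API is invoked as a JSON POST rather than a GET with query parameters. A hedged sketch, where the endpoint, basic-auth style, and browserHtml flag are assumptions based on Zyte's public documentation:

    # Sketch: a rendered request through Zyte API (endpoint, auth style, and
    # the `browserHtml` flag are assumptions based on Zyte's public docs).
    import requests

    resp = requests.post(
        "https://api.zyte.com/v1/extract",  # assumed endpoint
        auth=("YOUR_API_KEY", ""),          # assumed basic-auth style
        json={"url": "https://target.example", "browserHtml": True},
    )
    print(resp.status_code)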

Unrendered HTTP Requests (Price per 1,000 successful requests)

| Website Tier | PAYG | $100 | $200 | $350 | $500* |
| --- | --- | --- | --- | --- | --- |
| 1 | $0.13 | $0.10 | $0.08 | $0.07 | $0.06 |
| 2 | $0.23 | $0.17 | $0.14 | $0.12 | $0.11 |
| 3 | $0.43 | $0.32 | $0.26 | $0.22 | $0.21 |
| 4 | $0.70 | $0.52 | $0.42 | $0.36 | $0.33 |
| 5 | $1.27 | $0.95 | $0.76 | $0.65 | $0.60 |

Rendered HTTP Requests (Price per 1,000 successful requests)

| Website Tier | PAYG | $100 | $200 | $350 | $500* |
| --- | --- | --- | --- | --- | --- |
| 1 | $1.00 | $0.75 | $0.60 | $0.52 | $0.47 |
| 2 | $2.00 | $1.50 | $1.20 | $1.03 | $0.95 |
| 3 | $4.00 | $3.00 | $2.40 | $2.06 | $1.89 |
| 4 | $7.99 | $5.99 | $4.79 | $4.12 | $3.79 |
| 5 | $15.98 | $11.98 | $9.58 | $8.25 | $7.58 |

Headers and Device Fingerprints

Zyte API generated browser fingerprints with notable strengths and equally notable weaknesses. The sessions demonstrated good randomization, with unique fingerprint hashes and a diverse set of hardware profiles that avoided static signatures. The HTTP headers were complete and realistic.

However, these positive aspects were undermined by systematic failures. For all non-US tests, the browser's timezone and language were incorrectly set to US values, creating a clear and easily detectable mismatch with the IP address.

Furthermore, the fingerprints lacked richness, reporting zero peripherals (microphones, speakers, webcams) and using the same font list for both Windows and macOS sessions. While no direct automation flags were exposed, these inconsistencies weaken the profiles' overall credibility.


Good

The browser profiles generated by Zyte API were strong in areas related to hardware diversity and session entropy.

  • Good Fingerprint Entropy: Each session produced a unique fingerprint hash, indicating effective randomization of browser and device properties to avoid simple blocking.

  • Excellent Hardware & GPU Diversity: The service generated a realistic and varied mix of GPU renderers, including Apple, NVIDIA, and AMD hardware. This level of diversity closely mimics a real user population.

    // Example GPU Renderer Profiles
    Session 1 (JP): ANGLE (Apple ANGLE Metal Renderer: Apple M2 Max Unspecified Version)
    Session 2 (DE): ANGLE (Apple ANGLE Metal Renderer: Apple M3 Pro Unspecified Version)
    Session 3 (RU): ANGLE (NVIDIA NVIDIA GeForce GTX 1650 ... Direct3D11 vs_5_0 ps_5_0 D3D11)
    Session 4 (US): ANGLE (AMD AMD Radeon(TM) Graphics ... Direct3D11 vs_5_0 ps_5_0 D3D11)
  • Clean Headers & No Automation Flags: The HTTP headers were well-formed and included modern encodings like br and zstd. No obvious automation flags such as "Webdriver": "true" or "CDP automation": "true" were found in our tests.


Bad

Zyte API's fingerprints contained several critical and systematic flaws that increase the risk of detection.

❌ Failed Geo-Spoofing

The browser's timezone and language did not match the IP address's location in every non-US test. This is a primary check for many anti-bot systems.

  • Timezone: All sessions originating from the UK, Germany, Russia, and Japan incorrectly reported the America/New_York timezone.

  • Language: The Accept-Language header and navigator.languages property were consistently set to en-US across all tested geographies.

    // Timezone & Language Mismatches
    Geo: JP -> Timezone: America/New_York, Language: en-US (Expected: Asia/Tokyo, ja-JP)
    Geo: UK -> Timezone: America/New_York, Language: en-US (Expected: Europe/London, en-GB)
    Geo: RU -> Timezone: America/New_York, Language: en-US (Expected: Europe/Moscow, ru-RU)
    Geo: DE -> Timezone: America/New_York, Language: en-US (Expected: Europe/Berlin, de-DE)
❌ Platform Inconsistency

For Windows-based sessions, the User-Agent string and the JavaScript environment reported conflicting CPU architectures.

  • Platform Mismatch: Sessions with a 64-bit Windows User-Agent (Win64; x64) also reported navigator.platform as the 32-bit Win32. While this can occur organically, it is often a marker of a low-quality or inconsistent fingerprint.

    HTTP UA:    Mozilla/5.0 (Windows NT 10.0; Win64; x64) ... Chrome/140.0.0.0 Safari/537.36
    JS Platform: "Win32"
⚠️ Incomplete or Static Profiles

Several device-level attributes were static or incomplete, creating unnatural patterns.

  • No Peripherals: All sessions reported 0 microphones, 0 speakers, and 0 webcams. This is a common pattern in headless or virtualized environments and differs from typical user devices.
  • Static Font Lists: The same set of fonts was reported for both Windows and macOS sessions. Real devices have distinct default font sets for each operating system, making this a detectable anomaly.
  • Empty Hashes: The Canvas and WebGL fingerprint fields were empty across all sessions. While not a direct automation flag, the consistent absence of these values is an abnormal pattern.

Verdict: ⚠️ Mixed

Zyte API delivered fingerprints with strong hardware diversity but was critically flawed in geo-specific realism and device completeness. This makes it a capable provider for defeating some fingerprinting checks but vulnerable to more advanced validation.

✅ What it gets right

  • High entropy, with varied fingerprint hashes across sessions.
  • Excellent diversity in hardware profiles, especially GPUs (Apple, NVIDIA, AMD).
  • No overt automation flags like webdriver were found.

❌ What holds it back

  • Complete failure on geo-spoofing: All non-US browsers reported a US timezone and language.
  • Platform inconsistency: 64-bit Windows User-Agents were paired with a Win32 navigator.platform.
  • No peripherals were reported in any session, a common sign of automation.
  • Static font list was used for both Windows and macOS sessions, an unrealistic pattern.

Bottom line: During our tests, Zyte API proved effective at generating varied hardware profiles, a key element of modern fingerprinting. However, its systematic failures in location-based and device-level data provide clear signals for detection, making it less reliable against sophisticated anti-bot systems.


#5: ScraperAPI


ScraperAPI is a Smart Proxy API with a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It competes with other API-based proxy solutions that manage browser and header generation for the user.

In our analysis, ScraperAPI ranked #5 out of 9 providers, scoring a low 35.06 / 100. While some basic layer consistencies were maintained, the service suffered from several critical flaws, including the use of a completely static fingerprint and the exposure of HeadlessChrome in the browser environment.

✅ Where ScraperAPI performed well

  • HTTP headers were generally well-formed and modern.
  • Platform values (e.g., UA OS vs. navigator.platform) were consistent.

❌ Where ScraperAPI fell short

  • Used a single, static fingerprint hash for all sessions across all geographies.
  • Failed to align timezone and language with the IP address location.
  • Exposed HeadlessChrome in the JavaScript User-Agent, a clear automation flag.
  • Relied on a software-based graphics renderer (SwiftShader) instead of mimicking real hardware.

All tests were executed using JS Rendering mode (render=true) to assess its browser fingerprinting capabilities. Each JS-rendered request consumes 10 API credits.
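
A hedged sketch of the call shape, assuming ScraperAPI's documented endpoint and api_key parameter (render=true is the mode stated above).

    # Sketch: a JS-rendered request through ScraperAPI (endpoint and `api_key`
    # parameter are assumptions based on ScraperAPI's public docs).
    import requests

    resp = requests.get(
        "https://api.scraperapi.com/",  # assumed endpoint
        params={
            "api_key": "YOUR_API_KEY",
            "url": "https://target.example",  # placeholder target
            "render": "true",
        },
    )
    print(resp.status_code)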

| Plan | Price / month | API Credits | CPM (Simple) | CPM (JS Rendering ×10) |
| --- | --- | --- | --- | --- |
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Hobby | $49 | 100,000 | ~$490 | ~$4,900 |
| Startup | $149 | 1,000,000 | ~$149 | ~$1,490 |
| Business | $299 | 3,000,000 | ~$100 | ~$1,000 |

The full pricing info can be viewed here.


Headers and Device Fingerprints

ScraperAPI's browser fingerprints showed signs of a low-quality, static environment. During testing, every session, regardless of geography, produced the exact same fingerprint hash. This indicates a complete lack of diversity in the underlying browser profiles.

The fingerprint attributes were frozen across all tests. This included hardware metrics like CPU cores and memory, screen resolution, and even the list of installed fonts. Furthermore, the environment failed on all geo-spoofing tests, consistently reporting a UTC timezone and en-US language.

Most significantly, the browser environment reported itself as HeadlessChrome in the JavaScript layer, creating a direct automation signal and a mismatch with the HTTP User-Agent header. This, combined with the use of a software graphics renderer (SwiftShader), made the environment easily identifiable as automated.


Good

Despite major flaws, ScraperAPI's profiles were internally consistent in a few basic areas.

  • Platform Consistency: The User-Agent string claiming to be Linux was correctly matched by the navigator.platform property (Linux x86_64), avoiding a common cross-layer contradiction.
  • Realistic Headers: The HTTP headers appeared well-formed and included modern encodings like br and zstd, which are expected from up-to-date Chrome browsers.
  • Device-Type Coherence: All sessions correctly reported desktop characteristics, such as maxTouchPoints: 0, which aligned with the desktop User-Agent.

Bad

ScraperAPI's fingerprints were deficient in multiple critical areas, making them easy to detect.

❌ Static Fingerprint Profile

All sessions, regardless of geography or time, produced the exact same fingerprint hash (41dcfb0219ca...). This complete lack of diversity is a strong indicator of an unsophisticated botnet and allows for simple blocking.

  • Hardware: All sessions reported frozen values of Hardware concurrency: 20 and Device memory: 8.
  • Screen Resolution: All screen and viewport metrics were fixed at 1280x720 pixels.
  • Fonts: A single, suspicious font was reported in all sessions: "Univers CE 55 Medium".
  • Peripherals: All sessions reported 0 microphones, speakers, and webcams, a common bot signature.
❌ Exposed Headless Automation

The browser environment explicitly identified itself as automated, both through the User-Agent string in the JavaScript layer and the use of a software-based graphics renderer.

  • User-Agent Mismatch: The HTTP User-Agent claimed to be standard Chrome, while the JavaScript navigator.userAgent property exposed HeadlessChrome.
  • Software Renderer: The GPU was consistently reported as SwiftShader, a software renderer used in virtualized environments, rather than a real hardware GPU from Nvidia, AMD, or Intel.
    HTTP UA: Mozilla/5.0 (X11; Linux x86_64) ... Chrome/142.0.0.0 Safari/537.36
    JS UA:   Mozilla/5.0 (X11; Linux x86_64) ... HeadlessChrome/142.0.0.0 Safari/537.36

→ This mismatch reveals headless automation between the network and browser layers.
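
This kind of cross-layer check is simple to reproduce: capture the User-Agent the browser sends over the network and compare it with what JavaScript reports. A minimal sketch (ours, using Playwright for illustration; the target URL is a placeholder):

    # Sketch: compare the HTTP-layer User-Agent with the JS-layer one.
    # In a healthy browser these match; "HeadlessChrome" appearing in either
    # layer exposes the automation.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        captured = {}
        page.on("request", lambda r: captured.setdefault("ua", r.headers.get("user-agent")))
        page.goto("https://example.com")  # placeholder target
        js_ua = page.evaluate("() => navigator.userAgent")
        print("match" if captured["ua"] == js_ua else f"MISMATCH: {captured['ua']} vs {js_ua}")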
❌ Failed Geo-Spoofing

The browser's timezone and language failed to match the IP address's location in every non-US test. This is a primary check for many anti-bot systems.

  • Timezone: All sessions reported the browser timezone as UTC, regardless of whether the exit IP was in Germany, Japan, Russia, or the US.
  • Language: The Accept-Language header and navigator.languages property were consistently set to en-US for all geographies.
    // Timezone & Language Mismatches
    Geo: JP -> Timezone: UTC, Language: en-US (Expected: Asia/Tokyo, ja-JP)
    Geo: UK -> Timezone: UTC, Language: en-US (Expected: Europe/London, en-GB)
    Geo: RU -> Timezone: UTC, Language: en-US (Expected: Europe/Moscow, ru-RU)
    Geo: DE -> Timezone: UTC, Language: en-US (Expected: Europe/Berlin, de-DE)
⚠️ Incomplete Graphics Fingerprints

The graphics stack was not only software-based but also incomplete, missing key values that are typically present in real browsers.

  • Empty Hashes: Both the Canvas and WebGL fingerprint fields were empty in all test runs. The absence of these values is an anomaly that can be used for detection.

Verdict: ❌ Poor

ScraperAPI's browser fingerprints were static, inconsistent, and transparently automated. In our tests, the service failed on nearly all major fingerprinting criteria, from entropy and geo-consistency to hiding automation flags.

✅ What it gets right

  • Basic consistency between the User-Agent's OS and navigator.platform.
  • Well-formed HTTP headers.

❌ What holds it back

  • Static fingerprint: A single fingerprint hash was used for every request, making all traffic trivial to identify and block.
  • Exposed automation: The JavaScript User-Agent explicitly contained HeadlessChrome.
  • Failed geo-spoofing: Timezone and language did not match the exit IP's geography.
  • Frozen attributes: Hardware, screen resolution, fonts, and peripherals were identical across all sessions.
  • Software rendering: Use of SwiftShader is a strong signal of an automated environment.

Bottom line: The fingerprints provided by ScraperAPI during our tests were of low quality and carried a high risk of detection. The combination of a static profile and overt automation signals would not be resilient against modern anti-bot systems.



#6: ScrapingBee

ScrapingBee provides a Smart Proxy API with features including JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It offers a product comparable to other API-based proxy solutions.

In our analysis, ScrapingBee ranked #6 out of 9 providers, scoring 23.37 / 100. Its performance was poor, failing 11 of the 15 fingerprinting tests. While its base headers appeared modern, the underlying browser environment was static, inconsistent, and clearly showed signs of automation.

✅ Where ScrapingBee performed well

  • HTTP headers were well-formed and included modern compression encodings.
  • Platform and device type were internally consistent (Linux desktop).

❌ Where ScrapingBee fell short

  • Used a static fingerprint hash across all sessions.
  • Exposed a direct automation flag (CDP automation: true).
  • Failed on all geo-specific tests (UTC timezone, en-US language).
  • Used a static 800x600 resolution, a known bot signature.
  • Relied on software-based rendering (SwiftShader).

Testing was conducted using ScrapingBee's JavaScript Rendering feature (render_js=true), where each JS-rendered request consumes 5 API credits.
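
A hedged sketch of the call shape, assuming ScrapingBee's documented endpoint and api_key parameter (render_js=true is the mode stated above).

    # Sketch: a JS-rendered request through ScrapingBee (endpoint and `api_key`
    # parameter are assumptions based on ScrapingBee's public docs).
    import requests

    resp = requests.get(
        "https://app.scrapingbee.com/api/v1/",  # assumed endpoint
        params={
            "api_key": "YOUR_API_KEY",
            "url": "https://target.example",  # placeholder target
            "render_js": "true",
        },
    )
    print(resp.status_code)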

| Plan | Price / month | API Credits | CPM | CPM (JS Rendering ×5) |
| --- | --- | --- | --- | --- |
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Freelance | $49 | 150,000 | ~$327 | ~$1,635 |
| Startup | $99 | 1,000,000 | ~$99 | ~$495 |
| Business | $249 | 3,000,000 | ~$83 | ~$415 |
| Business+ | $599 | 8,000,000 | ~$75 | ~$375 |

The full pricing info can be viewed here.


Headers and Device Fingerprints

During testing, ScrapingBee's browser fingerprints exhibited critical flaws characteristic of a low-quality automation environment. Every session produced an identical fingerprint hash, making the traffic trivial to identify as originating from a single source.

The browser profiles were riddled with inconsistencies and classic bot signatures. These included a static 800x600 screen resolution, failed geo-spoofing that defaulted to UTC timezone and en-US language for all requests, and impossible viewport geometry where the inner window was larger than the entire screen.

Most importantly, the environment directly exposed automation flags. The browser explicitly reported CDP automation: true and used a SwiftShader software renderer, both of which are strong indicators of a headless browser. Hardware values were also inconsistent between the main browser thread and its web workers, further undermining the profile's authenticity.


Good

Despite its significant failures, ScrapingBee passed a few fundamental consistency checks.

  • Header Realism: The HTTP Accept, Accept-Encoding, and Accept-Language headers were well-formed and included modern values like br and zstd for compression.
  • Platform Consistency: The User-Agent string specified a Linux OS (X11; Linux x86_64), which consistently matched the navigator.platform value reported by JavaScript.
  • Device-Type Coherence: All sessions correctly reported desktop user agents with maxTouchPoints=0, which is consistent for a non-touch device.

Bad

ScrapingBee's fingerprints failed in numerous critical areas, revealing an easily detectable automated environment.

❌ Static Fingerprints

All sessions, regardless of geography or time, produced the exact same fingerprint hash. This complete lack of diversity is a major red flag for any anti-bot system.

  • Fingerprint Hash: The fingerprint hash was frozen at c23c835e... across every tested session.
  • Fonts & Plugins: All sessions reported a single, suspicious font ("Univers CE 55 Medium") and an identical list of default PDF viewer plugins.
  • Peripherals: All sessions reported 0 microphones, 0 speakers, and 0 webcams, a common trait of virtualized environments.
  • Resolution: The screen resolution was consistently 800x600 pixels, a dated and highly suspicious value commonly associated with automated browsers.
❌ Exposed Automation Signals

The browser environment contained clear, unambiguous flags and artifacts indicating headless automation.

  • CDP Automation Flag: The JavaScript property CDP automation was set to true, directly signaling control via the Chrome DevTools Protocol.
  • Software Rendering: The WebGL renderer was identified as SwiftShader, a software-based renderer used in server environments that lack a physical GPU. Real user devices almost always report hardware-based renderers (e.g., Intel, NVIDIA, AMD, Apple).
  • Inconsistent Hardware: The main browser thread reported Hardware concurrency: 4, while its web worker reported Hardware concurrency (in web worker): 16. This discrepancy is unnatural and points to a misconfigured or patched environment.
    CDP automation:              true
    GPU renderer:                ANGLE (Google Vulkan 1.3.0 (SwiftShader...))
    Hardware concurrency:        4
    Worker hardware concurrency: 16
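
The worker discrepancy is easy to reproduce: query navigator.hardwareConcurrency in the main thread and again inside a web worker. A minimal sketch (ours, for illustration; the target URL is a placeholder):

    # Sketch: compare hardwareConcurrency in the main thread vs. a web worker.
    # Real browsers report the same value in both; a 4-vs-16 split like the one
    # observed here indicates a patched environment.
    from playwright.sync_api import sync_playwright

    WORKER_JS = """() => new Promise(resolve => {
        const src = 'postMessage(navigator.hardwareConcurrency)';
        const blob = new Blob([src], { type: 'application/javascript' });
        const worker = new Worker(URL.createObjectURL(blob));
        worker.onmessage = e => resolve({ main: navigator.hardwareConcurrency, worker: e.data });
    })"""

    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        page.goto("https://example.com")  # placeholder target
        result = page.evaluate(WORKER_JS)
        print("consistent" if result["main"] == result["worker"] else f"MISMATCH: {result}")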
❌ Failed Geo-Spoofing

The browser engine failed to align its timezone and language with the proxy's IP address location, a fundamental check for sophisticated bot detection.

  • Timezone: All sessions reported the browser timezone as UTC, regardless of the exit IP's country.
  • Language: The browser language was always set to en-US and the Accept-Language header was en-US,en;q=0.9 for all geographies.
    IP Geo: US -> Timezone: UTC, Language: en-US (Expected: e.g., America/New_York)
    IP Geo: GB -> Timezone: UTC, Language: en-US (Expected: Europe/London)
    IP Geo: JP -> Timezone: UTC, Language: en-US (Expected: Asia/Tokyo)
    IP Geo: RU -> Timezone: UTC, Language: en-US (Expected: Europe/Moscow)
    IP Geo: DE -> Timezone: UTC, Language: en-US (Expected: Europe/Berlin)
❌ Incoherent Browser Profile

The browser profile contained values that were contradictory or physically impossible, further exposing its synthetic nature.

  • Impossible Geometry: The inner window size was reported as 1920x993, while the screen size was 800x600. A window cannot be larger than the screen it is displayed on.
  • Missing Client Hints: The sec-ch-ua header was missing from requests, and the navigator.userAgentData.brands property was an empty array []. For the reported Chrome version, a populated Client Hints brands list is expected.

Verdict: ❌ Poor

ScrapingBee’s browser fingerprints were clearly automated and lacked the realism needed to bypass modern bot detection. While base-level headers were adequate, the underlying device and browser profile was static, inconsistent, and exposed direct automation flags.

✅ What it gets right

  • Well-formed HTTP headers.
  • Consistent platform matching (Linux UA + JS platform).

❌ What holds it back

  • Static profile: An identical fingerprint hash was used for every request.
  • Overt automation flags: CDP automation: true and SwiftShader software rendering were always present.
  • Classic bot signatures: A static 800x600 screen resolution and a single suspicious font were used in all sessions.
  • Failed geo-spoofing: Timezone and language never matched the IP address location.
  • Impossible values: The browser reported an inner window larger than the screen itself.

Bottom line: The combination of a static fingerprint, exposed automation flags, and fundamental inconsistencies makes traffic from ScrapingBee trivial to identify and block. It is not a reliable choice for targets with even basic browser fingerprinting capabilities.



#7: Bright Data Unlocker

Bright Data Unlocker is a premium proxy product from one of the most recognized providers in the data collection industry. It is designed to bypass sophisticated anti-bot systems, manage cookies and headers, solve CAPTCHAs, and render JavaScript to enable scraping on difficult websites.

In our analysis, Bright Data Unlocker ranked #7 out of 9 providers. The low ranking was not due to poor fingerprinting but because the service failed to return any JavaScript-based device information during our tests. This suggests its browser rendering capabilities were not activated or were non-operational for our requests. The only data available for inspection came from HTTP headers.

✅ Where Bright Data Unlocker performed well

  • Accept-Language headers correctly matched the requested IP geography.

❌ Where Bright Data Unlocker fell short

  • No JavaScript-based device fingerprint data was returned at all.
  • The Accept HTTP header was malformed in all test cases.

Bright Data prices the Unlocker per successful request, with per-request costs decreasing at higher volumes.

| Plan (Requests) | Price / month | Requests Included | CPM ($ per 1M requests) |
| --- | --- | --- | --- |
| Pay-As-You-Go | – | – | ~$1,500 |
| 380K Plan | $499 | 380,000 | ~$1,313 |
| 900K Plan | $999 | 900,000 | ~$1,110 |
| 2M Plan | $1,999 | 2,000,000 | ~$1,000 |

The full pricing info can be viewed here.


Headers and Device Fingerprints

Bright Data Unlocker's behavior during testing was unique among the providers. It failed to return any JavaScript-based device information, such as screen resolution, CPU/GPU, fonts, or canvas hashes. Every test resulted in an empty device_info object and a "Not detected" fingerprint hash.

This strongly suggests that a full browser environment with JavaScript execution was not active for our requests. Consequently, most of the 15 fingerprinting tests could not be evaluated.

Analysis was limited to the HTTP headers, which showed mixed results. While headers like Accept-Language were correctly aligned with the request's geography, the Accept header was consistently malformed, which could act as a detection signal.


Good

Despite the absence of device fingerprints, some aspects of the HTTP headers were realistic and well-configured.

  • Geo-Aware Language Headers: The Accept-Language header was correctly set to match the geography of the exit IP address. This is a positive signal, as it aligns with real user behavior.

    JP: 'ja-JPja;q=0.9'
    DE: 'de-DEde;q=0.9en-US;q=0.8en;q=0.7'
    GB: 'en-GBen;q=0.9'
    US: 'en-USen;q=0.9'
  • Realistic User-Agents: The service provided varied and realistic User-Agent strings for modern browsers like Chrome and Safari.


Bad

The primary issue was the complete lack of browser fingerprint data, supplemented by a noticeable flaw in the HTTP headers.

⚠️ No Device Fingerprint Data Captured

In every test, the browser environment failed to return any JavaScript-based device information. This likely indicates that the Unlocker did not activate a full browser for our tests, rather than an attempt to spoof these values poorly.

  • Fingerprint Hash: The fingerprint_hash was consistently "Not detected".
  • Device Info: The device_info object was always empty ({}), meaning no data was collected on screen size, hardware, fonts, plugins, or graphics.
  • Evaluation Impact: This prevented evaluation across 13 of the 15 core tests, including critical areas like hardware realism, platform consistency, and automation signals.
❌ Malformed HTTP Headers

The Accept header was consistently malformed across all sessions. It was missing the required commas between media types, creating a non-standard header that is easy to distinguish from real browser traffic.

  • Missing Commas: The header value incorrectly concatenated different media types without any separators.

  • Detection Risk: This structural error is a clear anomaly that can be flagged by web application firewalls or anti-bot systems.

    // Example of a malformed Accept header from a test
    "accept": "text/htmlapplication/xhtml+xmlapplication/xml;q=0.9image/avifimage/webpimage/apng*/*;q=0.8application/signed-exchange;v=b3;q=0.7"

    // Correct format with commas
    "accept": "text/html, application/xhtml+xml, application/xml;q=0.9, image/avif, image/webp, image/apng, */*;q=0.8, application/signed-exchange;v=b3;q=0.7"

Verdict: 🚫 No Data

The Bright Data Unlocker did not produce any usable browser fingerprint data, making a full evaluation of its stealth capabilities impossible.

The results strongly indicate that a full browser environment was not operational during our tests. This is a case of insufficient data rather than poor fingerprinting quality. The service’s true capabilities for JavaScript rendering and device spoofing could not be assessed.

✅ What we could confirm

  • Accept-Language headers were correctly matched to the request's geography.

❌ What holds it back

  • No device data: The service returned no JavaScript-based device metrics at all.
  • Malformed headers: The Accept header was structurally incorrect in all sessions, creating a simple detection vector.

Bottom line: Based on our tests, the Bright Data Unlocker appeared to operate as a simple HTTP proxy without the advertised browser rendering. Because of this, its effectiveness against modern fingerprinting-based bot detectors could not be verified and it cannot be fairly compared to providers that successfully rendered a full browser environment.



#8: Decodo Site Unblocker

Decodo's Site Unblocker, formerly Smartproxy's Site Unblocker, is a premium web unlocker product designed to bypass anti-bot systems and scrape protected websites. It offers JavaScript rendering, geotargeting, and CAPTCHA handling, positioning it as a competitor to other high-end unlocker tools.

In our analysis, Decodo Site Unblocker performed poorly, ranking #8 out of 9 providers and scoring just 15.58 / 100. Its performance was nearly identical to Oxylabs Web Unblocker, with both exhibiting severe and easily detectable fingerprinting flaws.

✅ Where Decodo performed well

  • HTTP headers were generally well-formed and presented a variety of User-Agents.
  • Basic device type was coherent (desktop UAs had no touch points).

❌ Where Decodo fell short

  • Exposed clear automation flags, including CDP automation: true and Playwright: true.
  • Suffered from a critical platform mismatch, reporting a Linux environment in JavaScript for Windows and macOS User-Agents.
  • Used highly static fingerprints for hardware, resolution, fonts, and peripherals.
  • Completely failed to spoof timezone and language to match the IP address geography.

Pricing is offered on both a per-GB and per-request basis.

Per GB Based Pricing

| Plan (GB) | Plan Price | Cost Per GB | GB Included | CPM ($ per 1M requests, ~300 KB/request) |
| --- | --- | --- | --- | --- |
| 1 GB | $10 | $10.00 | 1 GB | ~$3,000 |
| 5 GB | $45 | $9.00 | 5 GB | ~$2,700 |
| 10 GB | $85 | $8.50 | 10 GB | ~$2,550 |
| 25 GB | $200 | $8.00 | 25 GB | ~$2,400 |
| 50 GB | $375 | $7.50 | 50 GB | ~$2,250 |
| 100 GB | $675 | $6.75 | 100 GB | ~$2,025 |
| 250 GB | $1,500 | $6.00 | 250 GB | ~$1,800 |
| 500 GB | $2,750 | $5.50 | 500 GB | ~$1,650 |

Per Request Based Pricing

| Plan (Requests) | Price / month | Requests Included | CPM ($ per 1M requests) |
| --- | --- | --- | --- |
| 23K | $29 | 23,000 | ~$1,261 |
| 82K | $99 | 82,000 | ~$1,207 |
| 216K | $249 | 216,000 | ~$1,153 |
| 455K | $499 | 455,000 | ~$1,097 |
| 950K | $999 | 950,000 | ~$1,052 |
| 2,000K (2M) | $1,999 | 2,000,000 | ~$999 |
| 4,200K (4.2M) | $3,999 | 4,200,000 | ~$952 |

The full pricing info can be viewed here.


Headers and Device Fingerprints

Decodo Site Unblocker's fingerprints were static and riddled with contradictions. While the network-level headers presented a varied and plausible appearance, the JavaScript-level device environment told a completely different story.

Nearly every session, regardless of the advertised operating system (Windows, macOS), was built on a Linux foundation, revealed by navigator.platform. The environment exposed direct evidence of automation through flags like CDP automation: true and Playwright: true.

Furthermore, fundamental properties like hardware, screen resolution, and fonts were frozen across tests, creating a highly uniform and artificial profile. The system also failed to align timezone or language with the IP's geography, defaulting to UTC and en-US for all international requests. These issues combined to create a low-quality, easily detectable fingerprint.


Good

The few positive signals were limited to surface-level header realism and basic device type consistency.

  • Well-Formed Headers: The HTTP headers appeared complete and realistic. They included modern encodings like br and zstd, and the User-Agent strings were varied and syntactically correct.
  • Device-Type Coherence: All sessions used desktop User-Agents and correctly reported maxTouchPoints=0, which is consistent with non-touch-enabled desktop devices.

Bad

The browser fingerprints generated by Decodo Site Unblocker suffered from multiple critical flaws, making them highly susceptible to detection.

❌ Exposed Automation Flags

The browser environment directly advertised that it was running under an automation framework. This is a definitive signal that bot detection systems look for.

  • CDP Automation: The CDP automation flag was set to true in most sessions.

  • Playwright Flag: The Playwright flag was also exposed as true, identifying the specific automation tool used.

  • Worker Inconsistency: The Are worker values consistent flag was false, indicating that the browser's web worker environment did not match the main thread—a common side effect of browser patching.

    "CDP automation": "true",
    "Webdriver": "false",
    "Playwright": "true",
    ...
    "Are worker values consistent": "false"
❌ Critical Platform Mismatch

A severe contradiction existed between the operating system claimed in the User-Agent and the one reported by the browser's JavaScript environment.

  • UA vs. JS Platform: Sessions with macOS or Windows User-Agents consistently reported navigator.platform as Linux x86_64. This mismatch is a reliable indicator of a forged environment.

    // Session claiming to be on macOS
    HTTP UA: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...
    JS Platform: "Linux x86_64"

    // Session claiming to be on Windows
    HTTP UA: Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...
    JS Platform: "Linux x86_64"
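
A mismatch like this is trivially testable in-page; a rough sketch (the OS-to-prefix mapping is ours):

// Map the OS token claimed in the User-Agent to the navigator.platform
// prefix a real device would report.
const EXPECTED = { 'Windows NT': 'Win', 'Mac OS X': 'Mac', 'X11; Linux': 'Linux' };
const claimed = Object.keys(EXPECTED).find((os) => navigator.userAgent.includes(os));
const platformMismatch =
  claimed !== undefined && !navigator.platform.startsWith(EXPECTED[claimed]);
// UA "Intel Mac OS X 10_15_7" + platform "Linux x86_64" -> true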
❌ Static and Unrealistic Fingerprints

The device profiles showed very low diversity, using the same unrealistic values across nearly all sessions.

  • Hardware: Most sessions reported an identical server-like configuration of 32 CPU cores and 8GB memory.
  • Screen Resolution: All sessions used a frozen screen resolution of 1920x1080 pixels.
  • Graphics: The browser used llvmpipe, a software-based graphics renderer, instead of a real GPU. This is a strong sign of a virtualized or headless environment.
  • Fonts: Most sessions reported a single, suspicious font: "Univers CE 55 Medium".
  • Peripherals: Every session reported 0 microphones, 0 speakers, and 0 webcams, a pattern common with automated systems.
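
The peripheral counts, for example, come straight from a standard API, so a detector can read them in a few lines (illustrative):

// Count media devices by kind; no permission prompt is needed for counts.
(async () => {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const counts = { audioinput: 0, audiooutput: 0, videoinput: 0 };
  for (const d of devices) counts[d.kind] += 1;
  console.log(counts); // { audioinput: 0, audiooutput: 0, videoinput: 0 } here
})();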
❌ Failed Geo-Spoofing

The browser environment failed to align with the proxy's IP geolocation, a basic check for advanced anti-bot systems.

  • Timezone: All international sessions (UK, DE, JP, RU) defaulted to a UTC timezone instead of the correct local one.

  • Language: The Accept-Language header and navigator.languages property were set to en-US for all geographies.

    // Expected vs. Actual for Non-US Geos
    JP: 'UTC' / 'en-US' (Expected: 'Asia/Tokyo' / 'ja-JP')
    DE: 'UTC' / 'en-US' (Expected: 'Europe/Berlin' / 'de-DE')
    RU: 'UTC' / 'en-US' (Expected: 'Europe/Moscow' / 'ru-RU')
    UK: 'UTC' / 'en-US' (Expected: 'Europe/London' / 'en-GB')
❌ Inconsistent Client Hints

The sec-ch-ua header, which provides browser brand and version information, was inconsistent with the values reported in the JavaScript environment.

  • Version Mismatch: In one test, the HTTP sec-ch-ua header reported Chrome version 137, while the JavaScript navigator.userAgentData.brands object reported version 134. The malformed JSON in the data below is likely a parsing artifact, but the version conflict is a real fingerprinting issue.

    HTTP Header: "sec-ch-ua": "\"Chromium\";v=\"137\"..."
    JS Brands: "[{\"brand\":\"Chromium\"\"version\":\"134\"}]"
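
The JavaScript half of that comparison is directly readable in-page; a short sketch of what a detector would diff against the sec-ch-ua header it received:

// Report the Chromium major version visible to JavaScript (illustrative).
const entry = (navigator.userAgentData?.brands || [])
  .find((b) => b.brand === 'Chromium');
console.log(entry && entry.version); // "134" here, vs. v="137" in the HTTP header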

Verdict: ❌ Poor

In our tests, Decodo Site Unblocker produced fingerprints with systemic and critical flaws that make them unreliable for stealth. The service failed on nearly every major test, from exposing direct automation flags to fundamental contradictions in its platform and geolocation data.

✅ What it gets right

  • HTTP headers were syntactically correct and varied.

❌ What holds it back

  • Exposed automation: Directly reported CDP automation: true and Playwright: true.
  • Platform mismatch: Claimed to be Windows/macOS in headers while running on a Linux JS platform.
  • Highly static profile: Used frozen values for hardware (CPU, RAM), screen size, fonts, and peripherals.
  • Unrealistic hardware: Used a software-based graphics renderer (llvmpipe), a clear bot signal.
  • Failed geo-spoofing: Timezone and language never matched the IP address location.

Bottom line: The fingerprints generated by Decodo Site Unblocker were some of the lowest quality in our analysis. The combination of direct automation flags, blatant platform contradictions, and static profiles makes the traffic trivial to identify and block by any moderately sophisticated anti-bot system.



#9: Scrapingdog

Scrapingdog offers a Smart Proxy API with a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It markets itself as a comprehensive solution for web data extraction.

In our analysis, Scrapingdog ranked #9 out of 9 providers, scoring an exceptionally low 7.79 / 100. Its browser fingerprints were plagued by fundamental inconsistencies, static values, and failed geo-spoofing, making them easily detectable.

✅ Where Scrapingdog performed well

  • It correctly reported desktop UAs with maxTouchPoints=0.
  • It did not expose high-profile automation flags like navigator.webdriver.

❌ Where Scrapingdog fell short

  • Massive contradictions between the HTTP User-Agent and the JavaScript environment.
  • Completely static profiles for hardware, resolution, fonts, and GPU across all sessions.
  • Failed to spoof timezone and language for any non-US geography.
  • Used a software-based GPU renderer (SwiftShader), a strong bot signature.

All tests were executed using JS Rendering (dynamic=true) to assess full fingerprinting capabilities. Each JS-rendered request consumes 5 API credits.

| Plan | Price / month | API Credits | CPM ($ per 1M requests) | CPM (JS Rendering ×5) |
| --- | --- | --- | --- | --- |
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Lite | $40 | 200,000 | ~$200 | ~$1,000 |
| Standard | $90 | 1,000,000 | ~$90 | ~$450 |
| Pro | $200 | 3,000,000 | ~$67 | ~$335 |
| Premium | $350 | 6,000,000 | ~$58 | ~$290 |

The full pricing info can be viewed here.


Headers and Device Fingerprints

Scrapingdog's fingerprints exhibited severe and systemic failures during our tests. The most significant issue was a complete disconnect between the network layer (HTTP headers) and the browser environment (JavaScript properties). HTTP User-Agents rotated between Windows Chrome, Edge, and Firefox, but the JavaScript environment invariably reported itself as a Linux-based Chrome browser.

This fundamental contradiction was compounded by a completely static device profile. Every session, regardless of the reported User-Agent or geography, used the same hardware (32 cores, 8GB memory), screen resolution, font list, and SwiftShader software GPU.

Furthermore, all attempts at geo-spoofing failed. Every request returned a US-based timezone and language, creating an obvious mismatch for international traffic. These combined issues make the fingerprints highly characteristic of a low-quality automation framework.


Good

The service passed a minimal number of checks, but these were insufficient to offset the major flaws observed elsewhere.

  • Basic Device Coherence: The fingerprints were consistent for desktop devices, correctly pairing desktop User-Agents with maxTouchPoints=0.
  • No Overt Automation Flags: The tests did not find obvious high-profile flags like "Webdriver": "true" or "CDP automation": "true". However, the deep inconsistencies between layers served as a much stronger indicator of automation.

Bad

The fingerprints generated by Scrapingdog were riddled with basic, easily detectable flaws across nearly every test category.

❌ Profound Cross-Layer Contradictions

The service showed a complete mismatch between the browser identity claimed in HTTP headers and the one reported by the JavaScript environment. This is a critical failure that allows anti-bot systems to immediately flag the traffic as illegitimate.

  • User-Agent Mismatch: The HTTP User-Agent advertised Windows versions of Chrome, Edge, and even Firefox, while the JavaScript navigator.userAgent was always a static Linux Chrome string.
  • Platform Mismatch: The HTTP User-Agent indicated Windows NT 10.0, but navigator.platform consistently reported Linux x86_64.
  • Client Hints Mismatch: Sessions with a Firefox User-Agent improperly included Chrome-only sec-ch-ua headers, an impossible combination for a real browser.
// Example: Firefox UA vs. Chrome JS Environment
HTTP User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:135.0) Gecko/20100101 Firefox/135.0
HTTP Headers: sec-ch-ua: "Chromium";v="140" (Impossible for Firefox)

JS User-Agent: Mozilla/5.0 (X11; Linux x86_64) ... Chrome/140.0.0.0 Safari/537.36
JS Platform: "Linux x86_64"
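
That last contradiction is a one-line, server-side kill switch: Gecko has never implemented the sec-ch-ua Client Hints, so their presence alongside a Firefox User-Agent is impossible. A sketch (the function name is ours):

// Hypothetical check over incoming request headers (lower-cased keys).
function impossibleFirefox(headers) {
  const ua = headers['user-agent'] || '';
  return /Firefox\/\d+/.test(ua) && 'sec-ch-ua' in headers;
}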
❌ Failed Geo-Spoofing

The browser environment's timezone and language failed to align with the proxy's IP geolocation in every non-US test.

  • Timezone: All international sessions (UK, DE, RU, JP) incorrectly reported the America/Los_Angeles timezone.
  • Language: The Accept-Language header and navigator.language property were always set to en-US.
Geo: UK -> Timezone: America/Los_Angeles, Language: en-US (Expected: Europe/London, en-GB)
Geo: DE -> Timezone: America/Los_Angeles, Language: en-US (Expected: Europe/Berlin, de-DE)
Geo: JP -> Timezone: America/Los_Angeles, Language: en-US (Expected: Asia/Tokyo, ja-JP)
Geo: RU -> Timezone: America/Los_Angeles, Language: en-US (Expected: Europe/Moscow, ru-RU)
❌ Static and Unrealistic Fingerprints

Nearly all device and browser properties were frozen, meaning they were identical across every session. This lack of diversity is a classic bot signature.

  • Hardware: All sessions reported an unnatural 32 CPU cores and 8GB of device memory.
  • Screen Resolution: Screen and viewport dimensions were always fixed at 1920x1080.
  • GPU: The renderer was always SwiftShader, a software-based renderer used in virtual environments, instead of a real hardware-accelerated GPU.
  • Fonts & Plugins: A small, identical list of fonts and plugins was used in every session.
  • Fingerprint Hash: The fingerprint hash showed almost no variation, indicating the underlying profile was static.
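
Measuring this from the detection side is straightforward: record each session's profile hash and count distinct values. A sketch, where sessions is a hypothetical array of captured test results:

// Entropy check (illustrative): one distinct hash across N sessions
// means the device profile is frozen.
const hashes = sessions.map((s) => s.fingerprint_hash);
const distinct = new Set(hashes).size;
console.log(`${distinct} distinct profiles across ${hashes.length} sessions`);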
❌ Low-Quality and Incomplete Profile

The fingerprints also suffered from missing values and weak signals common in automated browsers.

  • Missing Peripherals: All sessions reported 0 microphones, 0 speakers, and 0 webcams.
  • Weak Encoding: The Accept-Encoding header was missing modern compression formats like br (Brotli), which the browsers it impersonated send by default.
  • Empty Hashes: The Canvas and WebGL fingerprint fields were consistently empty, which is anomalous.

Verdict: ❌ Poor

Scrapingdog's fingerprints displayed critical and fundamental contradictions that make them trivial to detect.

The service's performance in our tests was at the bottom of the group. The profound inconsistencies between the HTTP and JavaScript layers, combined with a completely static and unrealistic device profile, suggest a poorly configured automation stack rather than a sophisticated stealth solution.

✅ What it gets right

  • Avoided exposing the most basic navigator.webdriver flag.

❌ What holds it back

  • Complete mismatch between HTTP headers (claiming to be Firefox/Edge/Chrome on Windows) and the JS environment (always Linux/Chrome).
  • Static device profile: every session used the same CPU, memory, resolution, fonts, and software GPU.
  • Failed all geo-specific tests: timezone and language were always set to US values.
  • Low-quality signals: missing peripherals, weak encoding support, and empty graphics hashes.

Bottom line: The fingerprints generated during our tests were not suitable for bypassing even moderately sophisticated anti-bot systems. The deep, structural flaws make the traffic easily distinguishable from real human users.


Lessons Learned: What This Benchmark Teaches Us

After running this benchmark across nine different scraping tools, a few clear insights emerged that challenge some of the most common assumptions developers (including us) often hold.

1. Price ≠ Quality

One of the biggest surprises was how little correlation there was between cost and fingerprinting quality.

Some of the most expensive “unlockers” in the industry returned:

  • Incomplete browser environments,
  • Static fingerprints,
  • Automation flags, or
  • OS/platform mismatches.

Meanwhile, several mid-tier tools produced cleaner, more coherent fingerprints simply because their engineering focused on realism rather than brand prestige.

If you’ve been assuming that “premium = stealth”, this benchmark shows that’s not reliable anymore.

2. The “Scraping Pros” Aren’t Magic

Most scraping vendors position themselves as fingerprinting experts: teams of specialists who automatically handle the hard parts for you.

What we found is more grounded:

  • nearly all providers struggled with timezone and locale spoofing,
  • many had cross-layer inconsistencies (UA vs JS),
  • several reused static hardware profiles,
  • a few leaked direct automation signals, and
  • only one tool consistently produced high-entropy, location-aware fingerprints.

These are not unsolvable problems—just engineering challenges. But the marketing doesn’t always match the actual implementation.

No provider is a “god-tier anti-bot engineer.” Some are simply more mature and detail-oriented than others.

3. Entropy, Coherence, and Geography Matter the Most

Across all tests, the strongest indicators of authenticity weren't flashy features; they were fundamentals:

  • Entropy: each session must look slightly different
  • Coherence: headers, UA, platform, and JS properties should agree
  • Geography: timezone and locale must match the IP region

If a tool can get these three right, it’s already ahead of most of the industry.
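
If you want to sanity-check a provider against these three fundamentals yourself, a handful of rendered sessions is enough. A sketch, where sessions is a hypothetical array of captures with the fields shown:

// Hypothetical audit over captured sessions:
// { hash, tz, uaPlatform, jsPlatform, geo } per session.
function auditSessions(sessions, expectedTzByGeo) {
  return {
    // Entropy: fraction of sessions with a unique fingerprint hash.
    entropy: new Set(sessions.map((s) => s.hash)).size / sessions.length,
    // Coherence: the JS platform should agree with the UA-claimed OS.
    coherent: sessions.every((s) => s.jsPlatform.startsWith(s.uaPlatform)),
    // Geography: the browser timezone should match the exit IP's region.
    geoAware: sessions.every((s) => s.tz === expectedTzByGeo[s.geo]),
  };
}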


Conclusion: A More Realistic Way to Think About Scraping Tools

This benchmark wasn’t about crowning a universal winner; it was about understanding the gap between expectation and reality.

The reality is:

  • Some tools are genuinely good.
  • Some tools are workable with the right strategy.
  • Some tools have critical weaknesses you need to be aware of.
  • No tool is perfect.
  • And no price tag guarantees quality.

For developers, the best approach is to treat scraping tools like any other dependency:

  • Understand their strengths
  • Understand their blind spots
  • Choose based on your actual risk level
  • Mix providers when needed
  • Keep your own fallback strategies ready

Stealth scraping is no longer about “which provider is best”; it’s about knowing where each one fits into your system.

Want to learn more about web scraping? Take a look at the links below!