Published: Jan 9, 2026 | Updated: Jan 9, 2026

Most "Smart Proxy" Scraping APIs Fail Browser Fingerprinting Tests [January 2026]

From Bright Data to Zenrows, we tested all the best ones on the internet. Which ones are worth your time?

Today, anti-bot systems use ever more advanced request fingerprinting techniques to detect and block scrapers, so a crucial skill every scraping professional needs to develop is browser fortification: the ability to harden requests so they don't leak any signs of coming from a scraper. Developers can do this themselves or use fortified versions of Puppeteer, Playwright, or Selenium (which often need further fortification). However, this can be a difficult and time-consuming process without prior experience. As a result, most proxy providers now offer some form of smart proxy solution that claims to manage this browser fortification for you. So in this article, we decided to put the scraping pros to the test: are they really experts at browser fortification, or do they make rookie errors that no scraping professional should make? We cover:
  • Fingerprint Benchmark Test Results: the complete raw data from these tests is available on GitHub in the Proxy API Browser Fingerprint Repo.
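Before diving into the results, here's a minimal sketch of what DIY fortification looks like, assuming Playwright: patching the navigator.webdriver flag before any page script can read it. This covers exactly one of the many leaks tested below; the providers reviewed here claim to handle the full surface (Client Hints, canvas, workers, TLS) for you.

```ts
// Minimal fortification sketch (assumes Playwright is installed):
// mask navigator.webdriver before any page script can read it.
import { chromium } from 'playwright';

async function main(): Promise<void> {
  const browser = await chromium.launch({ headless: true });
  const context = await browser.newContext();
  // Runs in every page of this context before the site's own scripts execute.
  await context.addInitScript(() => {
    Object.defineProperty(Navigator.prototype, 'webdriver', { get: () => false });
  });
  const page = await context.newPage();
  await page.goto('https://example.com');
  console.log(await page.evaluate(() => navigator.webdriver)); // false
  await browser.close();
}

main().catch(console.error);
```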

TLDR: What Is The Best Proxy Provider For Browser Fingerprinting?

Pretty much every proxy provider claims to be the "Best Proxy Provider", so we decided to put them to the test. Each scraping tool is a variation of the same basic idea: managed rotating proxies with user-agent and browser fingerprint optimization to bypass anti-bot detection.

Premium Unlockers

Some of these proxy products, like Oxylabs Web Unblocker, Bright Data Web Unlocker and Decodo Site Unblocker, are dedicated "Unlockers" that specialize in bypassing anti-bot systems on the most difficult websites and price themselves accordingly.

Smart APIs

Others, like Scrape.Do, ScraperAPI and ScrapingBee, are more generic Smart Proxy APIs that offer lower-cost scraping solutions but also let users activate more advanced anti-bot bypassing functionality on requests.

TLDR Scoreboard

Our analysis revealed a significant performance gap between elite automation-masking services and lower-tier unblockers, with scores ranging from a high of 86.67 to a low of 24.76.
  • Top Performers: Scrapfly led the benchmark with a score of 86.67, followed closely by Scrape.do (81.43) and Zyte API (80.48). These providers excelled in hardware realism and automation masking, with Scrapfly specifically noted for excellent localization across multiple regions.
  • Okay Performer: Bright Data Unlocker occupied the mid-tier with a score of 41.43. While it demonstrated realistic hardware and peripheral emulation in successful sessions, its performance was hampered by session execution failures and the leakage of a "Brightbot" User-Agent in some instances.
  • Poor Performers: Providers including Decodo Site Unblocker, Scrapingdog, Scrapingant, Oxylabs Web Unblocker, ScraperAPI, and ScrapingBee all scored below 40. These services frequently exposed critical automation signals, such as CDP flags, "HeadlessChrome" declarations, and impossible viewport geometries.
Here are the overall results:
| Provider | Overall Score | Pass | Warn | Fail | Critical | Comments |
|---|---|---|---|---|---|---|
| Scrapfly | 86.67 | 11 | 2 | 1 | 0 | Leads the benchmark with an impressive 86.67 score, featuring excellent localization and hardware realism, though it lacks canvas fingerprint hashes and shows minor timezone inconsistencies in Russia. |
| Scrape.do | 81.43 | 10 | 1 | 3 | 0 | Delivers a strong performance by masking automation and rotating realistic hardware profiles, but is undermined by systematic timezone mismatches and a lack of font diversity. |
| Zyte API | 80.48 | 9 | 3 | 2 | 0 | Provides a robust, automation-free environment with realistic hardware and geometry, but falls short by failing to localize timezones and languages to the proxy IP and missing graphics hashes. |
| Bright Data Unlocker | 41.43 | 6 | 1 | 7 | 0 | Shows potential with realistic hardware and peripheral emulation, but its overall reliability is hampered by session execution failures and the occasional leakage of a 'Brightbot' User-Agent. |
| Decodo Site Unblocker | 35.71 | 2 | 6 | 6 | 0 | Struggles with significant OS-platform mismatches and a reliance on static server-grade hardware, resulting in a low score despite maintaining coherent Client Hints. |
| Scrapingdog | 32.38 | 4 | 4 | 5 | 1 | Exhibits critical failures by leaking CDP automation signals and utilizing suspicious 'franken-font' profiles that mix Linux User-Agents with Windows-specific fonts. |
| Scrapingant | 30.10 | 4 | 3 | 6 | 1 | Severely penalized for exposing CDP automation flags and presenting physically impossible viewport geometry where the inner content area exceeds the available screen dimensions. |
| Oxylabs Web Unblocker | 30.00 | 1 | 6 | 7 | 0 | Limited by contradictory header declarations and an over-reliance on static software-rendered graphics stacks, failing to provide the realism required for sophisticated anti-bot bypass. |
| ScraperAPI | 27.81 | 3 | 4 | 6 | 1 | Performs poorly due to a total lack of fingerprint entropy and a critical failure where its JavaScript environment explicitly identifies as 'HeadlessChrome' despite masquerading headers. |
| ScrapingBee | 24.76 | 3 | 2 | 8 | 1 | Finishes last with multiple critical failures, including active CDP automation signals, missing Client Hints, and an impossible 800x600 screen resolution that contradicts its 1080p viewport. |

How We Tested Browser Fingerprinting

For this benchmark, we sent requests through each proxy provider, with its headless browser enabled, to the Device and Browser Info test page to assess the sophistication of their header and browser fingerprinting. The key question we are asking is:
Is the proxy provider leaking any information that would increase the chances of an anti-bot system detecting and blocking the request?
To answer it, we focused on any leaks that could signal to the anti-bot system that the request is being made by an automated headless browser like Puppeteer, Playwright, or Selenium. Here are the tests we conducted (a sketch of how several of these signals can be read in-page follows the list):
1. Fingerprint Entropy Across Sessions: Test whether the browser fingerprint shows natural variation across multiple sessions.
  • Example: Identical JS fingerprint hashes, same WebGL/canvas values, or repeated hardware profiles across visits.
  • Why it matters: Real users vary; deterministic fingerprints are a strong indicator of automation.
2. Header Realism: Check whether HTTP headers match the structure and formatting of real modern browsers.
  • Example: Missing Accept-Encoding: br, gzip, malformed Accept headers, or impossible UA versions.
  • Why it matters: Incorrect headers are one of the fastest and simplest ways anti-bot systems identify bots.
3. Client Hints Coherence: Evaluate whether Client Hints (sec-ch-ua*) align with the User-Agent and operating system.
  • Example: UA claims Windows but sec-ch-ua-platform reports "Linux", or the CH brand list is empty.
  • Why it matters: Mismatched Client Hints are a highly reliable signal of an automated or spoofed browser.
4. TLS / JA3 Fingerprint Realism: Test whether the TLS fingerprint resembles a real Chrome/Firefox client rather than a script or backend library.
  • Example: JA3 matching cURL/Python/Node signatures, missing ALPN protocols, or UA/TLS contradictions.
  • Why it matters: Many anti-bot systems fingerprint TLS before any JS loads, so mismatched JA3 values trigger instant blocks.
5. Platform Consistency: Evaluate whether the OS in the User-Agent matches navigator.platform and other JS-exposed platform values.
  • Example: UA says macOS but JavaScript reports Linux x86_64.
  • Why it matters: Real browsers almost never contradict their platform; mismatches are a classic bot signal.
6. Device-Type Coherence: Test whether touch support, viewport size, and sensors align with the claimed device type (mobile vs. desktop).
  • Example: A mobile UA with maxTouchPoints=0, or an iPhone UA showing a 1920×1080 desktop viewport.
  • Why it matters: Device-type mismatches are one of the simplest heuristics anti-bot systems use to flag automation.
7. Hardware Realism: Check whether CPU cores, memory, and GPU renderer look like real consumer hardware.
  • Example: Every session reporting 32 cores, 8GB RAM, and a SwiftShader GPU.
  • Why it matters: Unrealistic hardware profiles strongly suggest virtualized or automated browser environments.
8. Timezone vs IP Geolocation: Evaluate whether the browser's timezone matches the location implied by the proxy IP.
  • Example: German IP reporting UTC or America/New_York.
  • Why it matters: Timezone mismatches reveal poor geo-spoofing and are widely used in risk scoring.
9. Language/Locale vs IP Region: Check whether browser language settings align with the IP's expected locale.
  • Example: All geos returning en-US regardless of country, or JS locale contradicting the Accept-Language header.
  • Why it matters: Locale mismatch is a simple yet strong indicator that the request is automated or spoofed.
10. Resolution & Pixel Density Realism: Test whether screen resolution and device pixel ratio resemble real user devices.
  • Example: Fixed 800×600 resolution, or repeated exotic sizes not seen on consumer hardware.
  • Why it matters: Bots often run in virtual machines or containers with unnatural screen sizes.
11. Viewport & Geometry Coherence: Evaluate whether window dimensions and screen geometry form a logically possible combination.
  • Example: Inner window width larger than the actual screen width.
  • Why it matters: Impossible geometry is a giveaway that the environment is headless or virtualized.
12. Fonts & Plugins Environment: Check whether the browser exposes realistic fonts and plugins for the claimed OS and device.
  • Example: A single font across all sessions, or empty plugin lists on macOS.
  • Why it matters: Normal devices have rich font/plugin environments; sparse lists are characteristic of automation.
13. Peripherals Presence: Test whether microphones, speakers, and webcams are exposed the way real devices normally do.
  • Example: All sessions reporting 0 microphones, 0 speakers, and 0 webcams.
  • Why it matters: Real devices, especially desktops and laptops, almost always expose some media peripherals.
14. Graphics Fingerprints (Canvas & WebGL): Evaluate whether canvas and WebGL fingerprints are diverse and platform-appropriate.
  • Example: Identical WebGL renderer hashes across sessions, or a SwiftShader GPU on a claimed macOS device.
  • Why it matters: Graphics fingerprints are hard to spoof; unrealistic or repeated values reveal automation.
15. Automation Signals: Check whether the browser exposes direct automation flags or patched properties.
  • Example: navigator.webdriver=true, visible “CDP automation” flags, or inconsistent worker properties.
  • Why it matters: These are explicit and often fatal indicators that the environment is controlled by a bot framework.
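As referenced above, here's a sketch of how many of these signals can be read directly in the page context (for example via Playwright's page.evaluate). The individual properties are standard web APIs; bundling them into one report object is our own illustrative convention, not the benchmark tool's actual code.

```ts
// Collect a handful of the fingerprint signals the tests above examine,
// from inside the page context.
interface FingerprintReport {
  webdriver: boolean | undefined;
  platform: string;
  hardwareConcurrency: number;
  deviceMemory: number | undefined;
  timezone: string;
  languages: readonly string[];
  screen: { width: number; height: number; availWidth: number; availHeight: number };
  viewport: { innerWidth: number; innerHeight: number };
  maxTouchPoints: number;
  dpr: number;
}

function collectFingerprint(): FingerprintReport {
  return {
    webdriver: navigator.webdriver,                     // Test 15: automation signals
    platform: navigator.platform,                       // Test 5: platform consistency
    hardwareConcurrency: navigator.hardwareConcurrency, // Test 7: hardware realism
    deviceMemory: (navigator as any).deviceMemory,      // Chromium-only API
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone, // Test 8: timezone vs IP
    languages: navigator.languages,                     // Test 9: locale vs IP
    screen: {
      width: screen.width, height: screen.height,
      availWidth: screen.availWidth, availHeight: screen.availHeight,
    },                                                  // Tests 10-11: resolution/geometry
    viewport: { innerWidth: window.innerWidth, innerHeight: window.innerHeight },
    maxTouchPoints: navigator.maxTouchPoints,           // Test 6: device-type coherence
    dpr: window.devicePixelRatio,
  };
}

console.log(collectFingerprint());
```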
These header and device fingerprint tests aren't conclusive on their own. But if a proxy provider consistently leaks numerous suspicious fingerprints, it is easy for an anti-bot system to detect and block these requests, even if the proxy IPs are rotating. We sent requests to Device and Browser Info using each provider's United States, German, Japanese, United Kingdom, and Russian IPs to see how they optimize their browsers for each geolocation and how the browser leaks differ by location.

15-Test Comparison: How Each Provider Performed

[Interactive comparison table: per-provider results across all 15 fingerprint tests, weighted by the importance of each test. Per-provider breakdowns follow below.]

Detailed Results: Which Provider Is The Best For Browser Fingerprinting?

The following section contains the detailed results for each provider, showing the overall score and the results of each test.

#1 Scrapfly


Scrapfly is a Smart Proxy API with a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing and CAPTCHA solving.

Position: #1 | Overall Score: 86.67 | ✅ Pass: 11 | ⚠️ Warn: 2 | ❌ Fail: 1 | 🚨 Critical: 0
Scrapfly is a comprehensive Smart Proxy API that provides advanced features like JavaScript rendering, geotargeting, and anti-bot bypassing. In our benchmark analysis, it leads the rankings, placing #1 out of 10 providers with an overall score of 86.67. During testing, the provider displayed a sophisticated approach to browser fingerprinting, particularly in its handling of hardware realism and localization. While it surpassed most competitors in entropy and device profile diversity, it showed specific weaknesses regarding graphics stack functionality and certain geography-based timezone settings.
| Test | Status |
|---|---|
| TLS / JA3 Realism | N/A |
| Timezone vs IP Geo | Warn |
| Automation Signals | Pass |
| Language/Locale vs IP | Pass |
| Peripherals Presence | Pass |
| Header Realism | Pass |
| Hardware Realism | Pass |
| Platform Consistency | Pass |
| Client Hints Coherence | Pass |
| Viewport/Geometry | Warn |
| Fingerprint Entropy | Pass |
| Fonts & Plugins | Pass |
| Graphics Fingerprints | Fail |
| Device Type Coherence | Pass |
| Resolution & DPR | Pass |
  • ✅ Where Scrapfly performed well: Demonstrated high fingerprint entropy with diverse hardware profiles; successfully masked all automation signals across main and worker threads; and provided excellent localization matching local IP regions.
  • ❌ Where Scrapfly fell short: Failed to capture valid graphics fingerprint hashes (Canvas/WebGL) and exhibited minor timezone and geometry inconsistencies.

Pricing

Testing was conducted using Scrapfly's JS Rendering mode (render_js=true), which consumes 5 API credits per request. They utilize a dynamic credit-based pricing model.
| Plan | Price / month | API Credits | CPM | CPM (JS Rendering ×5) |
|---|---|---|---|---|
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Discovery | $30 | 200,000 | $150 | $750 |
| Pro | $100 | 1,000,000 | $100 | $500 |
| Startup | $250 | 2,500,000 | $100 | $500 |
| Enterprise | $500 | 5,500,000 | $91 | $455 |
The full pricing info can be viewed here.

Headers and Device Fingerprints

Scrapfly produced highly realistic and modern browser fingerprints. The headers were well-formed, utilizing current Chrome versions and modern compression algorithms like zstd. The provider effectively patched the JavaScript environment to ensure consistency between the network and browser layers. Platform strings, Client Hints, and hardware properties were generally coherent and avoided common bot indicators. Hardware profiles were notably high-end, featuring diverse GPU vendors and varied core counts; this diversity helps mimic a real residential user base. However, the graphics stack showed signs of being blocked or non-functional, as Canvas and WebGL hashes were missing despite sophisticated renderer strings.

Good

Scrapfly demonstrated several high-quality fingerprinting traits that align with legitimate user behavior.
  • Zero Automation Signals: No leaks were found during testing. Both navigator.webdriver and CDP automation flags remained false, and web worker values were perfectly consistent with the main thread (see the sketch after this list).
  • High Hardware Diversity: Use of distinct hardware configurations across sessions helps prevent pattern detection by anti-bot systems.
US: NVIDIA GeForce RTX 3070 Ti (16 cores)
DE: NVIDIA GeForce RTX 4070 SUPER (4 cores)
RU: AMD Radeon RX 7900 XTX (30 cores)
JP: Apple M3 Pro (32 cores)
UK: Intel(R) Iris(R) Xe Graphics (20 cores)
  • Superior Localization: Language and locale settings were precisely matched to the target IP region, enhancing the credibility of the request.
DE Session: de-DE (German)
RU Session: ru-RU (Russian)
JP Session: ja-JP (Japanese)
UK Session: en-GB (English UK)
  • Peripherals Presence: Unlike many headless environments that report zero peripherals, Scrapfly sessions consistently reported 1 microphone, 1 speaker, and 1 webcam.
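Here is a sketch of the kind of worker-consistency check referenced in the first bullet: values read inside a Web Worker must match the main thread, because stealth patches applied only to the main window leave workers exposed. The Blob-built worker is our own construction to keep the example self-contained; run it in a page context.

```ts
// Compare main-thread navigator values against a Web Worker's.
const workerSrc = `
  postMessage({
    platform: navigator.platform,
    hardwareConcurrency: navigator.hardwareConcurrency,
    userAgent: navigator.userAgent,
  });
`;
const worker = new Worker(URL.createObjectURL(new Blob([workerSrc])));
worker.onmessage = (e: MessageEvent) => {
  const main = {
    platform: navigator.platform,
    hardwareConcurrency: navigator.hardwareConcurrency,
    userAgent: navigator.userAgent,
  };
  // A fortified browser must patch both contexts; a mismatch here is the
  // "worker values inconsistent" failure seen in several providers below.
  const consistent = (Object.keys(main) as (keyof typeof main)[])
    .every((k) => main[k] === (e.data as typeof main)[k]);
  console.log({ main, worker: e.data, consistent });
};
```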

Bad

While Scrapfly leads the index, several technical areas showed room for improvement.
❌ Graphics Fingerprint Failure
While high-quality GPU strings were injected into the environment, the actual graphics stack appears to have been non-functional or blocked during our tests.
  • Canvas: All sessions returned empty strings for Canvas fingerprints.
  • WebGL: WebGL fingerprint hashes were missing across all geographies.
  • Impact: Advanced anti-bot checks that verify the output of a drawing operation would detect this as an anomaly.
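For context, a canvas-fingerprint probe of the kind that came back empty here typically draws deterministic content, exports the pixels, and hashes the output. The sketch below is illustrative only; the exact drawing and hash used by Device and Browser Info are not published.

```ts
// Draw deterministic content, export it, and hash the bytes. Real devices
// produce slightly different pixels (fonts, antialiasing, GPU), so the hash
// varies across machines; an empty result suggests a blocked graphics stack.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement('canvas');
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext('2d');
  if (!ctx) return ''; // an unavailable context yields the empty hashes seen here
  ctx.font = '16px Arial';
  ctx.fillStyle = '#f60';
  ctx.fillRect(10, 5, 100, 30);
  ctx.fillStyle = '#069';
  ctx.fillText('fingerprint probe', 4, 20);
  const data = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest('SHA-256', data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

canvasFingerprint().then((hash) => console.log('canvas hash:', hash));
```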
⚠️ Timezone Inconsistency
Most geographies were handled correctly, but the Russian session exhibited a significant misalignment.
  • RU Mismatch: A Russian IP was paired with the Africa/Casablanca timezone.
  • Successes: US, UK, DE, and JP sessions all utilized the correct regional timezones (e.g., Asia/Tokyo for Japan).
⚠️ Static Geometry
The majority of sessions used a windowless instance pattern that may appear suspicious to certain trackers.
  • Geometry: 80% of sessions reported identical screen, available, and inner dimensions with 0px offsets (e.g., 3840x2160).
  • Exceptions: A macOS session for Japan correctly showed valid retina scaling logic, indicating the provider is capable of more realistic geometry.

Verdict: ✅ Good

Scrapfly leads the benchmark with the highest overall score, offering excellent localization and hardware realism. In our tests, the provider's footprints were highly sophisticated and successfully masked all common automation indicators. While there were specific failures in graphics fingerprinting and occasional timezone mismatches, the level of hardware diversity and OS-contextual realism (fonts, plugins, and peripherals) makes it the most resilient option currently available for evading advanced detection.

#2 Scrape.do


Scrape.do is a Smart Proxy API with a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing and CAPTCHA solving.

Position: #2 | Overall Score: 81.43 | ✅ Pass: 10 | ⚠️ Warn: 1 | ❌ Fail: 3 | 🚨 Critical: 0
Scrape.do is a Smart Proxy API that provides scalable web scraping infrastructure, featuring built-in JavaScript rendering and anti-bot bypass capabilities. It is positioned as a competitive alternative to established providers like ScraperAPI and ScrapingBee, charging users only for successful responses. In our benchmark analysis, Scrape.do delivered a strong performance, ranking #2 out of 10 providers with an overall score of 81.43 / 100. While it excelled in masking automation and providing diverse hardware profiles, systematic localization issues slightly impacted its final score.
| Test | Status |
|---|---|
| TLS / JA3 Realism | N/A |
| Timezone vs IP Geo | Fail |
| Automation Signals | Pass |
| Language/Locale vs IP | Warn |
| Peripherals Presence | Pass |
| Header Realism | Pass |
| Hardware Realism | Pass |
| Platform Consistency | Pass |
| Client Hints Coherence | Pass |
| Viewport/Geometry | Pass |
| Fingerprint Entropy | Pass |
| Fonts & Plugins | Fail |
| Graphics Fingerprints | Fail |
| Device Type Coherence | Pass |
| Resolution & DPR | Pass |
  • ✅ Where Scrape.do performed well: Successfully hid all common automation signals (Webdriver/CDP); rotated realistic consumer GPU models and hardware signatures; and maintained high entropy across sessions with varied screen resolutions.
  • ❌ Where Scrape.do fell short: Failed to align timezones or fonts with the claimed browser environment/IP geography, and lacked graphics fingerprint hashes (Canvas/WebGL).

Pricing

Testing was conducted using the Scrape.do JS Rendering mode (render=true) to evaluate its stealth capabilities. Under this configuration, each successful request consumes 5 API credits.
| Plan | Price / month | API Credits | CPM | CPM (JS Rendering ×5) |
|---|---|---|---|---|
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Hobby | $29.00 | 250,000 | ~$116 | ~$580 |
| Pro | $99.00 | 1,250,000 | ~$79 | ~$395 |
| Business | $249.00 | 3,500,000 | ~$71 | ~$355 |
The full pricing info can be viewed here.

Headers and Device Fingerprints

Scrape.do provided a highly diverse and automation-signal-free environment. During testing, it successfully masked typical headless indicators, with navigator.webdriver set to false and consistent results across both the main window and web worker contexts. The provider rotated through a sophisticated set of hardware fingerprints. We observed a variety of realistic NVIDIA and Intel GPU renderers paired with logically varying CPU core counts. These profiles were supported by modern, well-formatted Chrome 143 headers. However, the service showed a lack of localization depth. Regardless of the proxy's exit node location, sessions frequently reverted to US-centric timezones and languages. Furthermore, while the GPU strings were plausible, the browser failed to generate actual Canvas or WebGL hashes, and the font list was suspiciously limited for a Windows environment.

Good

In our tests, Scrape.do demonstrated high-quality masking of automation and professional-grade hardware rotation.
  • Clean Automation Profile: No automation leakage was observed. Key flags like CDP automation were false, and worker values matched the main environment perfectly, avoiding a common pitfall for many proxy providers.
  • High Hardware Entropy: The service generated a wide variety of convincing hardware profiles. We observed multiple distinct GPU configurations and varying CPU core counts that moved in sync with other device metrics.
Observed GPU diversity across sessions:
  • NVIDIA GeForce GTX 1080
  • NVIDIA GeForce GTX 1050 Ti
  • NVIDIA GeForce RTX 2060
  • NVIDIA Quadro RTX 3000
  • Intel(R) HD Graphics 620
  • Realistic Peripherals: Unlike many competitors that report zero devices, Scrape.do sessions reported valid peripheral counts, typically including 1 microphone and 1 webcam, mimicking a real consumer laptop.
  • Geometry Variation: Viewport and screen sizes were varied and followed logical hierarchies. Inner window resolutions represented realistic "windowed" browsing rather than maximized automated windows.
US Session: Screen 1366x768 | Inner 1050x661 (Windowed)
UK Session: Screen 1920x1080 | Inner 945x973 (Snap layout)

Bad

Despite its strengths, several systematic inconsistencies were detected that could be flagged by sophisticated anti-bot engines.
❌ Timezone vs IP Mismatch
The provider failed to match the browser's internal timezone to the proxy's geographic location. During testing, Asian and European sessions were consistently configured with US-based timezones.
  • Geographic Discrepancy: A Japan (JP) session was configured with America/Los_Angeles, while UK, DE, and RU sessions all used America/Chicago.
  • Detection Risk: Direct contradictions between IP location and internal clock are a primary signal for proxy detection.
❌ Font Environment Mismatch
While the sessions claimed to be running on Windows 10, the environment lacked the standard font diversity expected of that operating system.
  • Limited Fonts: All sessions reported only one non-standard font: "Univers CE 55 Medium".
  • Missing Standard Fonts: The environment failed to advertise core Windows fonts like Arial, Calibri, or Segoe UI, which is a significant anomaly.
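Font lists like these are typically gathered via width-measurement probing: render a test string in a candidate font with a generic fallback and check whether the metrics change. Below is a sketch of the technique, probing the core Windows fonts this environment failed to advertise.

```ts
// Classic width-measurement font detection: if the candidate font is
// installed, the rendered width differs from the generic fallback's.
function hasFont(font: string): boolean {
  const probe = 'mmmmmmmmmmlli';
  const ctx = document.createElement('canvas').getContext('2d')!;
  ctx.font = '72px monospace';
  const baseline = ctx.measureText(probe).width;
  ctx.font = `72px "${font}", monospace`; // falls back to monospace if absent
  return ctx.measureText(probe).width !== baseline;
}

// Core Windows fonts a real Windows 10 session would be expected to expose:
for (const f of ['Arial', 'Calibri', 'Segoe UI']) {
  console.log(f, hasFont(f));
}
```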
❌ Incomplete Graphics Fingerprints
While GPU strings were realistic, the browser provided no execution-based graphics data.
  • Missing Hashes: Both Canvas and WebGL fingerprint hash fields were empty across all sessions.
  • String Injection: This suggests that GPU names are being injected as strings without the accompanying rendering capabilities required to generate a behavioral fingerprint.
⚠️ Generic Language Settings
All sessions presented en-US as the primary language and locale, regardless of the target geography.
  • Localization: Sessions in Japan and Germany did not include local language preferences in the headers or the navigator object, which is atypical of legitimate organic users in those regions.

Verdict: ✅ Good

Scrape.do delivers a strong, high-entropy fingerprinting solution that is difficult to detect through standard automation checks. In our tests, the provider excelled at hiding "HeadlessChrome" signatures and rotating through realistic hardware profiles. It offers significantly better entropy than many of its direct competitors, making it a reliable choice for high-volume scraping.
✅ What it gets right
  • Excellent rotation of realistic consumer GPUs and hardware concurrency values.
  • Complete suppression of navigator.webdriver and CDP automation flags.
  • High diversity in fingerprint hashes and screen resolutions.
❌ What holds it back
  • Systematic failure to synchronize timezones with proxy IP locations.
  • Suspiciously minimal font lists that do not match the advertised OS.
  • Lack of actual Canvas and WebGL rendering hashes.
Bottom line: Scrape.do is a top-tier performer in the Smart Proxy space. While the timezone and font issues are points of concern for the most advanced targets, its robust suppression of automation signals and high device diversity make it a very effective tool for bypassing modern anti-bot systems.

#3 Zyte API


Zyte API is a comprehensive Smart Proxy solution providing advanced anti-bot bypassing, JavaScript rendering, and geotargeting capabilities to facilitate seamless web scraping at scale.

Position: #3 | Overall Score: 80.48 | ✅ Pass: 9 | ⚠️ Warn: 3 | ❌ Fail: 2 | 🚨 Critical: 0
In our benchmark analysis, Zyte API demonstrated a strong performance, ranking #3 out of 10 providers with an overall score of 80.48 / 100. The service provides a robust, automation-free environment that effectively mimics legitimate browser behavior in several key high-weight categories.
| Test | Status |
|---|---|
| TLS / JA3 Realism | N/A |
| Timezone vs IP Geo | Fail |
| Automation Signals | Pass |
| Language/Locale vs IP | Warn |
| Peripherals Presence | Warn |
| Header Realism | Pass |
| Hardware Realism | Pass |
| Platform Consistency | Pass |
| Client Hints Coherence | Pass |
| Viewport/Geometry | Pass |
| Fingerprint Entropy | Pass |
| Fonts & Plugins | Pass |
| Graphics Fingerprints | Fail |
| Device Type Coherence | Pass |
| Resolution & DPR | Warn |
  • ✅ Where Zyte API performed well: Successfully eliminated all automation signals and CDP leaks; provided highly diverse hardware profiles with realistic consumer GPU renderers; and maintained perfect internal consistency across platform and Client Hint layers.
  • ❌ Where Zyte API fell short: Failed to localize timezones and languages to the proxy IP address and failed to provide valid graphics (Canvas/WebGL) fingerprint hashes.

Pricing

Zyte API utilizes a tiered pricing model where costs are determined by the complexity of the target website (Tiers 1–5). The following table illustrates the cost per 1,000 successful requests for JS-rendered sessions, which were used for this fingerprinting analysis.
| Website Tier | PAYG | $100 Plan | $200 Plan | $350 Plan | $500 Plan* |
|---|---|---|---|---|---|
| Tier 1 | $1.00 | $0.75 | $0.60 | $0.52 | $0.47 |
| Tier 2 | $2.00 | $1.50 | $1.20 | $1.03 | $0.95 |
| Tier 3 | $4.00 | $3.00 | $2.40 | $2.06 | $1.89 |
| Tier 4 | $7.99 | $5.99 | $4.79 | $4.12 | $3.79 |
| Tier 5 | $15.98 | $11.98 | $9.58 | $8.25 | $7.58 |
The full pricing details are available on the Zyte pricing page.

Headers and Device Fingerprints

Zyte API presented a highly clean and professional environment during testing, with absolutely no exposure of common automation markers like navigator.webdriver. The provider relies on a consistent Windows 10 profile using modern Chrome 140 headers. The internal environment is well-coordinated; hardware attributes like CPU cores and memory vary naturally between sessions, preventing the "static signature" issue seen in many automated tools. However, the environment lacks geographical awareness. Regardless of the proxy exit node location, the browser context consistently reports a US-centric profile, which serves as a detectable signal for advanced anti-bot systems. Additionally, while the GPU strings appear realistic, the lack of actual graphics fingerprint hashes is a notable technical gap.

Good

In our tests, Zyte API excelled at creating an environment that feels like a legitimate consumer device at the hardware and platform levels.
  • Zero Automation Signals: The provider successfully hid all primary bot indicators. Key properties such as webdriver and CDP automation were false, and environment checks in web workers were perfectly consistent with the main thread.
  • High Hardware Entropy: Unlike providers that use static values, Zyte API rotated realistic consumer hardware configurations. We observed varied CPU counts and memory allocations across different sessions.
Session 1: Hardware Concurrency: 8, Device Memory: 2GB
Session 2: Hardware Concurrency: 4, Device Memory: 8GB
Session 3: Hardware Concurrency: 4, Device Memory: 4GB
  • Realistic GPU Renderers: The browser reported legitimate consumer graphics hardware rather than generic virtualized drivers, including Intel UHD and NVIDIA GeForce identifiers.
  • Platform and Client Hint Coherence: There was perfect alignment between the HTTP User-Agent, navigator.platform ("Win32"), and Client Hint headers. This internal consistency is critical for bypassing modern integrity checks.
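On the JavaScript side, that alignment can be verified through navigator.userAgentData, which must agree with both the UA string and the sec-ch-ua headers. A quick sketch follows (the API is Chromium-only, hence the guard).

```ts
// Read the Client Hint values that must agree with the HTTP User-Agent.
const uaData = (navigator as any).userAgentData;
if (uaData) {
  console.log(uaData.platform); // must match the UA's OS token, e.g. "Windows"
  console.log(uaData.brands);   // must match the sec-ch-ua header brand list
  uaData
    .getHighEntropyValues(['platformVersion', 'architecture'])
    .then((v: Record<string, string>) => console.log(v));
}
```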

Bad

While stable and clean, Zyte API exhibits systematic flaws in localization and visual fingerprinting.
❌ Timezone vs IP Geo Mismatch
The environment failed to adjust the browser's internal clock to match the proxy's location. Every session, regardless of the target country, was locked to a US East Coast timezone.
  • Systematic Error: America/New_York was used for sessions in RU, JP, DE, and UK.
  • Impact: This is a high-confidence signal for bot detection as it creates a direct contradiction between the IP's physical location and the browser's reported time.
Proxy Country: RU (Russia) -> Timezone: America/New_York
Proxy Country: JP (Japan) -> Timezone: America/New_York
Proxy Country: DE (Germany) -> Timezone: America/New_York
❌ Missing Graphics Fingerprints
While Zyte API correctly spoofs the GPU vendor and renderer strings, the actual Canvas and WebGL hashes were returned as empty strings.
  • Missing Data: Both Canvas and WebGL fingerprint fields were entirely blank across all sessions.
  • Impact: This prevents the browser from passing checks that rely on the unique rendering idiosyncrasies of a device's graphics stack.
⚠️ Generic Locale and Language
The provider does not localize the browser language to match the proxy geography, defaulting to generic US settings globally.
  • Language: All sessions used en-US even for Germany (de-DE expected) and Japan (ja-JP expected).
  • Peripherals: Every session reported 0 microphones, speakers, and webcams, which is a common characteristic of unconfigured headless environments.
⚠️ Static Resolution
While the geometry relationships (screen vs. viewport) were realistic, the resolution was identical across all tests.
  • Resolution: All sessions used a fixed 1920x1080 screen resolution, leading to a static fingerprint profile for this metric.

Verdict: ✅ Good

In our tests, Zyte API provided a robust and extremely clean environment for scraping. The provider's primary strength lies in its complete elimination of automation signals and the realistic diversity of its hardware profiles. While the lack of geo-localization (Timezone/Language) and missing graphics hashes are notable weaknesses, the high quality of the platform coherence and entropy makes it a very strong contender for high-volume scraping tasks.
✅ What it gets right
  • Total suppression of webdriver and CDP automation flags.
  • Diverse and realistic hardware configurations (CPU/RAM/GPU).
  • Consistent alignment between headers and JavaScript platform APIs.
  • Unique fingerprint hashes for every session.
❌ What holds it back
  • Inability to match the browser timezone and locale to the proxy's IP address.
  • Failure to generate valid Canvas and WebGL graphics hashes.
  • Empty peripheral device lists (0 mics/speakers).

#4 Bright Data Unlocker


Bright Data Unlocker is a premium web unblocking solution designed to bypass sophisticated anti-bot systems through automated browser management and CAPTCHA solving.

Position: #4 | Overall Score: 41.43 | ✅ Pass: 6 | ⚠️ Warn: 1 | ❌ Fail: 7 | 🚨 Critical: 0
In our analysis, Bright Data Unlocker ranked #4 out of 10 providers, finishing with an overall score of 41.43 / 100. While it demonstrated high-quality emulation for hardware and peripherals in successful sessions, its performance was significantly impacted by browser execution failures and overt bot signaling. The provider presented an inconsistent profile during testing: while some sessions utilized realistic consumer hardware strings, others failed to execute the fingerprinting scripts entirely or explicitly identified themselves as bot traffic via the User-Agent.
| Test | Status |
|---|---|
| TLS / JA3 Realism | N/A |
| Timezone vs IP Geo | Fail |
| Automation Signals | Fail |
| Language/Locale vs IP | Warn |
| Peripherals Presence | Pass |
| Header Realism | Fail |
| Hardware Realism | Pass |
| Platform Consistency | Pass |
| Client Hints Coherence | Fail |
| Viewport/Geometry | Fail |
| Fingerprint Entropy | Pass |
| Fonts & Plugins | Pass |
| Graphics Fingerprints | Fail |
| Device Type Coherence | Fail |
| Resolution & DPR | Pass |
  • ✅ Where Bright Data Unlocker performed well: Successfully emulated realistic consumer hardware (Intel UHD/AMD Radeon); provided credible peripheral counts; and maintained consistent platform data across network and browser layers in functioning sessions.
  • ❌ Where Bright Data Unlocker fell short: Suffered from sessions where the fingerprinting script failed to execute; leaked an explicit Brightbot User-Agent; failed to localize timezones correctly for several international regions; and lacked canvas/webGL fingerprint hashes.

Pricing

Bright Data Unlocker operates on a successful response-based pricing model. While its documentation mentions features like waiting for selectors, our tests indicated that JavaScript execution reliability can vary.
| Plan (Requests) | Price / month | Requests Included | CPM |
|---|---|---|---|
| Pay-As-You-Go | | | ~$1,500 |
| 380K Plan | $499 | 380,000 | ~$1,313 |
| 900K Plan | $999 | 900,000 | ~$1,110 |
| 2M Plan | $1,999 | 2,000,000 | ~$1,000 |
The full pricing info can be viewed here.

Headers and Device Fingerprints

Bright Data Unlocker showed a split personality in its fingerprinting behavior. On one hand, it provided some of the more detailed hardware emulation we've seen, including specific GPU model strings and varied CPU core counts that mimic actual consumer machines. But these strengths were undermined by significant technical lapses. In 25% of our tests, the environment failed to capture any browser data, and in one notable instance, the service completely dropped its mask by sending a Brightbot 1.0 User-Agent string. Geographic alignment also proved problematic. While European sessions sometimes matched the local context, other sessions defaulted to US-centric timezones regardless of the actual proxy exit node, creating a detectable mismatch for anti-bot systems.

Good

When the environment functioned correctly, Bright Data Unlocker provided several realistic data points.
  • Realistic Hardware Profiles: The provider avoided generic "Google Inc." renderers in functioning sessions, instead opting for specific consumer GPU strings like Intel UHD Graphics 730 or AMD Radeon RX 6600.
  • Credible Peripheral Counts: In 75% of sessions, the browser reported the presence of one microphone, one speaker, and one webcam, avoiding the "featureless" profile common in automated browsers.
  • Device Entropy: Successful sessions showed healthy variation in hardware concurrency (8 vs 12) and unique fingerprint hashes, indicating a rotational strategy for hardware profiles.
  • Platform Consistency: Where data was available, the navigator.platform value (Win32) correctly aligned with the Windows NT 10.0 User-Agent and Client Hint platform data.
Examples of observed GPU renderers:
  • ANGLE (Intel Intel(R) UHD Graphics 730 (0x00004692) ...)
  • ANGLE (AMD AMD Radeon RX 6600 (0x000073FF) ...)
  • ANGLE (Intel Intel(R) UHD Graphics (0x00009BC4) ...)

Bad

Several failures across header realism and execution reliability lowered the overall score.
❌ Overt Bot Signaling
In some instances, the provider failed to apply a browser mask altogether, identifying itself via a custom bot header and lacking modern browser characteristics like Client Hints.
  • User-Agent: One session explicitly identified as Brightbot 1.0.
  • Headers: The bot-labeled session lacked modern Accept-Encoding values (only gzip) and omitted sec-ch-ua headers entirely.
HTTP UA: Brightbot 1.0
Accept-Encoding: gzip
Result: Immediate identification of automated traffic.
❌ Timezone Misalignment
The provider failed to consistently localize the browser environment to match the proxy's geographic location.
  • UK Session: Timezone was America/New_York (Expected: Europe/London).
  • JP Session: Timezone was America/New_York (Expected: Asia/Tokyo).
  • Systematic Issue: This creates a geographic contradiction between the IP address and the internal browser clock.
❌ Missing Fingerprint Data
A significant portion of the test suite resulted in empty data objects, indicating that the browser environment or the unblocking logic failed to execute the required scripts.
  • Execution Failure: One session returned zero device information or fingerprint hashes.
  • Graphics: Successful sessions failed to return Canvas or WebGL hashes, leaving the environment's graphics profile incomplete.

Verdict: ⚠️ Mixed (leaning poor)

In our tests, Bright Data Unlocker exhibited high-quality hardware emulation but was let down by inconsistent execution and overt bot signals. While its ability to present realistic GPU strings and peripheral counts is a strength, the leakage of the Brightbot identifier and the frequent "missing data" failures pose a substantial detection risk. It currently lacks the geographic coherence and technical reliability required to compete with top-tier stealth providers.

#5 Decodo Site Unblocker


Decodo Site Unblocker (formerly Smartproxy Site Unblocker) is a premium web unlocker designed to bypass sophisticated anti-bot systems with JavaScript rendering, geotargeting, and CAPTCHA solving.

Position: #5 | Overall Score: 35.71 | ✅ Pass: 2 | ⚠️ Warn: 6 | ❌ Fail: 6 | 🚨 Critical: 0
In our analysis, Decodo Site Unblocker ranked #5 out of 10 providers, finishing with an overall score of 35.71 / 100. While it maintained strong coherence in Client Hints and device identification, it struggled significantly with maintaining realistic environment characteristics, particularly concerning OS-platform alignment and hardware emulation.
| Test | Status |
|---|---|
| TLS / JA3 Realism | N/A |
| Timezone vs IP Geo | Fail |
| Automation Signals | Warn |
| Language/Locale vs IP | Warn |
| Peripherals Presence | Warn |
| Header Realism | Fail |
| Hardware Realism | Fail |
| Platform Consistency | Fail |
| Client Hints Coherence | Pass |
| Viewport/Geometry | Warn |
| Fingerprint Entropy | Warn |
| Fonts & Plugins | Fail |
| Graphics Fingerprints | Fail |
| Device Type Coherence | Pass |
| Resolution & DPR | Warn |
  • ✅ Where Decodo Site Unblocker performed well: Demonstrated excellent coherence between User-Agent strings and Client Hints (Sec-CH-UA) across various browser profiles (Chrome, Edge, and Whale).
  • ❌ Where Decodo Site Unblocker fell short: Failed high-weight tests for Platform Consistency and Graphics Fingerprinting; consistently leaked an underlying Linux platform for Windows/macOS User-Agents; and relied on server-grade hardware (32 cores) with software-based rendering (SwiftShader).

Pricing

Decodo Site Unblocker offers both consumption-based (per GB) and request-based pricing models.

Per-Request Pricing

| Plan (Requests) | Price / month | Requests Included | CPM |
|---|---|---|---|
| 23K | $29 | 23,000 | ~$1,261 |
| 82K | $99 | 82,000 | ~$1,207 |
| 216K | $249 | 216,000 | ~$1,153 |
| 950K | $999 | 950,000 | ~$1,052 |

Per-GB Pricing

| Plan (GB) | Plan Price | Cost Per GB | GB Included |
|---|---|---|---|
| 1 GB | $10 | $10.00 | 1 GB |
| 10 GB | $85 | $8.50 | 10 GB |
| 100 GB | $675 | $6.75 | 100 GB |
The full pricing info can be viewed here.

Headers and Device Fingerprints

Decodo Site Unblocker exhibited a significant discrepancy between its network-layer masquerading and its internal browser environment. While HTTP headers appeared relatively modern, the JavaScript environment frequently exposed the underlying Linux server infrastructure. The service uses a "franken-fingerprint" approach where various browser profiles (like Naver Whale or Microsoft Edge) are applied to a static, high-performance server backend. This results in 32-core CPU reports and software-rendered graphics across all sessions, regardless of the claimed device type. Furthermore, we observed a systematic lack of localization. The browser hardware, timezone (UTC), and language (en-US) remained static even when rotating through diverse global IP addresses in Germany, Japan, and Russia.

Good

In our tests, Decodo Site Unblocker showed strong internal consistency in its browser branding data.
  • Client Hints Coherence: The provider demonstrated high alignment between the User-Agent and the sec-ch-ua headers. When the UA claimed to be Edge or the niche Asian browser Whale, the Client Hints correctly reflected the corresponding version and brand.
  • Device Type Coherence: All sessions correctly identified as desktop environments. Viewport sizes consistently aligned with the Is mobile: false status, avoiding the common "mobile UA on desktop viewport" error.
UA: ...Chrome/139.0.0.0 Safari/537.36 Edg/139.0.0.0
sec-ch-ua: "Chromium";v="139", "Not)A;Brand";v="24", "Microsoft Edge";v="139"
Platform: "Windows"
-> Hints and UA match perfectly for the Edge 139 profile.

Bad

The following issues contributed to the provider's low overall score and increased its detection profile.
❌ Platform Consistency
The provider consistently failed to mask the host operating system in the JavaScript layer. While the User-Agent claimed the user was on Windows or macOS, navigator.platform always reported Linux x86_64.
  • OS Mismatch: Windows 10 and macOS 10.15.7 User-Agents were paired with a Linux platform string.
  • Consistency: This mismatch occurred in 100% of analyzed sessions.
JP Session:
HTTP UA: Mozilla/5.0 (Windows NT 10.0; ...)
JS Platform (navigator): Linux x86_64
JS Platform (UAData): Windows
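For a detector, this is a near-trivial cross-layer check. Here's a sketch of the comparison Decodo fails, using only standard navigator properties:

```ts
// Compare the OS claimed in the UA string against the two
// JavaScript-visible platform values.
const ua = navigator.userAgent;
const uaClaimsWindows = ua.includes('Windows NT');
const jsPlatform = navigator.platform;                         // "Linux x86_64" here
const chPlatform = (navigator as any).userAgentData?.platform; // "Windows" here
if (uaClaimsWindows && !/win/i.test(jsPlatform)) {
  console.warn('OS mismatch:', { ua, jsPlatform, chPlatform });
}
```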
❌ Hardware & Graphics Realism
The hardware profile was characteristic of a virtualized server environment rather than a consumer device. All sessions reported high-end server CPU counts paired with software rendering.
  • CPU/RAM: Every session reported exactly 32 CPU cores and 8GB of RAM.
  • GPU Renderer: Graphics were rendered via SwiftShader Device (Subzero), confirming the absence of a physical GPU.
  • Fonts: Every session returned the exact same single font: Univers CE 55 Medium.
❌ Timezone & Language Misalignment
Decodo Site Unblocker did not adapt the browser environment to the proxy's geographic location.
  • Timezone: Every session used UTC regardless of the IP location (e.g., Japan, Germany, Russia).
  • Locale: All sessions were set to en-US, failing to match regional expectations for non-US proxies.
IP Location: Japan (JP)
Target Timezone: Asia/Tokyo
Reported Timezone: UTC
Reported Language: en-US
⚠️ Automation Indicators
While explicit automation flags like webdriver were set to false, the underlying environment remained suspicious.
  • Worker Inconsistency: The Are worker values consistent check failed in all sessions, indicating that fingerprints in the Web Worker context did not match the main window context.
  • Peripherals: All sessions reported 0 microphones, speakers, and webcams, a signature often associated with automated headless browsers.

Verdict: ⚠️ Mixed (leaning poor)

In our tests, Decodo Site Unblocker's fingerprints were insufficient for bypassing advanced detection due to significant OS and hardware discrepancies. While it successfully manages Client Hints and prevents direct Webdriver leaks, the systematic exposure of a server-grade Linux environment beneath Windows or Mac User-Agents creates a high-fidelity signature for anti-bot systems. The lack of geographic localization in timezones and languages further degrades its stealth. Users targeting high-security sites may find the static hardware and software-rendered graphics specifically problematic.

#6 Scrapingdog


Scrapingdog is a Smart Proxy API with a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing and CAPTCHA solving.

Position: #6 | Overall Score: 32.38 | ✅ Pass: 4 | ⚠️ Warn: 4 | ❌ Fail: 5 | 🚨 Critical: 1
In our analysis, Scrapingdog ranked #6 out of 10 providers, earning an overall score of 32.38 / 100. While it successfully managed some basics like header consistency and OS matching, its performance was undermined by a critical automation leak and several suspicious hardware patterns. During testing, the provider displayed a mix of realistic networking headers and highly detectable browser properties. Its primary weaknesses include the exposure of internal automation flags and a "franken-font" configuration that incorrectly pairs Linux user-agents with Windows-specific fonts.
| Test | Status |
|---|---|
| TLS / JA3 Realism | N/A |
| Timezone vs IP Geo | Fail |
| Automation Signals | Critical |
| Language/Locale vs IP | Warn |
| Peripherals Presence | Warn |
| Header Realism | Pass |
| Hardware Realism | Fail |
| Platform Consistency | Pass |
| Client Hints Coherence | Pass |
| Viewport/Geometry | Warn |
| Fingerprint Entropy | Fail |
| Fonts & Plugins | Fail |
| Graphics Fingerprints | Fail |
| Device Type Coherence | Pass |
| Resolution & DPR | Warn |
  • ✅ Where Scrapingdog performed well: Maintained consistency between the HTTP User-Agent and JavaScript platform; provided modern headers with valid compression support; and avoided leaking "Headless" tokens in the network layer.
  • ❌ Where Scrapingdog fell short: Exposed critical CDP automation signals; failed to localize timezones or languages to the proxy IP; and utilized software-based GPU rendering (SwiftShader) which is an immediate indicator of a server-grade environment.

Pricing

Testing was conducted using Scrapingdog's JS Rendering mode (dynamic=true). Requests using this feature consume 5 API credits each.
| Plan | Price / month | API Credits | CPM | CPM (JS Rendering ×5) |
|---|---|---|---|---|
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Lite | $40 | 200,000 | ~$200 | ~$1,000 |
| Standard | $90 | 1,000,000 | ~$90 | ~$450 |
| Pro | $200 | 3,000,000 | ~$67 | ~$335 |
| Premium | $350 | 6,000,000 | ~$58 | ~$290 |
The full pricing info can be viewed here.

Headers and Device Fingerprints

Scrapingdog utilized a consistent Linux-based stack for its sessions, effectively matching its X11; Linux x86_64 User-Agent with the reported JavaScript platform. This prevents simple cross-layer detection. However, the environment suffered from low entropy, rotating between just two static hardware profiles. This lack of diversity, combined with an invariant 1920x1080 resolution and zero-offset viewports, makes the traffic easy to fingerprint. Furthermore, the browser failed to adapt to target geographies: every session, regardless of whether the proxy was in Germany or Japan, reported a California-based timezone and US-English language settings. Most critically, the presence of the CDP automation: true flag and the use of the SwiftShader software renderer provide definitive evidence of an automated, server-hosted browser.

Good

In our tests, Scrapingdog maintained some realistic networking and platform configurations.
  • Clean Header Formatting: Headers identified as modern Chrome on Linux without leaking common automation tokens like "Headless" in the User-Agent string.
  • Consistent OS Triplet: The environment avoided OS mismatches by ensuring the User-Agent, navigator.platform, and Client Hints all reported "Linux".
User-Agent: Mozilla/5.0 (X11; Linux x86_64)...
navigator.platform: "Linux x86_64"
sec-ch-ua-platform: "Linux"
  • Valid Client Hints: The sec-ch-ua headers were consistent with the primary User-Agent, accurately reflecting Chrome 140.

Bad

Multiple failures in hardware and automation checks significantly increase the detection risk for this provider.
🚨 Critical: Automation Signals Exposed
CDP automation flags were detected as true in 100% of tested sessions. This is a definitive signal that the browser is being instrumented via DevTools, a behavior not found in organic user traffic.
  • CDP Flags: CDP automation returned true in all 5 sessions.
  • Webdriver: While navigator.webdriver was successfully reported as false, the CDP leak overrides this protection.
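The benchmark's exact CDP-detection method isn't published, but one widely circulated heuristic abuses console serialization: an attached DevTools/CDP client reads an error's stack property while previewing a logged object, which fires a planted getter. A sketch for illustration:

```ts
// Plant a getter on an Error's stack property and log the error. Only an
// attached DevTools/CDP session serializes the argument and reads the
// getter; ordinary page execution leaves it untouched.
function detectCdp(): Promise<boolean> {
  return new Promise((resolve) => {
    let detected = false;
    const err = new Error();
    Object.defineProperty(err, 'stack', {
      get() {
        detected = true;
        return '';
      },
    });
    console.debug(err);
    setTimeout(() => resolve(detected), 50);
  });
}

detectCdp().then((cdp) => console.log('CDP client attached:', cdp));
```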
❌ Hardware and Graphics Realism
The provider failed to mimic consumer-grade hardware, instead presenting server-grade specifications and software rendering.
  • GPU Renderer: All sessions reported SwiftShader Device (Subzero), a software rasterizer that is a clear indicator of a headless or virtualized environment.
  • Hardware Concurrency: Hardware core counts were static and unusually high, appearing as either 24 or 32 cores across all tests.
❌ Font and OS Mismatch
The environment exhibited a "franken-font" profile, where a Linux User-Agent was paired with fonts typically exclusive to Windows.
  • Mismatched Fonts: Sessions reported a font list containing Calibri and Segoe UI Light.
  • Inconsistency: These are Windows-system fonts being reported by a browser claiming to be running on Linux.
❌ Lack of Geo-Awareness
The browser environment failed to synchronize its internal settings with the proxy IP's location.
  • Timezone: Every session reported America/Los_Angeles regardless of the proxy geography (DE, UK, JP, RU).
  • Language: All sessions used en-US for both headers and the navigator.language property across all regions.
Actual Geo: DE (Germany) -> Timezone: America/Los_Angeles (Expected: Europe/Berlin)
Actual Geo: JP (Japan) -> Timezone: America/Los_Angeles (Expected: Asia/Tokyo)
Actual Geo: RU (Russia) -> Timezone: America/Los_Angeles (Expected: Europe/Moscow)

Verdict: ⚠️ Mixed (leaning poor)

In our tests, Scrapingdog exhibited significant fingerprinting flaws that make it susceptible to detection by advanced anti-bot systems. While it successfully prevents basic OS mismatches and provides clean headers, it fails more sophisticated checks. The exposure of CDP automation flags and the use of the SwiftShader renderer are high-confidence signals for bot detection. Furthermore, the lack of geographic alignment in timezones and languages, combined with the presence of Windows fonts on a Linux profile, makes these sessions appear highly artificial.
Bottom line: Scrapingdog is suitable for targets with moderate security, but its critical automation leaks and inconsistent hardware profiles may struggle against top-tier anti-bot solutions that inspect the browser's internal environment.

#7 Scrapingant


ScrapingAnt is a Smart Proxy API with a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing and CAPTCHA solving.

Position: #7 | Overall Score: 30.10 | ✅ Pass: 4 | ⚠️ Warn: 3 | ❌ Fail: 6 | 🚨 Critical: 1
In our analysis, Scrapingant ranked #7 out of 10 providers with an overall score of 30.1 / 100. The provider is severely penalized for exposing CDP automation flags and presenting physically impossible viewport geometry where the inner content area exceeds the available screen dimensions.
  • ✅ Where Scrapingant performed well: Maintained consistency between primary platform properties and User-Agents on the main thread; provided valid Client Hints coherence across different browser versions.
  • ❌ Where Scrapingant fell short: Failed significantly in automation masking, graphics hardware reporting, and environment richness, including a static font list and impossible screen geometries.

Pricing

All tests were conducted using ScrapingAnt's JS Rendering mode (browser=true) to assess its full capabilities. Each JS-rendered request consumes 10 API credits.
| Plan | Price / month | API Credits | CPM | CPM (JS ×10) |
|---|---|---|---|---|
| Free Trial | $0 | 10,000 | ~$0 | ~$0 |
| Enthusiast | $19 | 100,000 | ~$190 | ~$1,900 |
| Startup | $49 | 500,000 | ~$98 | ~$980 |
| Business | $249 | 3,000,000 | ~$83 | ~$830 |
| Business Pro | $599 | 8,000,000 | ~$75 | ~$750 |
The full pricing info can be viewed here.

Headers and Device Fingerprints

Scrapingant demonstrated significant fingerprint flaws during testing, primarily driven by overt automation signals and incomplete hardware emulation. The environment frequently reported missing GPU data and impossible viewport configurations. There were notable contradictions between execution layers; while the main thread reported expected platforms, background workers often revealed a Linux environment regardless of the declared OS. Geographic alignment was poor: the browser defaulted to a UTC timezone and generic US-centric language settings for most target countries, which is a common signature of automated cloud infrastructure.

Good

During testing, Scrapingant managed to maintain some standard browser properties on the surface level.
  • Main-Thread Platform Consistency: On the primary execution thread, the provider correctly aligned the navigator platform with the User-Agent (e.g., Win32 for Windows).
  • Client Hints Coherence: The service varied browser versions (Chrome 130, 117) and ensured that sec-ch-ua headers matched the declared User-Agent and platform.
  • Diverse Desktop Profiles: Testing showed a mix of Windows and macOS environments with appropriate mobile-flag settings (Is mobile: false).
Consistent main-thread platforms:
UA: Windows NT 10.0 -> Platform: Win32
UA: Macintosh; Intel Mac OS X 10_15_7 -> Platform: MacIntel

Bad

Testing revealed several high-risk indicators that could lead to immediate detection.
🚨 Critical: Automation Signals Exposed
CDP automation flags were detected as true across all sessions, providing definitive proof of automation to anti-bot systems. Additionally, worker values were inconsistent with the main thread.
  • CDP Flags: The CDP automation flag was explicitly set to true.
  • Worker Consistency: Are worker values consistent returned false in every session.
  • Worker Platform: Web workers revealed Linux x86_64 even when the main thread claimed Windows or macOS.
❌ Impossible Geometry
The browser consistently reported an inner resolution larger than the available screen resolution, a physical impossibility in standard browsing.
  • Contradiction: Screen 1920x1080 with 1920x1032 available space reported an inner viewport of 1920x1080.
  • Impact: This 48px discrepancy indicates the browser is not accounting for system UI or is being artificially resized.
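The corresponding sanity check is trivial for a detector, which is what makes this failure so costly. A sketch, assuming a standard single-window browsing context:

```ts
// In a normal browser the inner viewport cannot exceed the available
// screen area; headless or artificially resized environments often do.
const impossible =
  window.innerWidth > screen.availWidth ||
  window.innerHeight > screen.availHeight;
if (impossible) {
  console.warn('Impossible geometry:', {
    inner: [window.innerWidth, window.innerHeight],
    avail: [screen.availWidth, screen.availHeight],
    screen: [screen.width, screen.height],
  });
}
```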
❌ Missing Graphics Data
Scrapingant failed to provide any valid graphics hardware information in all tested sessions.
  • GPU Hardware: Both GPU vendor and GPU renderer returned NA.
  • Fingerprinting: Canvas and WebGL fingerprints were either missing or marked NA, preventing hardware-based trust verification.
❌ Systematic Timezone Failure
The provider used UTC as the JavaScript timezone for every session, regardless of the proxy's IP location.
  • Geographic Mismatch: Sessions in Germany (DE) and Japan (JP) both utilized UTC instead of local timezones.
RU (Russia): Timezone UTC (Expected: Europe/Moscow)
JP (Japan): Timezone UTC (Expected: Asia/Tokyo)
DE (Germany): Timezone UTC (Expected: Europe/Berlin)
❌ Environment Richness Issues
The environment lacked the standard variety of fonts and peripherals expected from a human-operated device.
  • Fonts: All sessions across all operating systems returned only a single, suspicious font.
  • Peripherals: Every session reported exactly 0 microphones, speakers, and webcams.
Observed Fonts: "Univers CE 55 Medium"
Expected Fonts: Arial, Segoe UI, Helvetica, etc. (Not found)

Verdict: ❌ Poor

In our tests, Scrapingant's fingerprints were highly detectable due to exposed automation signals and significant internal contradictions. The combination of CDP automation: true, impossible viewport geometry, and missing GPU data makes these sessions easily identifiable as bots. While the provider successfully varies basic headers and resolutions, it fails to mask the underlying automation framework or provide a realistic, geo-aligned environment. These traits represent a high risk for any target utilizing modern anti-bot solutions.

#8 Oxylabs Web Unblocker

Oxylabs Web Unlocker is a premium web unlocker product designed to allow users to bypass anti-bot systems and scrape difficult websites. Comparable to Bright Data's Unlocker API and Decodo's Site Unblocker.
Oxylabs Web Unblocker logo
Oxylabs Web Unblocker

Oxylabs Web Unlocker is a premium web unlocker product designed to allow users to bypass anti-bot systems and scrape difficult websites. Comparable to Bright Data's Unlocker API and Decodo's Site Unblocker.

Position: #8 | Overall Score: 30.00 | ✅ Pass: 1 | ⚠️ Warn: 6 | ❌ Fail: 7 | 🚨 Critical: 0
| Test | Status |
|---|---|
| TLS / JA3 Realism | N/A |
| Timezone vs IP Geo | Fail |
| Automation Signals | Warn |
| Language/Locale vs IP | Warn |
| Peripherals Presence | Warn |
| Header Realism | Fail |
| Hardware Realism | Fail |
| Platform Consistency | Fail |
| Client Hints Coherence | Fail |
| Viewport/Geometry | Warn |
| Fingerprint Entropy | Warn |
| Fonts & Plugins | Fail |
| Graphics Fingerprints | Fail |
| Device Type Coherence | Pass |
| Resolution & DPR | Warn |
  • ✅ Where Oxylabs Web Unblocker performed well: It passed only a single test (Device-Type Coherence), the lowest pass count of any provider in this benchmark.
  • ❌ Where Oxylabs Web Unblocker fell short: It failed or warned on every other fingerprint test, with contradictory header declarations, static software-rendered graphics, and poor geo-localization.
  • ⚠️ Summary: Based on these fingerprint results, Oxylabs Web Unblocker currently lacks the browser realism required for sophisticated anti-bot bypass.

#9 ScraperAPI


ScraperAPI is a Smart Proxy API with a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing and CAPTCHA solving.

Position: #9
Overall Score: 27.81
Results: 3 ✅ Pass · 4 ⚠️ Warn · 6 ❌ Fail · 1 🚨 Critical
In our benchmark analysis, ScraperAPI struggled significantly, ranking #9 out of 10 providers with an overall score of 27.81 / 100. Its performance was characterized by a lack of fingerprint diversity and a critical exposure of its underlying automation framework.
  • ✅ Where ScraperAPI performed well: Maintained internal consistency between the reported Linux platform and its modern Chrome versioning in Client Hints.
  • ❌ Where ScraperAPI fell short: Systematically failed to localize timezones or languages to match IP geography; relied on static hardware profiles; and consistently leaked "HeadlessChrome" indicators in the JavaScript environment.

Pricing

All tests were conducted using ScraperAPI's JS Rendering mode (render=true) to assess its full capabilities. Each JS-rendered request consumes 10 API credits, so the effective cost per million requests (CPM) is ten times the base rate.
Plan          Price / month   API Credits   CPM     CPM (JS Rendering ×10)
Free Trial    $0.00           1,000         ~$0     ~$0
Hobby         $49             100,000       ~$490   ~$4,900
Startup       $149            1,000,000     ~$149   ~$1,490
Business      $299            3,000,000     ~$100   ~$1,000
The full pricing info can be viewed here.

Headers and Device Fingerprints

ScraperAPI exhibited highly predictable and automated behavior during our tests. The provider relies on a fixed Linux environment that lacks the randomization required to evade modern anti-bot systems.

A significant issue was the mismatch between network-layer headers and in-browser JavaScript properties. While the HTTP headers attempted to mimic a standard browser, the JavaScript environment explicitly identified itself as "HeadlessChrome", an immediate signal for bot detection.

Furthermore, the provider showed zero entropy across sessions. Every request, regardless of its target location, produced identical hardware specs, screen resolutions, and graphics renderers, making the traffic trivial to track and block.

Good

In our tests, ScraperAPI maintained consistent internal alignment for its chosen platform.
  • Platform Consistency: The provider consistently presented a valid Linux profile. The User-Agent correctly identified as X11; Linux x86_64, which was reflected in both the navigator.platform and userAgentData fields.
  • Client Hints Coherence: Requests included coherent Client Hints that matched the advertised browser version. For sessions using Chrome 143, the sec-ch-ua-platform was correctly set to Linux.
  • Device Type Coherence: The browser environment correctly reported Screen touch points as 0 and Is mobile as false, which is consistent with the desktop Linux profile it aimed to emulate.
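This three-way agreement is cheap for a detector to verify, which is why honest profiles matter. A sketch of the cross-check, with the userAgentData access guarded since it is a Chromium-only API:

```typescript
// Cross-check the three places a Chromium browser reports its OS.
// On an honest Linux profile, like ScraperAPI's, all three agree.
const uaSaysLinux = navigator.userAgent.includes("Linux x86_64");
const platformSaysLinux = navigator.platform === "Linux x86_64";
const uaData = (navigator as any).userAgentData; // Chromium-only API
const hintsSayLinux = uaData ? uaData.platform === "Linux" : platformSaysLinux;

console.log(
  uaSaysLinux && platformSaysLinux && hintsSayLinux
    ? "platform internally consistent"
    : "platform mismatch between UA, navigator.platform and Client Hints"
);
```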

Bad

ScraperAPI's fingerprints were static and contained various indicators that are commonly associated with automated scraping.
🚨 Critical: Automation Signals Exposed
The JavaScript environment revealed clear evidence of automation that contradicted the masquerading HTTP headers. This represents a definitive signal for anti-bot detection.
  • Header vs JS Mismatch: While headers claimed a standard browser, the JavaScript device_info.User-Agent explicitly reported HeadlessChrome/143.0.0.0.
  • Consistency: This exposure was observed in 100% of the session data captured.
HTTP Header UA: Mozilla/5.0 (X11; Linux x86_64) ... Chrome/143.0.0.0
JS Engine UA: Mozilla/5.0 (X11; Linux x86_64) ... HeadlessChrome/143.0.0.0
Result: Critical failure; the JS engine explicitly identifies as a bot.
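Catching this leak requires no sophistication at all, which is what makes it so damaging. A one-line client-side check is enough:

```typescript
// The network-layer User-Agent can be rewritten in transit, but an
// unpatched headless build still reports "HeadlessChrome" from the JS engine.
if (/HeadlessChrome/.test(navigator.userAgent)) {
  console.log("JS engine identifies as HeadlessChrome: instant bot flag");
}
```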
❌ Zero Fingerprint Entropy
ScraperAPI utilized a completely static configuration for all sessions. This lack of diversity allows anti-bot systems to create a permanent signature for the provider's traffic.
  • Identical Hashes: All sessions produced the exact same fingerprint hash (4f454b7c...).
  • Static Hardware: Hardware concurrency was fixed at 20 cores and memory at 8GB for every attempt.
  • GPU Rendering: Every session used the exact same Google SwiftShader software renderer.
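To see why zero entropy is fatal, consider how a detector might build a composite hash. The sketch below is illustrative, not any vendor's actual algorithm, and the attribute list is assumed:

```typescript
// Build a composite fingerprint digest from a handful of stable attributes.
// If every session from a provider collapses to the same digest, the whole
// traffic pool can be blocked with a single signature.
async function fingerprintHash(): Promise<string> {
  const signal = [
    navigator.hardwareConcurrency,    // fixed at 20 in ScraperAPI sessions
    (navigator as any).deviceMemory,  // fixed at 8
    screen.width,
    screen.height,
    navigator.userAgent,
  ].join("|");
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(signal)
  );
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```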
❌ Systematic Geo-Spoofing Failure
The browser environment failed to adapt to the geographic location of the proxy IP, a common requirement for bypassing regional bot defenses.
  • Timezone: All sessions reported UTC regardless of whether the IP was in Japan, Germany, or Russia.
  • Language: Every session defaulted to en-US and en, ignoring local language requirements for non-US regions.
UK Session: Timezone: UTC (Expected: Europe/London)
DE Session: Timezone: UTC (Expected: Europe/Berlin)
JP Session: Timezone: UTC (Expected: Asia/Tokyo)
RU Session: Timezone: UTC (Expected: Europe/Moscow)
❌ Hardware and Graphics Realism
The use of software-based graphics rendering and high-core counts is a clear indicator of a virtualized server environment.
  • GPU Renderer: The renderer was identified as ANGLE (Google Vulkan 1.3.0 (SwiftShader Device ...)). SwiftShader is a CPU-based implementation often used in headless setups.
  • Fonts: Only a single, suspicious font, Univers CE 55 Medium, was detected, failing to mimic the standard font library of a consumer OS.
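The renderer string is exposed directly through WebGL, so this check costs defenders almost nothing. A sketch of how the SwiftShader signature surfaces:

```typescript
// Query the real GPU strings via WebGL. SwiftShader is Google's CPU-based
// fallback renderer, a near-certain marker of a headless server environment.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl");
if (gl) {
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  const renderer = ext
    ? (gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) as string)
    : "unknown";
  if (/SwiftShader/i.test(renderer)) {
    console.log("software renderer detected:", renderer);
  }
}
```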

Verdict: ❌ Poor

In our tests, ScraperAPI provided a high-risk environment with obvious automation signatures. The combination of a critical HeadlessChrome leak and zero entropy across sessions makes this provider highly vulnerable to detection by modern anti-bot solutions.
✅ What it gets right
  • Consistent internal reporting of a Linux desktop environment.
  • Coherent Client Hint headers for the reported browser version.
❌ What holds it back
  • Critical failure where the JavaScript engine explicitly identifies as automated.
  • Completely static fingerprints with no randomization of hardware or geometry.
  • Total lack of geographic localization for timezones and languages.
  • Use of software rendering (SwiftShader) and server-like hardware profiles.
Bottom line: ScraperAPI is currently poorly suited for targets with advanced fingerprinting defenses. Its static signatures and explicit automation leaks make it easily distinguishable from organic human traffic.

#10 ScrapingBee


ScrapingBee is a Smart Proxy API with a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing and CAPTCHA solving.

Position: #10
Overall Score: 24.76
Results: 3 ✅ Pass · 2 ⚠️ Warn · 8 ❌ Fail · 1 🚨 Critical
ScrapingBee finished at the bottom of our benchmark, ranking #10 out of 10 providers with an overall score of 24.76 / 100. Our analysis revealed significant fingerprinting vulnerabilities, including critical automation leaks and physically impossible device geometries that make the traffic easily identifiable to anti-bot systems.

While the service maintained consistency between its User-Agent and platform reporting, it struggled with nearly every other realism metric. Key issues included active CDP automation signals, a lack of modern Client Hints headers, and a heavily restricted environment that utilized software rendering and a single system font.
Test                      Status
TLS / JA3 Realism         N/A
Timezone vs IP Geo        Fail
Automation Signals        Critical
Language/Locale vs IP     Warn
Peripherals Presence      Warn
Header Realism            Pass
Hardware Realism          Fail
Platform Consistency      Pass
Client Hints Coherence    Fail
Viewport/Geometry         Fail
Fingerprint Entropy       Fail
Fonts & Plugins           Fail
Graphics Fingerprints     Fail
Device Type Coherence     Pass
Resolution & DPR          Fail
  • ✅ Where ScrapingBee performed well: Maintained internal consistency by using "honest" Linux profiles where the User-Agent correctly matched the underlying platform.
  • ❌ Where ScrapingBee fell short: Exposed critical CDP automation flags; failed to align timezones or languages with proxy IP geography; reported impossible screen-to-viewport ratios (800x600 screen with 1080p content); and lacked modern Client Hints support.

Pricing

All tests were conducted using ScrapingBee's JS Rendering mode (render_js=true) to assess its full browser fingerprinting capabilities. Each JS-rendered request consumes 5 API credits, so the effective cost per million requests (CPM) is five times the base rate.
Plan          Price / month   API Credits   CPM     CPM (JS Rendering ×5)
Free Trial    $0.00           1,000         ~$0     ~$0
Freelance     $49             150,000       ~$327   ~$1,635
Startup       $99             1,000,000     ~$99    ~$495
Business      $249            3,000,000     ~$83    ~$415
Business+     $599            8,000,000     ~$75    ~$375
The full pricing info can be viewed here.

Headers and Device Fingerprints

ScrapingBee's fingerprinting profile appeared heavily virtualized and characteristic of a standard headless browser deployment. During testing, the environment consistently leaked automation signals through the Chrome DevTools Protocol (CDP) and exhibited high levels of repetition across sessions.

The browser environment failed basic realism checks, such as matching the system timezone and language to the proxy's geographic location. Furthermore, reporting an 800x600 screen resolution alongside a much larger viewport creates a "franken-fingerprint" that is trivial for modern security stacks to detect.

Hardware signatures were similarly problematic, often defaulting to software-based "SwiftShader" rendering or returning no data at all. Combined with the absence of microphones or cameras, this produces a profile starkly different from that of a genuine consumer device.

Good

In our tests, ScrapingBee demonstrated a few areas of internal consistency.
  • Platform Consistency: ScrapingBee utilized a transparent Linux profile. The HTTP User-Agent and the JavaScript navigator.platform both accurately matched as Linux, avoiding the immediate red flags associated with mismatched OS spoofing.
  • Technical Header Validity: The headers captured included modern compression support, such as zstd and br, which are expected in contemporary browser requests.
  • Device Type Coherence: Despite the small reported screen size, the service correctly identified as a desktop environment (Is mobile: false) and maintained a desktop-class inner viewport.
UA: Mozilla/5.0 (X11; Linux x86_64) ... Chrome/131.0.0.0
Platform: Linux x86_64
Mobile: false
Touch Points: 0

Bad

ScrapingBee exhibited several high-severity issues during our benchmarking process.
🚨 Critical: Automation Signals Exposed
The environment explicitly identified itself as being under automated control through the Chrome DevTools Protocol.
  • CDP Leak: CDP automation was detected as true in 100% of the tested sessions.
  • Worker Inconsistency: Data queried via web workers mismatched the main environment, providing definitive proof of an automated container.
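One widely documented CDP probe exploits how DevTools serializes logged objects: with a protocol client attached, logging an Error causes its stack property to be read, firing any getter planted on it. A sketch of the technique, which we assume is representative of the checks that flagged these sessions:

```typescript
// If a CDP client is attached, console.log serializes the Error for the
// protocol and reads `stack`, tripping this getter. Without CDP, the
// getter is never invoked.
let cdpDetected = false;
const probe = new Error();
Object.defineProperty(probe, "stack", {
  get() {
    cdpDetected = true;
    return "";
  },
});
console.log(probe);
setTimeout(() => {
  if (cdpDetected) console.log("CDP client detected: browser is automated");
}, 0);
```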
❌ Impossible Device Geometry
Sessions reported a physical screen resolution that was smaller than the actual browser window (viewport).
  • Geometry Mismatch: The Screen resolution was reported as 800x600 pixels, while the Inner resolution (the content area) was 1920x993 pixels.
  • Detection Risk: It is physically impossible for a viewport to be larger than the screen containing it; this is a classic signature of a default headless browser configuration.
❌ Missing Client Hints
Despite using modern Chrome User-Agents (v131 and v141), the browser failed to send the corresponding Client Hint headers.
  • Headers: All sec-ch-ua headers were entirely absent.
  • Impact: Real Chrome browsers of these versions always send these headers; their absence flags the traffic as synthetic or generated by an outdated automation framework.
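A server-side detector can encode this rule in a few lines. The sketch below assumes headers arrive as a lower-cased key/value map; the version regex is purely illustrative:

```typescript
// Modern Chromium (roughly v89+) sends the low-entropy sec-ch-ua headers
// on every request, so a modern Chrome UA arriving without them is synthetic.
function missingClientHints(headers: Record<string, string>): boolean {
  const ua = headers["user-agent"] ?? "";
  const claimsModernChrome = /Chrome\/1\d\d/.test(ua); // matches Chrome 100+
  const hasHints = "sec-ch-ua" in headers && "sec-ch-ua-platform" in headers;
  return claimsModernChrome && !hasHints;
}
```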
❌ Failure to Geo-Spoof
The browser environment remained static regardless of the proxy's exit node location.
  • Timezone: Sessions in the US, Russia, and Germany returned UTC, while UK and Japan sessions returned Etc/Unknown.
  • Language: All sessions used en-US regardless of the target geography.
Proxy Geo: Japan (JP)
Browser Timezone: Etc/Unknown (Expected: Asia/Tokyo)
Browser Language: en-US (Expected: ja-JP)
❌ Weak Hardware & Graphics Signatures
The environment lacked convincing hardware diversity and relied on virtualized drivers.
  • Graphics: Most sessions exposed Google SwiftShader, a software-based renderer used in headless environments. Other sessions returned NA for GPU data.
  • Fonts: The font list was limited to a single unusual entry ("Univers CE 55 Medium"), missing standard Linux system fonts.
  • Peripherals: Every session reported 0 microphones, speakers, and webcams, which is highly atypical for a residential user.

Verdict: ❌ Poor

In our tests, ScrapingBee demonstrated significant fingerprinting weaknesses that pose a high risk of detection. The combination of critical automation leaks (CDP), impossible screen configurations, and the total lack of Client Hints suggests a tool that is not currently optimized for bypassing advanced anti-bot systems. While it may suffice for simpler targets, its static and identifiable browser profile is easily distinguished from organic human traffic.
✅ What it gets right
  • Consistent Linux OS reporting across headers and JS.
  • Valid header compression and structure.
❌ What holds it back
  • Critical exposure of CDP automation flags in all sessions.
  • Impossible screen resolution vs. viewport geometry.
  • Failure to align timezone and language with proxy geography.
  • Total absence of mandatory modern Client Hint headers.
  • Low entropy and lack of genuine hardware/GPU signatures.
Bottom line: During our tests, ScrapingBee exhibited a high volume of "bot-like" signatures. It is currently the least stealthy provider in our benchmark, making it a risky choice for scraping high-security domains where fingerprinting authenticity is paramount.

Lessons Learned: What This Benchmark Teaches Us

After analyzing the benchmark results across ten different scraping providers, several core insights emerged that challenge the marketing claims of the "Smart Proxy" industry.

1. "Premium" is a Label, Not a Guarantee

The most striking takeaway from this data is the complete lack of correlation between price and technical stealth. Some of the most expensive "Web Unlockers" on the market, tools specifically marketed for high-security targets, performed significantly worse than general-purpose scraping APIs.
  • Oxylabs and Bright Data, both industry titans with premium pricing, landed in the bottom half of the scoreboard.
  • Bright Data explicitly leaked a Brightbot User-Agent in multiple sessions, effectively announcing its presence to any basic firewall.
  • Zyte and Scrape.do, while more reasonably priced, outperformed the dedicated "unblockers" by maintaining much cleaner automation profiles.
If you are paying a premium for "unblocking" technology, you might actually be paying for brand prestige rather than superior fortification.

2. The Great "Franken-Fingerprint" Epidemic

Most providers are not actually "emulating" browsers; they are "patching" them, often poorly. This creates what we call "Franken-Fingerprints", mathematically impossible combinations of attributes that act as a massive red flag for modern anti-bot systems like Akamai or Cloudflare.
  • Impossible Geometry: ScrapingBee and Scrapingant reported viewports (content areas) that were physically larger than the screens they were supposedly contained in.
  • OS Mismatch: Decodo consistently claimed to be a Windows or macOS device in the headers while reporting Linux x86_64 via JavaScript.
  • Font/OS Contradictions: Scrapingdog presented a Linux profile but curiously included Windows-exclusive fonts like Calibri and Segoe UI.
Real users don't have impossible screen dimensions or mismatched OS layers. These "noob errors" make your bot stand out more than if you hadn't tried to spoof the fingerprint at all.

3. Direct Automation Flags Are Still Leaking

The most disappointing finding was how many "professional" tools failed to hide the most basic indicators of automation. We found that simply setting navigator.webdriver = false is no longer enough, yet many tools stop there.
  • CDP Leaks: ScrapingBee, Scrapingdog, and Scrapingant all leaked CDP automation: true flags, a definitive signal that the browser is being controlled via DevTools.
  • The "Headless" Giveaway: ScraperAPI’s JavaScript engine explicitly identified itself as HeadlessChrome, even when the HTTP headers claimed to be a standard browser.
  • Worker Inconsistency: Most tools failed to patch the Web Worker environment, allowing anti-bots to find "clean" automation signals by simply running a script in a background thread.
Don't assume your provider is "expertly fortified." In the current market, the majority of tools are still leaking direct, fatal automation signals that any modern anti-bot package will catch easily.
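As a concrete example of how shallow patching gets caught, consider navigator.webdriver itself. On a stock browser the property lives on Navigator.prototype behind a native getter; redefining it on the navigator instance creates an "own" property that detectors spot immediately. A minimal sketch of that check:

```typescript
// On an unmodified browser, navigator has no own "webdriver" property and
// the prototype's getter is native code; naive patches break both invariants.
const ownPatch =
  Object.getOwnPropertyDescriptor(navigator, "webdriver") !== undefined;

const protoDesc = Object.getOwnPropertyDescriptor(Navigator.prototype, "webdriver");
const nativeGetter =
  protoDesc?.get?.toString().includes("[native code]") ?? false;

if (ownPatch || !nativeGetter) {
  console.log("navigator.webdriver has been tampered with: likely a patched bot");
}
```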

Conclusion: A More Realistic Way to Think About Scraping Tools

This benchmark wasn’t about crowning a universal winner; it was about understanding the gap between expectation and reality. The reality is:
  • Some tools are genuinely good.
  • Some tools are workable with the right strategy.
  • Some tools have critical weaknesses you need to be aware of.
  • No tool is perfect.
  • And no price tag guarantees quality.
For developers, the best approach is to treat scraping tools like any other dependency:
  • Understand their strengths
  • Understand their blind spots
  • Choose based on your actual risk level
  • Mix providers when needed
  • Keep your own fallback strategies ready
Stealth scraping is no longer about “which provider is best”; it’s about knowing where each one fits into your system. Want to learn more about web scraping? Take a look at the links below!