We Benchmarked 9 “Smart Proxy” Scraping Tools Against Real Fingerprinting Tests, Only One Passed
Today, with anti-bot systems using ever more advanced request fingerprinting techniques to detect and block scrapers, a crucial skill every scraping pro needs to develop is browser fortification: the ability to harden their requests so that they don't leak any signs that the requests are coming from a scraper.
Developers can do this themselves or use fortified versions of Puppeteer, Playwright, or Selenium (which often still need additional fortification).
However, this can be a difficult and time consuming process if you don't have prior experience.
As a result, most proxy providers now offer some form of smart proxy solution that claim to manage this browser fortification for you.
So in this article, we decided to put these scraping pros to the test...
Are they really experts at browser fortification?
Or do they make noob errors that no scraping professional should make?
To find out, we will put them to the test, covering:
- TLDR Scoreboard
- Header and Browser Fingerprint Testing Methodology
- 15-Test Comparison Matrix
- Top Performer #1: Scrapfly
- #2: Zenrows
- #3: Scrape.do
- #4: Zyte API
- #5: ScraperAPI
- #6: ScrapingBee
- #7: Bright Data Unlocker
- #8: Decodo Site Unblocker
- #9: Scrapingdog
- Lessons Learned: What This Benchmark Teaches Us
- Conclusion: A More Realistic Way to Think About Scraping Tools
TLDR Scoreboard
Pretty much every proxy provider claims to be the "Best Proxy Provider", so we decided to put them to the test.
Each scraping tool is a variation of the same basic idea. Managed rotating proxies with user-agent and browser fingerprint optimization to bypass anti-bot detection.
Premium Unlockers
Some of these proxy products, like Oxylabs Web Unblocker, Bright Data Web Unlocker, and Decodo Site Unblocker, are dedicated "Unlockers" that specialize in bypassing anti-bot systems on the most difficult websites and price themselves accordingly.
Smart APIs
Whereas others like Scrape.Do, ScraperAPI and ScrapingBee are more generic Smart Proxy APIs that offer lower cost scraping solutions, but also allow users to activate more advanced anti-bot bypassing functionality on requests.
Our analysis revealed a significant performance gap among the tested proxy providers in generating realistic and resilient browser fingerprints.
- Top Performer: Scrapfly emerged as the definitive leader, delivering high-quality, diverse, and contextually accurate profiles.
- Okay Performers: A middle tier of providers including Zenrows, Scrape.do, and Zyte API showed promise in some areas like hardware realism but failed systematically in others, particularly geo-specific spoofing.
- Poor Performers: The remaining providers exhibited critical flaws, ranging from static, easily-detectable fingerprints and blatant automation flags to fundamental inconsistencies between browser layers.
Here are the overall results:
| Provider | Overall Score (0–100) | Pass/Warn/Fail (count) | Comments |
|---|---|---|---|
| ✅ Scrapfly | 88.96 | 12 / 2 / 0 | Leader by a wide margin. Excellent geo-aware data & hardware realism. |
| ⚠️ Zenrows | 60.39 | 8 / 2 / 4 | Strong fingerprint diversity but failed on all geo-specific tests (timezone/language). |
| ⚠️ Scrape.do | 59.09 | 7 / 3 / 4 | Good hardware/resolution diversity. Failed geo-spoofing and platform consistency. |
| ⚠️ Zyte API | 57.79 | 7 / 3 / 4 | Good hardware profiles but failed on location data and platform consistency. |
| ❌ ScraperAPI | 35.06 | 3 / 3 / 8 | Failed on entropy (static hash), geo data, and exposed HeadlessChrome in UA. |
| ❌ ScrapingBee | 23.37 | 3 / 0 / 11 | Failed due to static hash, 800x600 resolution, and CDP automation flags. |
| ❌ Bright Data Unlocker | 22.72 | 0 / 1 / 1 | Most tests N/A; browser failed to return JS-based device info. |
| ❌ Oxylabs Web Unblocker | 15.58 | 2 / 0 / 12 | Critical failures: static profile, platform mismatch, and exposed automation flags. |
| ❌ Decodo Site Unblocker | 15.58 | 2 / 0 / 12 | Critical failures: static profile, platform mismatch, and exposed automation flags. |
| ❌ Scrapingdog | 7.79 | 1 / 0 / 13 | Abysmal performance. Displayed massive contradictions between reported UA and JS properties. |
Header and Browser Fingerprint Testing Methodology
For this benchmark, we sent requests with each proxy provider's headless browser mode enabled to Device and Browser Info to assess the sophistication of their header and browser fingerprinting.
The key question we are asking is:
Is the proxy provider leaking any information that would increase the chances of an anti-bot system detecting and blocking the request?
To answer this, we focused on any leaks that could signal to the anti-bot system that the request is being made by an automated headless browser like Puppeteer, Playwright, or Selenium.
Here are the tests we conducted:
1. Fingerprint Entropy Across Sessions
Test whether the browser fingerprint shows natural variation across multiple sessions.
- Example: Identical JS fingerprint hashes, same WebGL/canvas values, or repeated hardware profiles across visits.
- Why it matters: Real users vary; deterministic fingerprints are a strong indicator of automation.
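To make the entropy test concrete, here is a minimal sketch (not the benchmark's actual harness) of how static fingerprints can be flagged. The profile keys are hypothetical stand-ins for whatever JS properties a collector gathers:

```python
import hashlib
import json

def fingerprint_hash(profile: dict) -> str:
    """Hash a collected fingerprint profile deterministically."""
    canonical = json.dumps(profile, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def entropy_ok(profiles: list) -> bool:
    """Flag providers whose sessions all share one static fingerprint hash."""
    hashes = {fingerprint_hash(p) for p in profiles}
    return len(hashes) > 1  # real users vary; a single repeated hash suggests automation

# Three sessions with identical properties -> static, fails the check
static = [{"cores": 32, "gpu": "SwiftShader"}] * 3
varied = [{"cores": 8, "gpu": "Apple M1"}, {"cores": 12, "gpu": "GTX 1650"}]
print(entropy_ok(static))   # False
print(entropy_ok(varied))   # True
```

In practice a provider would need many more sessions and properties, but the idea is the same: identical hashes across visits are the strongest single signal in this test.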
2. Header Realism
Check whether HTTP headers match the structure and formatting of real modern browsers.
- Example: Missing `Accept-Encoding: br, gzip`, malformed `Accept` headers, or impossible UA versions.
- Why it matters: Incorrect headers are one of the fastest and simplest ways anti-bot systems identify bots.
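A simple header-realism check might look like the following sketch. The required-header set and the brotli rule reflect current Chrome behavior; the exact policy is an assumption for illustration:

```python
REQUIRED_HEADERS = {"accept", "accept-encoding", "accept-language", "user-agent"}

def headers_realistic(headers: dict) -> bool:
    """Modern Chrome advertises brotli ('br') in Accept-Encoding; its absence,
    or a missing core header, is an easy tell for anti-bot systems."""
    lower = {k.lower(): v for k, v in headers.items()}
    if not REQUIRED_HEADERS <= lower.keys():
        return False
    return "br" in lower["accept-encoding"]

good = {
    "Accept": "text/html,*/*",
    "Accept-Encoding": "gzip, deflate, br, zstd",
    "Accept-Language": "en-US,en;q=0.9",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ... Chrome/142.0.0.0",
}
print(headers_realistic(good))                                 # True
print(headers_realistic({**good, "Accept-Encoding": "gzip"}))  # False
```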
3. Client Hints Coherence
Evaluate whether Client Hints (sec-ch-ua*) align with the User-Agent and operating system.
- Example: UA claims Windows but `sec-ch-ua-platform` reports "Linux", or the CH brand list is empty.
- Why it matters: Mismatched Client Hints are a highly reliable signal of an automated or spoofed browser.
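The coherence check itself is a straightforward comparison. This sketch (an assumption, not the article's tooling) maps the UA's OS claim onto the quoted value Chrome sends in `sec-ch-ua-platform`:

```python
def client_hints_coherent(user_agent: str, sec_ch_ua_platform: str) -> bool:
    """Check that the sec-ch-ua-platform Client Hint agrees with the UA's OS claim."""
    ua_os = None
    if "Windows" in user_agent:
        ua_os = "Windows"
    elif "Mac OS X" in user_agent or "Macintosh" in user_agent:
        ua_os = "macOS"
    elif "Linux" in user_agent:
        ua_os = "Linux"
    # Browsers ship the platform hint quoted, e.g. "Windows"
    return ua_os is not None and sec_ch_ua_platform.strip('"') == ua_os

ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ... Chrome/142.0.0.0"
print(client_hints_coherent(ua, '"Windows"'))  # True
print(client_hints_coherent(ua, '"Linux"'))    # False -> spoofed or automated browser
```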
4. TLS / JA3 Fingerprint Realism
Test whether the TLS fingerprint resembles a real Chrome/Firefox client rather than a script or backend library.
- Example: JA3 matching cURL/Python/Node signatures, missing ALPN protocols, or UA/TLS contradictions.
- Why it matters: Many anti-bot systems fingerprint TLS before any JS loads, so mismatched JA3 values trigger instant blocks.
5. Platform Consistency
Evaluate whether the OS in the User-Agent matches navigator.platform and other JS-exposed platform values.
- Example: UA says macOS but JavaScript reports `Linux x86_64`.
- Why it matters: Real browsers almost never contradict their platform; mismatches are a classic bot signal.
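A sketch of such a cross-layer check, using an illustrative allow-list (note that 64-bit Windows browsers legitimately report `Win32` for legacy reasons, which several providers in this benchmark exhibit):

```python
# (UA substring, plausible navigator.platform values) -- illustrative subset.
PLATFORM_MAP = {
    "Windows NT": {"Win32", "Win64"},       # Win64 UAs may still report Win32
    "Mac OS X": {"MacIntel"},
    "X11; Linux": {"Linux x86_64", "Linux i686"},
}

def platform_consistent(user_agent: str, js_platform: str) -> bool:
    """Verify the OS claimed in the User-Agent against navigator.platform."""
    for ua_token, allowed in PLATFORM_MAP.items():
        if ua_token in user_agent:
            return js_platform in allowed
    return False  # unrecognized OS claim

mac_ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ..."
print(platform_consistent(mac_ua, "MacIntel"))      # True
print(platform_consistent(mac_ua, "Linux x86_64"))  # False -> classic bot signal
```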
6. Device-Type Coherence
Test whether touch support, viewport size, and sensors align with the claimed device type (mobile vs. desktop).
- Example: A mobile UA with `maxTouchPoints=0`, or an iPhone UA showing a 1920×1080 desktop viewport.
- Why it matters: Device-type mismatches are one of the simplest heuristics anti-bot systems use to flag automation.
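A minimal version of this heuristic could look like the sketch below. The portrait-orientation rule is a simplifying assumption for illustration:

```python
def device_type_coherent(user_agent: str, max_touch_points: int,
                         screen_w: int, screen_h: int) -> bool:
    """Mobile UAs should report touch support and a portrait, phone-shaped screen."""
    is_mobile = "iPhone" in user_agent or ("Android" in user_agent and "Mobile" in user_agent)
    if not is_mobile:
        return True  # desktop claims left unconstrained here (touch laptops exist)
    return max_touch_points > 0 and screen_w < screen_h

iphone_ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) ..."
print(device_type_coherent(iphone_ua, 5, 393, 852))    # True  (real iPhone-like values)
print(device_type_coherent(iphone_ua, 0, 1920, 1080))  # False (desktop env behind a mobile UA)
```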
7. Hardware Realism
Check whether CPU cores, memory, and GPU renderer look like real consumer hardware.
- Example: Every session reporting 32 cores, 8GB RAM, and a SwiftShader GPU.
- Why it matters: Unrealistic hardware profiles strongly suggest virtualized or automated browser environments.
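A plausibility filter for hardware profiles might be sketched as follows. The numeric bounds are illustrative assumptions, not authoritative thresholds (though `navigator.deviceMemory` is in fact capped at 8 in real Chrome):

```python
def hardware_plausible(cores: int, device_memory_gb: float, gpu: str) -> bool:
    """Reject hardware profiles typical of virtualized/automated environments."""
    software_renderers = ("SwiftShader", "llvmpipe")
    if any(r in gpu for r in software_renderers):
        return False  # software rendering implies no real GPU, i.e. a VM or container
    # Consumer devices cluster in these ranges; deviceMemory is capped at 8 by spec
    return 2 <= cores <= 24 and device_memory_gb in (2, 4, 8)

print(hardware_plausible(8, 8, "ANGLE (Apple, Apple M1, ...)"))  # True
print(hardware_plausible(32, 8, "Google SwiftShader"))           # False
```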
8. Timezone vs IP Geolocation
Evaluate whether the browser's timezone matches the location implied by the proxy IP.
- Example: German IP reporting UTC or `America/New_York`.
- Why it matters: Timezone mismatches reveal poor geo-spoofing and are widely used in risk scoring.
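Conceptually the check is a lookup of the IP's country against acceptable IANA zones. A sketch with an illustrative (incomplete) mapping:

```python
# Expected IANA timezones per country (illustrative subset; countries that
# span multiple zones, like the US, get a set of acceptable values).
EXPECTED_TZ = {
    "DE": {"Europe/Berlin"},
    "GB": {"Europe/London"},
    "JP": {"Asia/Tokyo"},
    "US": {"America/New_York", "America/Chicago",
           "America/Denver", "America/Los_Angeles"},
}

def timezone_matches_geo(country: str, browser_tz: str) -> bool:
    """Does the JS-reported timezone fit the country implied by the proxy IP?"""
    return browser_tz in EXPECTED_TZ.get(country, set())

print(timezone_matches_geo("DE", "Europe/Berlin"))     # True
print(timezone_matches_geo("DE", "America/New_York"))  # False -> poor geo-spoofing
```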
9. Language/Locale vs IP Region
Check whether browser language settings align with the IP's expected locale.
- Example: All geos returning `en-US` regardless of country, or JS locale contradicting the `Accept-Language` header.
- Why it matters: Locale mismatch is a simple yet strong indicator that the request is automated or spoofed.
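The locale check validates two layers at once: the HTTP `Accept-Language` header and the JS-reported language. A sketch, with the per-country locale table as an illustrative assumption:

```python
# Primary locale expected for each test geography (illustrative assumption).
EXPECTED_LOCALE = {"US": "en-US", "GB": "en-GB", "DE": "de-DE", "JP": "ja-JP", "RU": "ru-RU"}

def locale_matches_geo(country: str, accept_language: str, js_language: str) -> bool:
    """Both the Accept-Language header and navigator.language should lead
    with the locale implied by the IP's country."""
    expected = EXPECTED_LOCALE.get(country)
    header_primary = accept_language.split(",")[0].strip()
    return expected is not None and header_primary == expected == js_language

print(locale_matches_geo("DE", "de-DE,de;q=0.9,en;q=0.8", "de-DE"))  # True
print(locale_matches_geo("DE", "en-US,en;q=0.9", "en-US"))           # False
```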
10. Resolution & Pixel Density Realism
Test whether screen resolution and device pixel ratio resemble real user devices.
- Example: Fixed 800×600 resolution, or repeated exotic sizes not seen on consumer hardware.
- Why it matters: Bots often run in virtual machines or containers with unnatural screen sizes.
11. Viewport & Geometry Coherence
Evaluate whether window dimensions and screen geometry form a logically possible combination.
- Example: Inner window width larger than the actual screen width.
- Why it matters: Impossible geometry is a giveaway that the environment is headless or virtualized.
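This test reduces to a pair of inequalities, sketched here for illustration:

```python
def geometry_possible(screen_w: int, screen_h: int,
                      inner_w: int, inner_h: int) -> bool:
    """A viewport can never exceed the physical screen it is drawn on."""
    return inner_w <= screen_w and inner_h <= screen_h

print(geometry_possible(1920, 1080, 1920, 947))  # True  (typical maximized window)
print(geometry_possible(800, 600, 1024, 768))    # False (impossible: viewport > screen)
```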
12. Fonts & Plugins Environment
Check whether the browser exposes realistic fonts and plugins for the claimed OS and device.
- Example: A single font across all sessions, or empty plugin lists on macOS.
- Why it matters: Normal devices have rich font/plugin environments; sparse lists are characteristic of automation.
13. Peripherals Presence
Test whether microphones, speakers, and webcams are exposed the way real devices normally do.
- Example: All sessions reporting 0 microphones, 0 speakers, and 0 webcams.
- Why it matters: Real devices, especially desktops and laptops, almost always expose some media peripherals.
14. Graphics Fingerprints (Canvas & WebGL)
Evaluate whether canvas and WebGL fingerprints are diverse and platform-appropriate.
- Example: Identical WebGL renderer hashes across sessions, or a SwiftShader GPU on a claimed macOS device.
- Why it matters: Graphics fingerprints are hard to spoof; unrealistic or repeated values reveal automation.
15. Automation Signals
Check whether the browser exposes direct automation flags or patched properties.
- Example: `navigator.webdriver=true`, visible "CDP automation" flags, or inconsistent worker properties.
- Why it matters: These are explicit, and often fatal, indicators that the environment is controlled by a bot framework.
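Once the JS properties are collected, scanning them for explicit flags is trivial. A sketch over a hypothetical fingerprint dict (the key names are assumptions, not a real collector's schema):

```python
def automation_flags(fp: dict) -> list:
    """Collect explicit automation indicators from a fingerprint profile."""
    flags = []
    if fp.get("webdriver"):
        flags.append("navigator.webdriver=true")
    if fp.get("cdp"):
        flags.append("CDP automation detected")
    if "HeadlessChrome" in fp.get("user_agent", ""):
        flags.append("HeadlessChrome in UA")
    return flags

clean = {"webdriver": False, "cdp": False, "user_agent": "... Chrome/142.0.0.0 ..."}
bot = {"webdriver": True, "cdp": True, "user_agent": "... HeadlessChrome/142.0.0.0 ..."}
print(automation_flags(clean))  # []
print(automation_flags(bot))    # all three flags
```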
These header and device fingerprint tests aren't conclusive on their own. But if a proxy provider consistently leaks numerous suspicious fingerprints, it is easy for an anti-bot system to detect and block those requests, even if the proxy IPs are rotating.
We sent requests to Device and Browser Info using each proxy provider's United States, German, Japanese, United Kingdom, and Russian IPs to see how they optimize their browsers for each geolocation and how the browser leaks differ by location.
15-Test Comparison Matrix
The following is a summary breakdown of the test results for each proxy provider.
Providers are ordered from best to worst based on overall performance.
| Test \ Provider | Scrapfly | Zenrows | Scrape.do | Zyte API | ScraperAPI | ScrapingBee | Bright Data Unlocker | Oxylabs Web Unblocker | Decodo Site Unblocker | Scrapingdog | Weight |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Fingerprint Entropy | ✅ Excellent diversity across all sessions. | ✅ Good fingerprint diversity across sessions. | ✅ Good hash and profile variation. | ✅ Good fingerprint diversity across sessions. | ❌ Identical hash across all sessions. | ❌ Identical hash across all sessions. | N/A | ❌ Low diversity, very static profiles. | ❌ Very low diversity, static profiles. | ❌ Mostly identical hashes across sessions. | 3 |
| 2. Header Realism | ✅ Headers were clean and realistic. | ✅ Headers were clean and realistic. | ⚠️ Extra space in UA string (Chrome/141.. ). | ✅ Headers looked well-formed and complete. | ✅ Headers looked well-formed and complete. | ✅ Headers looked well-formed and complete. | ❌ Malformed Accept headers (no commas). | ✅ Headers looked well-formed and complete. | ✅ Headers looked well-formed and complete. | ❌ FF UA with Chrome CH; missing br. | 3 |
| 3. Client Hints Coherence | ⚠️ Brands serialization artifact; otherwise coherent. | ⚠️ Brands serialization artifact; otherwise coherent. | ⚠️ Brands serialization artifact; otherwise coherent. | ⚠️ Brands serialization artifact; otherwise coherent. | ⚠️ Brands serialization artifact; otherwise coherent. | ❌ Blank brands and sec-ch-ua missing. | ⚠️ Present for Chrome, not Safari; lacked context. | ❌ Brands/UA versions mismatched; HeadlessChrome. | ❌ Frozen brands value and version mismatch. | ❌ Impossible: Firefox UA with Chrome CH. | 2.5 |
| 4. TLS / JA3 Realism | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 3 |
| 5. Platform Consistency | ⚠️ Win32 platform for some Win64 UAs. | ⚠️ Win32 platform for some Win64 UAs. | ❌ Win32 platform for Win64 UAs. | ❌ Win32 platform for Win64 UAs. | ✅ UA Linux matched navigator.platform. | ✅ UA Linux matched navigator.platform. | N/A | ❌ macOS/Win UA with Linux x86_64 platform. | ❌ macOS/Win UA with Linux x86_64 platform. | ❌ Win UA vs. Linux platform/CH. | 3 |
| 6. Device-Type Coherence | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | N/A | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | ✅ Desktop UA, maxTouchPoints=0. | 3 |
| 7. Hardware Realism | ✅ Realistic and varied CPU/GPU. | ✅ Excellent, diverse hardware profiles (M-series, Intel). | ✅ Realistic and varied CPU/GPU. | ✅ Realistic and varied CPU/GPU. | ⚠️ Frozen values (cores: 20, mem: 8). | ❌ Inconsistent worker hardware (4 vs 16 cores). | N/A | ❌ Always 32 cores, 8 mem. Unnatural. | ❌ Always 32 cores; unnatural. | ❌ Always 32 cores; unnatural. | 2.5 |
| 8. Timezone vs IP Geo | ✅ Correct timezone for IP geo. | ❌ America/New_York for all non-US geos. | ❌ US timezones for UK/DE/RU/JP geos. | ❌ America/New_York for all non-US geos. | ❌ UTC timezone for all geos. | ❌ UTC timezone for all geos. | N/A | ❌ UTC timezone for all geos. | ❌ UTC timezone for all geos. | ❌ America/Los_Angeles for all geos. | 3 |
| 9. Language/Locale vs IP | ✅ Language/locale matched IP geography. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | N/A | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | ❌ Always en-US for non-US geos. | 2 |
| 10. Resolution & DPR | ✅ Realistic and varied resolutions. | ✅ Realistic and varied resolutions. | ✅ Realistic and varied resolutions. | ✅ Realistic and varied resolutions. | ⚠️ Always 1280x720, a frozen value. | ❌ Always 800x600, a known bot signature. | N/A | ❌ Always 1920x1080; a frozen value. | ❌ Always 1920x1080; a frozen value. | ❌ Always 1920x1080; a frozen value. | 2 |
| 11. Viewport/Geometry | ✅ Plausible and varied geometries. | ✅ Plausible and varied geometries. | ✅ Plausible and varied geometries. | ✅ Plausible and consistent geometries. | ⚠️ Frozen values, inner/screen were identical. | ❌ Impossible: innerWidth > screenWidth. | N/A | ❌ Frozen values, inner/screen were identical. | ❌ Frozen values, inner/screen were identical. | ❌ Frozen values, inner/screen were identical. | 2 |
| 12. Fonts & Plugins | ✅ Realistic and varied font lists. | ✅ Realistic and varied font lists. | ❌ Single suspicious font across all sessions. | ⚠️ Same font list for Win/macOS sessions. | ❌ Single suspicious font across all sessions. | ❌ Single suspicious font across all sessions. | N/A | ❌ Single suspicious font or empty. | ❌ Single suspicious font or empty. | ❌ Identical minimal font list. | 2 |
| 13. Peripherals Presence | ✅ Believable mix of peripherals present. | ❌ Always 0/0/0 peripherals. | ✅ Good presence of mic/speaker/webcam. | ❌ Always 0/0/0 peripherals. | ❌ Always 0/0/0 peripherals. | ❌ Always 0/0/0 peripherals. | N/A | ❌ Always 0/0/0 peripherals. | ❌ Always 0/0/0 peripherals. | ❌ Always 0/0/0 peripherals. | 3 |
| 14. Graphics Fingerprints | ✅ Excellent GPU diversity (Apple/AMD/Intel/Nvidia). | ✅ Excellent GPU diversity (Apple/Intel). | ⚠️ Good GPU diversity, but empty hashes. | ⚠️ Good GPU diversity, but empty hashes. | ❌ Uses SwiftShader software renderer. | ❌ Uses SwiftShader software renderer. | N/A | ❌ Uses llvmpipe or SwiftShader renderer. | ❌ Uses llvmpipe software renderer. | ❌ Uses SwiftShader software renderer. | 3 |
| 15. Automation Signals | ✅ No automation flags; workers consistent. | ❌ Worker properties returned NA, indicating patching. | ✅ No clear automation signals. | ✅ No clear automation signals. | ❌ HeadlessChrome visible in JS User-Agent. | ❌ CDP automation: true; inconsistent workers. | N/A | ❌ CDP: true, Playwright: true, inconsistent workers. | ❌ CDP: true, Playwright: true, inconsistent workers. | ❌ Deep inconsistencies across layers (UA/JS). | 3 |

Top Performer #1: Scrapfly
Scrapfly is a Smart Proxy API offering a wide range of features, including JavaScript rendering, geotargeting, comprehensive anti-bot bypassing, and CAPTCHA solving. It provides functionality comparable to other proxy APIs but is distinguished by its advanced browser fingerprinting capabilities.
In our analysis, Scrapfly was the definitive leader, ranking #1 out of 9 providers with an exceptional score of 88.96 / 100. It passed nearly all fingerprinting tests, demonstrating a level of sophistication and realism that no other provider matched.
- ✅ Where Scrapfly performed well: Delivered highly diverse and unique fingerprints; correctly spoofed timezone and language to match IP geography; generated realistic hardware, GPU, and peripheral profiles; and showed no signs of automation.
- ❌ Where Scrapfly fell short: Exhibited minor platform inconsistencies in some Windows-based sessions.
All tests were conducted using Scrapfly's JS Rendering mode (render_js=true) to assess its full capabilities. Each JS-rendered request consumes 5 API credits.
| Plan | Price / month | API Credits | CPM | CPM (JS Rendering ×5) |
|---|---|---|---|---|
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Discovery | $30 | 200,000 | $150 | $750 |
| Pro | $100 | 1,000,000 | $100 | $500 |
| Startup | $250 | 2,500,000 | $100 | $500 |
| Enterprise | $500 | 5,500,000 | $91 | $455 |
The full pricing info can be viewed here.
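The CPM figures in the table above are simple arithmetic: plan price divided by the number of requests the credit allowance buys, scaled to one million. A quick sketch:

```python
def cpm(price_per_month: float, api_credits: int, credits_per_request: int = 1) -> float:
    """Cost per million requests: each request burns `credits_per_request`
    credits, so a plan yields api_credits / credits_per_request requests."""
    requests = api_credits / credits_per_request
    return price_per_month / requests * 1_000_000

print(round(cpm(100, 1_000_000)))     # 100 -> $100 CPM (Pro plan, simple requests)
print(round(cpm(100, 1_000_000, 5)))  # 500 -> $500 CPM (JS rendering at 5 credits/request)
```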
Headers and Device Fingerprints
Scrapfly generated the most realistic and resilient browser fingerprints of all providers tested. Its profiles were highly diverse, context-aware, and free of the common automation signals that plagued competitors.
Each session produced a unique fingerprint hash, backed by varied and believable hardware configurations, screen resolutions, and font lists. Headers were modern and well-formed, with no inconsistencies between the HTTP and JavaScript layers.
Crucially, Scrapfly excelled at geo-specific spoofing. Timezones and languages were correctly configured to match the IP address's country, a test failed by most other providers.
The fingerprints were also "rich," reporting the presence of peripherals like microphones and webcams, which adds to their authenticity. The only observed weakness was a minor platform inconsistency in some Windows sessions.
Good
In our tests, Scrapfly consistently produced high-quality fingerprints that closely resembled those of real users.
- ✅ Excellent Fingerprint Entropy: Every session returned a unique fingerprint hash, demonstrating strong randomization of device and browser properties. This avoids the static, easily blockable signatures seen in lower-tier providers.
- ✅ Correct Geo-Spoofing: The browser's timezone and `Accept-Language` header were correctly aligned with the proxy's IP address geography. This was a critical test that most competitors failed.

```
// Geo-Aware Properties
Geo: US -> Timezone: America/Los_Angeles, Language Header: en-US
Geo: UK -> Timezone: Europe/London, Language Header: en-GB
Geo: DE -> Timezone: Europe/Berlin, Language Header: de-DE
Geo: JP -> Timezone: Asia/Tokyo, Language Header: ja-JP
```

- ✅ Excellent Hardware & GPU Diversity: The service generated a wide variety of convincing hardware profiles, including different CPU cores, memory configurations, and real-world consumer GPUs. This diversity is essential for blending in with organic user traffic.

```
// Examples of GPU Renderers Observed
- ANGLE (Intel Intel(R) Arc(tm) Graphics (MTL)...)
- ANGLE (NVIDIA NVIDIA GeForce GTX 1650...)
- ANGLE (Apple ANGLE Metal Renderer: Apple M1...)
- ANGLE (NVIDIA NVIDIA GeForce GTX 1070 Ti...)
- ANGLE (NVIDIA NVIDIA GeForce RTX 4070 SUPER...)
```

- ✅ Realistic Peripherals: Sessions reported a non-zero count for microphones, speakers, and webcams (e.g., `1/1/1`). This adds a layer of realism, as many automated environments report `0/0/0`.
- ✅ No Automation Signals: The tests revealed no automation flags. Key indicators like `navigator.webdriver` and `CDP automation` were consistently `false`, and properties queried within a web worker matched the main browser environment perfectly.
Bad
Scrapfly's performance was nearly flawless, with only minor inconsistencies observed.
⚠️ Platform Inconsistency
For some Windows-based sessions, there was a minor mismatch between the 64-bit User-Agent string and the 32-bit platform reported by the JavaScript navigator object. While this can occur in real-world scenarios, it represents a small point of potential detection.
- Mismatch: A User-Agent advertising `Win64; x64` was paired with `navigator.platform: "Win32"`.
- Occurrence: This was observed in several Windows sessions across different geographies.

```
HTTP UA: Mozilla/5.0 (Windows NT 10.0.0; Win64; x64) ... Chrome/142.0.0.0
JS Platform: Win32
```
⚠️ Occasional Geo-Spoofing Inconsistency
While geo-spoofing was excellent overall, one session intended for Russia (RU) was configured with an incorrect timezone.
- Timezone Mismatch: An IP from Russia was paired with the `Europe/Lisbon` timezone instead of a more appropriate one like `Europe/Moscow`. The `Accept-Language` and locale data were, however, correctly set. This appears to be an outlier rather than a systemic issue.
Verdict: ✅ Good
Scrapfly provided the most advanced and resilient browser fingerprints of all providers tested.
It was the only provider to successfully pass nearly all 15 tests, demonstrating a clear lead in fingerprinting quality and anti-bot evasion.
✅ What it gets right
- High-entropy fingerprints with excellent diversity in hardware, GPU, and resolution.
- Accurate geo-spoofing of timezone and language to match the IP address location.
- Rich, realistic profiles that include peripherals (microphones/cameras).
- Clean, modern HTTP headers with no automation flags or cross-layer inconsistencies.
❌ What holds it back
- Minor and infrequent inconsistencies, such as the `Win64` User-Agent with a `Win32` platform string, were the only detectable flaws.
Bottom line: During our tests, Scrapfly consistently produced high-integrity browser profiles that are very difficult to distinguish from organic users. It excelled in areas where every other competitor failed, particularly in creating diverse and geo-aware fingerprints. It is a top-tier choice for scraping targets protected by advanced anti-bot systems.

#2: Zenrows
Zenrows offers a Smart Proxy API with features like JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It positions itself as a developer-friendly tool for overcoming modern detection systems.
In our analysis, Zenrows ranked #2 out of 9 providers, scoring 60.39 / 100. It performed well in several important fingerprinting categories but showed structural weaknesses in others—especially around geo-spoofing.
✅ Where Zenrows performed well
- Strong fingerprint entropy with highly varied hardware, GPU, and resolution profiles.
- Clean, modern HTTP headers with no obvious automation flags.
❌ Where Zenrows fell short
- Failed to spoof timezone and language for non-US geographies.
- Missing or incomplete fingerprint fields (no peripherals, empty canvas/WebGL, patched workers), reducing “naturalness.”
All tests were executed using JS Rendering (js_render=true) to measure Zenrows’ full fingerprinting capabilities. Each JS-rendered request consumes 5 API credits.
| Plan | Price / month | API Credits | CPM (Simple) | CPM (JS Rendering ×5) |
|---|---|---|---|---|
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Developer | $69.99 | 250,000 | ~$280 | ~$1,400 |
| Startup | $129.99 | 1,000,000 | ~$130 | ~$650 |
| Business | $299.99 | 3,000,000 | ~$100 | ~$500 |
The full pricing info can be viewed here.
Headers and Device Fingerprints
Zenrows delivered browser fingerprints that were strong in some areas but critically flawed in others. Its profiles showed excellent diversity in hardware, screen resolution, and overall fingerprint hash, avoiding the static signatures that plagued lower-tier providers.
However, the system completely failed to align browser properties with the IP address's geography. All sessions, regardless of their origin (e.g., Japan, Germany, UK), were configured with a US timezone and language, creating a major and easily detectable inconsistency.
The fingerprints also lacked certain "richness" signals, such as the presence of peripherals (microphones, speakers, webcams), and showed evidence of browser patching in web workers. While no overt automation flags like webdriver=true were found, these gaps point to an environment that is not fully natural.
Good
Zenrows excelled at generating varied and realistic browser profiles, a key factor in avoiding basic fingerprint-based blocking.
- ✅ Excellent Fingerprint Entropy: Each session produced a unique fingerprint hash, indicating strong randomization of browser and device properties.
- ✅ Realistic Hardware & GPU Profiles: The service generated a believable mix of hardware and GPU profiles, including different CPU core counts, Intel and Apple M-series GPUs, and varied screen resolutions. This diversity is crucial for appearing like a real user population.

```
// Example Session Profiles
Session 1 (US): 12 Cores, Intel UHD Graphics, 1920x1080 resolution
Session 2 (UK): 24 Cores, Intel UHD Graphics, 1920x1080 resolution
Session 3 (RU): 4 Cores, Intel Iris(R) Xe, 1680x1050 resolution
Session 4 (DE): 10 Cores, Apple M2 Pro, 1728x1117 resolution
```

- ✅ Clean Headers: The HTTP headers were well-formed and included modern compression encodings like `br` and `zstd`, matching what is expected from up-to-date browsers.
- ✅ No Overt Automation Flags: The tests did not find obvious automation signals like `"Webdriver": "true"` or `"CDP automation": "true"`.
Bad
The positive aspects of Zenrows' fingerprints were offset by several significant and easily detectable flaws.
❌ Failed Geo-Spoofing
The browser's timezone and language failed to match the IP address's location in every non-US test. This is a critical failure, as it creates a clear mismatch that many anti-bot systems check for.
- Timezone: All sessions originating from the UK, Germany, Russia, and Japan incorrectly reported the `America/New_York` timezone.
- Language: The `Accept-Language` header and `navigator.languages` property were consistently set to `en-US` for all geographies.

```
// Timezone & Language Mismatches
Geo: JP -> Timezone: America/New_York, Language: en-US (Expected: Asia/Tokyo, ja-JP)
Geo: UK -> Timezone: America/New_York, Language: en-US (Expected: Europe/London, en-GB)
Geo: RU -> Timezone: America/New_York, Language: en-US (Expected: Europe/Moscow, ru-RU)
Geo: DE -> Timezone: America/New_York, Language: en-US (Expected: Europe/Berlin, de-DE)
```
❌ Incomplete and Inconsistent Profiles
Several fingerprint attributes were static, empty, or indicated tampering, which can be used as signals for bot detection.
- No Peripherals: All sessions reported `0` microphones, `0` speakers, and `0` webcams. This pattern is common in automated environments and lacks the richness of real user devices.
- Patched Workers: For some sessions, key browser properties inside web workers returned `NA`. This suggests the environment was modified to hide or spoof values, which is itself a detectable behavior.
- Empty Hashes: The `Canvas` and `WebGL fingerprint` fields were consistently empty. While not a direct automation flag, the absence of these hashes where they are expected is an anomaly.
⚠️ Platform Inconsistency
For Windows sessions, there was a minor but notable mismatch between the User-Agent string and the JavaScript-reported platform.
- Platform Mismatch: Sessions using a 64-bit Windows User-Agent (`Windows NT 10.0; Win64; x64`) reported `navigator.platform` as a 32-bit value (`Win32`). While this can occur in real browsers due to legacy reasons, it represents a low-quality inconsistency.

```
HTTP UA: Mozilla/5.0 (Windows NT 10.0; Win64; x64) ... Chrome/138.0.0.0 Safari/537.36
JS Platform: "Win32"
```
Verdict: ⚠️ Mixed
Zenrows delivers strong fingerprint diversity but falls short in environmental realism. In our tests, this meant it could evade basic fingerprint-only checks, but it risked detection by systems that validate timezone, language, and device integrity.
✅ What it got right
- High-entropy fingerprints with varied hardware, GPU, and resolution profiles.
- No obvious automation flags (`webdriver`, CDP traces, etc.).
- Clean, modern HTTP headers.
❌ What holds it back
- Geo-specific data was completely wrong: every non-US test still reported a US timezone and `en-US` language.
- No peripherals (0 microphones/speakers/cameras) across all sessions, a strong signal of virtualized or automated environments.
- Incomplete worker properties and empty canvas/WebGL hashes reduced the "naturalness" of the environment.
- Minor platform inconsistencies (e.g., Win64 UA + `navigator.platform = "Win32"`).
Bottom line: Zenrows was strong where randomness and diversity mattered, but weaker where coherence and environmental richness were required. It is competitive for tasks that rely on basic or mid-level fingerprinting checks, but high-integrity targets that validate location and device realism will likely detect its inconsistencies.

#3: Scrape.do
Scrape.do is a Smart Proxy API offering a suite of features including JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It competes in a similar space to other proxy APIs that provide all-in-one scraping solutions.
In our analysis, Scrape.do ranked #3 out of 9 providers, with an overall score of 59.09 / 100. It demonstrated solid performance in generating varied device profiles but was undermined by systematic failures in geo-specific spoofing and other key consistency checks.
✅ Where Scrape.do performed well
- Good fingerprint entropy with varied hardware, GPU, and resolution profiles across sessions.
- No overt automation flags were found in the browser environment.
- Reported the presence of peripherals, avoiding a common bot-like signature.
❌ Where Scrape.do fell short
- Completely failed to spoof the browser timezone and language to match the IP's geography.
- Showed a critical platform mismatch between the User-Agent and JavaScript environment.
- Used a single, static font across all sessions, creating a suspicious pattern.
All tests were performed using Scrape.do's JavaScript rendering mode (render=true). Each JS-rendered request consumes 5 API credits.
| Plan | Price / month | API Credits | CPM | CPM (JS Rendering ×5) |
|---|---|---|---|---|
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Hobby | $29.00 | 250,000 | ~$116 | ~$580 |
| Pro | $99.00 | 1,250,000 | ~$79 | ~$395 |
| Business | $249.00 | 3,500,000 | ~$71 | ~$355 |
The full pricing info can be viewed here.
Headers and Device Fingerprints
Scrape.do's fingerprints presented a mixed picture. The service successfully generated varied device profiles, showing good diversity in hardware, GPU models, and screen resolutions. This variation in fingerprint hashes helps avoid simple, static detection signatures. No obvious automation flags like webdriver=true were exposed in the JavaScript environment.
However, these strengths were counteracted by significant inconsistencies. The browser's timezone and language failed to align with the proxy's IP geography in every single non-US test. All sessions also presented a platform mismatch, where a 64-bit Windows User-Agent was paired with a 32-bit JavaScript platform.
Adding to these issues, Scrape.do used a single, suspicious font for every request, creating a static data point that could be used to identify its traffic. Though the core headers were well-formed, these cross-layer contradictions and static elements increased the overall detection risk.
Good
In our tests, Scrape.do was effective at generating diverse and seemingly non-automated profiles in several key areas.
- ✅ Good Fingerprint Entropy: Each session produced a different fingerprint hash, indicating that core device and browser properties were being randomized.
- ✅ Realistic Hardware and GPU Diversity: The service generated a wide variety of convincing hardware profiles. CPU cores ranged from `2` to `32`, and the GPU renderers included a mix of realistic models.

```
// Example GPU Renderers
Session 1 (DE): NVIDIA GeForce GTX 1080
Session 2 (US): NVIDIA GeForce GTX 1050 Ti
Session 3 (UK): NVIDIA Quadro RTX 3000
Session 4 (RU): NVIDIA GeForce RTX 2060
Session 5 (JP): Intel(R) UHD Graphics 630
```

- ✅ Varied Screen Resolutions: Sessions reported a good mix of common screen resolutions, such as `1440x900`, `1280x720`, `1366x768`, and `3840x2160`, helping to mimic a diverse user base.
- ✅ Peripherals Present: Unlike several competitors, Scrape.do's browsers consistently reported the presence of peripherals (`1` microphone, `1` speaker, and `1` webcam). This avoids the common bot signature of having `0` for all three.
- ✅ No Clear Automation Flags: No clear automation flags like `webdriver` or `CDP automation` were set to `true`.
Bad
Despite its strengths in diversity, Scrape.do's fingerprints were marked by several critical and systematic failures that could easily lead to detection.
❌ Failed Geo-Spoofing
The browser's timezone and language did not match the IP address's location in every non-US test. This is a fundamental inconsistency checked by many sophisticated anti-bot systems.
- Timezone: Sessions from Europe and Asia were assigned incorrect timezones from other parts of the world.
- Language: The `Accept-Language` header and `navigator.languages` property were both consistently set to `en-US` for all geographies, regardless of the IP location.

```
// Timezone & Language Mismatches
Geo: DE -> Timezone: Asia/Jerusalem, Language: en-US (Expected: Europe/Berlin, de-DE)
Geo: UK -> Timezone: America/Chicago, Language: en-US (Expected: Europe/London, en-GB)
Geo: RU -> Timezone: Asia/Jerusalem, Language: en-US (Expected: Europe/Moscow, ru-RU)
Geo: JP -> Timezone: America/New_York, Language: en-US (Expected: Asia/Tokyo, ja-JP)
```
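The kind of cross-check these mismatches would trip is simple to implement server-side. The sketch below is illustrative: the country lookup table is a tiny hand-written sample, not a real geo database, and no anti-bot vendor's actual rules are implied.

```javascript
// Sketch of a geo-consistency check an anti-bot system might run: compare
// the browser-reported timezone/language against what the exit IP's
// country implies. The lookup table is an illustrative sample only.
const expectedByCountry = {
  DE: { timezone: "Europe/Berlin", language: "de-DE" },
  GB: { timezone: "Europe/London", language: "en-GB" },
  JP: { timezone: "Asia/Tokyo",   language: "ja-JP" },
};

// In a real page, the reported values would come from the client, e.g.
// Intl.DateTimeFormat().resolvedOptions().timeZone and navigator.language.
function geoConsistent(ipCountry, reportedTimezone, reportedLanguage) {
  const expected = expectedByCountry[ipCountry];
  if (!expected) return true; // unknown country: no basis to flag
  return expected.timezone === reportedTimezone &&
         expected.language === reportedLanguage;
}

// Scrape.do's German session from the log above fails this check:
console.log(geoConsistent("DE", "Asia/Jerusalem", "en-US")); // false
```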
❌ Platform Inconsistency
Every session reported a mismatched platform between the HTTP User-Agent and the JavaScript navigator object. This is a classic indicator of a spoofed or improperly configured environment.
- Platform Mismatch: The User-Agent header identified the OS as 64-bit Windows (`Win64; x64`), but the `navigator.platform` property reported `Win32`. While this can occur organically in some cases, its consistent presence across all sessions is a strong anomaly.

```
// HTTP User-Agent indicated a 64-bit OS
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ...
// JavaScript environment reported a 32-bit platform
"Platform (navigator)": "Win32"
```
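A detector for this particular contradiction can be sketched in a few lines. This covers only the Windows case discussed here and is illustrative, not an exhaustive cross-layer consistency check.

```javascript
// Sketch: flag sessions whose HTTP User-Agent architecture contradicts
// the JavaScript navigator.platform value (Windows case only).
function platformMismatch(userAgent, navigatorPlatform) {
  const claims64BitWindows = userAgent.includes("Win64; x64");
  const reports32Bit = navigatorPlatform === "Win32";
  // Note: real 64-bit Chrome on Windows can also report "Win32", so this
  // pair alone is weak evidence; its presence in *every* session is what
  // made it an anomaly in the tests above.
  return claims64BitWindows && reports32Bit;
}

const ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36";
console.log(platformMismatch(ua, "Win32")); // true
```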
❌ Static and Suspicious Font Profile
All sessions, regardless of the reported OS, hardware, or geography, returned an identical and unusual list of fonts. Real user devices report dozens or hundreds of fonts.
- Single Font: The font list contained only one entry: `"Univers CE 55 Medium"`. This static and minimal font profile is a strong signal of a templated browser environment.
- Empty Hashes: In all tests, the `Canvas` and `WebGL fingerprint` fields were empty. The absence of these rich fingerprinting values further reduces the profile's "naturalness."
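A one-entry font list is easy to flag with a simple heuristic. The threshold below is an illustrative assumption, not a known anti-bot constant; real desktop browsers typically expose dozens of detectable fonts.

```javascript
// Sketch: a "font entropy" heuristic. A one-entry font list like the one
// observed above is an outlier versus real desktop browsers.
// The threshold of 10 is an illustrative assumption.
function suspiciousFontProfile(detectedFonts) {
  return detectedFonts.length < 10;
}

console.log(suspiciousFontProfile(["Univers CE 55 Medium"])); // true
console.log(suspiciousFontProfile([
  "Arial", "Calibri", "Cambria", "Consolas", "Courier New", "Georgia",
  "Segoe UI", "Tahoma", "Times New Roman", "Trebuchet MS", "Verdana",
])); // false
```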
Verdict: ⚠️ Mixed
Scrape.do provided good device diversity, but critical inconsistencies in geo-location and platform data make it detectable. The service is capable of defeating basic fingerprinting checks but is likely to be flagged by more advanced anti-bot systems.
✅ What it gets right
- Good randomization of hardware, GPU, and screen resolution profiles.
- No obvious, high-severity automation flags were exposed.
- Reported presence of peripherals, adding a layer of realism.
❌ What holds it back
- Systematic geo-spoofing failure: Browser timezone and language did not match the IP address location in any non-US test.
- Critical platform mismatch: A 64-bit Windows User-Agent was consistently paired with a `Win32` JavaScript platform.
- Unnatural font profile: All sessions reported a single, static font, a strong indicator of an automated template.
Bottom line: Scrape.do's performance was mixed. It showed promise with its varied device profiles but was ultimately let down by major, easily detectable inconsistencies. It may be suitable for simpler targets but would struggle against sites that validate location data or perform deep environmental consistency checks.

#4: Zyte API
Zyte API is a Smart Proxy API offering a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It is aimed at developers looking to manage complex web scraping challenges.
In our analysis, Zyte API ranked #4 out of 9 providers, with an overall score of 57.79 / 100. It demonstrated strong performance in generating diverse and realistic hardware profiles, but was consistently flawed in its geo-specific browser data, which created obvious detection vectors.
✅ Where Zyte API performed well
- Excellent diversity in fingerprint hashes and hardware profiles (CPU/GPU).
- Clean, well-formed HTTP headers with no overt automation signals.
❌ Where Zyte API fell short
- Failed to align browser timezone and language with the IP address's geography.
- Showed platform inconsistencies (`Win64` User-Agent with `Win32` platform).
- Reported static or incomplete data for peripherals and fonts.
All tests were conducted using Zyte API's JS Rendering mode. Zyte uses a tiered pricing system where costs vary by the target website's difficulty. Rendered requests are significantly more expensive than standard requests. The full pricing info can be viewed here.
Unrendered HTTP Requests (Price per 1,000 successful requests)
| Website Tier | PAYG | $100 | $200 | $350 | $500* |
|---|---|---|---|---|---|
| 1 | $0.13 | $0.10 | $0.08 | $0.07 | $0.06 |
| 2 | $0.23 | $0.17 | $0.14 | $0.12 | $0.11 |
| 3 | $0.43 | $0.32 | $0.26 | $0.22 | $0.21 |
| 4 | $0.70 | $0.52 | $0.42 | $0.36 | $0.33 |
| 5 | $1.27 | $0.95 | $0.76 | $0.65 | $0.60 |
Rendered HTTP Requests (Price per 1,000 successful requests)
| Website Tier | PAYG | $100 | $200 | $350 | $500* |
|---|---|---|---|---|---|
| 1 | $1.00 | $0.75 | $0.60 | $0.52 | $0.47 |
| 2 | $2.00 | $1.50 | $1.20 | $1.03 | $0.95 |
| 3 | $4.00 | $3.00 | $2.40 | $2.06 | $1.89 |
| 4 | $7.99 | $5.99 | $4.79 | $4.12 | $3.79 |
| 5 | $15.98 | $11.98 | $9.58 | $8.25 | $7.58 |
Headers and Device Fingerprints
Zyte API generated browser fingerprints with notable strengths and equally notable weaknesses. The sessions demonstrated good randomization, with unique fingerprint hashes and a diverse set of hardware profiles that avoided static signatures. The HTTP headers were complete and realistic.
However, these positive aspects were undermined by systematic failures. For all non-US tests, the browser's timezone and language were incorrectly set to US values, creating a clear and easily detectable mismatch with the IP address.
Furthermore, the fingerprints lacked richness, reporting zero peripherals (microphones, speakers, webcams) and using the same font list for both Windows and macOS sessions. While no direct automation flags were exposed, these inconsistencies weaken the profiles' overall credibility.
Good
The browser profiles generated by Zyte API were strong in areas related to hardware diversity and session entropy.
- ✅ Good Fingerprint Entropy: Each session produced a unique fingerprint hash, indicating effective randomization of browser and device properties to avoid simple blocking.
- ✅ Excellent Hardware & GPU Diversity: The service generated a realistic and varied mix of GPU renderers, including Apple, NVIDIA, and AMD hardware. This level of diversity closely mimics a real user population.

```
// Example GPU Renderer Profiles
Session 1 (JP): ANGLE (Apple ANGLE Metal Renderer: Apple M2 Max Unspecified Version)
Session 2 (DE): ANGLE (Apple ANGLE Metal Renderer: Apple M3 Pro Unspecified Version)
Session 3 (RU): ANGLE (NVIDIA NVIDIA GeForce GTX 1650 ... Direct3D11 vs_5_0 ps_5_0 D3D11)
Session 4 (US): ANGLE (AMD AMD Radeon(TM) Graphics ... Direct3D11 vs_5_0 ps_5_0 D3D11)
```

- ✅ Clean Headers & No Automation Flags: The HTTP headers were well-formed and included modern encodings like `br` and `zstd`. No obvious automation flags such as `"Webdriver": "true"` or `"CDP automation": "true"` were found in our tests.
Bad
Zyte API's fingerprints contained several critical and systematic flaws that increase the risk of detection.
❌ Failed Geo-Spoofing
The browser's timezone and language did not match the IP address's location in every non-US test. This is a primary check for many anti-bot systems.
- Timezone: All sessions originating from the UK, Germany, Russia, and Japan incorrectly reported the `America/New_York` timezone.
- Language: The `Accept-Language` header and `navigator.languages` property were consistently set to `en-US` across all tested geographies.

```
// Timezone & Language Mismatches
Geo: JP -> Timezone: America/New_York, Language: en-US (Expected: Asia/Tokyo, ja-JP)
Geo: UK -> Timezone: America/New_York, Language: en-US (Expected: Europe/London, en-GB)
Geo: RU -> Timezone: America/New_York, Language: en-US (Expected: Europe/Moscow, ru-RU)
Geo: DE -> Timezone: America/New_York, Language: en-US (Expected: Europe/Berlin, de-DE)
```
❌ Platform Inconsistency
For Windows-based sessions, the User-Agent string and the JavaScript environment reported conflicting CPU architectures.
- Platform Mismatch: Sessions with a 64-bit Windows User-Agent (`Win64; x64`) also reported `navigator.platform` as the 32-bit `Win32`. While this can occur organically, it is often a marker of a low-quality or inconsistent fingerprint.

```
HTTP UA: Mozilla/5.0 (Windows NT 10.0; Win64; x64) ... Chrome/140.0.0.0 Safari/537.36
JS Platform: "Win32"
```
⚠️ Incomplete or Static Profiles
Several device-level attributes were static or incomplete, creating unnatural patterns.
- No Peripherals: All sessions reported `0` microphones, `0` speakers, and `0` webcams. This is a common pattern in headless or virtualized environments and differs from typical user devices.
- Static Font Lists: The same set of fonts was reported for both Windows and macOS sessions. Real devices have distinct default font sets for each operating system, making this a detectable anomaly.
- Empty Hashes: The `Canvas` and `WebGL fingerprint` fields were empty across all sessions. While not a direct automation flag, the consistent absence of these values is an abnormal pattern.
Verdict: ⚠️ Mixed
Zyte API delivered fingerprints with strong hardware diversity but was critically flawed in geo-specific realism and device completeness. This makes it a capable provider for defeating some fingerprinting checks but vulnerable to more advanced validation.
✅ What it gets right
- High entropy, with varied fingerprint hashes across sessions.
- Excellent diversity in hardware profiles, especially GPUs (Apple, NVIDIA, AMD).
- No overt automation flags like `webdriver` were found.
❌ What holds it back
- Complete failure on geo-spoofing: All non-US browsers reported a US timezone and language.
- Platform inconsistency: 64-bit Windows User-Agents were paired with a `Win32` `navigator.platform`.
- No peripherals were reported in any session, a common sign of automation.
- Static font list was used for both Windows and macOS sessions, an unrealistic pattern.
Bottom line: During our tests, Zyte API proved effective at generating varied hardware profiles, a key element of modern fingerprinting. However, its systematic failures in location-based and device-level data provide clear signals for detection, making it less reliable against sophisticated anti-bot systems.

#5: ScraperAPI
ScraperAPI is a Smart Proxy API with a diverse range of features including JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It competes with other API-based proxy solutions that manage browser and header generation for the user.
In our analysis, ScraperAPI ranked #5 out of 9 providers, scoring a low 35.06 / 100. While some basic layer consistencies were maintained, the service suffered from several critical flaws, including the use of a completely static fingerprint and the exposure of HeadlessChrome in the browser environment.
✅ Where ScraperAPI performed well
- HTTP headers were generally well-formed and modern.
- Platform values (e.g., UA OS vs. `navigator.platform`) were consistent.
❌ Where ScraperAPI fell short
- Used a single, static fingerprint hash for all sessions across all geographies.
- Failed to align timezone and language with the IP address location.
- Exposed `HeadlessChrome` in the JavaScript User-Agent, a clear automation flag.
- Relied on a software-based graphics renderer (`SwiftShader`) instead of mimicking real hardware.
All tests were executed using ScraperAPI's JS Rendering mode (render=true) to assess its browser fingerprinting capabilities. Each JS-rendered request consumes 10 API credits.
| Plan | Price / month | API Credits | CPM (Simple) | CPM (JS Rendering ×10) |
|---|---|---|---|---|
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Hobby | $49 | 100,000 | ~$490 | ~$4,900 |
| Startup | $149 | 1,000,000 | ~$149 | ~$1,490 |
| Business | $299 | 3,000,000 | ~$100 | ~$1,000 |
The full pricing info can be viewed here.
Headers and Device Fingerprints
ScraperAPI's browser fingerprints showed signs of a low-quality, static environment. During testing, every session, regardless of geography, produced the exact same fingerprint hash. This indicates a complete lack of diversity in the underlying browser profiles.
The fingerprint attributes were frozen across all tests. This included hardware metrics like CPU cores and memory, screen resolution, and even the list of installed fonts. Furthermore, the environment failed on all geo-spoofing tests, consistently reporting a UTC timezone and en-US language.
Most significantly, the browser environment reported itself as HeadlessChrome in the JavaScript layer, creating a direct automation signal and a mismatch with the HTTP User-Agent header. This, combined with the use of a software graphics renderer (SwiftShader), made the environment easily identifiable as automated.
Good
Despite major flaws, ScraperAPI's profiles were internally consistent in a few basic areas.
- ✅ Platform Consistency: The User-Agent string claiming to be `Linux` was correctly matched by the `navigator.platform` property (`Linux x86_64`), avoiding a common cross-layer contradiction.
- ✅ Realistic Headers: The HTTP headers appeared well-formed and included modern encodings like `br` and `zstd`, which are expected from up-to-date Chrome browsers.
- ✅ Device-Type Coherence: All sessions correctly reported desktop characteristics, such as `maxTouchPoints: 0`, which aligned with the desktop User-Agent.
Bad
ScraperAPI's fingerprints were deficient in multiple critical areas, making them easy to detect.
❌ Static Fingerprint Profile
All sessions, regardless of geography or time, produced the exact same fingerprint hash (`41dcfb0219ca...`). This complete lack of diversity is a strong indicator of an unsophisticated bot setup and allows for simple blocking.
- Hardware: All sessions reported frozen values of `Hardware concurrency: 20` and `Device memory: 8`.
- Screen Resolution: All screen and viewport metrics were fixed at `1280x720` pixels.
- Fonts: A single, suspicious font was reported in all sessions: `"Univers CE 55 Medium"`.
- Peripherals: All sessions reported `0` microphones, speakers, and webcams, a common bot signature.
❌ Exposed Headless Automation
The browser environment explicitly identified itself as automated, both through the User-Agent string in the JavaScript layer and the use of a software-based graphics renderer.
- User-Agent Mismatch: The HTTP User-Agent claimed to be standard Chrome, while the JavaScript `navigator.userAgent` property exposed `HeadlessChrome`.
- Software Renderer: The GPU was consistently reported as `SwiftShader`, a software renderer used in virtualized environments, rather than a real hardware GPU from Nvidia, AMD, or Intel.
```
HTTP UA: Mozilla/5.0 (X11; Linux x86_64) ... Chrome/142.0.0.0 Safari/537.36
JS UA:   Mozilla/5.0 (X11; Linux x86_64) ... HeadlessChrome/142.0.0.0 Safari/537.36
```

→ This mismatch reveals headless automation between the network and browser layers.
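Both of these signals are trivial to test for, which is what makes them so damaging. A minimal sketch of the two checks (the function name and inputs are illustrative):

```javascript
// Sketch: two cheap, high-confidence automation checks. The first looks
// for the "HeadlessChrome" token in the JS-layer User-Agent; the second
// compares the network-layer and JS-layer User-Agents for agreement.
function headlessExposed(httpUserAgent, jsUserAgent) {
  if (jsUserAgent.includes("HeadlessChrome")) return true; // direct flag
  return httpUserAgent !== jsUserAgent;                    // cross-layer mismatch
}

const httpUa = "Mozilla/5.0 (X11; Linux x86_64) Chrome/142.0.0.0 Safari/537.36";
const jsUa   = "Mozilla/5.0 (X11; Linux x86_64) HeadlessChrome/142.0.0.0 Safari/537.36";
console.log(headlessExposed(httpUa, jsUa)); // true
```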
❌ Failed Geo-Spoofing
The browser's timezone and language failed to match the IP address's location in every non-US test. This is a primary check for many anti-bot systems.
- Timezone: All sessions reported the browser timezone as `UTC`, regardless of whether the exit IP was in Germany, Japan, Russia, or the US.
- Language: The `Accept-Language` header and `navigator.languages` property were consistently set to `en-US` for all geographies.

```
// Timezone & Language Mismatches
Geo: JP -> Timezone: UTC, Language: en-US (Expected: Asia/Tokyo, ja-JP)
Geo: UK -> Timezone: UTC, Language: en-US (Expected: Europe/London, en-GB)
Geo: RU -> Timezone: UTC, Language: en-US (Expected: Europe/Moscow, ru-RU)
Geo: DE -> Timezone: UTC, Language: en-US (Expected: Europe/Berlin, de-DE)
```
⚠️ Incomplete Graphics Fingerprints
The graphics stack was not only software-based but also incomplete, missing key values that are typically present in real browsers.
- Empty Hashes: Both the `Canvas` and `WebGL fingerprint` fields were empty in all test runs. The absence of these values is an anomaly that can be used for detection.
Verdict: ❌ Poor
ScraperAPI's browser fingerprints were static, inconsistent, and transparently automated. In our tests, the service failed on nearly all major fingerprinting criteria, from entropy and geo-consistency to hiding automation flags.
✅ What it gets right
- Basic consistency between the User-Agent's OS and `navigator.platform`.
- Well-formed HTTP headers.
❌ What holds it back
- Static fingerprint: A single fingerprint hash was used for every request, making all traffic trivial to identify and block.
- Exposed automation: The JavaScript User-Agent explicitly contained `HeadlessChrome`.
- Failed geo-spoofing: Timezone and language did not match the exit IP's geography.
- Frozen attributes: Hardware, screen resolution, fonts, and peripherals were identical across all sessions.
- Software rendering: Use of `SwiftShader` is a strong signal of an automated environment.
Bottom line: The fingerprints provided by ScraperAPI during our tests were of low quality and carried a high risk of detection. The combination of a static profile and overt automation signals would not be resilient against modern anti-bot systems.

#6: ScrapingBee
ScrapingBee provides a Smart Proxy API with features including JavaScript rendering, geotargeting, anti-bot bypassing, and CAPTCHA solving. It offers a product comparable to other API-based proxy solutions.
In our analysis, ScrapingBee ranked #6 out of 9 providers, scoring 23.37 / 100. Its performance was poor, failing 11 of the 15 fingerprinting tests. While its base headers appeared modern, the underlying browser environment was static, inconsistent, and clearly showed signs of automation.
✅ Where ScrapingBee performed well
- HTTP headers were well-formed and included modern compression encodings.
- Platform and device type were internally consistent (Linux desktop).
❌ Where ScrapingBee fell short
- Used a static fingerprint hash across all sessions.
- Exposed a direct automation flag (`CDP automation: true`).
- Failed on all geo-specific tests (UTC timezone, `en-US` language).
- Used a static `800x600` resolution, a known bot signature.
- Relied on software-based rendering (`SwiftShader`).
Testing was conducted using ScrapingBee's JavaScript Rendering feature (render_js=true), where each JS-rendered request consumes 5 API credits.
| Plan | Price / month | API Credits | CPM | CPM (JS Rendering ×5) |
|---|---|---|---|---|
| Free Trial | $0.00 | 1,000 | ~$0 | ~$0 |
| Freelance | $49 | 150,000 | ~$327 | ~$1,635 |
| Startup | $99 | 1,000,000 | ~$99 | ~$495 |
| Business | $249 | 3,000,000 | ~$83 | ~$415 |
| Business+ | $599 | 8,000,000 | ~$75 | ~$375 |
The full pricing info can be viewed here.
Headers and Device Fingerprints
During testing, ScrapingBee's browser fingerprints exhibited critical flaws characteristic of a low-quality automation environment. Every session produced an identical fingerprint hash, making the traffic trivial to identify as originating from a single source.
The browser profiles were riddled with inconsistencies and classic bot signatures. These included a static 800x600 screen resolution, failed geo-spoofing that defaulted to UTC timezone and en-US language for all requests, and impossible viewport geometry where the inner window was larger than the entire screen.
Most importantly, the environment directly exposed automation flags. The browser explicitly reported CDP automation: true and used a SwiftShader software renderer, both of which are strong indicators of a headless browser. Hardware values were also inconsistent between the main browser thread and its web workers, further undermining the profile's authenticity.
Good
Despite its significant failures, ScrapingBee passed a few fundamental consistency checks.
- ✅ Header Realism: The HTTP `Accept`, `Accept-Encoding`, and `Accept-Language` headers were well-formed and included modern values like `br` and `zstd` for compression.
- ✅ Platform Consistency: The User-Agent string specified a Linux OS (`X11; Linux x86_64`), which consistently matched the `navigator.platform` value reported by JavaScript.
- ✅ Device-Type Coherence: All sessions correctly reported desktop user agents with `maxTouchPoints=0`, which is consistent for a non-touch device.
Bad
ScrapingBee's fingerprints failed in numerous critical areas, revealing an easily detectable automated environment.
❌ Static Fingerprints
All sessions, regardless of geography or time, produced the exact same fingerprint hash. This complete lack of diversity is a major red flag for any anti-bot system.
- Fingerprint Hash: The fingerprint hash was frozen at `c23c835e...` across every tested session.
- Fonts & Plugins: All sessions reported a single, suspicious font (`"Univers CE 55 Medium"`) and an identical list of default PDF viewer plugins.
- Peripherals: All sessions reported `0` microphones, `0` speakers, and `0` webcams, a common trait of virtualized environments.
- Resolution: The screen resolution was consistently `800x600` pixels, a dated and highly suspicious value commonly associated with automated browsers.
❌ Exposed Automation Signals
The browser environment contained clear, unambiguous flags and artifacts indicating headless automation.
- CDP Automation Flag: The JavaScript property `CDP automation` was set to `true`, directly signaling control via the Chrome DevTools Protocol.
- Software Rendering: The WebGL renderer was identified as `SwiftShader`, a software-based renderer used in server environments that lack a physical GPU. Real user devices almost always report hardware-based renderers (e.g., Intel, NVIDIA, AMD, Apple).
- Inconsistent Hardware: The main browser thread reported `Hardware concurrency: 4`, while its web worker reported `Hardware concurrency (in web worker): 16`. This discrepancy is unnatural and points to a misconfigured or patched environment.

```
CDP automation: true
GPU renderer: ANGLE (Google Vulkan 1.3.0 (SwiftShader...))
Hardware concurrency: 4
Worker hardware concurrency: 16
```
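All three of these signals can be collected into a simple classifier. The sketch below is illustrative only: the field names on the `profile` object are assumptions chosen to mirror the log above, and the absence of weighting is a simplification.

```javascript
// Sketch: collect the three automation signals discussed above into a
// single check. Field names on `profile` are illustrative assumptions.
function automationSignals(profile) {
  const signals = [];
  if (profile.cdpAutomation) signals.push("cdp-flag");
  if (/swiftshader/i.test(profile.gpuRenderer)) signals.push("software-renderer");
  if (profile.hardwareConcurrency !== profile.workerHardwareConcurrency) {
    signals.push("worker-mismatch");
  }
  return signals;
}

// Values observed in the log above: all three signals fire.
const observed = {
  cdpAutomation: true,
  gpuRenderer: "ANGLE (Google Vulkan 1.3.0 (SwiftShader...))",
  hardwareConcurrency: 4,
  workerHardwareConcurrency: 16,
};
console.log(automationSignals(observed));
```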
❌ Failed Geo-Spoofing
The browser engine failed to align its timezone and language with the proxy's IP address location, a fundamental check for sophisticated bot detection.
- Timezone: All sessions reported the browser timezone as `UTC`, regardless of the exit IP's country.
- Language: The browser language was always set to `en-US` and the `Accept-Language` header was `en-US,en;q=0.9` for all geographies.

```
IP Geo: US -> Timezone: UTC, Language: en-US (Expected: e.g., America/New_York)
IP Geo: GB -> Timezone: UTC, Language: en-US (Expected: Europe/London)
IP Geo: JP -> Timezone: UTC, Language: en-US (Expected: Asia/Tokyo)
IP Geo: RU -> Timezone: UTC, Language: en-US (Expected: Europe/Moscow)
IP Geo: DE -> Timezone: UTC, Language: en-US (Expected: Europe/Berlin)
```
❌ Incoherent Browser Profile
The browser profile contained values that were contradictory or physically impossible, further exposing its synthetic nature.
- Impossible Geometry: The inner window size was reported as `1920x993`, while the screen size was `800x600`. A window cannot be larger than the screen it is displayed on.
- Missing Client Hints: The `sec-ch-ua` header was missing from requests, and the `navigator.userAgentData.brands` property was an empty array (`[]`). For the reported Chrome version, a populated Client Hints `brands` list is expected.
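Both coherence checks are mechanical to express in code. The sketch below uses illustrative field names for the session data; the real values would come from `window.innerWidth`/`screen.width` and `navigator.userAgentData.brands` on the client.

```javascript
// Sketch: the two coherence checks from this section — a viewport that
// cannot fit on the reported screen, and a Chrome UA whose Client Hints
// brands list is empty. Field names on `p` are illustrative.
function incoherentProfile(p) {
  const impossibleGeometry =
    p.innerWidth > p.screenWidth || p.innerHeight > p.screenHeight;
  const missingBrands =
    /Chrome\//.test(p.userAgent) && p.uaDataBrands.length === 0;
  return impossibleGeometry || missingBrands;
}

// Values modeled on the ScrapingBee session described above
const session = {
  innerWidth: 1920, innerHeight: 993,   // reported window
  screenWidth: 800, screenHeight: 600,  // reported screen
  userAgent: "Mozilla/5.0 ... Chrome/142.0.0.0 Safari/537.36",
  uaDataBrands: [],                     // navigator.userAgentData.brands
};
console.log(incoherentProfile(session)); // true
```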
Verdict: ❌ Poor
ScrapingBee’s browser fingerprints were clearly automated and lacked the realism needed to bypass modern bot detection. While base-level headers were adequate, the underlying device and browser profile was static, inconsistent, and exposed direct automation flags.
✅ What it gets right
- Well-formed HTTP headers.
- Consistent platform matching (Linux UA + JS platform).