Published: Jan 22, 2026 | Updated: Apr 21, 2026

Most "Stealth Browser" APIs Fail Browser Fingerprinting Tests [April 2026]

From Browserbase to Hyperbrowser, we tested all the best stealth browser APIs on the internet. Which ones are worth your time?

Today, anti-bot systems use ever more advanced request fingerprinting techniques to detect and block scrapers, so a crucial skill every scraping pro needs to develop is browser fortification: hardening your requests so they don't leak any signs that they come from a scraper. Developers can do this themselves or use fortified versions of Puppeteer, Playwright, or Selenium (which often need further fortification). However, this can be a difficult and time-consuming process if you don't have prior experience. As a result, a new category of Stealth Browser APIs has emerged that claims to manage this browser fortification for you, offering managed cloud browsers with built-in fingerprint protection, anti-bot bypass, and CAPTCHA solving. So we decided to put these Stealth Browser APIs to the test. Are they really experts at browser fortification, or do they make rookie errors that no scraping professional should make? In this article we will put them to the test, covering:
  • Fingerprint Benchmark Test Results: the complete raw data from these tests is available on GitHub in the Stealth Browser Fingerprint Repo.

TLDR: What Is The Best Stealth Browser API For Browser Fingerprinting?

Pretty much every stealth browser API claims to be the "Best Stealth Browser", so we decided to put them to the test. Each is a variation of the same basic idea: managed cloud browsers with fingerprint protection, anti-bot bypass, and CAPTCHA solving to help you scrape without getting blocked.

Web Scraping Stealth Browsers

Some of these stealth browser products like Bright Data Scraping Browser, ZenRows Scraping Browser, Scrapeless, and Browserless are primarily designed for web scraping workflows, offering fingerprint protection, anti-bot bypass, and CAPTCHA solving to help extract data at scale.

AI Agent Stealth Browsers

Whereas others like Browserbase, Hyperbrowser, Anchor Browser, and Browser.cash are built for AI agent workflows, offering features like persistent sessions, autonomous navigation APIs, human-like behavior simulation, and long-running browser environments suited for agentic automation tasks.

TLDR Scoreboard

Our analysis revealed a significant performance gap between elite scraping browsers that provide comprehensive environment masking and legacy or VM-based solutions that suffer from critical automation leaks.
  • Top Performer: Scrapeless Browser led the benchmark with a score of 90.95. It demonstrated excellent hardware and graphics realism with high fingerprint entropy, though it struggled with systematic timezone leakage across global IPs. Bright Data Scraping Browser (89.05) and Oxylabs Headless Browser (85.71) also performed at an elite level, with Oxylabs showing superior geographic alignment and perfect timezone matching.
  • Okay Performers: ZenRows Scraping Browser and Browser.cash both finished with a score of 51.81. While they offered impressive hardware entropy and realistic font profiles, both suffered from critical failures: ZenRows leaked CDP automation flags in 80% of sessions, while Browser.cash exposed those same flags in 100% of sessions. Browserless (42.29) also falls into this tier, maintaining modern headers but failing significantly on timezone and font accuracy.
  • Poor Performers: Browserbase ranked lowest with a score of 37.71. It triggered a critical positive detection for the Playwright framework and showed a total lack of fingerprint entropy, signaling a clear automated VM fleet through static hardware and 0px viewport offsets.
Note: All providers were evaluated with one test marked as "N/A" for TLS / JA3 Realism due to missing data for that specific metric; scores reflect the evaluation of the remaining 14 fingerprinting facets. Here are the overall results:
Provider | Overall Score | Pass | Warn | Fail | Critical | Comments
Scrapeless Browser | 90.95 | 10 | 3 | 1 | 0 | Scrapeless Browser leads the benchmark with a 90.95 score due to excellent hardware and graphics realism, though it struggles with static English locales and systematic timezone leakage across global IPs.
Bright Data Scraping Browser | 89.05 | 11 | 1 | 2 | 0 | Bright Data delivers a highly modern and consistent browser environment with realistic hardware profiles, but it is undermined by failing to align timezones and peripheral counts with its proxy locations.
Oxylabs Headless Browser | 85.71 | 10 | 2 | 2 | 0 | Oxylabs shows superior geographic alignment with perfect timezone and locale matching, but its overall score is hindered by a failure in the Fonts test and static peripheral counts.
ZenRows Scraping Browser | 51.81 | 10 | 0 | 3 | 1 | ZenRows offers impressive hardware entropy and Apple Silicon emulation, yet it suffers a critical failure due to leaking CDP automation flags in 80% of sessions and providing seemingly random geo-language pairings.
Browser.cash | 51.81 | 10 | 2 | 1 | 1 | Leveraging its residential network for excellent hardware and font realism, Browser.cash nonetheless fails the automation test by exposing CDP flags in 100% of sessions and showing architectural gaps in web worker GPU reporting.
Browserless | 42.29 | 5 | 5 | 3 | 1 | Browserless maintains modern header and platform consistency but fails significantly on timezone and font accuracy, while a critical CDP automation leak and missing web worker data point to a detectable headless profile.
Browserbase | 37.71 | 4 | 5 | 4 | 1 | Browserbase ranks lowest due to a critical positive detection of the Playwright framework and a total lack of fingerprint entropy, with static hardware and 0px viewport offsets signaling a clear automated VM fleet.

How We Tested Browser Fingerprinting

For this benchmarking, we decided to send requests with each stealth browser API to Device and Browser Info to look at the sophistication of their header and browser fingerprinting. The key question we are asking is:
Is the stealth browser leaking any information that would increase the chances of an anti-bot system detecting and blocking the request?
To do this, we focused on any leaks that could signal to the anti-bot system that the request is being made by an automated headless browser like Puppeteer, Playwright, or Selenium. Here are the tests we conducted:
1. Fingerprint Entropy Across Sessions: Test whether the browser fingerprint shows natural variation across multiple sessions.
  • Example: Identical JS fingerprint hashes, same WebGL/canvas values, or repeated hardware profiles across visits.
  • Why it matters: Real users vary; deterministic fingerprints are a strong indicator of automation.
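As a rough illustration of how this entropy test can be automated, the sketch below hashes the stable fingerprint surface of each session and counts unique hashes. The field names (`webgl_renderer`, `canvas_hash`, etc.) are hypothetical stand-ins for collected data, not any provider's actual schema:

```python
import hashlib
import json

def fingerprint_hash(session: dict) -> str:
    """Hash the stable fingerprint surface of one session."""
    keys = ("webgl_renderer", "canvas_hash", "cores", "memory_gb", "screen")
    surface = {k: session.get(k) for k in keys}
    return hashlib.sha256(json.dumps(surface, sort_keys=True).encode()).hexdigest()

def entropy_check(sessions: list[dict]) -> bool:
    """Pass if every session produced a distinct fingerprint hash."""
    return len({fingerprint_hash(s) for s in sessions}) == len(sessions)

# Two identical sessions -> deterministic fingerprint -> strong automation signal
static = [{"webgl_renderer": "SwiftShader", "canvas_hash": "ab12",
           "cores": 32, "memory_gb": 8, "screen": "800x600"}] * 2
print(entropy_check(static))  # False
```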
2. Header Realism: Check whether HTTP headers match the structure and formatting of real modern browsers.
  • Example: Missing Accept-Encoding: br, gzip, malformed Accept headers, or impossible UA versions.
  • Why it matters: Incorrect headers are one of the fastest and simplest ways anti-bot systems identify bots.
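A minimal sketch of such header checks, using deliberately simplified heuristics (real anti-bot systems also check header order and many more fields; the version bounds here are assumptions for illustration):

```python
import re

def header_realism_issues(headers: dict) -> list[str]:
    """Flag common header mistakes that anti-bot systems check first."""
    issues = []
    encoding = headers.get("Accept-Encoding", "")
    if "br" not in encoding or "gzip" not in encoding:
        issues.append("missing br/gzip in Accept-Encoding")
    if not headers.get("Accept", "").startswith("text/html"):
        issues.append("Accept header unusual for a page navigation")
    match = re.search(r"Chrome/(\d+)", headers.get("User-Agent", ""))
    if match and not 80 <= int(match.group(1)) <= 200:
        issues.append("implausible Chrome major version")
    return issues

realistic = {
    "Accept-Encoding": "gzip, deflate, br, zstd",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/146.0.0.0",
}
print(header_realism_issues(realistic))  # []
```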
3. Client Hints Coherence: Evaluate whether Client Hints (sec-ch-ua*) align with the User-Agent and operating system.
  • Example: UA claims Windows but sec-ch-ua-platform reports "Linux", or the CH brand list is empty.
  • Why it matters: Mismatched Client Hints are a highly reliable signal of an automated or spoofed browser.
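The coherence check can be sketched by extracting the OS claim from the User-Agent and comparing it with the hint. This is an illustrative simplification; the token-to-platform mapping below is an assumption covering only the common cases:

```python
def client_hints_coherent(user_agent: str, sec_ch_ua_platform: str) -> bool:
    """Check the sec-ch-ua-platform hint against the OS claimed in the UA string."""
    # Order matters: Android UAs also contain the "Linux" token.
    ua_tokens = [("Android", "Android"), ("Windows NT", "Windows"),
                 ("Macintosh", "macOS"), ("Linux", "Linux")]
    claimed = next((name for token, name in ua_tokens if token in user_agent), None)
    return claimed is not None and sec_ch_ua_platform.strip('"') == claimed

ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/140.0.0.0"
print(client_hints_coherent(ua, '"Linux"'))    # False: the mismatch described above
print(client_hints_coherent(ua, '"Windows"'))  # True
```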
4. TLS / JA3 Fingerprint Realism: Test whether the TLS fingerprint resembles a real Chrome/Firefox client rather than a script or backend library.
  • Example: JA3 matching cURL/Python/Node signatures, missing ALPN protocols, or UA/TLS contradictions.
  • Why it matters: Many anti-bot systems fingerprint TLS before any JS loads, so mismatched JA3 values trigger instant blocks.
5. Platform Consistency: Evaluate whether the OS in the User-Agent matches navigator.platform and other JS-exposed platform values.
  • Example: UA says macOS but JavaScript reports Linux x86_64.
  • Why it matters: Real browsers almost never contradict their platform; mismatches are a classic bot signal.
6. Device-Type Coherence: Test whether touch support, viewport size, and sensors align with the claimed device type (mobile vs. desktop).
  • Example: A mobile UA with maxTouchPoints=0, or an iPhone UA showing a 1920×1080 desktop viewport.
  • Why it matters: Device-type mismatches are one of the simplest heuristics anti-bot systems use to flag automation.
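A hedged sketch of the device-type heuristic described above; the 1024px cutoff is an assumed rule of thumb for "phone-sized", not any standard:

```python
def device_type_coherent(user_agent: str, max_touch_points: int,
                         viewport: tuple[int, int]) -> bool:
    """A mobile claim must come with touch support and a phone-sized viewport."""
    claims_mobile = "Mobile" in user_agent or "iPhone" in user_agent
    if claims_mobile:
        return max_touch_points > 0 and min(viewport) <= 1024
    return True  # desktop claims are covered by the other tests

# iPhone UA with zero touch points and a desktop viewport -> incoherent
print(device_type_coherent(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0) Mobile", 0, (1920, 1080)))  # False
```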
7. Hardware Realism: Check whether CPU cores, memory, and GPU renderer look like real consumer hardware.
  • Example: Every session reporting 32 cores, 8GB RAM, and a SwiftShader GPU.
  • Why it matters: Unrealistic hardware profiles strongly suggest virtualized or automated browser environments.
8. Timezone vs IP Geolocation: Evaluate whether the browser's timezone matches the location implied by the proxy IP.
  • Example: German IP reporting UTC or America/New_York.
  • Why it matters: Timezone mismatches reveal poor geo-spoofing and are widely used in risk scoring.
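This check reduces to a lookup from the proxy's country to plausible IANA timezones. The mapping below is hypothetical and deliberately partial; a production check would cover every country and zone:

```python
# Hypothetical, partial mapping: proxy country code -> plausible IANA timezones
EXPECTED_TZ = {
    "US": {"America/New_York", "America/Chicago", "America/Denver", "America/Los_Angeles"},
    "GB": {"Europe/London"},
    "DE": {"Europe/Berlin"},
    "JP": {"Asia/Tokyo"},
    "RU": {"Europe/Moscow", "Asia/Yekaterinburg", "Asia/Vladivostok"},
}

def timezone_matches_ip(country_code: str, js_timezone: str) -> bool:
    """True if the JS-reported timezone is plausible for the proxy's country."""
    return js_timezone in EXPECTED_TZ.get(country_code, set())

print(timezone_matches_ip("DE", "America/New_York"))  # False: the classic leak
print(timezone_matches_ip("JP", "Asia/Tokyo"))        # True
```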
9. Language/Locale vs IP Region: Check whether browser language settings align with the IP's expected locale.
  • Example: All geos returning en-US regardless of country, or JS locale contradicting the Accept-Language header.
  • Why it matters: Locale mismatch is a simple yet strong indicator that the request is automated or spoofed.
10. Resolution & Pixel Density Realism: Test whether screen resolution and device pixel ratio resemble real user devices.
  • Example: Fixed 800×600 resolution, or repeated exotic sizes not seen on consumer hardware.
  • Why it matters: Bots often run in virtual machines or containers with unnatural screen sizes.
11. Viewport & Geometry Coherence: Evaluate whether window dimensions and screen geometry form a logically possible combination.
  • Example: Inner window width larger than the actual screen width.
  • Why it matters: Impossible geometry is a giveaway that the environment is headless or virtualized.
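A minimal sketch of a geometry sanity check, assuming simple bounds; the DPR limits are assumed sanity thresholds, not a specification:

```python
def geometry_possible(screen_w: int, screen_h: int,
                      inner_w: int, inner_h: int, dpr: float) -> bool:
    """Reject logically impossible window/screen combinations."""
    if inner_w > screen_w or inner_h > screen_h:
        return False  # a window cannot exceed the screen it sits on
    if not 0.5 <= dpr <= 5:
        return False  # outside the range of real consumer displays
    return True

print(geometry_possible(1366, 768, 1920, 900, 1.0))  # False: inner width > screen width
```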
12. Fonts & Plugins Environment: Check whether the browser exposes realistic fonts and plugins for the claimed OS and device.
  • Example: A single font across all sessions, or empty plugin lists on macOS.
  • Why it matters: Normal devices have rich font/plugin environments; sparse lists are characteristic of automation.
13. Peripherals Presence: Test whether microphones, speakers, and webcams are exposed the way real devices normally do.
  • Example: All sessions reporting 0 microphones, 0 speakers, and 0 webcams.
  • Why it matters: Real devices, especially desktops and laptops, almost always expose some media peripherals.
14. Graphics Fingerprints (Canvas & WebGL): Evaluate whether canvas and WebGL fingerprints are diverse and platform-appropriate.
  • Example: Identical WebGL renderer hashes across sessions, or a SwiftShader GPU on a claimed macOS device.
  • Why it matters: Graphics fingerprints are hard to spoof; unrealistic or repeated values reveal automation.
15. Automation Signals: Check whether the browser exposes direct automation flags or patched properties.
  • Example: navigator.webdriver=true, visible “CDP automation” flags, or inconsistent worker properties.
  • Why it matters: These are explicit and often fatal indicators that the environment is controlled by a bot framework.
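Collecting these explicit indicators can be sketched as a pass over a captured fingerprint record. The dict keys below are illustrative names for data gathered in-page, not a real API:

```python
def automation_signals(fp: dict) -> list[str]:
    """Collect explicit automation indicators from a captured fingerprint."""
    signals = []
    if fp.get("navigator_webdriver") is True:
        signals.append("navigator.webdriver=true")
    if fp.get("cdp_detected"):
        signals.append("CDP automation flag exposed")
    if "HeadlessChrome" in fp.get("user_agent", ""):
        signals.append("HeadlessChrome in User-Agent")
    worker_ua = fp.get("worker_user_agent")
    if worker_ua and worker_ua != fp.get("user_agent"):
        signals.append("worker/main-thread UA mismatch")
    return signals

leaky = {"navigator_webdriver": True, "cdp_detected": True,
         "user_agent": "Mozilla/5.0 ... HeadlessChrome/146.0.0.0 ..."}
print(automation_signals(leaky))  # three fatal signals
```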
These header and device fingerprint tests aren't conclusive on their own. But if a stealth browser consistently leaks numerous suspicious fingerprints, it is easy for an anti-bot system to detect and block the requests, even when the proxy IPs rotate. We sent requests to Device and Browser Info through each stealth browser's US, UK, German, Japanese, and Russian proxy endpoints to see how they optimize their browsers for each geolocation and whether the browser leaks differ by location.

15-Test Comparison: How Each Provider Performed

The comparison table shows how each provider performed across all 15 fingerprint tests, with weights indicating the importance of each test.

Detailed Results: Which Stealth Browser Is Best For Browser Fingerprinting?

The following section contains the detailed results for each stealth browser, showing the overall score and the results of each test.

#1: Scrapeless Browser


Scrapeless provides a headless, stealth-oriented scraping browser focused on agentic workflows, featuring managed fingerprints, spoofed TLS, and cross-layer consistency patches for anti-bot environments.

Position: #1 | Overall Score: 90.95 | 10 ✅ Pass | 3 ⚠️ Warn | 1 ❌ Fail | 0 🚨 Critical
In our analysis, Scrapeless Browser ranked #1 out of 7 providers with an overall score of 90.95. It leads the benchmark due to its sophisticated handling of hardware and graphics realism, effectively masking automation signals that often trigger detection in other headless solutions.

During testing, Scrapeless Browser demonstrated high fingerprint entropy and clean automation signals across all sessions. While it excels in hardware spoofing and peripheral diversity, it showed systematic weaknesses in regional localization, particularly regarding timezones and languages.
  • ✅ Where Scrapeless Browser performed well: Successfully masked all automation and CDP flags; provided highly diverse hardware and GPU renderers; maintained perfect consistency between headers and JavaScript platform properties.
  • ❌ Where Scrapeless Browser fell short: Failed to align timezones with proxy IP geography; utilized static English locales for all global requests; exhibited exotic screen resolutions with locked viewport widths.

Pricing

Scrapeless uses a consumption-based billing model with optional monthly or yearly prepaid plans that grant discounts on usage.
Plan | Monthly Price | Residential (per GB) | Hourly Rate
Basic | Consumption-based only | — | —
Growth | $49/mo (then 10% off usage) | $1.62/GB | From $0.081/hr
Scale | $199/mo (then 15% off usage) | $1.53/GB | From $0.076/hr
Business | $399/mo (then 20% off usage) | $1.44/GB | From $0.072/hr
Custom | Custom | Custom | Custom

Headers and Device Fingerprints

Scrapeless Browser maintained a high level of technical realism in its browser profiles. In our tests, it utilized modern Chrome 140 headers that were perfectly synchronized with the internal JavaScript environment, ensuring that the advertised Windows 10 platform was consistent across all layers.

The provider stood out for its hardware diversity, injecting realistic GPU renderers (including NVIDIA, AMD, and Intel) and varied CPU/memory configurations. This variety prevents the creation of a "static" fingerprint signature that anti-bot systems can easily flag.

However, the environment suffered from a lack of geographic awareness. Regardless of the proxy exit node, the browser defaulted to a US-centric configuration for both timezones and language settings, creating a detectable anomaly for regional targets.
Test | Status
TLS / JA3 Realism | N/A
Header Realism | Pass
Automation Signals | Pass
Fonts & Plugins | Pass
Resolution & DPR | Warn
Language/Locale vs IP | Warn
Client Hints Coherence | Pass
Device Type Coherence | Pass
Platform Consistency | Pass
Hardware Realism | Pass
Peripherals Presence | Pass
Timezone vs IP Geo | Fail
Fingerprint Entropy | Pass
Viewport/Geometry | Warn
Graphics Fingerprints | Pass

Good

In our tests, Scrapeless Browser consistently produced high-integrity footprints in terms of hardware identity and automation masking.
  • Clean Automation Signals: The provider successfully masked all indicators of headless control. Key booleans like navigator.webdriver were false, and CDP automation detection was bypassed in both the main thread and web workers.
  • Diverse Hardware & GPU Profiles: Each session presented a different hardware identity, avoiding the common mistake of using "server-grade" defaults. We observed specific consumer-grade GPU renderers and varying core counts.
Session 1: NVIDIA GeForce RTX 2060 (0x00001F08)
Session 2: AMD Radeon Pro 555X (0x000067EF)
Session 3: Intel(R) Iris(R) Xe Graphics (0x000046A6)
Session 4: Intel(R) UHD Graphics 610 (0x00009BA8)
  • High Fingerprint Entropy: With 4 unique hashes across 4 test sessions, the provider demonstrated excellent randomization. It successfully varied screen sizes, hardware concurrency (2 to 16 cores), and device memory (2GB to 8GB) to ensure no two fingerprints were identical.
  • Realistic Peripherals: Unlike many bots that report zero devices, Scrapeless Browser mimicked a real desk setup with varied counts for speakers and microphones (ranging from 1 to 3).
  • Robust Font Lists: The environment reported extensive font arrays, including regional fonts like Batang, MS Mincho, and SimHei, which are appropriate for a standard Windows installation.

Bad

While Scrapeless Browser performed well in technical hardware spoofing, it failed several tests related to environmental and geographic consistency.
❌ Timezone vs IP Geo Failure
In our tests, the browser exhibited systematic timezone leakage. Every session, regardless of whether the proxy was in the UK, Germany, or Russia, reported the US Eastern timezone.
  • Leakage: Every session returned America/New_York as the JavaScript timezone.
  • Evidence: A session targeting Germany (DE) used a US timezone, which is a significant red flag for anti-bot systems.
US Session: America/New_York (Correct)
UK Session: America/New_York (Expected: Europe/London)
DE Session: America/New_York (Expected: Europe/Berlin)
RU Session: America/New_York (Expected: Europe/Moscow)
⚠️ Language/Locale Mismatch
The provider utilized static English locales across all global IPs. This lack of regional localization makes the traffic appear suspicious when accessing strictly monitored regional domains.
  • Consistency: Both the Accept-Language header and the navigator.languages array were locked to en-US.
  • Locale: The locale date/time format remained en even when the proxy originated from non-English speaking countries like Russia.
⚠️ Viewport & Geometry Inconsistencies
While the outer screen resolutions were diverse, the inner viewport width was rigidly clamped across different monitor sizes, revealing an artificial scaling pattern.
  • Static Dimensions: Across resolutions ranging from 1680x1050 to 3440x1330, the inner width remained fixed at 945px.
  • Anomalous Offset: This creates a massive gap between the screen size and the browser window, characteristic of automated window scaling rather than a maximized user browser.

Verdict: ✅ Good

Scrapeless Browser is a top-performing stealth browser that excels at hardware-level spoofing and automation bypass.

In our tests, it provided the best defense against hardware-based fingerprinting and automation detection of any provider in the benchmark, earning its #1 ranking. It is particularly effective for sites that use deep device interrogation (GPU/CPU/Workers).

✅ What it gets right
  • Exceptional hardware and GPU diversity that prevents signature-based blocking.
  • Flawless masking of navigator.webdriver and CDP automation flags.
  • Perfect cross-layer consistency between headers and JavaScript platform properties.
  • High entropy, ensuring unique fingerprints for every request.
❌ What holds it back
  • Systematic failure to synchronize timezones with the proxy IP.
  • Lack of regional language localization for non-US requests.
  • Anomalous viewport clamping that could be detected by sophisticated geometry checks.
Bottom line: Scrapeless Browser is currently the most sophisticated option for bypassing advanced device-fingerprinting defenses, though users should be mindful of its lack of geographic and timezone synchronization when targeting regional sites.

#2: Bright Data Scraping Browser


Bright Data’s Scraping Browser is a managed Chromium environment designed to produce high-quality stealth fingerprints and resist modern bot-detection systems, featuring automatic proxy rotation and fingerprint randomization.

Position: #2 | Overall Score: 89.05 | 11 ✅ Pass | 1 ⚠️ Warn | 2 ❌ Fail | 0 🚨 Critical
In our analysis, Bright Data Scraping Browser ranked #2 out of 7 providers with an overall score of 89.05. During testing, it delivered a highly modern and consistent browser environment with realistic hardware profiles, effectively masking automation signals across all sessions.

The provider demonstrated strong performance in core areas such as automation signal suppression, header realism, and hardware diversity. However, its score was impacted by a lack of regional localization in timezones and static peripheral counts, which created detectable patterns compared to organic user behavior.
  • ✅ Strengths: Complete suppression of navigator.webdriver and CDP flags; high fingerprint entropy with diverse consumer GPUs; perfectly coherent Client Hints and User-Agent strings.
  • ❌ Weaknesses: Systemic failure to align browser timezones with proxy IP locations; static peripheral counts (1 mic/1 speaker/1 webcam) across all worldwide sessions.

Pricing (2025)

Plan | Included Traffic | Price | Effective Rate
Pay-As-You-Go | No commitment | $8 / GB | $8 / GB
71 GB Plan | 71 GB included | $499 / month | ~$7 / GB
166 GB Plan | 166 GB included | $999 / month | ~$6 / GB
399 GB Plan | 399 GB included | $1,999 / month | ~$5 / GB
Pricing fluctuates based on traffic volume and bandwidth usage.
Full pricing details available at https://brightdata.com/products/scraping-browser

Headers and Device Fingerprints

Bright Data Scraping Browser maintained a very high standard of technical realism throughout our evaluation. It consistently utilized cutting-edge headers (Chrome 146) and supported modern compression protocols like Brotli and Zstandard, mirroring the behavior of the most up-to-date consumer browsers.

The environment exhibited perfect cross-layer consistency. Whether checked via HTTP headers, the JavaScript navigator object, or within a Web Worker context, the platform (Windows), browser version, and hardware properties remained aligned.

However, the provider’s regional spoofing was less robust than its technical masking. While the network layer correctly routed through various global proxies, the browser environment often remained anchored to US-centric locales and timezones, creating a detectable geographic mismatch for international traffic.

Good

In our tests, Bright Data Scraping Browser provided one of the most technically sound browser environments in the benchmark.
  • Successful Automation Masking: All major automation signals were suppressed. Indicators like navigator.webdriver and CDP automation flags were consistently false, and no "HeadlessChrome" strings were leaked.
  • High Fingerprint Entropy: The provider avoided static signatures by randomizing hardware properties. We observed multiple distinct GPU renderers and varied CPU core counts across sessions.
Consumer GPUs observed:
  • ANGLE (NVIDIA NVIDIA GeForce RTX 3050 (0x00002584) Direct3D11...)
  • ANGLE (Intel Intel(R) UHD Graphics (0x000046A3) Direct3D11...)
  • ANGLE (Intel Intel(R) Iris(R) Xe Graphics (0x00009A49) Direct3D11...)
  • Perfect Client Hint Coherence: The sec-ch-ua headers were in total alignment with the User-Agent and JavaScript environment, correctly identifying the platform as "Windows" and the browser as Chrome 146.
  • Modern Header Stack: The browser properly advertised support for br and zstd compression, which is a key signature of modern, non-automated browsers.
  • Consistent Web Workers: Data queried from the main thread matched the Web Worker context perfectly, passing a common test used by anti-bots to find discrepancies in spoofed environments.

Bad

Despite its high score, Bright Data Scraping Browser exhibited systemic failures in geographic and peripheral realism.
❌ Timezone Misalignment
The browser failed to adjust its internal clock to match the proxy IP location. Aside from the US session, every international session leaked a US-based timezone, creating an immediate red flag for anti-bot systems.
  • UK Session: Timezone was America/Chicago (Expected: Europe/London)
  • JP Session: Timezone was America/Chicago (Expected: Asia/Tokyo)
  • DE Session: Timezone was America/Chicago (Expected: Europe/Berlin)
Proxy Geo: London, UK
Timezone: America/Chicago
-> Systematic mismatch between IP and JS environment.
⚠️ Locale and Language Inconsistency
The provider did not adapt browser language settings to match the requested geographies, relying heavily on English even in non-English regions.
  • Mismatches: The RU and DE sessions used en-US, while the US session ironically reported es-419 (Latin American Spanish).
  • Details: While not an immediate failure, the lack of regional localization for countries like Japan or Russia is suspicious to advanced detection engines.

Verdict: ✅ Good

Bright Data Scraping Browser offers a highly sophisticated and modern environment that is very difficult to detect via traditional automation flags.

In our tests, it excelled at hiding its automated nature and providing realistic consumer hardware profiles. While the lack of timezone and locale alignment for non-US traffic is a weakness, the sheer quality of the technical masking, including perfect Client Hint coherence and modern header support, makes it a top-tier choice for high-end scraping.

✅ What it gets right
  • Total suppression of webdriver and CDP automation flags.
  • Diverse, realistic consumer GPU and hardware configurations.
  • Modern, well-formed HTTP headers (Chrome 146).
  • Internal consistency between main thread and Web Workers.
❌ What holds it back
  • Failure to sync the JavaScript timezone with the proxy IP's location.

Test | Status
TLS / JA3 Realism | N/A
Header Realism | Pass
Automation Signals | Pass
Fonts & Plugins | Pass
Resolution & DPR | Pass
Language/Locale vs IP | Warn
Client Hints Coherence | Pass
Device Type Coherence | Pass
Platform Consistency | Pass
Hardware Realism | Pass
Peripherals Presence | Fail
Timezone vs IP Geo | Fail
Fingerprint Entropy | Pass
Viewport/Geometry | Pass
Graphics Fingerprints | Pass

#3: Oxylabs Headless Browser


Oxylabs Headless Browser is a remote headless browser with built-in stealth features and residential proxy integration, supporting Playwright, Puppeteer, and CDP-compatible tools.

Position: #3 | Overall Score: 85.71 | 10 ✅ Pass | 2 ⚠️ Warn | 2 ❌ Fail | 0 🚨 Critical
In our benchmark analysis, Oxylabs Headless Browser ranked #3 out of 7 providers with an overall score of 85.71. While it demonstrated industry-leading geographic alignment and hardware diversity, its final score was slightly lowered by static peripheral reporting and a lack of font environment richness.
Test | Status
TLS / JA3 Realism | N/A
Header Realism | Pass
Automation Signals | Pass
Fonts & Plugins | Fail
Resolution & DPR | Pass
Language/Locale vs IP | Pass
Client Hints Coherence | Warn
Device Type Coherence | Pass
Platform Consistency | Pass
Hardware Realism | Pass
Peripherals Presence | Fail
Timezone vs IP Geo | Pass
Fingerprint Entropy | Pass
Viewport/Geometry | Warn
Graphics Fingerprints | Pass
  • ✅ Where Oxylabs Headless Browser performed well: Achieved perfect synchronization between IP geography and browser attributes (timezone/language); maintained zero automation signals; and provided high-quality hardware/GPU diversity.
  • ❌ Where Oxylabs Headless Browser fell short: Failed to provide realistic font lists (often reporting only one font) and used hardcoded peripheral counts across all sessions.

Pricing

Pricing for the Oxylabs Headless Browser is based on monthly subscriptions with traffic (GB) allowances. All plans include advanced stealth, free geo-targeting, and 24/7 support.
Plan | Included Traffic | Price | Effective Rate
Starter | 50GB | $300 + VAT | ~$6 / GB
Premium | 100GB | $550 + VAT | ~$5.5 / GB
Venture | 300GB | $1,410 + VAT | ~$4.7 / GB
Custom + | 400GB+ | Custom per GB | Custom
Full pricing details: Oxylabs Headless Browser

Headers and Device Fingerprints

Oxylabs Headless Browser delivered a sophisticated fingerprinting environment that excelled in cross-layer consistency. During testing, HTTP headers for Chrome 142 were perfectly mirrored in the JavaScript environment, with no mismatches between User-Agents or platform strings.

The provider demonstrated superior geographic coherence. Unlike many competitors, it successfully aligned the browser's internal timezone and language settings with the proxy's IP location across all tested regions, including the US, UK, Germany, and Japan.

Hardware spoofing was equally robust, rotating through realistic CPU core counts and GPU renderers that matched the simulated OS (Windows or macOS). However, the profile "richness" was marred by nearly empty font lists and static hardware peripheral counts.

While the service remains highly stealthy due to its lack of automation signals and high hardware entropy, the standardized inner window offsets and sparse font lists present minor recognizable patterns.

Good

In our tests, Oxylabs Headless Browser showcased advanced profile generation, particularly in geographic and hardware realism.
  • Superior Geographic Alignment: Timezones and languages perfectly tracked the request's origin. US sessions reported America/New_York, while Japanese sessions correctly shifted to Asia/Tokyo.
US: en-US / America/New_York
UK: en-GB / Europe/London
DE: de-DE / Europe/Berlin
JP: ja-JP / Asia/Tokyo
  • Platform & Architecture Cohesion: For macOS sessions, the browser correctly reported MacIntel platforms and emulated Apple Silicon hardware specs, such as 10-core CPUs and Apple M1 Pro/Max GPUs.
  • Zero Automation Leaks: Key detection vectors were effectively neutralized. navigator.webdriver was false in 100% of sessions, and worker-layer data remained consistent with the main window.
  • Realistic Graphics Rendering: The provider utilized platform-appropriate GPU renderers, such as Intel Iris Xe or NVIDIA GT 1030 for Windows and native Apple Metal renderers for macOS.
Windows: ANGLE (Intel(R) Iris(R) Xe Graphics ... Direct3D11)
macOS: ANGLE (Apple ANGLE Metal Renderer: Apple M1 Max ...)

Bad

Testing revealed specific areas where the fingerprinting was either too static or incomplete.
❌ Severely Limited Font Environment
The browser failed to present a believable font fingerprint. Across all desktop sessions, the font list was almost entirely empty, often exposing only a single target font. This is a common indicator of a manipulated headless environment.
  • Fonts: Most sessions reported only "Univers CE 55 Medium" or "Arial Unicode MS".
  • Impact: Real desktop users typically have 100+ system and application fonts; a single-font list is highly suspicious.
⚠️ Viewport Geometry Artifacts
While the screen resolutions were realistic, the inner window sizes adopted unnaturally narrow widths that created recognizable static offsets.
  • Windows: On 1920-width screens, the inner width was locked at 937px.
  • macOS: On 1728-width screens, the inner width was locked at 1042px.
⚠️ Client Hints Version String
A structural anomaly was observed in the sec-ch-ua header where the version string followed a non-standard format.
  • Observation: The Chrome version was emitted as v="Chrome/142" instead of the standard v="142".
  • Note: While the version number itself is accurate, this non-standard string structure acts as a fingerprintable injection artifact.

Verdict: ✅ Good

Oxylabs Headless Browser provides a high-integrity environment that is particularly strong for tasks requiring strict geographic consistency and diverse hardware profiles. In our tests, its ability to match timezones and languages to IP regions was flawless, and it successfully evaded all standard automation detection flags. While the font environment is sparse and peripheral counts are static, the overall entropy and lack of automation signals make it a highly reliable choice for bypassing advanced anti-bot protections.

#4: ZenRows Scraping Browser


ZenRows Scraping Browser exposes pre-hardened Playwright/Chromium sessions over WebSocket, with automatic anti-bot bypass, TLS JA3 spoofing, and built-in CAPTCHA solving.

#4
Position
51.81
Overall Score
10✅ Pass
0⚠️ Warn
3❌ Fail
1🚨 Critical
In our benchmark analysis, ZenRows Scraping Browser ranked #4 out of 7 providers with an overall score of 51.81 / 100. While the provider demonstrated sophisticated hardware emulation, particularly for Apple Silicon, it was heavily penalized for a critical automation leak and significant geographical signaling failures.
| Test | Status |
| --- | --- |
| TLS / JA3 Realism | N/A |
| Header Realism | Pass |
| Automation Signals | Critical |
| Fonts & Plugins | Pass |
| Resolution & DPR | Pass |
| Language/Locale vs IP | Fail |
| Client Hints Coherence | Pass |
| Device Type Coherence | Pass |
| Platform Consistency | Pass |
| Hardware Realism | Pass |
| Peripherals Presence | Pass |
| Timezone vs IP Geo | Fail |
| Fingerprint Entropy | Pass |
| Viewport/Geometry | Pass |
| Graphics Fingerprints | Fail |
  • ✅ Where ZenRows performed well: Showcased excellent hardware entropy including realistic Apple M3 and M4 core counts; maintained perfect header and Client Hint coherence; and provided OS-authentic font and resolution profiles.
  • ❌ Where ZenRows fell short: Suffered a critical failure by leaking CDP automation flags in most sessions; displayed highly randomized and incorrect timezone settings; and failed to align system languages with requested IP geographies.

Pricing

Monthly Plans
| Plan | Monthly Price | Scraping Browser Quota | Cost per GB | Concurrency |
| --- | --- | --- | --- | --- |
| Free (14-day trial) | $0 | 100 MB | $0 | 5 requests |
| Developer | $69/mo | 12.73 GB | ~$5.42/GB | 20 |
| Startup | $129/mo | 24.76 GB | ~$5.21/GB | 50 |
| Business | $299/mo | 60 GB | ~$4.98/GB | 100 |
Full pricing details available at Pricing - ZenRows

Headers and Device Fingerprints

ZenRows Scraping Browser presented a highly sophisticated set of device fingerprints that varied significantly across sessions. The hardware profiles were particularly impressive, accurately simulating modern Mac and Windows environments with matching GPUs and high-entropy CPU core counts.

However, the internal environment was undermined by a critical exposure of CDP automation flags. In most test sessions, the browser explicitly identified itself as being under programmatic control, which negates most other stealth measures against modern anti-bot systems.

Furthermore, while the technical device metrics were strong, "behavioral" signals like timezones and languages were almost entirely decoupled from the proxy's exit IP. This created massive, easily detectable contradictions between the network layer and the browser's internal identity.

Good

In our tests, ZenRows Scraping Browser excelled at simulating realistic, high-entropy hardware environments.
  • Hardware Realism (Apple Silicon): For macOS sessions, hardware concurrency accurately reflected Apple's M-series architectures, such as 12 cores for M3 Pro and 14 cores for an M4, paired with fitting GPU renderers.
  • Internal Layer Coherence: User-Agent strings, Client Hints, and platform values were perfectly synchronized. A shift from Windows to macOS in the UA was immediately mirrored in the sec-ch-ua-platform and navigator.platform fields.
  • Entropy & Diversity: Five unique fingerprint hashes were observed across five sessions, driven by a wide range of screen resolutions (from 1280x720 to 2560x1440) and varied GPU profiles like Intel Iris Xe.
  • Realistic Font Emulation: The browser correctly swapped font lists based on the simulated OS, exposing Calibri and Segoe UI for Windows and Helvetica Neue for macOS.
// Hardware Entropy Examples
Session 1 (Win): 8 Cores, Intel UHD 630, 1280x720
Session 2 (Mac): 12 Cores, Apple M3 Pro, 1728x1117
Session 5 (Mac): 14 Cores, Apple M4, 2560x1440
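The entropy measurement itself can be approximated by hashing each session's attribute tuple and counting distinct hashes. The sessions below mirror the benchmark examples; the hashing scheme is our illustrative one, not ZenRows' internal method:

```python
import hashlib

def fingerprint_hash(session: dict) -> str:
    # Hash a stable serialization of the attributes that drive entropy.
    material = "|".join(f"{k}={session[k]}" for k in sorted(session))
    return hashlib.sha256(material.encode()).hexdigest()[:16]

sessions = [
    {"os": "Windows", "cores": 8,  "gpu": "Intel UHD 630", "screen": "1280x720"},
    {"os": "macOS",   "cores": 12, "gpu": "Apple M3 Pro",  "screen": "1728x1117"},
    {"os": "macOS",   "cores": 14, "gpu": "Apple M4",      "screen": "2560x1440"},
]

unique = {fingerprint_hash(s) for s in sessions}
print(f"{len(unique)} unique hashes across {len(sessions)} sessions")
# One hash per session = full diversity (what ZenRows achieved);
# one hash for every session = zero entropy and trivial clustering.
```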

Bad

Despite the strong hardware emulation, ZenRows failed several high-weight tests regarding automation and geographical consistency.
🚨 Critical: Automation Signals Exposed
The provider failed to consistently mask its CDP automation status. While the US session successfully returned false, all other tested regions explicitly leaked automation flags.
  • CDP Flags: CDP automation: true was detected in 80% of sessions (UK, DE, RU, JP).
  • Impact: This is a definitive indicator of programmatic control that typically results in immediate flagging by advanced anti-bot systems.
❌ Language/Locale vs IP Mismatch
The provider spoofed highly specific languages that heavily contradicted the requested IP locations, creating a suspicious profile for any fraud prevention system.
  • UK Session: Requested UK IP but presented ar-IQ (Arabic - Iraq).
  • DE/RU Sessions: Presented hi-IN (Hindi - India) despite European IP addresses.
  • US Session: Presented ar-EG (Arabic - Egypt).
Requested IP: United Kingdom (UK)
Main language: ar-IQ (Arabic - Iraq)
Languages: ar-IQ, ar
→ High-confidence mismatch for geo-fencing systems.
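A geo-fencing system needs only a small lookup table to catch this class of mismatch. A sketch of the check (the country-to-language map is a simplified assumption; real systems use far fuller datasets):

```python
# Expected primary language prefixes per exit-IP country (simplified).
EXPECTED_LANGS = {
    "UK": {"en"}, "US": {"en"}, "DE": {"de", "en"},
    "RU": {"ru", "en"}, "JP": {"ja", "en"},
}

def language_mismatch(ip_country: str, nav_language: str) -> bool:
    """True when navigator.language contradicts the exit-IP country."""
    prefix = nav_language.split("-")[0].lower()
    return prefix not in EXPECTED_LANGS.get(ip_country, {prefix})

# Sessions observed in this benchmark:
print(language_mismatch("UK", "ar-IQ"))  # True  -> flagged
print(language_mismatch("DE", "hi-IN"))  # True  -> flagged
print(language_mismatch("US", "en-US"))  # False -> consistent
```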
❌ Timezone vs IP Geolocation
Timezones were seemingly assigned at random and did not align with the target regions or the proxy exit IPs.
  • JP session: Reported America/New_York (Expected: Asia/Tokyo).
  • DE session: Reported Asia/Calcutta (Expected: Europe/Berlin).
  • US session: Reported Africa/Cairo (Expected: US-based timezone).
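The same contradiction is even easier to catch for timezones, because IANA zone names map cleanly to countries. A sketch of such a check (the per-country zone sets are trimmed for illustration):

```python
# Plausible IANA timezones per exit-IP country (trimmed for illustration).
PLAUSIBLE_TZ = {
    "JP": {"Asia/Tokyo"},
    "DE": {"Europe/Berlin"},
    "US": {"America/New_York", "America/Chicago",
           "America/Denver", "America/Los_Angeles"},
}

def timezone_mismatch(ip_country: str, reported_tz: str) -> bool:
    """True when the browser timezone is implausible for the exit IP."""
    return reported_tz not in PLAUSIBLE_TZ.get(ip_country, {reported_tz})

# Observed ZenRows sessions:
print(timezone_mismatch("JP", "America/New_York"))  # True
print(timezone_mismatch("DE", "Asia/Calcutta"))     # True
print(timezone_mismatch("US", "Africa/Cairo"))      # True
```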
❌ Graphics Fingerprints (Web Worker)
While the main thread graphics were realistic, the provider failed to populate GPU data in the web worker context during macOS sessions.
  • Missing Data: GPU vendor and renderer values in the web worker returned NA.
  • Inconsistency: This failure exposes an inconsistency in the browser environment's ability to mock workers natively.

Verdict: ⚠️ Mixed

ZenRows Scraping Browser provides advanced hardware emulation but is currently undermined by critical automation leaks.

In our tests, the provider showed a high level of technical sophistication in mimicking modern hardware profiles, particularly Apple Silicon Macs. However, the score of 51.81 reflects the significant risk posed by the exposed CDP automation flags and the complete lack of alignment between the proxy IP and the browser's timezone/language settings.

✅ What it gets right
  • Excellent diversity in hardware concurrency and GPU renderers.
  • Flawless synchronization between Headers, Client Hints, and JS platform values.
  • High entropy across sessions prevents static fingerprinting.
❌ What holds it back
  • Critical Failure: CDP automation leaks in 80% of sessions.
  • Systemic misalignment of timezones and languages relative to the requested IP location.
  • Missing GPU data in web worker contexts during macOS sessions.
Bottom line: ZenRows is a powerful tool for emulating diverse hardware, but the current exposure of automation flags and randomized geographical signals makes it vulnerable to detection by targets that analyze behavioral consistency or CDP-level artifacts.

#5 Browser.cash

Browser.cash logo
Browser.cash

Browser.cash (formerly BrowserHub) is a serverless browser grid providing hardened WebKit and Chromium instances over WebSocket with a focus on simple usage-based pricing and crypto-friendly billing.

#5
Position
51.81
Overall Score
10✅ Pass
2⚠️ Warn
1❌ Fail
1🚨 Critical
In our analysis, Browser.cash ranked #5 out of 7 providers with an overall score of 51.81 / 100. While the provider leverages its residential network to deliver excellent hardware realism and rich environment data, it is heavily hindered by detectable automation signals.
  • ✅ Where Browser.cash performed well: Demonstrated superior hardware diversity, realistic font and peripheral profiles, and highly accurate geo-spoofing for timezones and languages.
  • ❌ Where Browser.cash fell short: Failed critically by exposing Chrome DevTools Protocol (CDP) flags across all sessions and showed architectural gaps in web worker GPU reporting.

Pricing

Browser.cash focuses on a "pay-as-you-go" consumption model without monthly commitments.
| Mode / Billing Type | Rate / Cost | Details |
| --- | --- | --- |
| Browser Session (hourly) | $0.09 / hr | Pay per hour of browser usage (session limit 1 hr) |
| Agent Input Tokens | $0.50 / 1M input tokens | For agent-based tasks using "input tokens" billing |
| Agent Output Tokens | $2.00 / 1M output tokens | For agent-based tasks using "output tokens" billing |
| Usage Model | "Pay-as-you-go" (no commitment) | Only pay for what you use; flexible top-ups |
Top up via credit card or crypto wallet; see https://browser.cash for live tiers.

Headers and Device Fingerprints

Browser.cash provides a highly realistic front-end environment, utilizing its distributed network to surface genuine consumer hardware profiles and rich font sets. Headers remained modern and internally consistent, successfully implementing recent Chrome versions (146/147) and advanced compression like zstd.

However, the stealth layer is undermined by a critical exposure of CDP automation flags. While the browser hides standard properties like navigator.webdriver, the underlying automation protocol remains visible to detection scripts.

We also observed inconsistencies in multi-threaded environments, where web workers failed to report GPU data during specific sessions, creating a detectable footprint for advanced anti-bot systems.

Good

In our tests, Browser.cash exhibited high levels of realism in areas typically associated with genuine residential devices.
  • High Fingerprint Entropy: The provider generated unique hashes for every session, driven by varied hardware concurrency (8, 16, 20 cores) and diverse memory configurations (8GB to 32GB).
  • Realistic Hardware & GPU Profiles: Unlike providers that use server-grade hardware, Browser.cash presented legitimate consumer GPUs such as Intel Iris Xe and NVIDIA GTX 750 Ti.
  • Accurate Geo-Targeting: Language, locale, and timezone data were perfectly synchronized with the proxy IP's location.
US Session: Timezone: America/New_York | Header: en-US | Nav: en-US
DE Session: Timezone: Europe/Berlin | Header: de-DE | Nav: de-DE
RU Session: Timezone: Asia/Novosibirsk | Header: ru-RU | Nav: ru-RU
  • Environment Richness: Sessions included extensive font lists and natural variations in peripheral counts (microphones, webcams), successfully mimicking real user machines.

Bad

Despite its residential strengths, Browser.cash failed key tests related to automation masking and architectural consistency.
🚨 Critical: Automation Signals Exposed
The provider failed to mask Chrome DevTools Protocol (CDP) activity, which is a definitive indicator of automation for modern anti-bot solutions.
  • CDP Flags: The CDP automation flag returned true in 100% of tested sessions.
  • Impact: While navigator.webdriver was successfully hidden, the exposed CDP status provides unmaskable proof that the browser is being driven by automation.
❌ Graphics Fingerprints (Web Worker Gaps)
While the main thread reported realistic GPU data, the architectural implementation failed in multi-threaded contexts during some sessions.
  • Web Worker: GPU properties failed to resolve in the web worker context, returning NA.
  • Inconsistency: A discrepancy between the main browser thread (NVIDIA GeForce GTX 750 Ti) and a failing web worker (NA) alerts detection algorithms to gaps in the spoofing layer.
⚠️ Viewport/Geometry Coherence
Viewport dimensions appeared suspiciously constrained and did not naturally scale with the reported screen boundaries.
  • Inner Viewport: Dimensions were seemingly "pinned" to approximately 1264x625 even when the screen resolution was set to 1920x1080.
  • Geometric Anomaly: This configuration introduced an unnatural horizontal offset of 656px, which is rarely seen in organic browsing.
Screen: 1920x1080
Available: 1920x1032
Inner: 1264x625 (Pinned)
-> Static viewport size across varied screen bounds is a suspicious pattern.
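The "pinned viewport" pattern can be scored mechanically by computing the chrome offset per session and flagging implausible values. A sketch of the idea (the 200px threshold is our assumption, not a published constant):

```python
def chrome_offset(screen_w: int, inner_w: int) -> int:
    """Horizontal pixels consumed by window chrome, scrollbars, etc."""
    return screen_w - inner_w

# Browser.cash sessions: 1920-wide screens, inner width pinned at 1264px.
sessions = [(1920, 1264), (1920, 1264)]
offsets = [chrome_offset(screen, inner) for screen, inner in sessions]

MAX_PLAUSIBLE_OFFSET = 200  # assumed: real browser chrome rarely eats more
suspicious = any(o > MAX_PLAUSIBLE_OFFSET for o in offsets)
print(offsets)     # [656, 656]
print(suspicious)  # True -> a 656px offset is a red flag
```

Note that the opposite extreme, a 0px offset where the inner window exactly equals the screen, is also a headless signature, as seen later with Browserbase.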
⚠️ Device Type Coherence
One session broadcasted a standard Windows Desktop profile while reporting high touch capability, an unusual combination.
  • Touch Points: Reported 10 touch points in an environment claiming to be a standard desktop Windows Chrome instance.
  • Risk: While possible on touch-enabled laptops, this represents a detectable outlier for rigid security models.

Verdict: ⚠️ Mixed

In our tests, Browser.cash demonstrated excellent hardware realism but failed to address foundational automation signals. The provider's score of 51.81 reflects a "Mixed" performance where high-quality residential data is undermined by critical CDP leaks and architectural inconsistencies in web workers.

✅ What it gets right
  • Excellent hardware diversity and entropy across sessions.
  • Superior geo-targeted alignment for timezones and localizations.
  • Rich, populated font and peripheral profiles that match genuine consumer PCs.
❌ What holds it back
  • Critical failure to mask CDP automation flags, leading to high detection risk.
  • Inconsistent GPU reporting in web workers.
  • Geometric anomalies where inner viewports do not align realistically with screen sizes.
Bottom line: Browser.cash is effective at blending into a residential hardware "crowd," but remains vulnerable to any anti-bot system that specifically checks for CDP-based automation or cross-thread consistency.

#6 Browserless

Browserless logo
Browserless

Browserless provides managed cloud browsers (BaaS) that connect via Puppeteer or Playwright, focusing on scalable browser automation with built-in anti-bot bypass, CAPTCHA solving, and global multi-region endpoints.

#6
Position
42.29
Overall Score
5✅ Pass
5⚠️ Warn
3❌ Fail
1🚨 Critical
In our analysis, Browserless ranked #6 out of 7 providers with an overall score of 42.29 / 100. While it successfully maintains modern header standards and internal platform consistency, its stealth capabilities are significantly hindered by a critical automation leak and systemic failures in geo-specific spoofing.
| Test | Status |
| --- | --- |
| TLS / JA3 Realism | N/A |
| Header Realism | Pass |
| Automation Signals | Critical |
| Fonts & Plugins | Fail |
| Resolution & DPR | Warn |
| Language/Locale vs IP | Warn |
| Client Hints Coherence | Pass |
| Device Type Coherence | Pass |
| Platform Consistency | Pass |
| Hardware Realism | Pass |
| Peripherals Presence | Warn |
| Timezone vs IP Geo | Fail |
| Fingerprint Entropy | Warn |
| Viewport/Geometry | Warn |
| Graphics Fingerprints | Fail |
  • ✅ Where Browserless performed well: Maintained perfect coherence between User-Agent strings, Client Hints, and platform markers; utilized modern Chrome 145 headers; and rotated realistic consumer GPU models.
  • ❌ Where Browserless fell short: Exposed a critical CDP automation flag; failed to match timezones or languages to proxy IP locations; and exhibited "franken-fingerprint" traits such as Windows fonts on Linux environments.

Pricing

Browserless uses a unit-based pricing model. 1 Unit = 30 seconds of browser time. Residential proxies cost 6 units/MB, and CAPTCHA solving costs 10 units per successful solve.
| Plan | Monthly Price | Annual Price | Units Included | Max Concurrency | Max Session Time | Persisted Sessions | Overages |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Free | $0 | $0 | 1k | 1 | 1 min | 1 day | |
| Prototyping | $35 | $25 | 20k | 3 | 15 min | 7 days | $0.0020 / unit |
| Starter | $200 | $140 | 180k | 20 | 30 min | 30 days | $0.0017 / unit |
| Scale | $500 | $350 | 500k | 50 | 60 min | 90 days | $0.0015 / unit |
| Enterprise | Custom | Custom | Custom | 100s+ | Custom | Custom | Custom |
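These unit rates compound quickly for proxy-heavy scraping. A worked example using the published rates above (the workload numbers themselves are hypothetical):

```python
# Browserless unit model: 1 unit = 30 s of browser time,
# residential proxy = 6 units/MB, CAPTCHA solve = 10 units per solve.
UNITS_PER_30S = 1
UNITS_PER_PROXY_MB = 6
UNITS_PER_CAPTCHA = 10

def session_units(seconds: float, proxy_mb: float, captchas: int) -> float:
    """Total units consumed by one browser session."""
    return ((seconds / 30) * UNITS_PER_30S
            + proxy_mb * UNITS_PER_PROXY_MB
            + captchas * UNITS_PER_CAPTCHA)

# Hypothetical job: 90 s session, 2 MB through a residential proxy, 1 solve.
units = session_units(90, 2, 1)
print(units)  # 25.0
# At the Starter plan's overage rate ($0.0017/unit):
print(f"${units * 0.0017:.4f}")  # $0.0425
```

In other words, in this hypothetical workload the CAPTCHA solve and proxy traffic cost far more than the browser time itself.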

Headers and Device Fingerprints

Browserless presents a mixed profile that balances modern web standards against detectable automation signatures. Its network layer is up to date, using Chrome 145 headers and correctly implemented Client Hints that align with the reported Linux platform.

However, the internal browser environment frequently betrays its automated nature. A critical leak in the Chrome DevTools Protocol (CDP) and the presence of OS-mismatched fonts create a detectable fingerprint for advanced anti-bot systems.

Furthermore, the provider lacks sophisticated geo-spoofing. Regardless of the connection's physical location, the browser environment remains locked to a US-centric, UTC-based configuration. This lack of regional variation, combined with static hardware values, results in a rigid and suspicious device profile.

Good

During testing, Browserless demonstrated strong internal consistency across its supported platform markers and header configurations.
  • Modern Header Realism: The provider used well-formed Chrome 145 headers, including support for modern compression algorithms like br and zstd.
  • Platform Coherence: All layers correctly identified the environment as Linux. The User-Agent, navigator.platform, and sec-ch-ua-platform values were perfectly aligned.
  • Realistic GPU Rotation: Unlike many competitors that use generic "llvmpipe" renderers, Browserless rotated through genuine consumer GPU models, adding a layer of hardware realism.
// Observed GPU Renderers
  • ANGLE (NVIDIA Corporation NVIDIA GeForce GTX 1080 Ti...)
  • ANGLE (NVIDIA Corporation NVIDIA GeForce RTX 4070 Ti...)
  • ANGLE (AMD AMD Radeon RX 6600...)
  • ANGLE (AMD AMD Radeon RX 5700 XT...)

Bad

The following issues were identified as significant risks for detection during our benchmark.
🚨 Critical: Automation Signals Exposed
A critical failure was observed where the Chrome DevTools Protocol (CDP) explicitly signaled that the browser was under automated control. This is a definitive proof of automation that most anti-bots use for instant flagging.
  • CDP Flags: CDP automation: true was returned in 100% of the sessions.
  • Impact: This single parameter bypasses most other stealth measures, as it is a direct indicator of a controlled environment.
❌ Timezone Geolocation Mismatch
The browser failed to adjust its internal clock to match the proxy IP's location. This creates a standard detection point where the IP's geo-location contradicts the browser's JavaScript environment.
  • Timezone: Consistently reported UTC regardless of the target country.
  • Target Mismatches: Locations in the UK, DE, RU, and JP all reported UTC instead of local time.
Target Geo: UK -> Browser Timezone: UTC (Expected: Europe/London)
Target Geo: JP -> Browser Timezone: UTC (Expected: Asia/Tokyo)
Target Geo: DE -> Browser Timezone: UTC (Expected: Europe/Berlin)
❌ Font and OS Mismatch
Testing revealed a "franken-fingerprint" where the browser claimed to be Linux but possessed fonts exclusive to Windows environments.
  • OS Claim: User-Agent specified X11; Linux x86_64.
  • Fonts Detected: The font list included Calibri and Segoe UI Light.
  • Why it matters: Real Linux installations do not carry these proprietary Windows fonts by default, making this a clear indicator of a synthetic environment.
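Detecting this franken-fingerprint comes down to a set intersection. A sketch of the check (the proprietary-font list is a small illustrative subset):

```python
# Fonts shipped with Windows/Office that never appear on a stock Linux box.
WINDOWS_ONLY_FONTS = {"Calibri", "Segoe UI", "Segoe UI Light", "Cambria"}

def franken_fingerprint(ua_platform: str, fonts: list[str]) -> set[str]:
    """Return Windows-only fonts reported by a claimed-Linux platform."""
    if "Linux" not in ua_platform:
        return set()
    return WINDOWS_ONLY_FONTS & set(fonts)

# The Browserless profile: Linux UA, Windows font list.
leaked = franken_fingerprint(
    "X11; Linux x86_64",
    ["DejaVu Sans", "Calibri", "Segoe UI Light"],
)
print(sorted(leaked))  # ['Calibri', 'Segoe UI Light']
```

Any non-empty result is a high-confidence synthetic-environment signal, since Linux distributions substitute metric-compatible alternatives (e.g., Carlito for Calibri) rather than shipping the Microsoft originals.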
⚠️ Static Geometry and Viewport
All sessions exhibited identical screen resolutions and awkward viewport offsets, suggesting a highly standardized, non-organic window configuration.
  • Resolution: Statically locked at 1536x864 for all sessions.
  • Offsets: A massive 485px width offset between the screen and inner viewport was consistent across all attempts, which is atypical for standard browser usage.

Verdict: ❌ Poor

In our tests, Browserless provided a detectable environment that struggled with fundamental stealth requirements. While the service offers excellent scalability and modern headers, its score of 42.29 reflects the high risk associated with its critical automation leaks and poor geo-spoofing.

✅ What it gets right
  • Perfect alignment between headers and Client Hints.
  • Authentic GPU renderer rotation.
  • Clean, modern Chrome 145 User-Agents.
❌ What holds it back
  • Critical leak: CDP automation flags are active and detectable.
  • Systemic Mismatches: Timezones do not follow the IP address, and Windows fonts are present on Linux profiles.
  • Low Diversity: Hardware counts, memory, and screen resolutions are static across all sessions.
Bottom line: During testing, Browserless functioned as a high-performance browser automation tool but lacked the subtle "human" characteristics needed to bypass sophisticated anti-bot systems. It is best suited for scenarios where scale is more important than absolute stealth.

#7 Browserbase

Browserbase logo
Browserbase

Browserbase is a managed Playwright/Selenium automation platform designed for long-running workflows with persistent browser environments.

#7
Position
37.71
Overall Score
4✅ Pass
5⚠️ Warn
4❌ Fail
1🚨 Critical
In our benchmark analysis, Browserbase ranked #7 out of 7 providers with an overall score of 37.71. While it successfully maintains consistency across several platform layers, it failed significantly in realism and entropy tests, ultimately being flagged by a critical automation signal.
| Test | Status |
| --- | --- |
| TLS / JA3 Realism | N/A |
| Header Realism | Pass |
| Automation Signals | Critical |
| Fonts & Plugins | Fail |
| Resolution & DPR | Warn |
| Language/Locale vs IP | Warn |
| Client Hints Coherence | Pass |
| Device Type Coherence | Pass |
| Platform Consistency | Pass |
| Hardware Realism | Warn |
| Peripherals Presence | Fail |
| Timezone vs IP Geo | Fail |
| Fingerprint Entropy | Fail |
| Viewport/Geometry | Warn |
| Graphics Fingerprints | Warn |
  • ✅ Where Browserbase performed well: Maintained high internal consistency between HTTP headers, Client Hints, and the underlying Linux OS platform.
  • ❌ Where Browserbase fell short: Exposed a critical Playwright framework flag; utilized a completely static hardware and resolution profile across all regions; and failed to match timezones or languages to proxy exit IPs.

Pricing (2025)

All tests were conducted using Browserbase's Basic Stealth setup. Pricing is structured around browser hours and concurrent sessions.
| Plan | Monthly Price | Concurrent Browsers | Included Browser Hours | Proxy Usage | Data Retention | Stealth Features |
| --- | --- | --- | --- | --- | --- | --- |
| Free | $0/month | 1 | 1 hour included | | 7 days | |
| Developer | $20/month | 25 | 100 hours included → then $0.12/hr | 1GB included → then $12/GB | 7 days | Basic Stealth + auto CAPTCHA solving |
| Startup | $99/month | 100 | 500 hours included → then $0.10/hr | 5GB included → then $10/GB | 30 days | Basic Stealth + auto CAPTCHA solving |
| Scale | Custom | 250+ | Custom | Custom | 30+ days | Advanced Stealth + auto CAPTCHA solving |
Full pricing details available at Browserbase | Pricing

Headers and Device Fingerprints

Browserbase provides a highly consistent but obviously synthetic environment. During testing, it managed to perfectly align its Linux-based User-Agent strings with its JavaScript properties and Client Hints, avoiding the "franken-fingerprint" issues seen in some other providers.

However, the environment is extremely rigid. Every session, regardless of the target geography, utilized the exact same hardware profile, screen resolution, and font list. This total lack of entropy makes the traffic easily identifiable as coming from a uniform VM fleet.

The most significant issue observed was the direct exposure of the Playwright framework in the JavaScript environment. Furthermore, the browser failed to adjust its internal clock or language settings to match the proxy IP, leaving an obvious trail of mismatched geolocation signals for anti-bot systems to detect.

Good

In our tests, Browserbase demonstrated strong internal coherence across its OS and header configurations.
  • Coherent OS Triplet: The HTTP User-Agent, navigator.platform, and Client Hints all correctly identified as a Linux x86_64 environment without any technical contradictions.
  • Modern Header Composition: Headers included support for modern compression algorithms, which is essential for passing strict network-layer checks.
Accept-Encoding: gzip, deflate, br, zstd
User-Agent: Mozilla/5.0 (X11; Linux x86_64) ... Chrome/146.0.0.0
  • Aligned Desktop Profile: The environment consistently reported desktop-specific properties, such as zero touch points and a non-mobile flag, which matched the provided User-Agent.

Bad

Browserbase's performance was hampered by static attributes and a critical leak of its underlying automation framework.
🚨 Critical: Automation Signals Exposed
The browser explicitly identified itself as being controlled by Playwright. While common flags like navigator.webdriver were successfully set to false, a specific framework detection check returned true in 100% of sessions.
  • Playwright Detection: The flag Playwright: true was unmasked across all tested sessions.
  • Automation Indicators: This serves as a definitive signature for modern anti-bot systems to immediately block the traffic.
❌ Total Lack of Fingerprint Entropy
Every session produced the identical fingerprint hash (4bc52fa0...), regardless of the proxy location or time of the request.
  • Hardware: Hardware concurrency was fixed at 2, and Device memory at 8GB for every session.
  • GPU: All sessions reported the exact same Mesa Intel(R) UHD Graphics 630 renderer.
  • Resolution: Screen resolution was statically locked at 2560x1440.
❌ Systematic Timezone & Locale Mismatch
Browserbase failed to match the browser's internal location settings to the proxy IP address, which is a significant indicator of proxy usage.
  • Timezone: All sessions defaulted to America/Los_Angeles, even when using Japanese or European IPs.
  • Language: The environment remained locked to English (en-US) globally.
IP Geo: JP (Japan) -> Timezone: America/Los_Angeles (Expected: Asia/Tokyo)
IP Geo: DE (Germany) -> Timezone: America/Los_Angeles (Expected: Europe/Berlin)
IP Geo: RU (Russia) -> Language: en-US (Expected: ru-RU)
❌ Suspicious Environment Richness
The environment lacked the "clutter" of a real user system, exhibiting patterns typical of a bare-bones headless setup.
  • Fonts: Only one font, Univers CE 55 Medium, was detected across all Linux sessions.
  • Viewport: The available screen resolution and inner window size were identical (2560x1440), indicating a 0px browser chrome offset, a common trait of headless windowing.
  • Peripherals: Device counts were hardcoded to 1 microphone, 1 speaker, and 0 webcams without any variation.

Verdict: ❌ Poor

In our tests, Browserbase's Basic Stealth mode provided insufficient protection against modern anti-bot systems.

While the technical alignment of its Linux environment is well-executed, the critical exposure of the Playwright framework and the use of a perfectly static, "zero-entropy" fingerprint make it highly susceptible to detection. The failure to synchronize timezones with proxy exit nodes further reduces its stealth capabilities for global scraping tasks.

Lessons Learned: What This Benchmark Teaches Us

After running this benchmark across seven of the most prominent stealth browser APIs, several clear insights emerged that challenge the marketing promises of "undetectable" browsing and highlight the technical shortcuts many providers take.

1. Automation Flags are the "Silent Killer"

The most startling discovery was how many "Stealth" providers fail at the most basic level of their job: hiding the fact that they are automation frameworks. Developers often assume that paying for a managed browser guarantees the removal of internal bot signals, but the data suggests otherwise.
  • CDP Leakage: Multiple providers (ZenRows, Browser.cash, Browserless) leaked "CDP automation" flags in nearly 100% of sessions.
  • Framework Fingerprints: Browserbase, despite its positioning for AI agents, triggered a critical positive detection for the Playwright framework itself.
  • The Implication: If the browser explicitly tells the website "I am a bot," no amount of high-quality proxy rotation or hardware spoofing will prevent a block from a sophisticated anti-bot system.

2. Geographic Inconsistency is the Most Common "Lazy" Error

There is a massive disconnect between the network layer (the proxy IP) and the browser layer (the JavaScript environment) in the majority of tools tested. Even top-tier providers failed this fundamental test of realism.
  • Timezone Clashes: Scrapeless, Bright Data, and Browserless failed to synchronize the browser's internal timezone with the proxy location, often defaulting to America/New_York or UTC regardless of whether the IP was in Germany or Japan.
  • The "Franken-Fingerprint": Some tools claimed to be Linux-based in their User-Agent but exposed proprietary Windows fonts like Calibri and Segoe UI.
  • The Takeaway: Anti-bot systems don't just look at your IP; they look for contradictions. A Russian IP with a New York timezone and Windows fonts on a Linux OS is a high-confidence signal for fraud detection.

3. Static Patterns Defeat High-Quality Proxies

A common misconception is that high entropy (randomness) is hard to achieve. In reality, many providers rely on a "Static VM fleet" approach, where every single browser instance looks exactly like the last one.
  • Identical Hashes: Browserbase produced the exact same fingerprint hash for every single test session, providing zero entropy.
  • Geometric Anomalies: Many tools used "pinned" viewport sizes with impossible offsets (e.g., 0px browser chrome), a signature of headless windowing that never occurs in organic consumer browsing.

4. Specialization Dictates Stealth Quality

The benchmark revealed a widening gap between tools built for Scraping and those built for AI Agents.
  • Scraping-First Wins: Tools like Scrapeless and Bright Data, built for high-scale data extraction, generally offer more sophisticated hardware masking (GPU/CPU/Web Workers) than agent-focused tools.
  • Agent-First Struggles: Tools designed for long-running "Agentic" workflows often prioritize persistence and ease of use over deep fingerprint fortification, making them more vulnerable to aggressive anti-bots.
No provider is a silver bullet. The "best" tool is simply the one whose engineering failures don't overlap with your target's specific detection heuristics. In this market, trust the raw fingerprint data over the marketing "stealth" label.

Conclusion: A More Realistic Way to Think About Scraping Tools

This benchmark wasn’t about crowning a universal winner; it was about understanding the gap between expectation and reality. The reality is:
  • Some tools are genuinely good.
  • Some tools are workable with the right strategy.
  • Some tools have critical weaknesses you need to be aware of.
  • No tool is perfect.
  • And no price tag guarantees quality.
For developers, the best approach is to treat scraping tools like any other dependency:
  • Understand their strengths
  • Understand their blind spots
  • Choose based on your actual risk level
  • Mix providers when needed
  • Keep your own fallback strategies ready
Stealth scraping is no longer about “which provider is best”; it’s about knowing where each one fits into your system. Want to learn more about web scraping? Take a look at the links below!