Scrapy Selenium Guide: Integrating Selenium Into Your Scrapy Spiders
Originally designed for automated testing of web applications, over the years Selenium became the go-to headless browser option for Python developers looking to scrape JavaScript-heavy websites.
Selenium gives you the ability to scrape websites that need to be rendered or interacted with before they show all their data.
For years, Selenium was the most popular headless browser for web scraping; however, since the launch of Puppeteer and Playwright, it has begun to fall out of favour.
That being said, Selenium is still a powerful headless browser option and every web scraper should be aware of it.
Although you could use the Python Selenium library directly in your spiders, it can be a bit clunky, so in this guide we're going to use scrapy-selenium, which provides a much better integration with Scrapy.
This guide will walk through how to set up and use Scrapy Selenium.
Note: scrapy-selenium hasn't been maintained in over 2 years, so it is recommended that you also check out scrapy-playwright, as it is a more powerful headless browser integration and is actively maintained by the Scrapy community.
Integrating Scrapy Selenium
Getting set up with Scrapy Selenium is easier than getting set up with Scrapy Splash, but not as easy as Scrapy Playwright, because you need to install and configure a browser driver for scrapy-selenium to use, which can be a bit prone to bugs.
Base Scrapy Project
If you'd like to follow along with a project that is already set up and ready to go, you can clone our Scrapy project, which was made especially for this tutorial.
Once you have downloaded the code from our GitHub repo, you can just copy/paste in the code snippets we use below and see the code working correctly on your computer.
The only thing you need to do after downloading the code is set up a Python virtual environment. If you don't know how to do that, you can check out our guide here.
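For example, assuming you're on macOS or Linux, creating and activating a virtual environment would look something like this (the requirements.txt file name is an assumption, so check the repo for what it actually ships with):

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

On Windows, you would activate the environment with venv\Scripts\activate instead.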
If you prefer video tutorials, then check out the video version of this article.
1. Install Scrapy Selenium
To get started we first need to install scrapy-selenium by running the following command:
pip install scrapy-selenium
Note: You should use Python version 3.6 or greater. You also need one of the Selenium-compatible browsers.
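If you want to sanity check your environment before moving on, you can confirm your Python version and that the package installed correctly:

python --version
pip show scrapy-selenium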
2. Install ChromeDriver
To use scrapy-selenium, you first need to have a Selenium-compatible browser installed.
In this guide, we're going to use ChromeDriver, which you can download from here.
You will need to download the ChromeDriver version that matches the version of Chrome you have installed on your machine.
To find out what version you are using, go to Settings in your Chrome browser and then click About Chrome to find the version number.
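If you prefer the terminal, you can also print the browser version there; the exact command depends on your operating system, so treat these as typical examples rather than exact paths:

google-chrome --version
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --version

Once you've downloaded ChromeDriver, running chromedriver --version (from the folder you saved it in, or once it's on your PATH) lets you confirm the two versions match.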

We should put the downloaded chromedriver.exe in our Scrapy project here:
├── scrapy.cfg