The Internet grows fast, and modern websites often use dynamic content loading mechanisms to provide the best user experience. On the other hand, that makes it harder to extract data from such web pages, as it requires the execution of the internal Javascript in the page context while scraping. Let's review several conventional techniques that allow data extraction from dynamic websites using Python.
What is a dynamic website?
A dynamic website is a type of website that can update or load content after the initial HTML load. The browser receives basic HTML with JS and then loads content using the received Javascript code. Such an approach increases page load speed and avoids reloading the same layout each time you'd like to open a new page.
Usually, dynamic websites use AJAX to load content dynamically, or even the whole site is based on a Single-Page Application (SPA) technology.
In contrast to dynamic websites, static websites contain all the requested content on page load.
A great example of a static website is example.com: the whole content of this website is loaded as plain HTML during the initial page load.
To demonstrate the basic idea of a dynamic website, we can create a web page that contains dynamically rendered text. It will not include any request to get information, just a render of different HTML after the page load:
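A minimal sketch of such a page (our test.html) might look like this:

```html
<html>
<head>
    <title>Dynamic Web Page Example</title>
    <script>
        // Replace the initial text once the page has loaded
        window.addEventListener('load', function () {
            document.getElementById('test').innerHTML = 'I ❤️ ScrapingAnt';
        });
    </script>
</head>
<body>
    <div id="test">Web Scraping is hard</div>
</body>
</html>
```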
All we have here is an HTML file with a single <div> in the body that contains the text "Web Scraping is hard", but after the page load, that text is replaced with text generated by the Javascript.
To prove this, let's open this page in the browser and observe the dynamically replaced text.
Alright, so the browser displays the text, and HTML tags wrap it.
Can't we use BeautifulSoup or LXML to parse it? Let's find out.
Extract data from a dynamic web page
BeautifulSoup is one of the most popular Python libraries for HTML parsing. The vast majority of Python web scraping tutorials use this library to extract required content from HTML.
Let's use BeautifulSoup to extract the text inside <div> from our sample above:
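A minimal version of that snippet might look like this, assuming test.html sits in the current working directory:

```python
import os
from bs4 import BeautifulSoup

# Open the test HTML file from the local directory
test_file = open(os.path.join(os.getcwd(), 'test.html'))

# Create a BeautifulSoup instance from the raw (non-rendered) HTML
soup = BeautifulSoup(test_file.read(), 'html.parser')

# Find the tag with id "test" and extract the text from it
print(soup.find(id='test').get_text())
```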
This code snippet uses the os library to open our test HTML file (test.html) from the local directory and creates an instance of BeautifulSoup stored in the soup variable. Using soup, we find the tag with id test and extract the text from it.
When opening the page in the browser, we've seen that the content of the test page is "I ❤️ ScrapingAnt", but the code snippet output is the following:
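```
Web Scraping is hard
```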
And the result differs from our expectation (unless you've already figured out what is going on there). Everything is correct from the BeautifulSoup perspective: it parsed the data from the provided HTML file, but we want to get the same result as the browser renders. The reason is the dynamic Javascript, which was not executed during HTML parsing.
We need the HTML to be run in a browser to see the correct values and then be able to capture those values programmatically.
Below you can find four different ways to execute a dynamic website's Javascript and provide valid data for an HTML parser: Selenium, Pyppeteer, Playwright, and Web Scraping API.
Selenium: web scraping with a webdriver
Selenium is one of the most popular web browser automation tools for Python. It allows communication with different web browsers by using a special connector - a webdriver.
To use Selenium with Chrome/Chromium, we'll need to download the webdriver from the repository and place it into the project folder. Don't forget to install Selenium itself by executing:
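```bash
pip install selenium
```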
The Selenium instantiation and scraping flow is the following:
- define and setup Chrome path variable
- define and setup Chrome webdriver path variable
- define browser launch arguments (to use headless mode, proxy, etc.)
- instantiate a webdriver with the options defined above
- load a webpage via instantiated webdriver
From the code perspective, it looks like the following:
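Here's a sketch of that flow; the chromedriver path is a placeholder for wherever you've placed the binary:

```python
import os
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service

# Define browser launch arguments (headless mode, proxy, etc.)
options = Options()
options.add_argument('--headless')
# options.binary_location = '/path/to/chrome'  # set if Chrome lives in a custom location

# Instantiate a webdriver with the options defined above
service = Service(executable_path='/path/to/chromedriver')  # placeholder path
driver = webdriver.Chrome(service=service, options=options)

# Load the test webpage via the instantiated webdriver
driver.get('file://' + os.path.join(os.getcwd(), 'test.html'))

# Hand the rendered HTML to BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.find(id='test').get_text())

driver.quit()
```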
And finally, we'll receive the required result:
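```
I ❤️ ScrapingAnt
```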
Selenium usage for dynamic website scraping with Python is not complicated, and it allows you to choose a specific browser and version, but it consists of several moving components that have to be maintained. The code itself also contains some boilerplate parts, like the setup of the browser and the webdriver.
I like to use Selenium for my web scraping projects, but you can find easier ways to extract data from dynamic web pages below.
Pyppeteer: Python headless Chrome
Pyppeteer is an unofficial Python port of Puppeteer, the JavaScript (headless) Chrome/Chromium browser automation library. It is capable of doing mostly the same things Puppeteer can, but using Python instead of NodeJS.
Puppeteer is a high-level API to control headless Chrome, so it allows you to automate actions you'd otherwise do manually in the browser: copy a page's text, download images, save a page as HTML or PDF, etc.
To install Pyppeteer you can execute the following command:
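```bash
pip install pyppeteer
```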
The usage of Pyppeteer for our needs is much simpler than Selenium:
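A sketch of the same extraction with Pyppeteer (it downloads a bundled Chromium on first launch):

```python
import asyncio
import os
from bs4 import BeautifulSoup
from pyppeteer import launch

async def main():
    # Launch a headless Chromium browser and open a new page
    browser = await launch()
    page = await browser.newPage()

    # Load the local test HTML file into the page
    await page.goto('file://' + os.path.join(os.getcwd(), 'test.html'))

    # Extract the final rendered HTML for further BeautifulSoup processing
    html = await page.content()
    await browser.close()

    soup = BeautifulSoup(html, 'html.parser')
    print(soup.find(id='test').get_text())

asyncio.run(main())
```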
I've tried to comment on every atomic part of the code for a better understanding. However, generally, we've just opened a browser page, loaded a local HTML file into it, and extracted the final rendered HTML for further BeautifulSoup processing.
As expected, the result is the following:
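```
I ❤️ ScrapingAnt
```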
We did it again, and this time we didn't have to worry about finding, downloading, and connecting a webdriver to a browser. Though, Pyppeteer looks abandoned and not properly maintained. This situation may change in the near future, but I'd suggest looking at a more powerful library.
Playwright: Chromium, Firefox and Webkit browser automation
Playwright can be considered an extended Puppeteer, as it allows using more browser types (Chromium, Firefox, and Webkit) to automate modern web app testing and scraping. You can use the Playwright API in JavaScript & TypeScript, Python, C#, and Java. And it's excellent that the original Playwright maintainers support Python.
The API is almost the same as Pyppeteer's, but it has both sync and async versions.
Installation is as simple as always:
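```bash
pip install playwright
playwright install  # download the browser binaries
```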
Let's rewrite the previous example using Playwright.
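A sketch of the same extraction using Playwright's sync API:

```python
import os
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Launch a headless Chromium browser (p.firefox / p.webkit also work)
    browser = p.chromium.launch()
    page = browser.new_page()

    # Load the local test HTML file and grab the rendered HTML
    page.goto('file://' + os.path.join(os.getcwd(), 'test.html'))
    html = page.content()
    browser.close()

soup = BeautifulSoup(html, 'html.parser')
print(soup.find(id='test').get_text())
```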
As a good tradition, we can observe our beloved output:
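```
I ❤️ ScrapingAnt
```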
We've gone through several different data extraction methods with Python, but is there a more straightforward way to do this job? How can we scale our solution and scrape data with several threads?
Meet the web scraping API!
Web Scraping API
The ScrapingAnt web scraping API provides the ability to scrape dynamic websites with only a single API call. It already handles headless Chrome and rotating proxies, so the response will already contain Javascript-rendered content. ScrapingAnt's proxy pool prevents blocking and provides a constant and high data extraction success rate.
Usage of the web scraping API is the simplest option and requires only basic programming skills.
You do not need to maintain the browser, libraries, proxies, webdrivers, or any other aspect of the web scraper, and you can focus on the most exciting part of the work: data analysis.
As the web scraping API runs in the cloud, we have to serve our file somewhere to test it. I've created a repository with a single file: https://github.com/kami4ka/dynamic-website-example/blob/main/index.html
To check it out as HTML, we can use another great tool: HTMLPreview
The final test URL to scrape dynamic web data looks like the following: http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html
The scraping code itself is the simplest one across all four described libraries. We'll use ScrapingAntClient library to access the web scraping API.
Let's install it first:
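```bash
pip install scrapingant-client
```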
And use the installed library:
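A minimal sketch with the scrapingant-client library; the token is a placeholder, and we assume the hosted page keeps the same id="test" markup as our local sample:

```python
from bs4 import BeautifulSoup
from scrapingant_client import ScrapingAntClient

client = ScrapingAntClient(token='<YOUR_SCRAPINGANT_API_TOKEN>')  # placeholder token

# Fetch the page - the Javascript is rendered in ScrapingAnt's cloud browser
result = client.general_request(
    'http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html'
)

# Parse the already-rendered HTML locally
soup = BeautifulSoup(result.content, 'html.parser')
print(soup.find(id='test').get_text())
```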
To get your API token, please visit the Login page to authorize in the ScrapingAnt User panel. It's free.
And the result is still the required one:
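```
I ❤️ ScrapingAnt
```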
All the headless browser magic happens in the cloud, so you only need to make an API call to get the result.
Check out the documentation for more info about ScrapingAnt API.
Summary
Today we've checked four free tools that allow scraping dynamic websites with Python. All these libraries use a headless browser (or an API with a headless browser) under the hood to correctly render the internal Javascript inside an HTML page. Check each tool's documentation to find out more and choose the handiest one for your task.
Happy web scraping, and don't forget to use proxies to avoid blocking 🚀
Learning Outcomes
- To understand the benefits of using async + await compared to simply web scraping with the requests library.
- Learn how to create an asynchronous web scraper from scratch in pure Python using asyncio and aiohttp.
- Practice downloading multiple webpages using Aiohttp + Asyncio and parsing HTML content per URL with BeautifulSoup.
The following Python installations are for a Jupyter Notebook; however, if you are using the command line, simply exclude the ! symbol:
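```
!pip install aiohttp
!pip install nest_asyncio
!pip install beautifulsoup4
```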
Note: The only reason why we use nest_asyncio is because this tutorial is written in a Jupyter notebook. If you wanted to write the same web scraper code in a Python file, then you wouldn't need to install or run the following code block:
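```python
import nest_asyncio

# Allow asyncio.run() to work inside the notebook's already-running event loop
nest_asyncio.apply()
```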
Why Use Asynchronous Web Scraping?
Writing synchronous web scrapers is easier, and the code is less complex; however, they're incredibly slow.
This is because all of the requests must wait for the current request to finish, one by one. Only one request can be running at a given time.
In contrast, asynchronous web requests are able to execute without depending on previous requests within a queue or for loop. Asynchronous requests happen simultaneously.
How Is Asynchronous Web Scraping Different To Using Python Requests?
Instead of thinking about creating a for loop of N requests, you need to think about creating an event loop. For example, the NodeJS environment, by design, executes in a single-threaded event loop.
However for Python, we will manually create an event loop with asyncio.
Inside of your event loop, you can set a number of tasks to be completed, and every task will be created and executed asynchronously.
How To Web Scrape A Single Web Page Using Aiohttp
Firstly, we define a client session with aiohttp's ClientSession.
Then, with our session, we execute a GET request on a single URL.
Thirdly, notice how we use the await keyword in front of response.text().
Also, note that every asynchronous function is defined with async def.
Finally, we run asyncio.run(main()); this creates an event loop and executes all tasks within it.
After all of the tasks have been completed, the event loop is automatically destroyed. All of these steps appear together in the sketch below.
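A minimal sketch putting those steps together; the URL is just a placeholder:

```python
import asyncio
import aiohttp

async def main():
    # Define a client session with aiohttp
    async with aiohttp.ClientSession() as session:
        # Execute a GET request on a single URL
        async with session.get("https://example.com") as response:
            # Await the response body; control returns to the event loop
            # while the download is in progress
            html = await response.text()
            print(html[:250])

# asyncio.run() creates an event loop, executes main() within it,
# and destroys the loop once all tasks are complete
asyncio.run(main())
```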
How To Web Scrape Multiple Pages Using Aiohttp
When scraping multiple pages with asyncio and aiohttp, we'll use the following pattern to create multiple tasks that will be executed simultaneously within an asyncio event loop (see the example further below).
To start with, we create an empty list, and then for every URL we attach an uninvoked coroutine function, an AioHTTP session, and the URL to the list.
The asyncio.gather(*tasks) call basically tells asyncio to keep running the event loop until all of the functions within the list have completed. It will return a list of results that is the same length as the number of tasks.
Now that we know how to create and execute multiple tasks, let’s see this in action:
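A sketch of the pattern with placeholder URLs:

```python
import asyncio
import aiohttp

async def fetch(session, url):
    # Execute a GET request and return the raw HTML for this URL
    async with session.get(url) as response:
        return await response.text()

async def main(urls):
    async with aiohttp.ClientSession() as session:
        # Attach an uninvoked coroutine per URL - nothing runs yet
        tasks = [fetch(session, url) for url in urls]
        # Run all tasks concurrently and gather their results in order
        return await asyncio.gather(*tasks)

urls = ["https://example.com", "https://example.org", "https://example.net"]
results = asyncio.run(main(urls))
print(f"Fetched {len(results)} pages")
```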
Adding HTML Parsing Logic To The Aiohttp Web Scraper
As well as collecting the HTML response from multiple webpages, parsing the web page can be useful for SEO and HTML Content Analysis.
Therefore let’s create second function which will parse the HTML page and will extract the title tag.
Conclusion
Asynchronous web scraping is more suitable when you have a larger number of URLs that need to be processed quickly.
Also, notice how easy it is to add an HTML parsing function with BeautifulSoup, allowing you to easily extract specific elements on a per-URL basis.