Web Scraping and Crawling with Python

Extract a wealth of information embedded within static and dynamic websites.

Introduction

The internet is not just a collection of webpages; it’s a gigantic resource of interesting data. Being able to extract that data is a valuable skill. It’s certainly challenging, but with the right knowledge and tools, you’ll be able to leverage a wealth of information for your personal and professional projects.

Imagine building a web scraper that legally gathers information about potential houses to buy, a process that automatically fills in that tedious form to download a report, or a crawler that enriches an existing data set with weather information. In this hands-on workshop we’ll teach you how to accomplish just that using Python and a handful of packages.

You’ll learn about the concepts underlying HTML, CSS selectors, and HTTP requests, and how to inspect these using your browser’s developer tools. We’ll show you how to turn messy HTML into structured data sets, how to automate interacting with dynamic websites and forms, and how to set up crawlers that can traverse thousands or even millions of websites. Through plenty of exercises you’ll be able to apply this new knowledge to your own projects in no time.

What you’ll learn

  • The challenge of scraping messy HTML
  • The structure of GET and POST requests
  • How to target HTML elements and attributes using CSS selectors
  • The difference between a static and a dynamic website
  • How to extract data from a dynamic website
  • How to automate browser tasks such as clicking links and submitting forms
  • How to set up a scraping job and let it run at regular intervals
  • How to use the Python packages beautifulsoup4, mechanize, pyquery, scrapy, and selenium
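
To give you a small taste, here’s a minimal sketch of the kind of code you’ll write: using beautifulsoup4 and CSS selectors to turn a snippet of (deliberately messy) HTML into a structured data set. The HTML and the field names are made up for this example.

    from bs4 import BeautifulSoup

    # Messy HTML, hard-coded for illustration; in practice you'd download
    # it with an HTTP request. Note the inconsistent quoting and whitespace.
    html = """
    <div class=listing><h2>Cosy canal house</h2>
      <span class="price"> 450000</span></div>
    <div class="listing">
      <h2>Spacious loft</h2><span class='price'>600000 </span>
    </div>
    """

    soup = BeautifulSoup(html, "html.parser")

    # CSS selectors target elements by type (h2), class (.price), and so on.
    houses = [
        {
            "title": listing.select_one("h2").get_text(strip=True),
            "price": int(listing.select_one(".price").get_text(strip=True)),
        }
        for listing in soup.select("div.listing")
    ]

    print(houses)
    # [{'title': 'Cosy canal house', 'price': 450000},
    #  {'title': 'Spacious loft', 'price': 600000}]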

This workshop is for you because

  • You want to extract data from a static or dynamic webpage (and potentially many websites)
  • You want to transform messy HTML into a structured data set for your data visualisation or machine learning project
  • You want to automate a task that requires logging in, filling in forms, or downloading files

Schedule
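
Short code sketches illustrating the HTTP, dynamic-website, automated-browsing, and crawling blocks follow the schedule below.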

  • Introduction to web scraping
    • What’s the challenge anyway?
    • Common HTML elements and attributes
    • Static vs dynamic web pages
    • Working with Developer Tools in Firefox and Chrome
  • Targeting elements using CSS Selectors
    • Based on types, classes, and IDs
    • Based on parents, ancestors, and siblings
    • Based on attributes and pseudo-classes
  • HTTP basics
    • The structure of a GET request
    • Query parameters
    • Understanding status codes such as 200, 301, and 404
    • Why use a POST request?
  • From HTML to data
    • Converting data types
    • Extracting and combining multiple elements
    • Transforming tables into CSV
    • Traversing paginated results
    • Working with badly formatted HTML
  • Dynamic websites
    • Introduction to Selenium
    • Understanding headless browsing
    • Scraping JavaScript-rendered content
  • Automated browsing
    • Clicking links
    • Filling in forms
    • Logging in
    • Uploading and downloading files
  • Web crawling
    • Setting up a crawler
    • Traversing a single domain
    • Crawling across the internet
    • Scheduling crawl jobs
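
To preview the HTTP basics block: a minimal sketch of a GET request with query parameters. It assumes the requests package, which isn’t part of the workshop’s package list but ships with the Anaconda Distribution; httpbin.org is a public service for testing HTTP requests.

    import requests

    # Query parameters end up in the URL as ?key=value pairs.
    response = requests.get(
        "https://httpbin.org/get",
        params={"city": "Amsterdam", "rows": 10},
    )

    print(response.url)             # https://httpbin.org/get?city=Amsterdam&rows=10
    print(response.status_code)     # 200 (OK); other common codes: 301 (redirect), 404 (not found)
    print(response.json()["args"])  # {'city': 'Amsterdam', 'rows': '10'}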
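
For the dynamic websites block: a sketch of headless browsing with Selenium. It assumes Selenium 4 and a working geckodriver for Firefox; quotes.toscrape.com is a sandbox site meant for practising scraping, and its /js/ pages render their content with JavaScript.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Headless mode runs the browser without opening a visible window.
    options = webdriver.FirefoxOptions()
    options.add_argument("--headless")

    driver = webdriver.Firefox(options=options)
    try:
        # A plain HTTP request to this page would return hardly any content;
        # the quotes only appear after the browser executes the JavaScript.
        driver.get("https://quotes.toscrape.com/js/")

        # Once rendered, elements can be targeted with CSS selectors as usual.
        for quote in driver.find_elements(By.CSS_SELECTOR, "div.quote span.text"):
            print(quote.text)
    finally:
        driver.quit()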
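
For the automated browsing block: a sketch of logging in by submitting a form with mechanize. The URL and the field names username and password are hypothetical; adjust them to the form you’re targeting.

    import mechanize

    br = mechanize.Browser()
    br.set_handle_robots(False)  # skip robots.txt checking; only do this where permitted

    # Hypothetical login page.
    br.open("https://example.com/login")

    br.select_form(nr=0)      # select the first form on the page
    br["username"] = "jeroen"
    br["password"] = "secret"
    response = br.submit()

    print(response.geturl())  # the URL you land on after logging in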
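
And for the web crawling block: a sketch of a Scrapy spider that traverses paginated results, again on the quotes.toscrape.com sandbox.

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com/"]

        def parse(self, response):
            # Yield one record per quote, extracted with CSS selectors.
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }

            # Follow the "Next" link until the pagination runs out.
            next_page = response.css("li.next a::attr(href)").get()
            if next_page is not None:
                yield response.follow(next_page, callback=self.parse)

Saved as quotes_spider.py, this spider runs with: scrapy runspider quotes_spider.py -o quotes.csv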

Prerequisites

You’re expected to have some experience with programming in Python. Our workshop Introduction to Programming in Python is one option that can help you with that. Roughly speaking, if you’re familiar with the following Python syntax and concepts, then you’ll be fine:

  • assignment, arithmetic, boolean expression, tuple unpacking
  • bool, int, float, list, tuple, dict, str, type casting
  • in operator, indexing, slicing
  • if, elif, else, for, while
  • range(), len(), zip()
  • def, (keyword) arguments, default values
  • import, import ... as, from ... import
  • lambda functions, list comprehension
  • JupyterLab or Jupyter Notebook

Some experience with HTML and CSS is useful, but not required.

We’re going to use Python together with JupyterLab and the following packages:

  • beautifulsoup4
  • mechanize
  • pyquery
  • scrapy
  • selenium

The recommended way to get everything set up is to:

  • Download and install the Anaconda Distribution
  • Run the following command in a Jupyter notebook: ! conda install -y -c conda-forge beautifulsoup4 mechanize pyquery scrapy selenium

Alternatively, if you don’t want to use Anaconda, then you can install everything using pip. In any case, if running import bs4, mechanize, pyquery, scrapy, selenium doesn’t produce any errors, then you know you’ve set everything up correctly.
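
In a notebook cell, that check looks like this:

    # If this cell runs without a ModuleNotFoundError, you're all set.
    import bs4, mechanize, pyquery, scrapy, selenium
    print("All packages imported successfully")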

In addition, you should have a recent version of either Firefox or Chrome because we’re going to use their Developer Tools to inspect HTTP requests and HTML elements.

About your instructor

Jeroen Janssens
Principal Instructor, Data Science Workshops

Jeroen is an RStudio Certified Instructor who enjoys visualizing data, building machine learning models, and automating things using Python, R, or Bash. Previously, he was an assistant professor at Jheronimus Academy of Data Science and a data scientist at Elsevier in Amsterdam and various startups in New York City. He is the author of Data Science at the Command Line. Jeroen holds a PhD in machine learning from Tilburg University and an MSc in artificial intelligence from Maastricht University.

Clients

We’ve previously delivered this workshop at:

Elsevier

Photos and testimonials

Jamie Dobson
CEO, Container Solutions

Data Science Workshops came to our company to help us understand big data and the tools around it. They are clearly experts in this field and we enjoyed the course, learnt a lot and one day, when we have more big data, we hope to team up again. The trainer was personable, capable, and an expert. I couldn’t recommend them more highly.

Sign up

We can also organise this hands-on workshop as an online training for your team.