How to speed up web scraping with Go (Golang) and concurrency?

Reading time ~5 minutes

I’ve been developing Python web scrapers for years now. Python’s simplicity is great for quick prototyping, and many amazing libraries can help you build a scraper and parse the results (Requests, Beautiful Soup, Scrapy, …). Yet once you start looking into your scraper’s performance, Python can be somewhat limited, and Go is a great alternative!

Why Go?

When you’re trying to speed up fetching information from the web (whether for HTML scraping or plain API consumption), two kinds of optimization are possible:

  • speed up the web resource download (e.g. download http://example.com/hello.html)
  • speed up the parsing of the information you retrieved (e.g. get all urls available in hello.html)

Parsing can be improved by reworking your code, using a more efficient parser like lxml, or allocating more resources to your scraper. Still, parsing optimizations are often negligible compared to the real bottleneck: network access (i.e. downloading the web pages).

Consequently, the solution is to download the web resources in parallel. This is where Go is a great help!

Concurrent programming is a complicated field, and Go makes it pretty easy. Go is a modern language that was created with concurrency in mind. Python, on the other hand, is an older language, and writing a concurrent web scraper in Python can be tricky, even though Python has improved a lot in this regard recently.

Go has other advantages, but let’s talk about those in another article!

Install Go

I already wrote a short tutorial about how to install Go on Ubuntu.

If you need to install Go on another platform, feel free to read the official docs.

A simple concurrent scraper

Our scraper will try to download a list of web pages we give it, and check that each one returns a 200 HTTP status code (meaning the server returned the page without an error). We’re not dealing with HTML parsing here, since the goal is to focus on the critical point: improving network access performance. Now let’s write some code!

Final code


/*
Open a series of urls.

Check status code for each url and store urls I could not
open in a dedicated array.
Fetch urls concurrently using goroutines.
*/

package main

import (
    "fmt"
    "net/http"
)

// -------------------------------------

// Custom user agent.
const (
    userAgent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) " +
        "AppleWebKit/537.36 (KHTML, like Gecko) " +
        "Chrome/53.0.2785.143 " +
        "Safari/537.36"
)

// -------------------------------------

// fetchUrl opens a url with the GET method and sets a custom user agent.
// If the url cannot be opened, it is logged to a dedicated channel.
func fetchUrl(url string, chFailedUrls chan string, chIsFinished chan bool) {

    // Inform the channel chIsFinished that url fetching is done (no
    // matter whether successful or not). Defer triggers only once
    // we leave fetchUrl():
    defer func() {
        chIsFinished <- true
    }()

    // Open url.
    // Need to use http.Client in order to set a custom user agent:
    client := &http.Client{}
    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        chFailedUrls <- url
        return
    }
    req.Header.Set("User-Agent", userAgent)
    resp, err := client.Do(req)
    if resp != nil {
        // Always release the underlying connection:
        defer resp.Body.Close()
    }

    // If url could not be opened or did not return a 200 status,
    // we inform the channel chFailedUrls:
    if err != nil || resp.StatusCode != 200 {
        chFailedUrls <- url
        return
    }

}

func main() {

    // Create a random urls list just as an example:
    urlsList := [10]string{
        "http://example1.com",
        "http://example2.com",
        "http://example3.com",
        "http://example4.com",
        "http://example5.com",
        "http://example10.com",
        "http://example20.com",
        "http://example30.com",
        "http://example40.com",
        "http://example50.com",
    }

    // Create 2 channels, 1 to track urls we could not open
    // and 1 to inform url fetching is done:
    chFailedUrls := make(chan string)
    chIsFinished := make(chan bool)

    // Open all urls concurrently using the 'go' keyword:
    for _, url := range urlsList {
        go fetchUrl(url, chFailedUrls, chIsFinished)
    }

    // Receive messages from every concurrent goroutine. If a url
    // fails, append it to the failedUrls slice:
    failedUrls := make([]string, 0)
    for i := 0; i < len(urlsList); {
        select {
        case url := <-chFailedUrls:
            failedUrls = append(failedUrls, url)
        case <-chIsFinished:
            i++
        }
    }

    // Print all urls we could not open:
    fmt.Println("Could not fetch these urls: ", failedUrls)

}


Explanations

This code is a bit longer than what we could write in a language like Python, but as you can see it is still very reasonable. Go is a statically typed language, so we need a couple more lines dedicated to variable declarations. But measure how long the script takes to run, and you’ll see how rewarding it is! (A quick way to time the run from inside the program is sketched just below.)
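
If you want to time the run from inside the program itself, here is a minimal sketch using time.Now and time.Since; the time.Sleep call is only a placeholder for the goroutine-launching and channel-draining loop of the scraper above.

package main

import (
    "fmt"
    "time"
)

func main() {
    start := time.Now()

    // Launch the goroutines and drain the channels exactly as in the
    // scraper above (replaced here by a placeholder sleep):
    time.Sleep(500 * time.Millisecond)

    // time.Since returns the duration elapsed since 'start':
    fmt.Println("Elapsed:", time.Since(start))
}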

We chose 10 random urls as an example.

Here, the magic keywords enabling us to use concurrency are go, chan, and select (a minimal standalone example follows the list below):

  • go creates a new goroutine, which means fetchUrl will be executed within a new concurrent goroutine each time.
  • chan is the type representing a channel. Channels help us communicate among goroutines (main being a goroutine itself as well).
  • select ... case is a switch ... case dedicated to receiving messages sent through channels. The program stays in this loop until every goroutine has sent a message (either to say that url fetching is done, or to report that it failed).
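
To see these three keywords in isolation, here is a minimal standalone sketch, independent from the scraper above (the channel names chMsg and chDone are made up for the example).

package main

import "fmt"

func main() {
    chMsg := make(chan string)
    chDone := make(chan bool)

    // 'go' starts this anonymous function in its own goroutine:
    go func() {
        // Sending on a channel ('<-') communicates with main:
        chMsg <- "hello from a goroutine"
        chDone <- true
    }()

    // 'select' blocks until one of its channels delivers a value:
    for {
        select {
        case msg := <-chMsg:
            fmt.Println(msg)
        case <-chDone:
            fmt.Println("goroutine finished")
            return
        }
    }
}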

We could have written this scraper without any channel, that is, create goroutines and not expect any message back from them (for instance if every goroutine ended up storing its result in a database). But be careful: in Go, when the main function returns, the program exits immediately and any goroutines still running are terminated, so you would still need some way of waiting for them. In real life it is almost always necessary to use channels (or a sync.WaitGroup, sketched below) to make our goroutines talk to each other and to let main wait for them.
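
For the record, the standard way to wait for such fire-and-forget goroutines without channels is sync.WaitGroup. Here is a minimal sketch under that assumption (a simplified fetch with http.Get, without the custom user agent, just to illustrate the waiting):

package main

import (
    "fmt"
    "net/http"
    "sync"
)

func main() {
    urls := []string{"http://example1.com", "http://example2.com", "http://example3.com"}

    var wg sync.WaitGroup
    for _, url := range urls {
        wg.Add(1)
        go func(u string) {
            defer wg.Done()
            resp, err := http.Get(u)
            if err != nil {
                fmt.Println("could not fetch", u)
                return
            }
            resp.Body.Close()
            fmt.Println(u, "->", resp.StatusCode)
        }(url)
    }

    // Wait() blocks until every goroutine has called Done(), so main
    // does not return (and kill them) while they are still running:
    wg.Wait()
}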

Don’t forget to limit speed

Here speed is our goal, and rate limiting is not a concern because we’re scraping a set of different urls only once. However, if you need to scrape the same urls repeatedly (as in API consumption, for example), you’ll probably have to stay under a certain number of requests per second. In that case you’ll have to throttle your requests (maybe we’ll talk about it in more detail in another article!); a quick sketch of one possible approach follows.
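
Here is one possible sketch using a time.Ticker to cap the launch rate at two requests per second; the urls and the limit are arbitrary, and in the real scraper you would call fetchUrl inside the goroutine instead of the Println.

package main

import (
    "fmt"
    "time"
)

func main() {
    urls := []string{"http://example1.com", "http://example2.com", "http://example3.com"}

    // One tick every 500ms, i.e. at most 2 goroutine launches per second:
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()

    chIsFinished := make(chan bool)

    for _, url := range urls {
        <-ticker.C // block until the next tick before launching a fetch
        go func(u string) {
            fmt.Println("fetching", u) // call fetchUrl(u, ...) here in the real scraper
            chIsFinished <- true
        }(url)
    }

    // Wait for every goroutine to finish:
    for range urls {
        <-chIsFinished
    }
}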

Happy scraping!

