Scraping, Geocoding, and Mapping Points with Scrapy, Geopy, and Leaflet

Displaying points of interest on maps is fun and can be an informative first step in geospatial analysis.  The task is relatively straightforward when the data already contain the points’ latitudes and longitudes.  Sometimes, however, the data don’t contain this information, e.g. when you simply have a list of addresses.  In this case, you can geocode the addresses to determine their latitudes and longitudes in order to display the points on a map.

Let’s tackle this situation to demonstrate how to geocode addresses and display the points on a map.  Since I live in California, I searched online for points of interest in the western region of the United States and found this page of attractions for RV travelers, organized by state.  If you prefer to skip this web scraping section, you can find the resulting data, along with all of the files associated with this tutorial, in this Github repository.

[Screenshot: the page of attractions, organized by state]

Each listing includes a name/title and the address, phone number, cost, website, and latitude and longitude for the attraction.  Let’s scrape this information from the page so we can geocode the addresses and also display the additional details about the attractions.  While the listings include latitude and longitude information, let’s ignore it for now and pretend we only have the addresses so we can demonstrate geocoding.

Project Environment

To begin, let’s create a project folder and a Python virtual environment for this tutorial:

mkdir points_of_interest
cd points_of_interest

conda create -n poi python=2.7 scrapy pandas geopy

source activate poi

pip install --upgrade lxml
pip install --upgrade cryptography
pip install --upgrade parsel
pip install --upgrade w3lib
pip install --upgrade twisted
pip install --upgrade pyOpenSSL
conda install -c conda-forge scrapy
pip install --upgrade geopy

The conda create statement creates a Python 2.7 virtual environment named poi and installs scrapy, pandas, and geopy in the environment.  The source activate statement activates the environment.  The additional pip install --upgrade statements ensure the main underlying packages are up-to-date (some of them were not for me, and I needed to run these commands before scrapy and geopy would work correctly).


Scrape the Data

scrapy is a great Python package for web scraping. Let’s use it to scrape the data from the page of western attractions. To create a new scrapy project named western_attractions, run the following command:

scrapy startproject western_attractions

To scrape the page, we need to create a spider, so run the following command to create the spider file:

touch western_attractions/spiders/

Before attempting to scrape the data from the page, let’s inspect the page’s elements using Chrome’s inspect console. From here, we can see that the attraction titles are in h4 elements and the data are inside p elements inside blockquote elements.

[Screenshot: Chrome’s inspect console showing the h4 and blockquote elements]

Now that we know which elements contain the data we want to extract, we can use scrapy shell to test selector commands, methods of extracting specific pieces of data from a page, before incorporating them into the spider. To use scrapy shell on this page, run the following command:

scrapy shell ''

Now let’s see if we can extract the attraction titles by selecting the text inside h4 elements. To do so, run the following command:

response.css('h4::text').extract()
Similarly, we can select all of the blockquotes and the website links associated with the attractions with the following commands:

response.css('blockquote')
response.css('blockquote p a::attr(href)').extract()

Now that we have an idea of the commands we’ll need to use to extract the data we’re interested in, let’s start incorporating them into a spider. Open western_attractions/spiders/ in an editor and add the following code:

import re
import scrapy

class WesternAttractionsSpider(scrapy.Spider):
    name = 'western_attractions'
    attractions_url = ''
    start_urls = [attractions_url]

    def parse(self, response):
        # Extract names/titles of attractions
        h4s = response.css("h4::text").extract()
        h4s = [val.encode('ascii','ignore') for val in h4s]
        h4s = [re.sub(r"\s+", " ", val).strip() for val in h4s]
        h4s = [val for val in h4s if "Address" not in val and "Phone" not in val]
        h4s = filter(None, h4s)

        # Extract website URLs for attractions
        links = response.css('blockquote p a::attr(href)').extract()

        # Extract details associated with each attraction
        for idx, bq in enumerate(response.css("blockquote")):
            data = bq.css("p.smaller_par::text").extract()
            data = [val.encode('ascii','ignore') for val in data]
            data = [re.sub(r"\s+", " ", val).strip() for val in data]
            data = filter(None, data)
            # Append this attraction's website URL from the links list
            data.append(links[idx])
            if len(data) == 5:
                address = data[0]
                phone = data[1].replace("Phone: ", "")
                cost = data[2]
                lat_lon = data[3]
                website = data[4]
                yield { "title": h4s[idx], "address": address, "phone": phone,
                    "cost": cost, "lat_lon": lat_lon, "website": website }

To explain this code, let’s start at the top and work our way down. First, we import the re and scrapy packages so we can use them in the script. We need the re package to perform some pattern-based substitutions to clean the raw data. We need the scrapy package to create the spider.

We create a spider by creating a class named WesternAttractionsSpider, which is a subclass of scrapy.Spider. We give our spider a name, western_attractions, and provide the url of the page we want to scrape.

The rest of the code, in the parse method, specifies how to parse the page content and extract the data we’re interested in. The first code block extracts the names/titles of the attractions contained in the h4 elements. You’ll notice the first line of code is the same line we used in scrapy shell. The remaining four lines of code clean the data — the first one removes non-ascii characters, the second one removes extra spacing in the strings, the third one removes strings that don’t contain the names/titles of attractions, and the fourth one removes blank elements in each list.

The middle line of code extracts all of the website urls for the attractions. While this list of urls is currently separate from the rest of the attractions data, we’ll associate the urls with the attractions in the next code block.

The final code block extracts all of the details associated with each attraction, which are contained in blockquote elements. We iterate over the blockquote elements to extract the details associated with each attraction. By inspecting the elements in Chrome’s inspect console, we know the details are in p elements that all have the same class, smaller_par. Therefore, the CSS selector, bq.css("p.smaller_par::text").extract(), generates a list of all of the details inside the blockquote element.

Similar to the code for the h4 elements, the next three lines clean the values in the list of details. The next line uses the blockquote index to identify the website url associated with the attraction and appends the url to the list of details associated with the attraction. Given the details we want to extract from the page, i.e. address, phone, cost, website, and latitude/longitude, the list associated with each attraction should contain five elements. We test the length of the list to ensure we only extract records that contain five data elements.

Finally, to extract the data we need to yield a dictionary, so we separate the data in the list into five variables, clean up a few phone number entries, and yield a dictionary with the attraction’s title and five details.

[Screenshot: the completed spider code]

To scrape the page and save the data in a JSON file, run the following command:

scrapy crawl western_attractions -o western_attractions.json


Geocode the Addresses

Now that we have the addresses in a JSON file, we can focus on geocoding them. We’ll use geopy, with the Google Maps V3 API, to geocode the addresses. To use the Google Maps V3 API, you need to acquire an API key, which you can do here.

Now that you have a Google Maps V3 API key, create a new script and add the lines of code shown below.

First, we import GoogleV3 and GeocoderTimedOut from geopy to perform geocoding and catch timeout errors. Next, we import pandas to manage the data, including reading the input JSON file, merging the input data with the newly geocoded data, and writing the data to output files.

#!/usr/bin/env python
from geopy.geocoders import GoogleV3
from geopy.exc import GeocoderTimedOut

import pandas as pd

The next line initializes a Google locator / geocoder we’ll use to identify the latitudes and longitudes associated with our addresses. Be sure to replace YOUR_API_KEY with the API key you generated in the previous step.

google_locator = GoogleV3(api_key="YOUR_API_KEY")

Next, let’s create a function we can use to geocode the addresses. The geocoder may not be able to locate and geocode an address. It might time out as well. We’ll use try except blocks to handle these cases so the script doesn’t fail for one of these reasons.

If the geocoder returns a location, then we’ll separate the address, latitude, and longitude into separate variables and return them.

def geocode_address(address, geolocator):
    """Geocode an address with the Google Maps v3 API."""
    location = None
    try:
        location = geolocator.geocode(address, exactly_one=True, timeout=5)
    except GeocoderTimedOut as e:
        print("GeocoderTimedOut: geocode failed on input %s with message %s" % (address, e.msg))
    except AttributeError as e:
        print("AttributeError: geocode failed on input %s with message %s" % (address, e.msg))
    if location:
        address_geo = location.address
        latitude = location.latitude
        longitude = location.longitude
        return address_geo, latitude, longitude
    else:
        print("Geocoder couldn't geocode the following address: %s" % address)
        return None, None, None

When we map the data, it will be fun and helpful to be able to color it by the state in which the attraction is located. We can extract the two-letter state abbreviation from the address with basic string parsing, but through trial and error I found that some of the addresses didn’t contain the two-letter state abbreviation. Let’s create the following helper function to convert all of the state values into their two-letter abbreviations.

def convert_state_to_two_letter(state_abbreviation):
    if state_abbreviation == 'California':
        state_abbreviation = 'CA'
    elif state_abbreviation == 'Idaho':
        state_abbreviation = 'ID'
    elif state_abbreviation in ('Boulder', 'Tahoe,'):
        state_abbreviation = 'NV'
    return state_abbreviation

[Screenshot: the geocoding script]

To begin, let’s read the JSON data we generated in the web scraping section into a pandas DataFrame and then use our convert_state_to_two_letter function to create a new column that contains the two-letter state abbreviations.

df = pd.read_json('western_attractions/western_attractions.json', orient='records')
df['state'] = df['address'].apply(lambda address: convert_state_to_two_letter(address.split()[-2]))

Now it’s time to use our geocode_address function to identify the latitudes and longitudes of our addresses. We use a for loop to iterate over the DataFrame rows, each of which represents an attraction, and use the geocode_address function to geocode the address. We collect the geocoding results into a dictionary and, if the function returns a geocoded address, we append the results into a list so the final result will be a list of dictionaries we can convert into a new pandas DataFrame.

geo_results = []
for index, row in df.iterrows():
    result = geocode_address(row.loc['address'], google_locator)
    d = {'index': index, 'address_geo': result[0], 'latitude': result[1],
        'longitude': result[2]}
    if d['address_geo'] is not None:
        geo_results.append(d)
We want to merge the latitude and longitude data with the existing data about each attraction, so we convert the geocoded data into a DataFrame and then inner join the two DataFrames together. We’re using an inner join in this tutorial so we can proceed with attractions that were successfully geocoded. If you need to keep all of your original addresses, even if they can’t be geocoded, then you can use a left join.

geo = pd.DataFrame(geo_results)
geo.set_index('index', inplace=True)
df_geo = df.merge(geo, how='inner', left_index=True, right_index=True)

Now that the resulting DataFrame contains latitude and longitude data for each attraction, in addition to the original details, we can write the data to files. The CSV file is simply a convenient format for tabular data and spreadsheet programs. In this case, since our intention is to map the data, the more important file to write is the JSON file.

df_geo.to_csv('western_attractions_geocoded.csv', index=False)
df_geo.to_json('western_attractions_geocoded.json', orient='records')

[Screenshot: the geocoding script (continued)]

The input and output filenames are hardcoded in the script (feel free to make the script more flexible with sys.argv[]). To geocode the addresses, run the following command:


The script will read in western_attractions/western_attractions.json and then write out western_attractions_geocoded.csv and western_attractions_geocoded.json.

Convert JSON to GeoJSON

A JSON file isn’t quite what we need to start mapping the data. To map the data, we need to convert the JSON data into GeoJSON. You can choose from several tools to convert JSON to GeoJSON, including:

Javascript geojson
Python Script

Since this probably won’t be the last time we work on a mapping project and need to convert JSON to GeoJSON, let’s copy the Python code from the last link listed above into a script that we’ll be able to reuse. Here’s the script we’ll use to create the GeoJSON:

#!/usr/bin/env python
from sys import argv
import simplejson as json

script, in_file, out_file = argv

data = json.load(open(in_file))

geojson = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "geometry": {
            "type": "Point",
            "coordinates": [d["longitude"], d["latitude"]],
        },
        "properties": d,
    } for d in data]
}

output = open(out_file, 'w')
json.dump(geojson, output)

print geojson

[Screenshot: the JSON-to-GeoJSON script]

To convert the JSON into GeoJSON, run the following command:

./ western_attractions_geocoded.json western_attractions_geocoded.geojson

The script prints the resulting GeoJSON to the screen, in addition to writing it to the output file, so you’ll know when the script’s finished.

Make a Map

Now that we have a GeoJSON file that contains details about the western attractions, including their latitudes and longitudes, we can work on displaying the data on a map. Here again we have lots of options:


All of these tools are great options. In this tutorial, we’ll use Leaflet, along with Mapbox, to display our attractions on a map. Leaflet is convenient because it has easy-to-learn syntax and helpful tutorials, but it’s an arbitrary choice, so feel free to use a different mapping tool.

To begin, let’s create an HTML file named western-attractions.html and add the code in the screen shots. Most of the code is HTML boilerplate. Within the head section, we need to add Leaflet’s JS and CSS files. We’ll also add D3’s JS file so we can use it to read the GeoJSON data file. Inside the body section we add a div element with an id="westernAttractionsMap" to contain the map we’re going to create.

Let’s add a small amount of styling in the head section to specify the document margins, the dimensions of the map, and the size and weight of the text in the popups.

[Screenshot: the HTML boilerplate and styling]

Finally, let’s add the Javascript-Leaflet code we need to generate the map of the western attractions. First, we create a variable reference to our map, westernAttractionsMap, and specify the initial latitude, longitude, and zoom level.

Next, we use the Mapbox API to add a tile layer to the map. If you don’t already have a Mapbox API access token, you need to go here to create a free account and generate a free access token. Once you have an API token, be sure to replace YOUR_MAPBOX_ACCESS_TOKEN in the Mapbox API URL in the L.tileLayer() call with your actual Mapbox API token. We set the minimum and maximum zoom levels, use the id to specify the tile layer style, and then add the layer to the map.

Finally, we use d3.json to read the GeoJSON file and extract the data we want to display on the map. The onEachFeature function generates a popup for each attraction containing the attraction’s title, cost, address, and website. We use a capitalizeWords function, defined below, on the title variable to capitalize each word in the title. In addition, we use the state attribute and a color_state function, defined below, inside a style element to color the text inside the popup based on the state in which the attraction is located. We use ternary operators for each detail to display the data if it’s available or an empty string if it isn’t available.

Leaflet’s L.geoJSON function adds the attractions, i.e. GeoJSON objects, to a layer, applies the onEachFeature function to each object to associate a popup with each attraction, and then adds the layer to the map.
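Since the map code itself appears only in the screen shots, here is a rough sketch of what the script section might look like. This is an assumption-laden reconstruction, not the post’s exact code: it assumes Leaflet 1.x and D3 v4’s callback-style d3.json, and the initial center, zoom levels, tile style id, and popup markup are placeholders. The element id, variable name, filename, and function names (westernAttractionsMap, onEachFeature, capitalizeWords, color_state) come from the text above; capitalizeWords and color_state are defined separately.

```javascript
// Create the map in the westernAttractionsMap div (initial view is a placeholder)
var westernAttractionsMap = L.map('westernAttractionsMap').setView([39.5, -117.0], 6);

// Add a Mapbox tile layer; replace YOUR_MAPBOX_ACCESS_TOKEN with your token
L.tileLayer('https://api.tiles.mapbox.com/v4/{id}/{z}/{x}/{y}.png?access_token={accessToken}', {
    minZoom: 4,
    maxZoom: 18,
    id: 'mapbox.streets',
    accessToken: 'YOUR_MAPBOX_ACCESS_TOKEN'
}).addTo(westernAttractionsMap);

// Build a popup for each attraction, using ternary-style checks so missing
// details render as empty strings (popup markup here is hypothetical)
function onEachFeature(feature, layer) {
    var p = feature.properties;
    var popup = '<span style="color:' + color_state(p.state) + '">' +
        '<b>' + (p.title ? capitalizeWords(p.title) : '') + '</b><br/>' +
        (p.cost ? p.cost + '<br/>' : '') +
        (p.address ? p.address + '<br/>' : '') +
        (p.website ? '<a href="' + p.website + '" target="_blank">' + p.website + '</a>' : '') +
        '</span>';
    layer.bindPopup(popup);
}

// Read the GeoJSON file (D3 v4 callback style) and add the points to the map
d3.json('western_attractions_geocoded.geojson', function(error, data) {
    if (error) throw error;
    L.geoJSON(data, { onEachFeature: onEachFeature }).addTo(westernAttractionsMap);
});
```

Because this code needs a browser DOM and the Leaflet and D3 script includes from the head section, it only runs inside western-attractions.html, not standalone.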

[Screenshot: the Leaflet map script]

The capitalizeWords and color_state functions are simple helper functions to format the attraction titles and to color the popup text. The capitalizeWords function ensures the titles are displayed consistently by capitalizing each word in the title. The color_state function makes it easier to differentiate between states by using different colors for the text in the popups for attractions in different states.
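The helper functions themselves appear only in a screen shot, but based on the description they might look something like the following sketch. The word-splitting regex and the state-to-color mapping are my assumptions, not the post’s actual values:

```javascript
// Capitalize the first letter of each word and lowercase the rest,
// so titles display consistently
function capitalizeWords(str) {
    return str.replace(/\w\S*/g, function(word) {
        return word.charAt(0).toUpperCase() + word.substr(1).toLowerCase();
    });
}

// Return a distinct popup text color per state (colors are arbitrary choices),
// falling back to black for unrecognized states
function color_state(state) {
    var colors = { 'CA': '#1f77b4', 'NV': '#2ca02c', 'ID': '#d62728',
                   'OR': '#9467bd', 'WA': '#ff7f0e' };
    return colors[state] || '#000000';
}
```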

[Screenshot: the capitalizeWords and color_state functions]

There are different ways to view your map. For example, you can use a Python-based server with one of the following commands, depending on your version of Python:

python -m http.server 3031 (Python 3.x) or
python -m SimpleHTTPServer 3031 (Python 2.x)

Alternatively, you can install http-server and then run the server with the following command:

http-server -p 3031

[Screenshot: running the local server]

Once the server is running, click on western-attractions.html to open your map in a browser. Click on a few of the pins to view the details associated with the attractions, and click on pins in different states to see the text color change. We’ve also made the attractions’ website URLs active links, so you can click one to go to an attraction’s official web page.

[Screenshot: the map of western attractions]

[Screenshot: an attraction popup]


Conclusion

This tutorial covered scraping data from a web page, geocoding addresses, and displaying points on a map.

In some cases, your project may only require readily-available geographic data, in which case you can skip to the final section of this tutorial and focus on displaying the data on a map. In other cases, you may only have addresses or no geographic data at all, and in these cases the first two sections on scraping web data and geocoding it will be more valuable.

I hope that by following along with this tutorial and experimenting with the techniques you now feel more comfortable scraping, geocoding, and mapping data.

