Interline now offers OSM Extracts, a service enabling software developers and GIS professionals to download chunks of OpenStreetMap data for 200 major cities and regions around the world.
OSM Extracts is simple. It’s a 3-step process that you can run yourself. We’ll describe these steps, as well as how we run those 3 steps for all 200 extracts in parallel on Interline’s cloud infrastructure.
Every day at Interline our servers download the latest updates from OpenStreetMap, update our local copies of the planet file, and process this data to generate a variety of geospatial data products. Reliably updating a planet file requires a bit of legwork behind the scenes, which we have encapsulated as helper scripts in our PlanetUtils library. This simplifies the process down to 3 steps:
1. `osm_planet_update` to download the most recent OSM planet file (released weekly at planet.openstreetmap.org and other mirrors) and then apply minutely “diffs” to update the planet file to the current point in time.
2. Download the cities.json file, the GeoJSON file that defines the boundaries of each of the cities/regions.
3. `osm_planet_extract`, specifying the local copy of the cities.json file, to generate extracts for each of the cities/regions.

Behind the scenes, PlanetUtils handles calls to `osmosis` and `osmconvert` to update the planet and generate the extracts. For more information on each of these commands, see the PlanetUtils readme, or install the package yourself using Homebrew or Docker.
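To make that behind-the-scenes step concrete, here is a minimal sketch of the kind of osmconvert call that clips an updated planet file down to a single extract. The file names, bounding box, and the plain subprocess call are illustrative assumptions, not PlanetUtils’ actual invocation:

```python
import subprocess

# Clip an updated planet file to one city/region with osmconvert.
# File names and bounding box are placeholders for illustration.
planet = "planet-latest.osm.pbf"
output = "san-francisco.osm.pbf"
bbox = "-122.55,37.65,-122.30,37.85"  # min_lon,min_lat,max_lon,max_lat

subprocess.run(
    [
        "osmconvert",
        planet,
        f"-b={bbox}",        # clip to the bounding box
        "--complete-ways",   # keep ways crossing the boundary intact
        f"-o={output}",
    ],
    check=True,
)
```

In practice the extract boundaries come from the cities.json polygons rather than a hand-typed bounding box, but the clipping step is the same idea.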
This workflow is simple but slow. Updating the planet and generating all 200 extracts on a single machine can take more than a day. In addition to the extracts, the updated planet is the starting point for additional data products, such as our Valhalla Tilepacks. Valhalla Tilepacks combine the OSM planet file with 1.6 TB of elevation data to produce downloadable data for organizations to run their own instances of the Valhalla routing engine. All of this heavy lifting requires a substantial mix of computing resources. Parallel workflows to the rescue!
At Interline, we have a great appreciation for Kubernetes, which powers much of our infrastructure. Kubernetes is ideal for running highly available services and managing complex resources. We wanted our data workflow system to leverage this infrastructure and introduce as little additional complexity as possible. With these constraints, we found that Argo Workflow Manager was a great fit for our needs.
Our Argo workflow resembles the simple 3-step process with a few key adjustments:
1. `osm_planet_update`, then copy the updated planet file to our cloud storage bucket.
2. Download the cities.json file, and divide the GeoJSON features into smaller chunks of about 8 extracts each (sketched in the example below).
3. Run `osm_planet_extract` on each chunk as its own parallel task.

Because each Argo task is defined as a Kubernetes pod, we have very fine-grained control over the resources allocated to each task. The Kubernetes scheduler is very effective at auto-scaling the cluster, quickly increasing cluster size to run all tasks in parallel and, just as importantly, reducing the cluster size when the tasks are complete. Additionally, because some tasks are CPU intensive and others are memory intensive, we define multiple node pools to ensure high utilization of cluster resources.
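As a rough illustration of the chunking step above, here is a minimal sketch that splits a cities.json-style GeoJSON FeatureCollection into groups of 8 features, one group per extract task. The file names and chunk numbering are assumptions for illustration; our workflow performs the equivalent inside an Argo task:

```python
import json

CHUNK_SIZE = 8  # roughly 8 extracts per parallel task

# Load the cities.json FeatureCollection (one feature per city/region).
with open("cities.json") as f:
    features = json.load(f)["features"]

# Write each group of ~8 features as its own smaller FeatureCollection,
# so each chunk can be handed to a separate extract task.
for i in range(0, len(features), CHUNK_SIZE):
    chunk = {
        "type": "FeatureCollection",
        "features": features[i : i + CHUNK_SIZE],
    }
    with open(f"cities-chunk-{i // CHUNK_SIZE:02d}.geojson", "w") as out:
        json.dump(chunk, out)
```

Each chunk file then becomes the input to one of the parallel extract tasks described below.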
Currently, we run about 25 tasks of 8 extracts each. With each task requesting an 8-CPU node, generating all 200 extracts in parallel takes about 35 minutes. This workflow also allows us to run multiple data pipelines together, sharing common outputs and reducing the amount of work that has to be duplicated. For instance, the updated planet file generated in the first step above is also used as the input to our Valhalla workflow.
Our strategy for working with OpenStreetMap and other data sources is “snout to tail” — that is, we look for opportunities to turn each piece of a giant data download into a useful “meal” for users. We’re working on additional inputs and outputs to Interline’s workflow. Tuning the workflow is also always a work in progress. Each week, we find more ways to improve the performance of a step. We look forward to sharing more updates on our workflow in the future.
In the meantime, we’d like to invite you to add a few next steps to your “professional workflow”.
Originally posted to the Interline blog on May 25, 2018.