Where are you going? Should you be going that way?
This article presents a method to predict vehicle trajectories on a digital road network using a database of past trips sampled from noisy GPS sensors. Besides predicting future directions, this method also assigns a probability to an arbitrary sequence of locations.
Central to this idea is using a digital map onto which we project all sampled locations by aggregating them into individual trajectories and matching them to the map. This matching process discretizes the continuous GPS space into predetermined locations and sequences. After encoding these locations into unique geospatial tokens, we can more easily predict sequences, evaluate the probability of current observations, and estimate future directions. This is the gist of this article.
What problems am I trying to solve here? If you need to analyze vehicle path data, you might need to answer questions like those in the article’s sub-heading.
Where are you going? Should you be going that way?
How do you evaluate the probability that the path under observation follows frequently traveled directions? This is an important question as, by answering it, you could program an automated system to classify trips according to their observed frequency. A new trajectory with a low score would cause concern and prompt immediate flagging.
How do you predict which maneuvers the vehicle will make next? Will it keep going straight ahead, or will it turn right at the next intersection? Where do you expect to see the vehicle in the next ten minutes or ten miles? Quick answers to these questions can assist online tracking software in providing answers and insights to delivery planners, online route optimizers, or even opportunity charging systems.
The solution I am presenting here uses a database of historical trajectories, each consisting of a timed sequence of positions generated by the motion of a specific vehicle. Each positional record must contain time, position information, a reference to the vehicle identifier, and the trajectory identifier. A vehicle has many trajectories, and each trajectory has many positional records. A sample of our input data is depicted in Figure 1 below.
I drew the data above from the Extended Vehicle Energy Dataset (EVED) article. You can build the corresponding database by following the code in one of my previous articles.
Our first job is to match these trajectories to a supporting digital map. The purpose of this step is not only to eliminate the GPS data sampling errors but, most importantly, to coerce the acquired trip data to an existing road network where each node and edge are known and fixed. Each recorded trajectory is thus converted from a sequence of geospatial locations into another sequence of numeric tokens coinciding with the existing digital map nodes. Here, we will use open-sourced data and software, with map data sourced from OpenStreetMap (compiled by Geofabrik), the Valhalla map-matching package, and H3 as the geospatial tokenizer.
Edge Versus Node Matching
Map-matching is more nuanced than it might look at first sight. To illustrate what this concept entails, let us look at Figure 2 below.
Figure 2 above shows that we can derive two trajectories from an original GPS sequence. We obtain the first trajectory by projecting the original GPS locations onto the nearest (and most likely) road network segments. As you can see, the resulting polyline only loosely follows the road because the map uses graph nodes to define its basic shapes. By projecting the original locations to the map edges, we get new points that belong to the map but may stray from the map’s geometry when connected to the next ones by a straight line.
By projecting the GPS trajectory to the map nodes, we get a path that perfectly overlays the map, as shown by the green line in Figure 2. Although this path better represents the initially driven trajectory, it does not necessarily have a one-to-one location correspondence with the original. Fortunately, this will be fine for us as we will always map-match any trajectory to the map nodes, so we will always get coherent data, with one exception. The Valhalla map-matching code always edge-projects the initial and final trajectory points, so we will systematically discard them as they do not correspond to map nodes.
Unfortunately, Valhalla does not report the unique road network node identifiers, so we must convert the node coordinates to unique integer tokens for later sequence frequency calculation. This is where H3 enters the picture, allowing us to uniquely encode each node’s coordinates as a sixty-four-bit integer. We pick the Valhalla-generated polyline, strip the initial and final points (these points are not nodes but edge projections), and map all remaining coordinates to level 15 H3 indices.
The Dual Graph
Using the process above, we convert each historical trajectory into a sequence of H3 tokens. The next step is to convert each trajectory to a sequence of token triplets. Three values in a sequence represent two consecutive edges of the prediction graph, and we want to know the frequencies of these, as they will be the core data for both the prediction and the probability assessment. Figure 3 below depicts this process visually.
The transformation above computes the dual of the road graph, reversing the roles of the original nodes and edges.
We can now start to answer the proposed questions.
Should you be going that way?
We need to know the vehicle trajectory up to a given moment to answer this question. We map-match and tokenize the trajectory using the same process as above and then compute each trajectory triplet frequency using the known historic frequencies. The final result is the product of all individual frequencies. If the input trajectory contains an unknown triplet, that triplet’s frequency is zero, and so is the final path probability.
A triplet probability is the ratio of counts of a specific sequence (A, B, C) to the count of all (A, B, *) triplets, as depicted in Figure 4 below.
The trip probability is just the product of individual trip triplets, as depicted in Figure 5 below.
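To make the two formulas concrete, here is a toy numeric sketch; the counts are made up for illustration only:

```python
# Toy triplet counts for the pair (A, B); the numbers are made up.
counts = {("A", "B", "C"): 12, ("A", "B", "D"): 3, ("A", "B", "E"): 5}

# P(C | A, B) = count(A, B, C) / count(A, B, *)
p_abc = counts[("A", "B", "C")] / sum(counts.values())  # 12 / 20

# The trip probability is the product of its individual triplet probabilities.
triplet_probabilities = [p_abc, 0.5, 0.9]
p_trip = 1.0
for p in triplet_probabilities:
    p_trip *= p
```

Note how a single zero-frequency triplet in the product would zero out the whole trip probability.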
Where are you going?
We use the same principles to answer this question but start with the last known triplet only. We can predict the k most likely successors using this triplet as input by enumerating all triplets that have as their first two tokens the last two of the input. Figure 6 below illustrates the process for triplet sequence generation and evaluation.
We can extract the top k successor triplets and repeat the process to predict the most likely trip.
We are ready to discuss the implementation details, starting with map-matching and some associated concepts. Next, we will see how to use the Valhalla toolset from Python, extract the matched paths and generate the token sequences. The data preprocessing step will be over once we store the result in the database.
Finally, I illustrate a simple user interface using Streamlit that calculates the probability of any hand-drawn trajectory and then projects it into the future.
Map-matching converts GPS coordinates sampled from a moving object’s path into an existing road graph. A road graph is a discrete model of the underlying physical road network consisting of nodes and connecting edges. Each node corresponds to a known geospatial location along the road, encoded as a latitude, longitude, and altitude tuple. Each directed edge connects adjacent nodes following the underlying road and contains many properties such as the heading, maximum speed, road type, and more. Figure 7 below illustrates the concept with a straightforward example.
When successful, the map-matching process produces relevant and valuable information on the sampled trajectory. On the one hand, the process projects the sampled GPS points to locations along the most likely road graph edges. The map-matching process “corrects” the observed spots by squarely placing them over the inferred road graph edges. On the other hand, the method also reconstructs the sequence of graph nodes by providing the most likely path through the road graph corresponding to the sampled GPS locations. Note that, as previously explained, these outputs are different. The first output contains coordinates along the edges of the most likely path, while the second output consists of the reconstructed sequence of graph nodes. Figure 8 below illustrates the process.
A byproduct of the map-matching process is the standardization of the input locations using a shared road network representation, especially when considering the second output type: the most likely sequence of nodes. When converting sampled GPS trajectories to a series of nodes, we make them comparable by reducing the inferred path to a series of node identifiers. We can think of these node sequences as phrases of a known language, where each inferred node identifier is a word, and their arrangement conveys behavioral information.
This is the fifth article where I explore the Extended Vehicle Energy Dataset¹ (EVED). This dataset is an enhancement and review of prior work and provides the map-matched versions of the original GPS-sampled locations (the orange diamonds in Figure 8 above).
Unfortunately, the EVED only contains the projected GPS locations and misses the reconstructed road network node sequences. In my previous two articles, I addressed the issue of rebuilding the road segment sequences from the transformed GPS locations without map-matching. I found the results somewhat disappointing, as I did not expect the observed 16% rate of defective reconstructions. You can follow this discussion in the articles below.
Now I am looking at the source map-matching tool to see how far it can go in correcting the defective reconstructions. So let’s put Valhalla through its paces. Below are the steps, references, and code I used to run Valhalla on a Docker container.
Here I closely follow the instructions provided by Sandeep Pandey on his blog.
First, make sure that you have Docker installed on your machine. To install the Docker engine, please follow the online instructions. If you work on a Mac, a great alternative is Colima.
Once installed, you must pull a Valhalla image from GitHub by issuing the following commands at your command line, as the shell code in Figure 9 below depicts.
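The pull command looks roughly like this; the image path follows the gis-ops Docker distribution of Valhalla, so verify it against the project’s current documentation before use:

```shell
# Pull the Valhalla server image from the GitHub Container Registry.
docker pull ghcr.io/gis-ops/docker-valhalla/valhalla:latest
```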
While executing the above commands, you may have to enter your GitHub credentials. Also, ensure you have cloned this article’s GitHub repository, as some files and folder structures refer to it.
Once done, you should open a new terminal window and issue the following command to start the Valhalla API server (MacOS, Linux, WSL):
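A command along these lines starts the server; the port, volume path, and tile URL are illustrative and should be adapted to your setup:

```shell
# Start the Valhalla API server, telling it which OSM extract to download
# (the latest Michigan file from Geofabrik) and where to keep the tiles.
docker run -dt --name valhalla \
  -p 8002:8002 \
  -v "$PWD/valhalla_tiles:/custom_files" \
  -e tile_urls=https://download.geofabrik.de/north-america/us/michigan-latest.osm.pbf \
  ghcr.io/gis-ops/docker-valhalla/valhalla:latest
```

On the first run the server downloads and processes the OSM extract into an optimized tile database under the mounted volume; subsequent runs reuse it.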
The command line above explicitly states which OSM file to download from the Geofabrik service, the latest Michigan file. This specification means that when executed the first time, the server will download and process the file and generate an optimized database. In subsequent calls, the server omits these steps. When needed, delete everything under the target directory to refresh the downloaded data and spin up Docker again.
We can now call the Valhalla API with a specialized client.
This spin-off project, PyValhalla, simply offers packaged Python bindings to the fantastic Valhalla project.
Using the PyValhalla Python package is quite simple. We start with a neat install procedure using the following command line.
In your Python code, you must import the required references, instantiate a configuration from the processed GeoFabrik files and finally create an Actor object, your gateway to the Valhalla API.
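A minimal setup sketch, assuming the PyValhalla package; the names follow its published examples, so verify them against the version you install:

```python
# pip install pyvalhalla
from valhalla import Actor, get_config

# Point the configuration at the tile extract generated by the Docker server;
# the path below is an assumption matching the mounted volume from earlier.
config = get_config(tile_extract="./custom_files/valhalla_tiles.tar", verbose=True)

# The Actor object is the gateway to the Valhalla API.
actor = Actor(config)
```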
Before we call the Meili map-matching service, we must get the trajectory GPS locations using the function listed below in Figure 13.
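A minimal sketch of such a loading function; the `signal` table and its column names are hypothetical stand-ins for the article’s actual schema:

```python
import sqlite3

def load_trajectory_points(conn, traj_id):
    """Load the time-ordered GPS samples for one trajectory.

    Returns a list of (latitude, longitude, time) tuples. The table and
    column names are assumptions, not the article's actual schema.
    """
    sql = """
        SELECT latitude, longitude, time_stamp
        FROM signal
        WHERE traj_id = ?
        ORDER BY time_stamp
    """
    return conn.execute(sql, (traj_id,)).fetchall()

# Self-contained demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signal (traj_id INT, latitude REAL, "
             "longitude REAL, time_stamp REAL)")
conn.executemany(
    "INSERT INTO signal VALUES (?, ?, ?, ?)",
    [(1, 42.277, -83.738, 0.0), (1, 42.278, -83.737, 1.0)],
)
points = load_trajectory_points(conn, 1)
```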
We can now set up the parameter dictionary to pass into the PyValhalla call to trace the route. Please refer to the Valhalla documentation for more details on these parameters. The function below calls the map-matching feature in Valhalla (Meili) and is included in the data preparation script. It illustrates how to determine the inferred route from a Pandas data frame containing the observed GPS locations encoded as latitude, longitude, and time tuples.
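The request itself can be sketched as a plain dictionary; the parameters shown are a small illustrative subset, so please check the Valhalla documentation before relying on them:

```python
import json

def make_meili_request(rows):
    """Build a Valhalla/Meili trace request from (lat, lon, time) rows.

    The parameter values are illustrative; the Valhalla documentation
    lists the full set of map-matching options.
    """
    shape = [{"lat": lat, "lon": lon, "time": t} for lat, lon, t in rows]
    return json.dumps({
        "shape": shape,
        "costing": "auto",          # match against the drivable network
        "shape_match": "map_snap",  # ask Meili to snap to the most likely path
    })

request = make_meili_request([(42.277, -83.738, 0.0), (42.278, -83.737, 1.0)])
# The actual call would then be along the lines of:
# response = actor.trace_route(request)
```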
The above function returns the matched path as a string-encoded polyline. As illustrated in the data preparation code below, we can easily decode the returned string using a PyValhalla library call. Note that this function returns a polyline whose first and last locations are projected to edges, not graph nodes. You will see these extremities removed by code later in the article.
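For the curious, the decoding follows the standard encoded-polyline algorithm, sketched here in pure Python with Valhalla’s six-digit precision as the default; PyValhalla ships its own decoding helper, so this is for illustration only:

```python
def decode_polyline(encoded, precision=6):
    """Decode a Google-style encoded polyline into (lat, lon) tuples.

    Valhalla encodes its matched-path geometry with six digits of
    precision; the classic Google variant uses five.
    """
    factor = 10 ** precision
    coords, index, lat, lon = [], 0, 0, 0
    while index < len(encoded):
        for is_lon in (False, True):
            shift = result = 0
            while True:
                byte = ord(encoded[index]) - 63
                index += 1
                result |= (byte & 0x1F) << shift
                shift += 5
                if byte < 0x20:
                    break
            # The low bit flags a negative delta stored in one's complement.
            delta = ~(result >> 1) if result & 1 else result >> 1
            if is_lon:
                lon += delta
            else:
                lat += delta
        coords.append((lat / factor, lon / factor))
    return coords
```

For example, the classic five-digit test string decodes as expected: `decode_polyline("_p~iF~ps|U", precision=5)` yields `[(38.5, -120.2)]`.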
Let us now look at the data preparation phase, where we convert all the trajectories in the EVED database into a set of map edge sequences, from where we can derive pattern frequencies.
Data preparation aims at converting the noisy GPS-acquired trajectories into sequences of geospatial tokens corresponding to known map locations. The main code iterates through the existing trips, processing one at a time.
In this article, I use an SQLite database to store all the data processing results. We start by filling the matched trajectory path. You can follow the description using the code in Figure 15 below.
For each trajectory, we instantiate an object of the Actor type (line 9). This is an unstated requirement, as each call to the map-matching service requires a new instance. Next, we load the trajectory points (line 13) acquired by the vehicles’ GPS receivers with the added noise, as stated in the original VED article. In line 14, we make the map-matching call to Valhalla, retrieve the string-encoded matched path, and save it to the database. Next, we decode the string into a list of geospatial coordinates, remove the extremities (line 17) and then convert them to a list of H3 indices computed at level 15 (line 19). On line 23, we save the converted H3 indices and the original coordinates to the database for later reverse mapping. Finally, on lines 25 to 27, we generate a sequence of 3-tuples based on the H3 indices list and save them for later inference calculations.
Let’s go through each of these steps and explain them in detail.
We have seen how to load each trajectory from the database (see Figure 13). A trajectory is a time-ordered sequence of sampled GPS locations encoded as a latitude and longitude pair. Note that we are not using the matched versions of these locations as provided by the EVED data. Here, we use the noisy and original coordinates as they existed in the initial VED database.
The code that calls the map-matching service is already presented in Figure 14 above. Its central issue is the configuration settings; other than that, it is a pretty straightforward call. Saving the resulting encoded string to the database is also simple.
On line 17 of the main loop (Figure 15), we decode the geometry string into a list of latitude and longitude tuples. Note that this is where we strip out the initial and final locations, as they are not projected to nodes. Next, we convert this list to its corresponding H3 token list on line 19. We use the maximum detail level to try and avoid overlaps and ensure a one-to-one relationship between H3 tokens and map graph nodes. We insert the tokens in the database in the following two lines. First, we save the whole token list associating it to the trajectory.
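The tokenization step itself is a one-liner per coordinate. A minimal sketch, assuming the h3 Python package; its function names differ between major versions, as noted in the comments:

```python
import h3  # pip install h3

def coordinates_to_tokens(coords, level=15):
    """Convert matched (lat, lon) node coordinates to level-15 H3 cells.

    This uses the h3 v3 API; v4 renamed the call to
    h3.latlng_to_cell(lat, lon, level).
    """
    return [h3.geo_to_h3(lat, lon, level) for lat, lon in coords]
```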
Next, we insert the mapping of node coordinates to H3 tokens to enable drawing polylines from a given list of tokens. This feature will be helpful later on when inferring future trip directions.
We can now generate and save the corresponding token triples. The function below uses the newly generated list of H3 tokens and expands it to another list of triples, as detailed in Figure 3 above. The expansion code is depicted in Figure 19 below.
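The expansion itself reduces to a sliding window of width three, for example:

```python
def expand_triplets(tokens):
    """Expand a token sequence into overlapping 3-tuples.

    [a, b, c, d] -> [(a, b, c), (b, c, d)]
    """
    return list(zip(tokens, tokens[1:], tokens[2:]))
```

Sequences shorter than three tokens produce an empty list, which is why very short matched paths carry no frequency information.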
After triplet expansion, we can finally save the final product to the database, as shown by the code in Figure 20 below. Through clever querying of this table, we will infer current trip probabilities and future most-likely trajectories.
We are now done with one cycle of the data preparation loop. Once the outer loop is completed, we have a new database with all the trajectories converted to token sequences that we can explore at will.
You can find the whole data preparation code in the GitHub repository.
We now turn to the problem of estimating existing trip probabilities and predicting future directions. Let’s start by defining what I mean by “existing trip probabilities.”
We start with an arbitrary path projected into the road network nodes through map-matching. Thus, we have a sequence of nodes from the map and want to assess how probable that sequence is, using as a frequency reference the known trip database. We use the formula in Figure 5 above. In a nutshell, we compute the product of the probabilities of all individual token triplets.
To illustrate this feature, I implemented a simple Streamlit application that allows the user to draw an arbitrary trip over the covered Ann Arbor area and immediately compute its probability.
Once the user draws points on the map representing the trip or the hypothetical GPS samples, the code map matches them to retrieve the underlying H3 tokens. From then on, it’s a simple matter of computing the individual triplet frequencies and multiplying them to compute the total probability. The function in Figure 21 below computes the probability of an arbitrary trip.
The code gets support from another function that retrieves the successors of any existing pair of H3 tokens. The function listed below in Figure 22 queries the frequency database and returns a Python Counter object with the counts of all successors of the input token pair. When the query finds no successors, the function returns the None constant. Note how the function uses a cache to improve database access performance (code not listed here).
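A pure-Python sketch of the two functions, with a hypothetical in-memory dictionary standing in for the database query and its cache:

```python
from collections import Counter

# Hypothetical in-memory stand-in for the triplet frequency table; the real
# function queries the SQLite database (with a cache) instead.
TRIPLET_COUNTS = {
    ("a", "b"): Counter({"c": 3, "d": 1}),
    ("b", "c"): Counter({"e": 2}),
}

def get_successors(t0, t1):
    """Return a Counter with the counts of all successors of (t0, t1),
    or None when the pair was never observed."""
    return TRIPLET_COUNTS.get((t0, t1))

def compute_probability(tokens):
    """Multiply the conditional probabilities of each consecutive triplet;
    an unseen triplet zeroes the whole product."""
    probability = 1.0
    for t0, t1, t2 in zip(tokens, tokens[1:], tokens[2:]):
        successors = get_successors(t0, t1)
        if successors is None or successors[t2] == 0:
            return 0.0
        probability *= successors[t2] / sum(successors.values())
    return probability
```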
I designed both functions such that the computed probability is zero when no known successors exist for any given node.
Let us look at how we can predict a trajectory’s most probable future path.
We only need the last two tokens from a given running trip to predict its most likely future directions. The idea involves expanding all the successors of that token pair and selecting the most frequent ones. The code below shows the function as the entry point to the directions prediction service.
The above function starts by retrieving the user-drawn trajectory as a list of map-matched H3 tokens and extracting the last pair. We call this token pair the seed and will expand it further in the code. At line 9, we call the seed-expansion function that returns a list of polylines corresponding to the input expansion criteria: the maximum branching per iteration and the total number of iterations.
Let us see how the seed expansion function works by following the code listed below in Figure 24.
By calling a path expansion function that generates the best successor paths, the seed expansion function iteratively expands paths, starting with the initial one. Path expansion operates by picking a path and generating the most probable expansions, as shown below in Figure 25.
The code generates new paths by appending the successor nodes to the source path, as shown in Figure 26 below.
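Putting the last three steps together, the whole expansion can be sketched as a small beam search over hypothetical in-memory frequency data:

```python
from collections import Counter

def predict_paths(seed, successors, max_branch=3, iterations=5):
    """Beam-style expansion of the seed pair into the most probable paths.

    `successors` maps a token pair to a Counter of observed next tokens;
    returns (probability, token list) pairs, best first.
    """
    paths = [(1.0, list(seed))]
    for _ in range(iterations):
        candidates = []
        for prob, path in paths:
            counts = successors.get((path[-2], path[-1]))
            if not counts:
                candidates.append((prob, path))  # dead end: keep the path as-is
                continue
            total = sum(counts.values())
            for token, count in counts.most_common(max_branch):
                candidates.append((prob * count / total, path + [token]))
        paths = sorted(candidates, key=lambda c: c[0], reverse=True)[:max_branch]
    return paths

# Hypothetical frequency data for illustration.
successors = {
    ("A", "B"): Counter({"C": 3, "D": 1}),
    ("B", "C"): Counter({"E": 2}),
}
best = predict_paths(("A", "B"), successors, max_branch=2, iterations=2)
```

Keeping only the `max_branch` best candidates per iteration bounds the search while still favoring the historically most frequent continuations.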
The code implements predicted paths using a specialized class, as shown in Figure 27.
We can now see the resulting Streamlit application in Figure 28 below.
This post originally appeared on TechToday.