Dijkstra's algorithm is fairly straightforward to implement, and its runtime can be sped up considerably by introducing a specialized data structure (for example, a min-heap or a Fibonacci heap) for selecting the next vertex at each iteration. Let's take a look at how we can leverage the graph processing system that we have built to execute Dijkstra's algorithm in parallel.
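For reference, here is a minimal sketch of what such a heap-based sequential implementation might look like in Go; the adjacency-list representation (the edge and item types) is an assumption made purely for illustration and is unrelated to the graph framework used in the rest of this section:

```go
package main

import (
    "container/heap"
    "math"
)

type edge struct{ to, cost int }

type item struct{ vertex, dist int }

// minHeap implements heap.Interface, ordering items by distance.
type minHeap []item

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i].dist < h[j].dist }
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }

func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(item)) }
func (h *minHeap) Pop() interface{} {
    old := *h
    it := old[len(old)-1]
    *h = old[:len(old)-1]
    return it
}

// dijkstra returns the minimum cost from src to every vertex in adj.
func dijkstra(adj [][]edge, src int) []int {
    dist := make([]int, len(adj))
    for i := range dist {
        dist[i] = int(math.MaxInt64)
    }
    dist[src] = 0

    h := &minHeap{}
    heap.Push(h, item{vertex: src, dist: 0})
    for h.Len() > 0 {
        cur := heap.Pop(h).(item)
        if cur.dist > dist[cur.vertex] {
            continue // stale heap entry; a better path was already found
        }
        for _, e := range adj[cur.vertex] {
            if d := cur.dist + e.cost; d < dist[e.to] {
                dist[e.to] = d
                heap.Push(h, item{vertex: e.to, dist: d})
            }
        }
    }
    return dist
}
```

Even with the heap, the algorithm remains inherently sequential: only one vertex is settled per iteration, and this is precisely the bottleneck that the parallel version removes.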
To break the sequential nature of the original algorithm, we will swap out the next-vertex selection step and replace it with a gossip protocol. Whenever a vertex identifies a better path to itself via another vertex, it broadcasts this information to all of its neighbors by sending them a PathCostMessage. The neighbors then process these messages during the next super-step, update their own min-distance estimates, and, if they discover better paths, broadcast them to their own neighbors. The key idea is to trigger a wavefront of path updates throughout the graph that each vertex can process in parallel.
The first thing we need to do is to define the types for the following:
- The message that's exchanged by the vertices
- Storing the state for each vertex
Consider the following piece of code:
```go
type PathCostMessage struct {
    // The ID of the vertex this cost announcement originates from.
    FromID string

    // The cost of the path from this vertex to the source vertex via FromID.
    Cost int
}

func (pc PathCostMessage) Type() string { return "cost" }

type pathState struct {
    minDist    int
    prevInPath string
}
```
The pathState struct encodes the same kind of information as the min_cost_via and prev arrays from the sequential version of the algorithm. The only difference is that each vertex maintains its own pathState instance, which is stored as the vertex value.
Next, let's put together a compute function for the graph. As you may recall from the previous sections, compute functions receive the following input arguments: a pointer to the graph, the currently processed vertex, and an iterator for the messages that were sent to the vertex during the previous super-step. At super-step 0, each vertex initializes its own internal state with the maximum possible distance value.
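A minimal sketch of that initialization step might look as follows; the Superstep accessor on the graph and the SetValue mutator on the vertex are assumed to be provided by the framework we built in the previous sections:

```go
// At super-step 0, no messages have been exchanged yet, so each vertex
// simply seeds its state with the maximum possible distance value.
if g.Superstep() == 0 {
    v.SetValue(&pathState{minDist: int(math.MaxInt64)})
}
```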
Then, each vertex processes any path announcements from its neighbors and keeps track of the path announcement with the minimum cost:
```go
minDist := int(math.MaxInt64)
if v.ID() == c.srcID {
    // min cost from source to source is always 0
    minDist = 0
}

var via string
for msgIt.Next() {
    m := msgIt.Message().(*PathCostMessage)
    if m.Cost < minDist {
        minDist = m.Cost
        via = m.FromID
    }
}
```
After all the messages have been processed, we compare the cost of the best announced path against the cost of the best path that this vertex has seen so far. If the vertex is already aware of a path with a lower cost, we don't need to do anything. Otherwise, we update the local vertex state to reflect the new best path and send out a message to each of our neighbors:
```go
st := v.Value().(*pathState)
if minDist < st.minDist {
    st.minDist = minDist
    st.prevInPath = via
    for _, e := range v.Edges() {
        costMsg := &PathCostMessage{
            FromID: v.ID(),
            Cost:   minDist + e.Value().(int),
        }
        if err := g.SendMessage(e.DstID(), costMsg); err != nil {
            return err
        }
    }
}
v.Freeze()
```
Each outgoing PathCostMessage carries the cost of reaching a particular neighbor through the current vertex. It is calculated by adding the cost of the next hop (the value associated with the outgoing edge) to the new minimum cost for reaching the current vertex. For example, if the new minimum cost for the current vertex is 5 and the edge to a neighbor has a weight of 2, that neighbor receives an announcement with a cost of 7.
Regardless of whether the best path to a vertex was updated or not, we always invoke the Freeze method on the vertex to mark it as processed. A frozen vertex will not be reactivated in a future super-step unless it receives a message from one of its neighbors. Eventually, all the vertices will have discovered their optimal path to the source vertex and will stop broadcasting cost updates to their neighbors. When this happens, every vertex will be in the frozen state and the algorithm will terminate.
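Once the algorithm terminates, the prevInPath links can be followed backwards from any destination vertex to recover the actual shortest path. The following helper is a hypothetical sketch; the graph.Graph type and its Vertices accessor, which maps vertex IDs to vertices, are assumptions about the framework's API, and the actual helper in the book's repository may differ:

```go
// pathTo reconstructs the shortest path from srcID to dstID by walking the
// prevInPath links stored in each vertex's pathState. It returns nil if no
// path to the destination was discovered.
func pathTo(g *graph.Graph, srcID, dstID string) []string {
    vertices := g.Vertices() // assumed accessor: vertex ID -> vertex
    path := []string{dstID}
    for curID := dstID; curID != srcID; {
        st := vertices[curID].Value().(*pathState)
        if st.prevInPath == "" {
            return nil // dstID is not reachable from srcID
        }
        curID = st.prevInPath
        path = append(path, curID)
    }
    // Reverse the path so that it reads from source to destination.
    for l, r := 0, len(path)-1; l < r; l, r = l+1, r-1 {
        path[l], path[r] = path[r], path[l]
    }
    return path
}
```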
Admittedly, this approach requires considerably more effort than the traditional, sequential version. In exchange, the parallel version can run efficiently on massive graphs that may be distributed across multiple compute nodes.
The full source code and tests for the shortest path calculator from this section can be found in this book's GitHub repository in the Chapter08/shortestpath folder.