The Problem(s) #
We have:
- A weighted, directed graph.
- Some source vertex, s.
- Some target vertex, t.
We want to find the shortest path from s to t.
Generalization: Path Tree
We have:
- A weighted, directed graph.
- Some source vertex, s.
We want to find the shortest paths from s to every vertex in the graph.
This will be a tree, rooted at s. Why?
Problem: Negative Edges
- Negative edges are a complication.
- If we’re really solving the “shortest walk” problem, then a negative cycle means there is no shortest path; it’s always better to find the negative cycle and loop forever.
- Even with a “shortest path” problem, negative edges mess up everything. What if there’s a heavily-negative edge on the other side of the graph and we can get a lower path cost by going over there first?
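To make the negative-cycle problem concrete, here is a tiny hypothetical graph (the vertices, edges, and weights are invented for illustration) with a cycle a → b → a of total weight -2. Every extra trip around the cycle lowers the walk's cost, so no shortest walk exists:

```python
# Hypothetical 3-vertex graph with a negative cycle a -> b -> a:
# weights: (a, b) = 1, (b, a) = -3, (b, t) = 2.
weight = {("a", "b"): 1, ("b", "a"): -3, ("b", "t"): 2}

def walk_cost(walk):
    """Total weight of a walk given as a vertex sequence."""
    return sum(weight[(u, v)] for u, v in zip(walk, walk[1:]))

# Each extra loop around the cycle lowers the cost by 2:
print(walk_cost(["a", "b", "t"]))                      # 3
print(walk_cost(["a", "b", "a", "b", "t"]))            # 1
print(walk_cost(["a", "b", "a", "b", "a", "b", "t"]))  # -1
```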
The Algorithm #
We’ll solve both versions of the problem the same way. If we don’t need the whole tree, we can stop when we have the shortest path from s to t.
Setup:
- An array dist[] gives the shortest distance found so far from s to each vertex. Init to infinity.
- An array pred[] gives the predecessor node in that candidate shortest path. Init this to null.
Tense edges:
- An edge (u, v) is tense if dist[u] + weight(u, v) < dist[v]. In that case the path to v through u is clearly better than our current best path.
- We can relax the edge by updating our best path to v to use that edge.
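The tense/relax bookkeeping might be sketched like this in Python (the dict-based dist and pred representation is an assumption, not fixed by the notes):

```python
def is_tense(u, v, w, dist):
    """Edge (u, v) with weight w is tense if going through u beats our
    current best distance to v."""
    return dist[u] + w < dist[v]

def relax(u, v, w, dist, pred):
    """Update the candidate shortest path to v to go through u."""
    dist[v] = dist[u] + w
    pred[v] = u
```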
The generic algorithm:
- Find and relax tense edges until we’ve run out of them.
Notes:
- An edge (u, v) can only be tense if we already have a distance to u, which means we’ve already got a path to u.
- So this class of algorithms will build out a path tree from the source to all other vertices.
- We should be able to prove that relaxation will find shortest paths by contradiction. Can we?
- We might need to assume there are no negative cycles (nonnegative edge weights would guarantee this).
Specific Case: Unweighted Directed Graph #
We can think of an unweighted graph as a graph with all weights = 1.
For this, we can just do breadth-first search with dist and pred arrays. Because we’re going breadth first, we’ll hit vertices in distance order and we’ll only need to relax an edge to each vertex once.
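A minimal sketch of this BFS, assuming the graph is given as an adjacency dict mapping each vertex to its out-neighbors:

```python
from collections import deque

def bfs_shortest_paths(graph, s):
    """Unweighted shortest paths. BFS visits vertices in distance order,
    so the first time we discover a vertex is via a shortest path, and
    each vertex is relaxed at most once."""
    dist = {s: 0}
    pred = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:          # first discovery = shortest distance
                dist[v] = dist[u] + 1
                pred[v] = u
                queue.append(v)
    return dist, pred
```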
Specific Case: DAG #
We’ve got no cycles, so we can safely write down this recurrence:
- We know dist[v] = 0 if v = s.
- Otherwise, dist[v] = min((for each incoming edge (u, v)) dist[u] + weight(u, v)).
We can do memoization / dynamic programming to solve this efficiently.
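A memoized rendering of the recurrence, under the assumption that the DAG is given as a list of (u, v, weight) triples; we precompute each vertex's incoming edges so the min ranges over them:

```python
import math
from functools import lru_cache

def dag_shortest_paths(edges, s):
    """Memoized form of the DAG recurrence. Safe to recurse because the
    graph has no cycles."""
    incoming = {}
    for u, v, w in edges:
        incoming.setdefault(v, []).append((u, w))

    @lru_cache(maxsize=None)
    def dist(v):
        if v == s:
            return 0
        # min over incoming edges; inf if v is unreachable from s
        return min((dist(u) + w for u, w in incoming.get(v, [])),
                   default=math.inf)

    return dist
```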
More generally: Dijkstra’s Algorithm #
This is just best-first search. We use a priority queue to visit vertices in order of weighted distance from s.
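A sketch of Dijkstra's algorithm with a binary heap, assuming nonnegative weights and an adjacency dict mapping each vertex to (neighbor, weight) pairs:

```python
import heapq
import math

def dijkstra(graph, s):
    """Best-first search: pop vertices in increasing candidate distance,
    relaxing tense out-edges. Assumes nonnegative edge weights."""
    dist = {s: 0}
    pred = {s: None}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue                              # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):     # edge (u, v) is tense
                dist[v] = d + w                   # relax it
                pred[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, pred
```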
Most simply: Bellman-Ford #
until we run out of tense edges:
for each edge (u, v):
relax((u, v))
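The pseudocode above might look like this in Python, assuming the edges come as (u, v, weight) triples. With no negative cycles, at most |V| - 1 sweeps find anything tense, so the loop terminates (with a negative cycle it would run forever, matching the "shortest walk" discussion above):

```python
import math

def bellman_ford(vertices, edges, s):
    """Sweep all edges, relaxing tense ones, until a full sweep finds
    nothing tense. Assumes the graph has no negative cycles."""
    dist = {v: math.inf for v in vertices}
    pred = {v: None for v in vertices}
    dist[s] = 0
    changed = True
    while changed:
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # edge (u, v) is tense
                dist[v] = dist[u] + w   # relax it
                pred[v] = u
                changed = True
    return dist, pred
```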
Recurrence variation for dynamic programming
# Shortest distance from s to v, in at most i steps:
dist(i, v):
if i == 0 and v == s:
return 0
if i == 0 and v != s:
return inf
return min(
dist(i-1, v),
min((for each u) dist(i-1, u) + weight(u, v))
)
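A bottom-up version of this recurrence, under the same assumed (u, v, weight) edge-list representation. Rolling the table keeps only the previous row, since dist(i, ·) depends only on dist(i-1, ·):

```python
import math

def bellman_ford_dp(vertices, edges, s):
    """dist after iteration i holds the shortest distance from s to each
    vertex using at most i edges. Any shortest path in a graph without
    negative cycles uses at most |V| - 1 edges."""
    dist = {v: math.inf for v in vertices}
    dist[s] = 0
    for _ in range(len(vertices) - 1):
        new = dict(dist)               # new[v] starts at dist(i-1, v)
        for u, v, w in edges:
            new[v] = min(new[v], dist[u] + w)
        dist = new
    return dist
```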