Dijkstra's algorithm was taught to me as follows:

import heapq

def dijkstra(graph, source, target):
    # graph maps each node to a list of (neighbor, edge_weight) pairs
    pqueue = [(0, source)]
    visited = set()
    while pqueue:
        distance, node = heapq.heappop(pqueue)  # delete_min
        if node in visited:
            continue
        visited.add(node)  # mark node as visited
        if node == target:
            return distance
        for neighbor, distance_to_neighbor in graph[node]:
            heapq.heappush(pqueue, (distance + distance_to_neighbor, neighbor))  # insert
    return float('inf')  # target unreachable
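
For example, with a graph given as adjacency lists of (neighbor, weight) pairs:

graph = {
    'a': [('b', 2), ('c', 5)],
    'b': [('c', 1), ('d', 4)],
    'c': [('d', 1)],
    'd': [],
}
print(dijkstra(graph, 'a', 'd'))  # 4, via a -> b -> c -> d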

But I've been doing some reading regarding the algorithm, and a lot of versions I see use decrease-key as opposed to insert.

Why is this, and what are the differences between the two approaches?


Solution 1:

The reason for using decrease-key rather than reinserting nodes is to keep the number of nodes in the priority queue small, which keeps the total number of priority queue dequeues small and the cost of each rebalancing operation low.

In an implementation of Dijkstra's algorithm that reinserts nodes into the priority queue with their new priorities, one node is added to the priority queue for each of the m edges in the graph. This means that there are m enqueue operations and m dequeue operations on the priority queue, giving a total runtime of O(m Te + m Td), where Te is the time required to enqueue into the priority queue and Td is the time required to dequeue from the priority queue.

In an implementation of Dijkstra's algorithm that supports decrease-key, the priority queue begins with all n nodes in it, and on each step of the algorithm one node is removed. This means that the total number of heap dequeues is n. Each node may have decrease-key called on it once for each edge leading into it, so the total number of decrease-key calls is at most m. This gives a runtime of O(n Te + n Td + m Tk), where Tk is the time required to call decrease-key.
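
To make the contrast concrete, here is a minimal sketch of the decrease-key version. Python's heapq has no decrease-key operation, so this uses a hand-rolled indexed binary heap; the class and method names below are my own, not a standard API:

class IndexedMinHeap:
    # Binary min-heap of (priority, node) pairs plus a position map, so
    # decrease_key can find a node's entry in O(1) and repair the heap in
    # O(log n). Assumes node labels are hashable and comparable.
    def __init__(self, priorities):
        self.heap = sorted((p, v) for v, p in priorities.items())
        self.pos = {v: i for i, (p, v) in enumerate(self.heap)}

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]], self.pos[self.heap[j][1]] = i, j

    def _sift_up(self, i):
        while i > 0 and self.heap[i] < self.heap[(i - 1) // 2]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < len(self.heap) and self.heap[c] < self.heap[smallest]:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def delete_min(self):
        self._swap(0, len(self.heap) - 1)
        priority, node = self.heap.pop()
        del self.pos[node]
        if self.heap:
            self._sift_down(0)
        return priority, node

    def decrease_key(self, node, new_priority):
        i = self.pos[node]
        self.heap[i] = (new_priority, node)
        self._sift_up(i)

def dijkstra_decrease_key(graph, source):
    # All n nodes are enqueued up front (the n Te term); each is dequeued
    # exactly once (the n Td term).
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = IndexedMinHeap(dist)
    while pq.heap:
        d, node = pq.delete_min()
        for neighbor, weight in graph[node]:
            if d + weight < dist[neighbor]:
                # With nonnegative weights, a finalized node can never pass
                # this test, so neighbor is guaranteed to still be in the heap.
                dist[neighbor] = d + weight
                pq.decrease_key(neighbor, d + weight)  # at most once per edge
    return dist

Each edge triggers at most one decrease_key call, which is where the m Tk term comes from.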

So what effect does this have on the runtime? That depends on what priority queue you use. Here's a quick table that shows off different priority queues and the overall runtimes of the different Dijkstra's algorithm implementations:

Queue          |  T_e   |  T_d   |  T_k   | w/o Dec-Key |   w/ Dec-Key
---------------+--------+--------+--------+-------------+---------------
Binary Heap    |O(log n)|O(log n)|O(log n)| O(m log n)  |   O(m log n)
Binomial Heap  |O(log n)|O(log n)|O(log n)| O(m log n)  |   O(m log n)
Fibonacci Heap |  O(1)  |O(log n)|  O(1)  | O(m log n)  | O(m + n log n)

As you can see, with most types of priority queues, there really isn't a difference in the asymptotic runtime, and the decrease-key version isn't likely to do much better. However, if you use a Fibonacci heap implementation of the priority queue, then indeed Dijkstra's algorithm will be asymptotically more efficient when using decrease-key.
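Plugging the Fibonacci heap bounds into the formulas above makes the gap explicit: with decrease-key, O(n Te + n Td + m Tk) = O(n + n log n + m) = O(m + n log n), while without it, O(m Te + m Td) = O(m + m log n) = O(m log n). For dense graphs where m is close to n^2, that's the difference between O(n^2) and O(n^2 log n).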

In short, using decrease-key, plus a good priority queue, can drop the asymptotic runtime of Dijkstra's algorithm below what's possible if you keep doing enqueues and dequeues.

Besides this point, some more advanced algorithms, such as Gabow's Shortest Paths Algorithm, use Dijkstra's algorithm as a subroutine and rely heavily on the decrease-key implementation. They exploit the fact that if you know the range of valid distances in advance, you can build a super-efficient priority queue around that range.
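
As a taste of that idea, here's a minimal sketch of a monotone priority queue for integer priorities bounded by max_prio. This is the bucket queue from Dial's algorithm, not Gabow's actual structure, and the names here are mine:

class BucketQueue:
    # buckets[i] holds the nodes whose current priority is i. insert and
    # decrease_key are O(1); since the priorities Dijkstra's algorithm
    # extracts never decrease, delete_min only ever scans forward, so all
    # delete_min calls together cost O(n + max_prio).
    def __init__(self, max_prio):
        self.buckets = [set() for _ in range(max_prio + 1)]
        self.prio = {}
        self.cur = 0

    def insert(self, priority, node):
        self.buckets[priority].add(node)
        self.prio[node] = priority

    def decrease_key(self, node, new_priority):
        self.buckets[self.prio[node]].discard(node)
        self.buckets[new_priority].add(node)
        self.prio[node] = new_priority

    def delete_min(self):
        while not self.buckets[self.cur]:
            self.cur += 1
        node = self.buckets[self.cur].pop()
        del self.prio[node]
        return self.cur, node

With integer edge weights of at most C, every distance is below C * n, so setting max_prio = C * n gives Dijkstra's algorithm in O(m + Cn) time.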

Hope this helps!

Solution 2:

In 2007, there was a paper that studied the differences in execution time between the decrease-key version and the insert version. See http://www.cs.sunysb.edu/~rezaul/papers/TR-07-54.pdf

Their basic conclusion was not to use decrease-key for most graphs. Especially for sparse graphs, the non-decrease-key version is significantly faster than the decrease-key version. See the paper for more details.