Overview
A priority queue (PQ) stores items keyed by priority and supports fast retrieval of the highest priority element (either min or max depending on convention). The PQ is an abstract interface with multiple concrete implementations. The most common backing is a binary heap, but variants such as d-ary heaps, pairing heaps, binomial/Fibonacci heaps, bucket/radix queues (for bounded integer keys), and tree-based structures (balanced BSTs/B-trees) trade off operation costs and constants.
Typical operations:
- INSERT(x, key) — add an item with a given key.
- FIND_MIN() / FIND_MAX() — peek at the top item without removing it.
- EXTRACT_MIN() / EXTRACT_MAX() — remove and return the top item.
- DECREASE_KEY(handle, newKey) — lower an item's key (min-PQ; the dual is INCREASE_KEY for a max-PQ).
- MERGE(Q1, Q2) — optional meld operation supported by some heaps.
Note
Many algorithmic guarantees (e.g., Dijkstra, Prim) assume DECREASE_KEY is available or emulated (the "lazy" approach pushes duplicates and discards stale entries on pop).
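The lazy pattern can be sketched with Python's heapq; this is an illustrative Dijkstra over an adjacency dict (the graph shape and names are assumptions for the example):

```python
import heapq

def dijkstra(adj, source):
    """Lazy-decrease-key Dijkstra: push duplicates, skip stale pops."""
    dist = {source: 0}
    pq = [(0, source)]  # (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: a shorter path was already recorded
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))  # duplicate instead of DECREASE_KEY
    return dist
```

The trade-off: the heap can hold up to one entry per edge relaxation, but no handle bookkeeping is needed.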
Structure Definition
Binary Heap (array-based, 0-indexed)
A binary heap is a complete binary tree embedded in an array H[0..n-1] satisfying the heap-order property (for a min-heap: H[i].key ≤ H[left(i)].key and H[i].key ≤ H[right(i)].key whenever children exist).
Indexing (0-based):
- left(i) = 2*i + 1
- right(i) = 2*i + 2
- parent(i) = floor((i-1)/2)
Fields per entry (conceptual):
H[i].key // priority
H[i].value // payload (optional)
The heap shape is always complete (filled left-to-right) which enables compact storage and simple sift-up / sift-down operations.
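The 0-based index arithmetic is easy to check directly (a minimal sketch):

```python
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2
def parent(i): return (i - 1) // 2  # floor division

# Children of the root, and round-trips back to an arbitrary parent:
assert left(0) == 1 and right(0) == 2
assert parent(left(3)) == 3 and parent(right(3)) == 3
```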
Alternative Backings
- d-ary heap: each node has d children; reduces the height to ⌈log_d n⌉ (fewer levels) but increases per-level comparisons in sift_down. Useful when DECREASE_KEY is frequent relative to EXTRACT_MIN, or when cache behavior favors wider nodes.
- Pairing heap: a self-adjusting multiway heap with excellent empirical DECREASE_KEY performance; simple to implement; a good default when you truly need DECREASE_KEY.
- Binomial/Fibonacci heaps: offer meld-friendly theoretical bounds; a Fibonacci heap gives O(1) amortized DECREASE_KEY and INSERT and O(log n) EXTRACT_MIN — great on paper, large constants in practice.
- Balanced trees (red-black tree/B-tree): support ordered operations and iteration; O(log n) insert/extract; stable iteration order.
- Bucket/radix queues: for small integer priorities in [0..C], achieve near-O(1) operations (e.g., Dial's algorithm); ideal for integer-weight SSSP variants.
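To make the bucket-queue idea concrete, here is a minimal monotone bucket queue for keys in [0..C] (a sketch; the class and field names are illustrative, and it assumes extracted keys never decrease):

```python
class BucketQueue:
    """Monotone bucket queue for integer priorities in [0..C]."""
    def __init__(self, C):
        self.buckets = [[] for _ in range(C + 1)]
        self.cur = 0      # scan position; valid while keys are non-decreasing
        self.size = 0

    def insert(self, key, item):
        self.buckets[key].append(item)
        self.cur = min(self.cur, key)
        self.size += 1

    def extract_min(self):
        while not self.buckets[self.cur]:
            self.cur += 1  # advance to the next non-empty bucket
        self.size -= 1
        return self.cur, self.buckets[self.cur].pop()
```

Each key is scanned at most once across all extracts, giving the O(m + C) total mentioned below for graph algorithms.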
Core Operations
Below are canonical heap-based operations (min-heap).
function INSERT(H, x):
    H.push(x)                 // append at end
    i = H.size - 1
    while i > 0 and H[i].key < H[parent(i)].key:
        swap(H[i], H[parent(i)])
        i = parent(i)

function FIND_MIN(H):
    if H.size == 0: return NIL
    return H[0]

function EXTRACT_MIN(H):
    if H.size == 0: return NIL
    min = H[0]
    H[0] = H.back()
    H.pop()
    heapify_down(H, 0)
    return min

function heapify_down(H, i):
    n = H.size
    while true:
        l = left(i); r = right(i); s = i
        if l < n and H[l].key < H[s].key: s = l
        if r < n and H[r].key < H[s].key: s = r
        if s == i: break
        swap(H[i], H[s])
        i = s

function DECREASE_KEY(H, handle, newKey):
    i = handle.index
    if newKey > H[i].key: error
    H[i].key = newKey
    while i > 0 and H[i].key < H[parent(i)].key:
        swap(H[i], H[parent(i)])
        i = parent(i)

Tip
If your language lacks handles (indexes or pointers into the heap), emulate DECREASE_KEY with the lazy pattern: push a duplicate entry with the smaller key and mark the old one as stale on extract.
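The pseudocode translates directly to a runnable min-heap; this sketch stores an index back-pointer per entry so DECREASE_KEY can locate items (class and field names are illustrative):

```python
class Entry:
    def __init__(self, key, value):
        self.key, self.value, self.index = key, value, -1

class MinHeap:
    def __init__(self):
        self.h = []

    def _swap(self, i, j):
        self.h[i], self.h[j] = self.h[j], self.h[i]
        self.h[i].index, self.h[j].index = i, j  # keep back-pointers current

    def _sift_up(self, i):
        while i > 0 and self.h[i].key < self.h[(i - 1) // 2].key:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.h)
        while True:
            l, r, s = 2 * i + 1, 2 * i + 2, i
            if l < n and self.h[l].key < self.h[s].key:
                s = l
            if r < n and self.h[r].key < self.h[s].key:
                s = r
            if s == i:
                break
            self._swap(i, s)
            i = s

    def insert(self, key, value):
        e = Entry(key, value)
        self.h.append(e)
        e.index = len(self.h) - 1
        self._sift_up(e.index)
        return e  # the handle used by decrease_key

    def extract_min(self):
        top = self.h[0]
        self._swap(0, len(self.h) - 1)
        self.h.pop()
        if self.h:
            self._sift_down(0)
        return top

    def decrease_key(self, handle, new_key):
        assert new_key <= handle.key
        handle.key = new_key
        self._sift_up(handle.index)
```

The `_swap` helper is the one place back-pointers are updated, which keeps handle maintenance from leaking into the sift loops.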
Example (Stepwise)
Min-heap inserts of keys: 7, 3, 10, 1, 5
- Insert 7 → [7]
- Insert 3 → [3, 7] (sift-up)
- Insert 10 → [3, 7, 10]
- Insert 1 → [1, 3, 10, 7] (bubbles above 7, then 3)
- Insert 5 → [1, 3, 10, 7, 5]

EXTRACT_MIN():
- Swap root with last: [5, 3, 10, 7], pop 1, then heapify_down yields [3, 5, 10, 7].
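The trace above can be checked against Python's heapq, which is also a 0-indexed array min-heap:

```python
import heapq

h = []
for k in [7, 3, 10, 1, 5]:
    heapq.heappush(h, k)
assert h == [1, 3, 10, 7, 5]   # matches the insert trace

assert heapq.heappop(h) == 1
assert h == [3, 5, 10, 7]      # matches the EXTRACT_MIN trace
```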
Complexity and Performance
For a binary heap (array-based):
- INSERT: O(log n) worst-case (sift-up).
- FIND_MIN: O(1).
- EXTRACT_MIN: O(log n) (sift-down).
- DECREASE_KEY: O(log n) (sift-up).
- BUILD_HEAP(A[0..n-1]): O(n) using bottom-up heapify.
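The O(n) bottom-up build runs sift-down from the last internal node toward the root; Python's heapq.heapify uses this approach, and the result can be checked against the heap-order property:

```python
import heapq

a = [9, 4, 7, 1, 2, 6]
heapq.heapify(a)  # bottom-up heapify, O(n) total

# Heap-order property: every parent <= both of its children.
n = len(a)
for i in range(n):
    for c in (2 * i + 1, 2 * i + 2):
        if c < n:
            assert a[i] <= a[c]
assert a[0] == 1  # the minimum ends up at the root
```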
For a d-ary heap:
- Height ≈ log_d n.
- EXTRACT_MIN: O(d log_d n) (pick the min among up to d children per level; d is often tuned to the cache line width).
- DECREASE_KEY: O(log_d n) (rise along parents, one comparison per level).
For pairing/Fibonacci heaps:
- Pairing heap: excellent DECREASE_KEY empirically, but only amortized analyses; EXTRACT_MIN is O(log n) amortized.
- Fibonacci heap: amortized INSERT and DECREASE_KEY are O(1) and EXTRACT_MIN is O(log n); best for theory or workloads heavy in DECREASE_KEY.

For bucket/radix queues (bounded integer priorities):
- Many operations are amortized O(1); total time can be O(m + C) for graph algorithms, where C is the maximum key.
Note
Real-world performance depends on memory locality and constant factors. Binary heaps are hard to beat when you don't need a fast DECREASE_KEY.
Implementation Details or Trade-offs
Min vs Max and Comparators
- Implement a min-heap by default; get a max-heap by negating keys or reversing the comparator.
- Supply a comparator (less-than) for custom priorities; ensure it defines a strict weak ordering for correctness.
Stability (tie-breaking)
PQ retrieval is typically not stable. To get stability:
- Store composite keys (priority, seqno), where seqno increments per insert (the smaller seqno wins when priorities tie).
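The (priority, seqno) tie-break can be sketched with heapq and an insertion counter (the class name is illustrative):

```python
import heapq
from itertools import count

class StablePQ:
    """Min-PQ that returns equal-priority items in insertion order."""
    def __init__(self):
        self.h = []
        self.seq = count()  # monotonically increasing tie-breaker

    def push(self, priority, item):
        heapq.heappush(self.h, (priority, next(self.seq), item))

    def pop(self):
        priority, _, item = heapq.heappop(self.h)
        return item
```

The seqno also prevents the heap from ever comparing the payloads themselves, which matters when items are not orderable.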
Handles and DECREASE_KEY
Heaps rearrange items, so a handle is needed to find an item's current index quickly. Strategies:
- Store index back-pointers in items.
- Keep a map from IDs to indices; update it on swaps.
- Use lazy duplicates if handle maintenance is too complex.
Memory Layout & Cache
- Use a contiguous vector/array for heaps to maximize cache locality.
- Favor 0-based indexing (matches typical language arrays and keeps the arithmetic simple).
- For d-ary heaps, choose d to cohere with cache lines (e.g., d = 4 or 8).
Decrease-key Alternatives
- Lazy: push the updated key as a new entry; on pop, skip the entry if it is stale. Simpler, and often faster with binary heaps.
- Pairing heap: choose it when the algorithm spends most of its time in DECREASE_KEY (e.g., some MST/SSSP workloads).
Merge (Meldable PQs)
- Binomial/Fibonacci and pairing heaps support efficient MERGE(Q1, Q2).
- Array heaps don't meld efficiently without rebuilding.
Integer Keys and Buckets
- When keys are small nonnegative integers, bucket queues or radix heaps can outperform comparison-based heaps (Dial’s algorithm, Thorup’s SSSP).
Concurrency
- Simple locks around heap ops serialize access; for high throughput, consider work-stealing or sharded PQs with occasional global combine steps.
Practical Use Cases
- Graph algorithms: Dijkstra's algorithm (needs DECREASE_KEY or the lazy pattern), Prim's algorithm (keys = cheapest frontier edges), A* (priority = g + h).
- Scheduling: CPU run queues, event-driven simulators, job schedulers (priority/time).
- Streaming/selection: maintain the top-k largest with a min-heap of size k (its root is the k-th largest seen so far), order-statistics approximations.
- Rate limiting/throttling: next-eligible timestamps in a min-PQ.
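The streaming top-k use case is a short sketch with heapq; a size-k min-heap holds the largest items seen so far:

```python
import heapq

def top_k(stream, k):
    """Keep the k largest items using a size-k min-heap."""
    h = []
    for x in stream:
        if len(h) < k:
            heapq.heappush(h, x)
        elif x > h[0]:
            heapq.heapreplace(h, x)  # evict the smallest of the current top k
    return sorted(h, reverse=True)
```

Each item costs at most O(log k), so the whole stream is processed in O(n log k) with O(k) memory.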
Limitations / Pitfalls
Warning
No stable order by default. If two items tie, their return order is unspecified unless you embed a secondary key.
Warning
Incorrect DECREASE_KEY. Forgetting to bubble up after lowering a key breaks the heap invariant; tests may silently pass on lucky inputs.
Warning
Stale entries (lazy design). Always check whether the popped entry’s key matches the current best in your side table before acting.
Warning
Comparator mistakes. A non-transitive or stateful comparator can corrupt ordering and cause infinite loops or invalid heaps.
Warning
Unbounded growth with lazy duplicates. Periodically purge or cap queue size if the workload creates many updates.
Summary
A priority queue abstracts “give me the best next item” with operations optimized by the chosen backing. Binary heaps are compact, cache-friendly, and strong general-purpose defaults; d-ary and pairing heaps shift trade-offs when DECREASE_KEY frequency or cache behavior dominates; bucket/radix structures win for bounded integer keys. Robust implementations define a clear comparator, handle ties (optionally with stability), choose a strategy for DECREASE_KEY (handles or lazy), and keep memory locality in mind.