Overview
A linked-list queue implements FIFO with nodes connected via next pointers and two external references: head (front) and tail (last). The key strengths are:
- Strict O(1) `ENQUEUE` and `DEQUEUE` without shifting/copying elements.
- Unbounded growth (limited only by memory).
- Stable references to enqueued elements (until removed).
Trade-offs include extra memory per node, potential allocator fragmentation, and poorer cache locality than array-based rings (see Queue Using Array).
Note
The representation used below is singly linked with a `tail` pointer for O(1) appends. A dummy (sentinel) head is optional; it simplifies edge cases at the cost of one extra node.
Structure Definition
```
struct Node { value: T, next: Node* }

struct QueueList {
    head: Node*   // first element or NIL
    tail: Node*   // last element or NIL (must be NIL when head is NIL)
    n:    integer // size (optional but useful for diagnostics)
}
```
Invariants
- `n == 0` ⇔ `head == NIL` and `tail == NIL`
- `n > 0` ⇒ `head != NIL` and `tail != NIL` and `tail->next == NIL`
- Following `next` from `head` yields exactly `n` nodes ending at `tail`
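These invariants can be verified mechanically. The following is a debugging sketch in Python, assuming a queue object with `head`, `tail`, and `n` fields and nodes with a `next` pointer as defined above (the `Node` class here is an illustrative stand-in):

```python
# Debugging sketch: verify the three invariants on any queue-like object
# with `head`, `tail`, and `n` attributes. NIL is modeled as None.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def check_invariants(q):
    if q.n == 0:
        # n == 0  <=>  head == NIL and tail == NIL
        assert q.head is None and q.tail is None
        return
    # n > 0  =>  head != NIL and tail != NIL and tail->next == NIL
    assert q.head is not None and q.tail is not None
    assert q.tail.next is None
    # Following `next` from head must visit exactly n nodes, ending at tail.
    count, node, last = 0, q.head, None
    while node is not None:
        count, last, node = count + 1, node, node.next
    assert count == q.n and last is q.tail
```

Calling such a checker after every operation in debug builds catches broken links immediately instead of several operations later.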
Tip
Keep the `tail->next == NIL` invariant sacred. Many subtle bugs arise from forgetting to null `tail` itself after dequeuing the last element.
Core Operations
Enqueue at tail (append)
```
function ENQUEUE(Q, x):
    node = new Node(x, NIL)
    if Q.tail == NIL:        // queue is empty
        Q.head = node
        Q.tail = node
    else:
        Q.tail.next = node
        Q.tail = node
    Q.n = Q.n + 1
```

Dequeue at head (remove front)
```
function DEQUEUE(Q):
    if Q.head == NIL: error "underflow"
    node = Q.head
    Q.head = node.next
    if Q.head == NIL:        // became empty
        Q.tail = NIL
    Q.n = Q.n - 1
    x = node.value
    free(node)               // or let GC reclaim
    return x
```

Peek and emptiness
```
function PEEK(Q):
    if Q.head == NIL: error "underflow"
    return Q.head.value

function EMPTY(Q):
    return Q.head == NIL
```

Example (Stepwise)
Start empty: head=tail=NIL.
- `ENQ(7)` → `head=tail=Node(7)`
- `ENQ(4)` → list `7 → 4`, `tail=4`
- `DEQ()` → returns `7`, list `4`, `head=tail=4`
- `DEQ()` → returns `4`, list empty: set both `head=tail=NIL`
- `DEQ()` → underflow error
Note
The empty → one and one → empty transitions are the only tricky states. Test them specifically.
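The pseudocode and the stepwise trace above can be transcribed directly into Python. This is a sketch (class and method names are illustrative, mirroring `ENQUEUE`/`DEQUEUE`/`PEEK`/`EMPTY`), singly linked with no sentinel:

```python
# Singly linked FIFO queue with head/tail pointers, matching the
# pseudocode above. Underflow is reported via IndexError.
class _Node:
    __slots__ = ("value", "next")
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedQueue:
    def __init__(self):
        self.head = None   # front of queue, or None when empty
        self.tail = None   # back of queue, or None when empty
        self.n = 0

    def enqueue(self, x):
        node = _Node(x)
        if self.tail is None:       # queue is empty
            self.head = node
        else:
            self.tail.next = node
        self.tail = node
        self.n += 1

    def dequeue(self):
        if self.head is None:
            raise IndexError("underflow")
        node = self.head
        self.head = node.next
        if self.head is None:       # became empty: null the stale tail
            self.tail = None
        self.n -= 1
        return node.value

    def peek(self):
        if self.head is None:
            raise IndexError("underflow")
        return self.head.value

    def empty(self):
        return self.head is None
```

Running the trace from the example (`ENQ(7)`, `ENQ(4)`, three `DEQ()`s) exercises both of the tricky transitions: empty → one on the first enqueue, and one → empty on the second dequeue.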
Complexity and Performance
| Operation | Cost | Notes |
|---|---|---|
| ENQUEUE | O(1) | Append at tail |
| DEQUEUE | O(1) | Remove from head |
| PEEK | O(1) | Read head.value |
| Memory | Θ(n) | Per-node pointer + allocator metadata |
Performance discussion:
- Locality: Nodes are typically scattered on the heap → poorer cache performance than a ring buffer.
- Allocation cost: One alloc per enqueue (and one free per dequeue) unless using object pools.
- Throughput: Still predictable O(1) ops; allocator behavior can dominate latency tails.
Implementation Details or Trade-offs
Sentinel head variant
Using a dummy head (head always points to a non-data node; first element at head.next) removes special cases:
```
struct QueueList {
    head: Node*   // sentinel (never NIL after init)
    tail: Node*   // either sentinel or last data node
    n:    integer
}
```

- Empty iff `head == tail`.
- `ENQUEUE`: append after `tail`.
- `DEQUEUE`: remove `head.next`; if it becomes `NIL`, set `tail = head`.
This yields less branching at the cost of one extra node and a slightly different API surface.
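A minimal Python sketch of the sentinel variant (names are illustrative), showing how the empty-queue branch disappears from enqueue:

```python
# Sentinel-head queue: `head` always points at a dummy node; the first
# data element lives at head.next. Empty iff head is tail.
class _Node:
    def __init__(self, value=None):
        self.value = value
        self.next = None

class SentinelQueue:
    def __init__(self):
        self.head = _Node()         # sentinel, never None after init
        self.tail = self.head
        self.n = 0

    def enqueue(self, x):
        node = _Node(x)
        self.tail.next = node       # no empty-vs-nonempty branch needed
        self.tail = node
        self.n += 1

    def dequeue(self):
        first = self.head.next
        if first is None:
            raise IndexError("underflow")
        self.head.next = first.next
        if self.head.next is None:  # removed the last data node
            self.tail = self.head
        self.n -= 1
        return first.value

    def empty(self):
        return self.head is self.tail
```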
Size tracking vs traversal
- Maintaining `n` makes `SIZE()` O(1) and simplifies capacity checks.
- Omitting `n` avoids one integer update per op but makes `SIZE()` O(n).
Memory management and fragmentation
- In manual-memory languages, pool nodes: pre-allocate blocks or use a free list to avoid fragmentation and cut allocator overhead.
- In GC languages, fragmentation still affects cache locality; pooling may help on latency-sensitive paths.
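The free-list idea can be sketched as follows. This is illustrative Python (the pattern matters most in manual-memory languages, where `acquire`/`release` would wrap `malloc`/`free`); dequeued nodes are parked on a free list and reused by the next enqueue, so steady-state operation allocates nothing new:

```python
# Free-list node pool sketch: released nodes are chained through their own
# `next` pointers and handed back out before any fresh allocation occurs.
class _Node:
    __slots__ = ("value", "next")
    def __init__(self):
        self.value = None
        self.next = None

class NodePool:
    def __init__(self):
        self._free = None            # head of singly linked free list

    def acquire(self, value):
        node = self._free
        if node is None:
            node = _Node()           # pool empty: allocate a fresh node
        else:
            self._free = node.next   # pop from the free list
        node.value, node.next = value, None
        return node

    def release(self, node):
        node.value = None            # drop the payload reference
        node.next = self._free       # push onto the free list
        self._free = node
```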
Multi-producer/consumer considerations (brief)
- Single-threaded: the operations above suffice.
- SPSC (single-producer/single-consumer): a linked queue can be made lock-free with atomic `next` publication and head/tail moves, but array rings are usually simpler.
- MPMC: use a well-tested concurrent queue algorithm (e.g., Michael–Scott) or a mutex with clear blocking semantics.
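The mutex option is the easiest to get right. A minimal Python sketch (one lock plus a condition variable for blocking consumers; `collections.deque` stands in for the linked storage purely for brevity):

```python
import threading
from collections import deque

# MPMC sketch: one condition variable guards all state. dequeue() blocks
# while the queue is empty; enqueue() wakes one waiting consumer.
class LockedQueue:
    def __init__(self):
        self._items = deque()
        self._cond = threading.Condition()

    def enqueue(self, x):
        with self._cond:
            self._items.append(x)
            self._cond.notify()          # wake one waiting consumer

    def dequeue(self):
        with self._cond:
            while not self._items:       # re-check after every wakeup
                self._cond.wait()
            return self._items.popleft()
```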
Error handling and API surface
Offer both throwing and non-throwing forms:
- `DEQUEUE()` vs `TRY_DEQUEUE(&x)` (returns false on empty).
- `PEEK()` vs `TRY_PEEK(&x)`.
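In a language without out-parameters, the non-throwing forms naturally become `(found, value)` pairs. A sketch (method names `try_dequeue`/`try_peek` mirror the list above; the list-backed storage is a stand-in for the linked nodes):

```python
# Non-throwing API sketch: try_* methods report emptiness via a boolean
# instead of raising, so callers can poll without exception handling.
class TinyQueue:
    def __init__(self):
        self._items = []

    def enqueue(self, x):
        self._items.append(x)

    def try_dequeue(self):
        if not self._items:
            return False, None      # empty: no value produced
        return True, self._items.pop(0)

    def try_peek(self):
        if not self._items:
            return False, None
        return True, self._items[0]
```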
Determinism and iteration
Iteration from head to tail preserves arrival order. Avoid exposing raw node pointers to clients unless ownership is clear.
Practical Use Cases
- Unbounded buffering where input rates may spike unpredictably.
- Task schedulers that attach metadata to tasks stored in nodes (links, timestamps).
- BFS on massive graphs that exceed contiguous-memory benefits (though Queue Using Array is usually faster).
- I/O pipelines where objects must survive list re-links without copying (handles/pointers stored in nodes).
Limitations / Pitfalls
Warning
Forgetting `tail = NIL` on the last dequeue. If the queue becomes empty and `tail` is not nulled, the next enqueue may link from a stale node.
Warning
Leaking nodes on errors. Ensure allocations that fail mid-operation (or exceptions) roll back or free correctly.
Warning
Concurrent misuse. Without synchronization, two producers appending to `tail` can corrupt links. Use locks or a proper concurrent algorithm.
Warning
Unbounded memory growth. A stalled consumer with fast producers will allocate indefinitely. Consider backpressure or a hard cap.
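One way to impose a hard cap is to make enqueue itself fallible, pushing the policy decision (retry, drop, or slow down) back onto the producer. A sketch, where `max_size` is an illustrative parameter rather than a standard API:

```python
# Backpressure sketch: try_enqueue refuses new elements once the cap is
# reached, so a stalled consumer cannot drive unbounded allocation.
class BoundedQueue:
    def __init__(self, max_size):
        self._items = []
        self._max = max_size

    def try_enqueue(self, x):
        if len(self._items) >= self._max:
            return False            # caller must retry, drop, or throttle
        self._items.append(x)
        return True

    def dequeue(self):
        if not self._items:
            raise IndexError("underflow")
        return self._items.pop(0)
```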
Implementation Notes or Trade-offs
- Payload strategy: Store `T` by value when small; otherwise store pointers/handles in nodes to keep nodes slim.
- Testing hooks: Add fuzz tests with alternating `ENQUEUE`/`DEQUEUE` to hammer empty ↔ one ↔ many transitions and random interleavings.
- Instrumentation: Track max size and allocation failures for capacity planning.
- Hybrid approach: Use a linked list of fixed-size blocks to reduce per-element overhead while retaining unbounded growth.
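The fuzz-testing idea can be sketched by differential testing against a trusted reference model. This Python sketch assumes any queue object exposing `enqueue`/`dequeue`/`empty` methods and compares every result with `collections.deque`:

```python
import random
from collections import deque

def fuzz_queue(make_queue, ops=10_000, seed=0):
    # Differential fuzz: random ENQUEUE/DEQUEUE mix, every result checked
    # against deque. Roughly even op probabilities keep the queue small,
    # so the empty <-> one <-> many transitions are hit constantly.
    rng = random.Random(seed)
    q, model = make_queue(), deque()
    for _ in range(ops):
        if model and rng.random() < 0.5:
            assert q.dequeue() == model.popleft()
        else:
            x = rng.randrange(100)
            q.enqueue(x)
            model.append(x)
        assert q.empty() == (len(model) == 0)
```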
Summary
A linked-list queue provides FIFO with constant-time appends/removals via head/tail pointers. It shines when capacity is unknown or unbounded and when moving/copying elements is undesirable. To build a robust version, rigorously handle the empty ↔ one transitions, manage memory (pools or GC), and choose synchronization appropriate to the producer/consumer pattern. For cache-friendly, fixed-capacity pipelines, prefer an array ring; for flexible growth and stable references, the linked queue is the right tool.