
Queues

Understand the queue data structure and the FIFO principle, and learn implementations using arrays and linked lists.

If a stack is like a stack of plates (last in, first out), a queue is like a line at a store: first come, first served. This First-In-First-Out (FIFO) behavior models many real-world scenarios.

What is a Queue?

A queue is a linear data structure that follows the FIFO (First In, First Out) principle. The first element added is the first one removed.

Queue Operations:

enqueue(40)                    dequeue()
    ↓                              ↓
+------+------+------+------+  +------+------+------+
|  10  |  20  |  30  |  40  |  |  20  |  30  |  40  |
+------+------+------+------+  +------+------+------+
   ↑                     ↑        ↑                ↑
 front                 back     front            back
 (dequeue here)        (enqueue here)

Core Operations

| Operation | Description | Time Complexity |
|-----------|-------------|-----------------|
| enqueue(item) | Add item to back | O(1) |
| dequeue() | Remove and return front item | O(1) |
| peek() / front() | Return front item without removing | O(1) |
| isEmpty() | Check if queue is empty | O(1) |
| size() | Return number of elements | O(1) |
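
As a quick preview, here is a minimal sketch of those operations using the standard library's VecDeque (covered in more detail below):

use std::collections::VecDeque;

fn main() {
    let mut queue = VecDeque::new();
    queue.push_back(10); // enqueue
    queue.push_back(20);
    queue.push_back(30);

    assert_eq!(queue.front(), Some(&10)); // peek without removing
    assert_eq!(queue.pop_front(), Some(10)); // dequeue: first in, first out
    assert_eq!(queue.pop_front(), Some(20));
    assert_eq!(queue.len(), 1);
}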

Implementation with Linked List

A linked list naturally supports efficient queue operations: add at the back, remove at the front, both in O(1). Since safe Rust cannot easily hold a second owning handle to the tail node, this version tracks the back with a raw pointer:

struct Node<T> {
    data: T,
    next: Option<Box<Node<T>>>,
}

pub struct Queue<T> {
    front: Option<Box<Node<T>>>,
    back: *mut Node<T>, // raw pointer to the last node, so enqueue is O(1)
    size: usize,
}

impl<T> Queue<T> {
    pub fn new() -> Self {
        Self {
            front: None,
            back: std::ptr::null_mut(),
            size: 0,
        }
    }

    pub fn enqueue(&mut self, item: T) {
        let mut new_node = Box::new(Node {
            data: item,
            next: None,
        });

        // Moving a Box moves only the pointer, not the heap allocation,
        // so this raw pointer stays valid after new_node is moved below.
        let raw_node: *mut _ = &mut *new_node;

        if self.front.is_none() {
            // Empty queue: the new node is both front and back.
            self.front = Some(new_node);
            self.back = raw_node;
        } else {
            // SAFETY: back is non-null whenever the queue is non-empty.
            unsafe {
                (*self.back).next = Some(new_node);
                self.back = raw_node;
            }
        }

        self.size += 1;
    }

    pub fn dequeue(&mut self) -> Option<T> {
        self.front.take().map(|node| {
            self.front = node.next;
            // If the queue is now empty, clear the dangling back pointer.
            if self.front.is_none() {
                self.back = std::ptr::null_mut();
            }
            self.size -= 1;
            node.data
        })
    }

    pub fn peek(&self) -> Option<&T> {
        self.front.as_ref().map(|node| &node.data)
    }

    pub fn is_empty(&self) -> bool {
        self.front.is_none()
    }

    pub fn size(&self) -> usize {
        self.size
    }
}
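
A short usage sketch, assuming the Queue above is in scope:

fn main() {
    let mut queue = Queue::new();
    queue.enqueue(10);
    queue.enqueue(20);
    queue.enqueue(30);

    assert_eq!(queue.peek(), Some(&10)); // front is the first item enqueued
    assert_eq!(queue.dequeue(), Some(10));
    assert_eq!(queue.dequeue(), Some(20));
    assert_eq!(queue.size(), 1);
}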

Safer Implementation Using VecDeque

For production Rust code, use the standard library's VecDeque, a growable ring buffer that provides exactly these operations:

use std::collections::VecDeque;

pub struct Queue<T> {
    items: VecDeque<T>,
}

impl<T> Queue<T> {
    pub fn new() -> Self {
        Self {
            items: VecDeque::new(),
        }
    }

    pub fn enqueue(&mut self, item: T) {
        self.items.push_back(item);
    }

    pub fn dequeue(&mut self) -> Option<T> {
        self.items.pop_front()
    }

    pub fn peek(&self) -> Option<&T> {
        self.items.front()
    }

    pub fn is_empty(&self) -> bool {
        self.items.is_empty()
    }

    pub fn size(&self) -> usize {
        self.items.len()
    }
}

Implementation with Circular Array

Arrays can implement queues efficiently using the circular buffer technique:

pub struct ArrayQueue<T> {
    array: Vec<Option<T>>,
    capacity: usize,
    front: usize,
    back: usize,
    size: usize,
}

impl<T> ArrayQueue<T> {
    pub fn with_capacity(capacity: usize) -> Self {
        // vec![None; capacity] would require T: Clone, so fill manually.
        let mut array = Vec::with_capacity(capacity);
        for _ in 0..capacity {
            array.push(None);
        }

        Self {
            array,
            capacity,
            front: 0,
            back: 0,
            size: 0,
        }
    }

    pub fn enqueue(&mut self, item: T) {
        if self.size == self.capacity {
            self.resize(2 * self.capacity);
        }

        self.array[self.back] = Some(item);
        self.back = (self.back + 1) % self.capacity;
        self.size += 1;
    }

    pub fn dequeue(&mut self) -> Option<T> {
        if self.is_empty() {
            return None;
        }

        let data = self.array[self.front].take();
        self.front = (self.front + 1) % self.capacity;
        self.size -= 1;
        data
    }

    fn resize(&mut self, new_capacity: usize) {
        let mut new_array = Vec::with_capacity(new_capacity);
        for _ in 0..new_capacity {
            new_array.push(None);
        }

        for i in 0..self.size {
            new_array[i] = self.array[(self.front + i) % self.capacity].take();
        }

        self.array = new_array;
        self.front = 0;
        self.back = self.size;
        self.capacity = new_capacity;
    }

    pub fn is_empty(&self) -> bool {
        self.size == 0
    }

    pub fn size(&self) -> usize {
        self.size
    }
}

Circular Array Queue (capacity = 5):

Initial: front=0, back=0
+---+---+---+---+---+
|   |   |   |   |   |
+---+---+---+---+---+
  ↑
f,b

After enqueue(10, 20, 30):
+---+---+---+---+---+
| 10| 20| 30|   |   |
+---+---+---+---+---+
  ↑           ↑
  f           b

After dequeue() twice:
+---+---+---+---+---+
|   |   | 30|   |   |
+---+---+---+---+---+
          ↑   ↑
          f   b

After enqueue(40, 50, 60): wraps around!
+---+---+---+---+---+
| 60|   | 30| 40| 50|
+---+---+---+---+---+
      ↑   ↑
      b   f

Why Circular?

Without wrapping, we'd waste space at the front after dequeuing. The modulo operator (%) lets us reuse those positions, treating the array as circular.
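
A sketch that replays the diagram's sequence, assuming the ArrayQueue above is in scope:

fn main() {
    let mut q = ArrayQueue::with_capacity(5);
    q.enqueue(10);
    q.enqueue(20);
    q.enqueue(30);

    assert_eq!(q.dequeue(), Some(10));
    assert_eq!(q.dequeue(), Some(20));

    // back wraps past the end of the array via the modulo
    q.enqueue(40);
    q.enqueue(50);
    q.enqueue(60);

    assert_eq!(q.dequeue(), Some(30));
    assert_eq!(q.size(), 3);
}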

Real-World Applications

Task Scheduling

use std::collections::VecDeque;

pub struct TaskScheduler {
    task_queue: VecDeque<String>,
}

impl TaskScheduler {
    pub fn new() -> Self {
        Self {
            task_queue: VecDeque::new(),
        }
    }

    pub fn add_task(&mut self, task: String) {
        println!("Task '{}' added to queue", task);
        self.task_queue.push_back(task); // print first, then move: no clone needed
    }

    pub fn process_next(&mut self) -> Option<String> {
        if let Some(task) = self.task_queue.pop_front() {
            println!("Processing: {}", task);
            Some(task)
        } else {
            println!("No tasks to process");
            None
        }
    }
}
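
A minimal usage sketch:

fn main() {
    let mut scheduler = TaskScheduler::new();
    scheduler.add_task("compile".to_string());
    scheduler.add_task("run tests".to_string());

    scheduler.process_next(); // Processing: compile
    scheduler.process_next(); // Processing: run tests
    scheduler.process_next(); // No tasks to process
}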

Print Spooler

use std::collections::VecDeque;

pub struct PrintJob {
    document: String,
    user: String,
}

pub struct PrintSpooler {
    jobs: VecDeque<PrintJob>,
}

impl PrintSpooler {
    pub fn new() -> Self {
        Self {
            jobs: VecDeque::new(),
        }
    }

    pub fn add_job(&mut self, document: String, user: String) {
        let job = PrintJob { document, user };
        self.jobs.push_back(job);
    }

    pub fn print_next(&mut self) {
        if let Some(job) = self.jobs.pop_front() {
            println!("Printing {} for {}", job.document, job.user);
        }
    }
}

BFS (Breadth-First Search)

Queues are essential for level-order traversal and shortest path:

use std::collections::{VecDeque, HashSet, HashMap};

fn bfs(graph: &HashMap<i32, Vec<i32>>, start: i32) {
    let mut visited = HashSet::new();
    let mut queue = VecDeque::new();

    queue.push_back(start);
    visited.insert(start);

    while let Some(node) = queue.pop_front() {
        println!("{}", node); // Process node

        if let Some(neighbors) = graph.get(&node) {
            for &neighbor in neighbors {
                if !visited.contains(&neighbor) {
                    visited.insert(neighbor);
                    queue.push_back(neighbor);
                }
            }
        }
    }
}
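
For example, on a small hypothetical graph (1 connects to 2 and 3, which both connect to 4), reusing the imports above:

fn main() {
    let mut graph = HashMap::new();
    graph.insert(1, vec![2, 3]);
    graph.insert(2, vec![4]);
    graph.insert(3, vec![4]);
    graph.insert(4, vec![]);

    bfs(&graph, 1); // prints 1, 2, 3, 4: each level before the next
}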

Rate Limiter

use std::collections::VecDeque;
use std::time::{SystemTime, UNIX_EPOCH};

pub struct RateLimiter {
    max_requests: usize,
    time_window: u64, // in seconds
    requests: VecDeque<u64>,
}

impl RateLimiter {
    pub fn new(max_requests: usize, time_window: u64) -> Self {
        Self {
            max_requests,
            time_window,
            requests: VecDeque::new(),
        }
    }

    pub fn is_allowed(&mut self) -> bool {
        let current_time = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_secs();

        // Drop requests that fall outside the sliding window.
        // saturating_sub avoids underflow if the window exceeds the clock value.
        while let Some(&front) = self.requests.front() {
            if front < current_time.saturating_sub(self.time_window) {
                self.requests.pop_front();
            } else {
                break;
            }
        }

        if self.requests.len() < self.max_requests {
            self.requests.push_back(current_time);
            true
        } else {
            false
        }
    }
}
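
A usage sketch, allowing at most 2 requests per 10-second window:

fn main() {
    let mut limiter = RateLimiter::new(2, 10);

    assert!(limiter.is_allowed());  // 1st request: allowed
    assert!(limiter.is_allowed());  // 2nd request: allowed
    assert!(!limiter.is_allowed()); // 3rd within the window: rejected
}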

Priority Queue

Sometimes items have different priorities. A priority queue dequeues by priority, not arrival order:

use std::collections::BinaryHeap;
use std::cmp::Ordering;

#[derive(Eq, PartialEq)]
struct Task {
    priority: i32,
    name: String,
}

impl Ord for Task {
    fn cmp(&self, other: &Self) -> Ordering {
        // Reverse the comparison so BinaryHeap (a max-heap) pops the
        // lowest priority number first (lower number = higher urgency).
        other.priority.cmp(&self.priority)
    }
}

impl PartialOrd for Task {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

pub struct PriorityQueue {
    heap: BinaryHeap<Task>,
}

impl PriorityQueue {
    pub fn new() -> Self {
        Self {
            heap: BinaryHeap::new(),
        }
    }

    pub fn enqueue(&mut self, name: String, priority: i32) {
        self.heap.push(Task { priority, name });
    }

    pub fn dequeue(&mut self) -> Option<String> {
        self.heap.pop().map(|task| task.name)
    }

    pub fn is_empty(&self) -> bool {
        self.heap.is_empty()
    }
}

fn main() {
    let mut pq = PriorityQueue::new();
    pq.enqueue("Low priority task".to_string(), 3);
    pq.enqueue("High priority task".to_string(), 1);
    pq.enqueue("Medium priority task".to_string(), 2);

    println!("{:?}", pq.dequeue()); // Some("High priority task")
    println!("{:?}", pq.dequeue()); // Some("Medium priority task")
    println!("{:?}", pq.dequeue()); // Some("Low priority task")
}

Queue vs Priority Queue

Regular queues are FIFO. Priority queues dequeue by priority (often implemented with heaps). Don't confuse them—they serve different purposes.

Stack vs Queue

| Aspect | Stack | Queue |
|--------|-------|-------|
| Order | LIFO (Last In, First Out) | FIFO (First In, First Out) |
| Add | push (top) | enqueue (back) |
| Remove | pop (top) | dequeue (front) |
| Use case | Undo, recursion, parsing | Scheduling, BFS, buffering |
| Real-world | Stack of plates | Line at store |

Implementation Comparison

| Aspect | Linked List | Circular Array | VecDeque |
|--------|-------------|----------------|----------|
| Memory | Dynamic, extra pointer overhead | Fixed or grows | Dynamic, efficient |
| Cache | Scattered nodes | Contiguous | Mostly contiguous |
| Enqueue | O(1) | O(1) amortized | O(1) amortized |
| Dequeue | O(1) | O(1) | O(1) |
| Best for | Educational | Known max size | Production Rust code |

Key Takeaways

  1. Queues follow FIFO: first in, first out
  2. Core operations (enqueue, dequeue, peek) are O(1)
  3. Linked list implementation is natural and efficient
  4. Circular arrays avoid wasted space in array implementation
  5. Queues are essential for BFS, scheduling, and buffering
  6. In Rust, VecDeque is the recommended choice for most applications
  7. Priority queues order by priority, not arrival time

Queues model waiting lines, task scheduling, and level-order traversal. Understanding both their behavior and implementation prepares you for countless algorithms that rely on processing elements in order of arrival.
