
Dynamic Arrays

Understanding resizable arrays, amortized analysis, and how languages implement them. Learn about array doubling and the ArrayList/Vector pattern.

Static arrays require knowing the size upfront. But what if you don't know how many elements you'll need? Dynamic arrays solve this by automatically growing when they run out of space.

What is a Dynamic Array?

A dynamic array is a resizable array that automatically increases its capacity when full. You interact with it like a regular array, but it handles memory management internally.

Languages implement dynamic arrays differently:

  • Python: list
  • JavaScript: Array
  • Java: ArrayList
  • C++: std::vector
  • Go: slices
  • Rust: Vec

How Dynamic Arrays Work

A dynamic array maintains:

  1. An underlying static array that stores the actual elements
  2. Size (length): The number of elements currently stored
  3. Capacity: The total space available in the underlying array
Dynamic Array with size=5, capacity=8

Underlying array:
Index:    [0]  [1]  [2]  [3]  [4]  [5]  [6]  [7]
          +----+----+----+----+----+----+----+----+
Value:    | 10 | 20 | 30 | 40 | 50 |    |    |    |
          +----+----+----+----+----+----+----+----+
                                    ^--- unused capacity
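In Rust, `Vec` exposes both quantities directly, so the size/capacity split is easy to observe. A small sketch (note that `Vec::with_capacity` guarantees *at least* the requested capacity, not an exact value):

```rust
// Build a Vec with room for 8 elements and push 5; len() tracks the
// elements stored, capacity() tracks the reserved slots.
fn make_array() -> Vec<i32> {
    let mut v = Vec::with_capacity(8); // reserve 8 slots up front
    for x in [10, 20, 30, 40, 50] {
        v.push(x); // no reallocation: len stays within capacity
    }
    v
}

fn main() {
    let v = make_array();
    // size = 5, capacity >= 8: at least three unused slots remain
    println!("len = {}, capacity = {}", v.len(), v.capacity());
}
```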

The Growth Strategy

When you append an element and the array is full, it:

  1. Allocates a new, larger array (typically 2x the current capacity)
  2. Copies all elements to the new array
  3. Deallocates the old array
  4. Appends the new element
Before resize (capacity=4, size=4):
[10, 20, 30, 40] <- full!

Append 50...

Step 1: Allocate new array with capacity=8
[_, _, _, _, _, _, _, _]

Step 2: Copy elements
[10, 20, 30, 40, _, _, _, _]

Step 3: Append new element
[10, 20, 30, 40, 50, _, _, _]

After resize (capacity=8, size=5)
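You can watch this strategy in action by recording when a `Vec`'s capacity changes as elements are pushed. A sketch; the exact growth sequence is an implementation detail of Rust's standard library, so only the geometric pattern is guaranteed:

```rust
// Record every capacity a Vec passes through while pushing n elements.
// Each new entry corresponds to one resize (allocate + copy) event.
fn growth_capacities(n: usize) -> Vec<usize> {
    let mut v: Vec<i32> = Vec::new();
    let mut caps = vec![v.capacity()]; // starts at 0 (no allocation yet)
    for i in 0..n {
        v.push(i as i32);
        if v.capacity() != *caps.last().unwrap() {
            caps.push(v.capacity()); // a resize happened on this push
        }
    }
    caps
}

fn main() {
    // Capacities grow geometrically, so 100 pushes need only a handful of resizes.
    println!("{:?}", growth_capacities(100));
}
```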

Amortized Analysis

Individual operations can be O(n) when resizing, but amortized over many operations, appending is O(1).

Here's why: If we double the capacity each time, we copy elements at capacities 1, 2, 4, 8, 16, ...

To reach size n (a power of two), total copies = 1 + 2 + 4 + ... + n/2 = n - 1

So n insertions cost O(n) total work, meaning each insertion costs O(n)/n = O(1) amortized.

Insertions: 1  2  3  4  5  6  7  8  9  ...
Cost:       1  2  3  1  5  1  1  1  9  ...
               ^  ^     ^           ^
               resize events (copy old elements, then insert)

Average cost per operation = O(1)

Implementation

Here's a simplified implementation:

pub struct DynamicArray<T> {
    array: Vec<Option<T>>,
    capacity: usize,
    size: usize,
}

impl<T> DynamicArray<T> {
    pub fn new() -> Self {
        Self {
            array: vec![None],
            capacity: 1,
            size: 0,
        }
    }

    pub fn append(&mut self, value: T) {
        // Resize if full
        if self.size == self.capacity {
            self.resize(2 * self.capacity);
        }

        self.array[self.size] = Some(value);
        self.size += 1;
    }

    pub fn get(&self, index: usize) -> Option<&T> {
        if index >= self.size {
            return None;
        }
        self.array[index].as_ref()
    }

    pub fn set(&mut self, index: usize, value: T) -> Result<(), &'static str> {
        if index >= self.size {
            return Err("Index out of bounds");
        }
        self.array[index] = Some(value);
        Ok(())
    }

    pub fn pop(&mut self) -> Option<T> {
        if self.size == 0 {
            return None;
        }
        self.size -= 1;
        self.array[self.size].take()
    }

    fn resize(&mut self, new_capacity: usize) {
        let mut new_array = Vec::with_capacity(new_capacity);
        for _ in 0..new_capacity {
            new_array.push(None);
        }

        for i in 0..self.size {
            new_array[i] = self.array[i].take();
        }

        self.array = new_array;
        self.capacity = new_capacity;
    }

    pub fn len(&self) -> usize {
        self.size
    }
}

Time Complexity

| Operation | Average | Worst Case |
|-----------|---------|------------|
| Access by index | O(1) | O(1) |
| Update by index | O(1) | O(1) |
| Append | O(1)* | O(n) |
| Pop (end) | O(1) | O(1) |
| Insert at index | O(n) | O(n) |
| Delete at index | O(n) | O(n) |
| Search | O(n) | O(n) |

*Amortized O(1)

Growth Factor Trade-offs

The growth factor affects both time and space:

Factor of 2 (doubling):

  • Pro: Fewer resize operations
  • Con: Up to 50% wasted space
  • Common choice (Rust Vec, GCC's std::vector)

Factor of 1.5:

  • Pro: Less wasted space (~33% max)
  • Con: More frequent resizing
  • Used by some implementations (Java ArrayList, MSVC's std::vector)

Factor of 1 + small constant:

  • Pro: Minimal waste
  • Con: Many resize operations, poor amortized performance
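A rough simulation makes the trade-off concrete. A sketch assuming growth starts from capacity 1, the product is truncated to an integer, and each step adds at least one slot:

```rust
// Count resize events needed to reach capacity >= n for a given growth factor.
fn resize_count(factor: f64, n: usize) -> usize {
    let mut capacity = 1usize;
    let mut events = 0;
    while capacity < n {
        // Grow by the factor (truncating), always adding at least one slot.
        capacity = ((capacity as f64 * factor) as usize).max(capacity + 1);
        events += 1;
    }
    events
}

fn main() {
    let n = 1_000_000;
    // Doubling needs fewer (larger) resizes; 1.5x resizes more often
    // but leaves less unused capacity behind.
    println!("factor 2.0: {} resizes to reach {}", resize_count(2.0, n), n);
    println!("factor 1.5: {} resizes to reach {}", resize_count(1.5, n), n);
}
```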

Shrinking Strategy

Some implementations also shrink when the array becomes too empty:

pub fn pop(&mut self) -> Option<T> {
    if self.size == 0 {
        return None;
    }
    self.size -= 1;
    let value = self.array[self.size].take();

    // Shrink if less than 25% full
    if self.size > 0 && self.size <= self.capacity / 4 {
        self.resize(self.capacity / 2);
    }

    value
}

Thrashing Warning

Be careful with shrinking thresholds. If you shrink at 50% and grow at 100%, adding/removing one element repeatedly could cause constant resizing. Use asymmetric thresholds (shrink at 25%, grow at 100%).
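A small simulation shows why the asymmetry matters. This is a sketch with a hypothetical `resize_churn` counter (doubling when full, halving at the given shrink threshold, with an arbitrary capacity floor of 8); real implementations differ in details:

```rust
// Simulate a dynamic array that doubles when full and halves when size
// falls to `shrink_pct` percent of capacity. Count resize events while
// alternately pushing and popping right at a capacity boundary.
fn resize_churn(shrink_pct: usize, pairs: usize) -> usize {
    let (mut size, mut capacity, mut resizes) = (8usize, 8usize, 0usize);
    for _ in 0..pairs {
        // push
        if size == capacity {
            capacity *= 2; // grow at 100% full
            resizes += 1;
        }
        size += 1;
        // pop
        size -= 1;
        if capacity > 8 && size * 100 <= capacity * shrink_pct {
            capacity /= 2; // shrink when below the threshold
            resizes += 1;
        }
    }
    resizes
}

fn main() {
    // Shrinking at 50% thrashes: every push/pop pair triggers two resizes.
    println!("shrink at 50%: {} resizes", resize_churn(50, 1000));
    // Shrinking at 25% settles down after the first growth.
    println!("shrink at 25%: {} resizes", resize_churn(25, 1000));
}
```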

Space Complexity

  • Best case: O(n) where n is the number of elements
  • Worst case: O(2n) = O(n) when array is just over half full after doubling

The constant factor overhead is the trade-off for having O(1) amortized append.

Comparison with Static Arrays

| Aspect | Static Array | Dynamic Array |
|--------|--------------|---------------|
| Size | Fixed | Grows automatically |
| Memory | Exact | May have unused capacity |
| Append | O(1) or impossible | O(1) amortized |
| Random access | O(1) | O(1) |
| Implementation | Simple | More complex |

Common Patterns

Building Arrays

// Build array element by element
let mut result = Vec::new();
for item in data {
    if some_condition(&item) {
        result.push(item);
    }
}

Array as Stack

let mut stack = Vec::new();
stack.push(1);  // push
stack.push(2);
stack.push(3);
let top = stack.pop();  // pop: returns Some(3)

Preallocating Capacity

When you know the final size, preallocate to avoid resizing:

// Rust: Pre-allocate capacity
let mut result = Vec::with_capacity(n);
for i in 0..n {
    result.push(compute(i));
}

// Or create with default values
let mut result = vec![0; n];
for i in 0..n {
    result[i] = compute(i);
}

Key Takeaways

  1. Dynamic arrays automatically resize when full
  2. Doubling capacity gives O(1) amortized append
  3. The trade-off is some wasted space (up to 2x)
  4. Access remains O(1) like static arrays
  5. Insert/delete in middle is still O(n)
  6. Most languages provide dynamic arrays as their default array type

Dynamic arrays combine the fast random access of static arrays with the flexibility of automatic resizing. Understanding their amortized analysis explains why they're the go-to data structure for ordered collections in most programs.

© 2025 projectlighthouse. All rights reserved.