RAM & Memory Basics
Understanding how Random Access Memory works and why it matters for data structures. Learn about memory addresses, bytes, and how computers store data.
Before diving into data structures, we need to understand where our data actually lives: Random Access Memory (RAM). This foundational knowledge explains why certain operations are fast and others are slow.
What is RAM?
RAM (Random Access Memory) is your computer's short-term memory. When you run a program, your code and data get loaded into RAM because it's fast, much faster than reading from a hard drive or SSD.
Think of RAM as a giant array of bytes. Each byte has an address, starting from 0 and going up to however much memory your computer has. A computer with 8GB of RAM has roughly 8 billion bytes, each with its own unique address.
Address:    0x0000  0x0001  0x0002  0x0003  0x0004   ...
          +-------+-------+-------+-------+-------+
Value:    |  42   |   0   |  255  |  17   |  100  |  ...
          +-------+-------+-------+-------+-------+
Why "Random Access"?
The key property of RAM is that accessing any memory location takes the same amount of time. Whether you read byte 0 or byte 8,000,000,000, the access time is essentially identical, on the order of tens of nanoseconds.
This is different from a tape or traditional hard drive, where you might need to physically move a read head to reach distant data. RAM has no such limitation.
This constant-time access is written as O(1) in Big O notation, meaning the time doesn't depend on which address you're accessing.
Bytes and Data Types
A byte is 8 bits, where each bit is either 0 or 1. One byte can represent 256 different values (2^8 = 256).
Different data types use different amounts of memory:
| Type | Size in Rust | Range |
|------|--------------|-------|
| u8 / i8 | 1 byte | 0 to 255 or -128 to 127 |
| u16 / i16 | 2 bytes | 0 to 65,535 or -32,768 to 32,767 |
| u32 / i32 | 4 bytes | ~0 to 4 billion or ~-2 billion to 2 billion |
| u64 / i64 | 8 bytes | ~0 to 18 quintillion or ~-9 to 9 quintillion |
| f32 | 4 bytes | Decimal numbers |
| f64 | 8 bytes | More precise decimals |
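If you want to verify these sizes yourself, here is a minimal sketch using the standard library's std::mem::size_of:

fn main() {
    // size_of::<T>() reports how many bytes a value of type T occupies
    println!("u8:  {} byte",  std::mem::size_of::<u8>());  // 1
    println!("i32: {} bytes", std::mem::size_of::<i32>()); // 4
    println!("u64: {} bytes", std::mem::size_of::<u64>()); // 8
    println!("f64: {} bytes", std::mem::size_of::<f64>()); // 8
}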
When you store an integer, it occupies consecutive bytes in memory:
// Storing the integer 1000 (4 bytes)
Address:    0x1000  0x1001  0x1002  0x1003
          +-------+-------+-------+-------+
Value:    |  0xE8 |  0x03 |  0x00 |  0x00 |
          +-------+-------+-------+-------+
(1000 in hexadecimal = 0x000003E8, stored in little-endian order)
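You can see this byte order directly from Rust; a small sketch using i32::to_le_bytes (the little-endian layout shown above is what you will observe on typical x86 and ARM machines):

fn main() {
    let n: i32 = 1000;
    // to_le_bytes returns the value's bytes in little-endian order
    for byte in n.to_le_bytes() {
        print!("{:02X} ", byte);
    }
    println!(); // prints: E8 03 00 00
}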
Memory Addresses and Pointers
Every piece of data in RAM has an address. A pointer is simply a variable that stores a memory address.
fn main() {
let x: i32 = 42; // x is stored at some address
let ptr = &x; // ptr holds the address of x
println!("Value: {}", x);
println!("Address: {:p}", ptr);
}
Pointers are fundamental to data structures. A linked list node stores a pointer to the next node. A tree node stores pointers to its children. Understanding pointers means understanding how data structures connect their pieces.
Contiguous vs Non-Contiguous Memory
Contiguous memory means data is stored in consecutive addresses:
Array: [10, 20, 30, 40, 50]

Address:    0x100   0x104   0x108   0x10C   0x110
          +-------+-------+-------+-------+-------+
Value:    |  10   |  20   |  30   |  40   |  50   |
          +-------+-------+-------+-------+-------+
Because elements are contiguous, we can calculate any element's address:
address of element[i] = base_address + (i * element_size)

For example, element[3] is at 0x100 + (3 * 4) = 0x10C.
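The same calculation can be made visible with raw pointers; a minimal sketch (normal indexing does this arithmetic for you, the unsafe block here is only used to print and dereference the computed address):

fn main() {
    let arr: [i32; 5] = [10, 20, 30, 40, 50];
    let base = arr.as_ptr();

    // add(3) advances the pointer by 3 * size_of::<i32>() = 12 bytes
    println!("base address:       {:p}", base);
    println!("element[3] address: {:p}", unsafe { base.add(3) });
    println!("element[3] value:   {}", unsafe { *base.add(3) }); // 40
}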
Non-contiguous memory means data is scattered across different locations, connected by pointers:
Linked List: 10 -> 20 -> 30
Node at 0x100: [value: 10, next: 0x250]
Node at 0x250: [value: 20, next: 0x180]
Node at 0x180: [value: 30, next: null]
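In Rust, such a node might be sketched with Box for heap allocation and Option for the final null link (the Node type and its field names are illustrative, not defined anywhere else on this page):

// A singly linked list node: a value plus a pointer to the next node.
// Each Box lives somewhere else on the heap, so nodes are not contiguous.
struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

fn main() {
    // Build 10 -> 20 -> 30
    let list = Node {
        value: 10,
        next: Some(Box::new(Node {
            value: 20,
            next: Some(Box::new(Node { value: 30, next: None })),
        })),
    };

    // Walk the list by following pointers
    let mut current = Some(&list);
    while let Some(node) = current {
        print!("{} -> ", node.value);
        current = node.next.as_deref();
    }
    println!("null"); // prints: 10 -> 20 -> 30 -> null
}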
Why This Matters for Data Structures
The memory layout directly impacts performance:
- Arrays use contiguous memory, so accessing element[i] is O(1): just calculate the address.
- Linked lists use non-contiguous memory. To find the 5th element, you must follow 4 pointers, making access O(n). (See the sketch after this list.)
- CPU caches work better with contiguous data. When you access one array element, nearby elements are loaded into cache automatically, making subsequent accesses faster.
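A rough sketch of that access difference, reusing a small illustrative node type like the one above: indexing a Vec is a single address calculation, while reaching the 5th node of a list takes four pointer hops.

fn main() {
    // Contiguous: indexing is one address calculation, O(1)
    let v = vec![10, 20, 30, 40, 50];
    println!("v[4] = {}", v[4]); // 50

    // Non-contiguous: reaching the 5th node means following 4 pointers, O(n)
    struct Node {
        value: i32,
        next: Option<Box<Node>>,
    }
    let mut head: Option<Box<Node>> = None;
    for value in [50, 40, 30, 20, 10] {
        head = Some(Box::new(Node { value, next: head }));
    }

    let mut current = head.as_deref();
    for _ in 0..4 {
        current = current.and_then(|node| node.next.as_deref());
    }
    println!("5th node = {}", current.unwrap().value); // 50
}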
Memory Allocation
When your program needs memory, it requests it from the operating system:
- Stack allocation: Fast, automatic. Local variables live on the stack. Memory is freed when the function returns.
- Heap allocation: Slower, manual or managed. You explicitly request memory. In Rust, heap memory is managed through ownership.
fn example() {
// Stack allocation (automatic)
let x = 10; // x lives on the stack, freed when function ends
// Heap allocation (explicit)
let vec = vec![1, 2, 3, 4, 5]; // Data lives on heap, managed by Vec
// Box for explicit heap allocation
let boxed: Box<i32> = Box::new(42);
}
Rust's ownership system handles heap memory automatically without garbage collection, giving you both safety and performance.
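A minimal sketch of what that means in practice: when the owning value goes out of scope, its heap allocation is released automatically, with no garbage collector involved (the 1 KB array is just an illustrative payload).

fn main() {
    {
        // Box allocates its contents on the heap
        let boxed = Box::new([0u8; 1024]);
        println!("first byte: {}", boxed[0]);
    } // `boxed` goes out of scope here; its heap memory is freed automatically
    // No garbage collector runs: the compiler inserts the deallocation.
    println!("after the inner scope");
}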
Key Takeaways
- RAM provides O(1) access to any address
- Data types have fixed sizes in memory
- Contiguous memory enables fast indexed access
- Non-contiguous structures use pointers to connect elements
- Memory layout affects cache performance and operation speed
Understanding RAM provides the foundation for analyzing why different data structures have different time complexities. Arrays are fast for random access because of contiguous memory. Linked lists are fast for insertion because they only need to update pointers. These trade-offs stem directly from how memory works.