Testing in Go
Master Go's built-in testing framework. Learn to write unit tests, table-driven tests, benchmarks, and examples. Understand Go's testing philosophy and best practices.
Code without tests is code you can't confidently change. Tests verify behavior, document expectations, and catch regressions. Some languages make testing feel like a separate activity—special frameworks, complex setup, magic annotations. Go treats tests as regular code. Test files live alongside source files, test functions follow simple naming conventions, and the go test command runs everything.
Writing Tests
Create a file ending in _test.go. Write functions starting with Test that accept *testing.T:
// math.go
package math

func Add(a, b int) int {
    return a + b
}

// math_test.go
package math

import "testing"

func TestAdd(t *testing.T) {
    result := Add(2, 3)
    if result != 5 {
        t.Errorf("Add(2, 3) = %d; want 5", result)
    }
}
Run tests:
go test
# PASS
# ok myproject/math 0.001s
The test runner finds all _test.go files, compiles them with the package code, and executes functions starting with Test. If any test calls t.Error or t.Fatal, the test fails.
Test functions have a standard signature:
func TestName(t *testing.T) {
    // Test code
}
The name after Test should describe what's being tested. Use camel case: TestAdd, TestUserCreation, TestHTTPHandler.
Assertions
Go's standard library has no assertion framework (third-party ones exist, but they aren't needed). Idiomatic tests use plain if statements and the testing.T methods:
func TestDivide(t *testing.T) {
    result, err := Divide(10, 2)
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if result != 5 {
        t.Errorf("Divide(10, 2) = %f; want 5", result)
    }
}
t.Error and t.Errorf report failures but continue the test. Use them when subsequent checks might provide useful information.
t.Fatal and t.Fatalf report failures and stop the test immediately. Use them when continuing doesn't make sense:
func TestReadFile(t *testing.T) {
    data, err := ReadFile("test.txt")
    if err != nil {
        t.Fatalf("ReadFile failed: %v", err)
        // Doesn't execute: t.Fatalf stopped the test
    }
    // Only runs if ReadFile succeeded
    if len(data) == 0 {
        t.Error("expected non-empty data")
    }
}
The pattern is: if you can't continue after a failure, use Fatal. If you can, use Error.
Table-Driven Tests
Testing multiple inputs and outputs with separate test functions creates repetition:
func TestAddPositive(t *testing.T) { ... }
func TestAddNegative(t *testing.T) { ... }
func TestAddZero(t *testing.T) { ... }
Table-driven tests handle multiple cases elegantly:
func TestAdd(t *testing.T) {
    tests := []struct {
        name string
        a, b int
        want int
    }{
        {"positive", 2, 3, 5},
        {"negative", -2, -3, -5},
        {"mixed", -2, 3, 1},
        {"zero", 0, 0, 0},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got := Add(tt.a, tt.b)
            if got != tt.want {
                t.Errorf("Add(%d, %d) = %d; want %d",
                    tt.a, tt.b, got, tt.want)
            }
        })
    }
}
Each test case is a struct with inputs and expected outputs. The loop runs a subtest for each case using t.Run. Subtests appear in output:
go test -v
# === RUN TestAdd
# === RUN TestAdd/positive
# === RUN TestAdd/negative
# === RUN TestAdd/mixed
# === RUN TestAdd/zero
# --- PASS: TestAdd (0.00s)
# --- PASS: TestAdd/positive (0.00s)
# --- PASS: TestAdd/negative (0.00s)
# --- PASS: TestAdd/mixed (0.00s)
# --- PASS: TestAdd/zero (0.00s)
If one case fails, others still run. You see exactly which case failed.
Run a specific subtest:
go test -run TestAdd/negative
This isolation helps debugging. Table-driven tests scale—adding cases means adding structs to the slice, not writing new functions.
Test Helpers
Extract repeated setup into helper functions. Mark them with t.Helper() so failures report the calling line, not the helper line:
func assertEqual(t *testing.T, got, want int) {
    t.Helper()
    if got != want {
        t.Errorf("got %d; want %d", got, want)
    }
}

func TestAdd(t *testing.T) {
    result := Add(2, 3)
    assertEqual(t, result, 5) // Failure reports this line
}
Without t.Helper(), failures would point to the if got != want line inside assertEqual, making it harder to find which test failed.
Setup and teardown use helpers too:
func setupDatabase(t *testing.T) *sql.DB {
    t.Helper()
    db, err := sql.Open("postgres", "test_db")
    if err != nil {
        t.Fatalf("failed to open database: %v", err)
    }
    t.Cleanup(func() {
        db.Close()
    })
    return db
}

func TestUserRepository(t *testing.T) {
    db := setupDatabase(t)
    // Test code using db
}
The t.Cleanup function registers cleanup code that runs when the test finishes, whether it passes or fails. This ensures resources get released.
Testing Errors
Test both success and failure cases:
func TestDivide(t *testing.T) {
    tests := []struct {
        name      string
        a, b      float64
        want      float64
        wantError bool
    }{
        {"valid", 10, 2, 5, false},
        {"zero divisor", 10, 0, 0, true},
        {"negative", -10, 2, -5, false},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := Divide(tt.a, tt.b)
            if tt.wantError {
                if err == nil {
                    t.Error("expected error, got nil")
                }
                return
            }
            if err != nil {
                t.Fatalf("unexpected error: %v", err)
            }
            if got != tt.want {
                t.Errorf("got %f; want %f", got, tt.want)
            }
        })
    }
}
The wantError field indicates whether an error is expected. Tests verify both the error presence and the result value.
For specific error types, use errors.Is or errors.As:
func TestValidate(t *testing.T) {
    err := Validate(User{Email: ""})
    if !errors.Is(err, ErrInvalidEmail) {
        t.Errorf("expected ErrInvalidEmail, got %v", err)
    }
}
Test Fixtures
Test fixtures are data or state needed for tests. Put them in testdata directories:
mypackage/
    math.go
    math_test.go
    testdata/
        input.txt
        expected.txt
The testdata directory is special—the go tool ignores it during builds but includes it during tests. Read fixtures in tests:
func TestProcessFile(t *testing.T) {
    data, err := os.ReadFile("testdata/input.txt")
    if err != nil {
        t.Fatalf("failed to read fixture: %v", err)
    }
    result := ProcessFile(data)
    expected, err := os.ReadFile("testdata/expected.txt")
    if err != nil {
        t.Fatalf("failed to read expected: %v", err)
    }
    if string(result) != string(expected) {
        t.Errorf("output mismatch")
    }
}
Fixtures keep test data separate from test logic. Large inputs or complex expected outputs live in files, not string literals in code.
Mocking with Interfaces
Interfaces enable testing without external dependencies. Define an interface for the dependency:
type Database interface {
    GetUser(id int) (*User, error)
    SaveUser(user *User) error
}

type UserService struct {
    db Database
}

func (s *UserService) ActivateUser(id int) error {
    user, err := s.db.GetUser(id)
    if err != nil {
        return err
    }
    user.Active = true
    return s.db.SaveUser(user)
}
In tests, implement a mock:
type mockDatabase struct {
    users map[int]*User
}

func (m *mockDatabase) GetUser(id int) (*User, error) {
    user, ok := m.users[id]
    if !ok {
        return nil, errors.New("not found")
    }
    return user, nil
}

func (m *mockDatabase) SaveUser(user *User) error {
    m.users[user.ID] = user
    return nil
}

func TestActivateUser(t *testing.T) {
    db := &mockDatabase{
        users: map[int]*User{
            1: {ID: 1, Name: "Alice", Active: false},
        },
    }
    service := &UserService{db: db}
    err := service.ActivateUser(1)
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if !db.users[1].Active {
        t.Error("user should be active")
    }
}
The mock implements the Database interface. Tests control its behavior without real database connections. This makes tests fast, deterministic, and independent.
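A mock can also record the calls it receives, letting a test verify interactions rather than just final state. A minimal runnable sketch, with a hypothetical Notifier dependency and Welcome function (names invented for illustration):

```go
package main

import "fmt"

// Notifier is a hypothetical dependency whose interactions we want to verify.
type Notifier interface {
    Notify(userID int, message string) error
}

// spyNotifier records every call so a test can assert on what was
// called, how often, and with which arguments.
type spyNotifier struct {
    calls []string
}

func (s *spyNotifier) Notify(userID int, message string) error {
    s.calls = append(s.calls, fmt.Sprintf("Notify(%d, %q)", userID, message))
    return nil
}

// Welcome is the code under test: it should notify the user exactly once.
func Welcome(n Notifier, userID int) error {
    return n.Notify(userID, "welcome!")
}

func main() {
    spy := &spyNotifier{}
    Welcome(spy, 1)
    // A test would assert on spy.calls; here we just print it.
    fmt.Println(len(spy.calls), spy.calls[0])
}
```

This style is useful when the dependency's side effect (sending an email, publishing a message) is the behavior being tested, so there is no state to inspect afterward.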
Benchmarks
Benchmark functions measure performance. They start with Benchmark and accept *testing.B:
func BenchmarkAdd(b *testing.B) {
    for i := 0; i < b.N; i++ {
        Add(2, 3)
    }
}
The loop runs b.N times. The benchmark framework adjusts b.N until the benchmark runs long enough for reliable timing.
Run benchmarks:
go test -bench=.
# BenchmarkAdd-8 1000000000 0.25 ns/op
The output shows the function ran 1 billion times at 0.25 nanoseconds per operation. The -8 suffix is the GOMAXPROCS value used for the run, typically the number of available CPU cores.
Benchmark complex operations:
func BenchmarkJSONMarshal(b *testing.B) {
    user := User{
        ID:    1,
        Name:  "Alice",
        Email: "[email protected]",
    }
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        json.Marshal(user)
    }
}
The b.ResetTimer() call excludes setup time from measurements. Only the loop iterations count.
Measure allocations:
go test -bench=. -benchmem
# BenchmarkJSONMarshal-8 5000000 250 ns/op 128 B/op 2 allocs/op
This shows 128 bytes allocated per operation with 2 allocations. Use this to find allocation hotspots.
Table-driven benchmarks work like table-driven tests:
func BenchmarkAdd(b *testing.B) {
    tests := []struct {
        name string
        a, b int
    }{
        {"small", 1, 2},
        {"large", 1000000, 2000000},
    }
    for _, tt := range tests {
        b.Run(tt.name, func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                Add(tt.a, tt.b)
            }
        })
    }
}
Coverage
Test coverage shows which code the tests execute:
go test -cover
# PASS
# coverage: 85.7% of statements
Generate detailed coverage reports:
go test -coverprofile=coverage.out
go tool cover -html=coverage.out
The HTML report highlights covered and uncovered code: green lines were executed during tests, red lines were not.
Coverage is a useful metric but not a goal. High coverage doesn't guarantee good tests—tests might exercise code without verifying behavior. Low coverage suggests missing tests, but 100% coverage isn't always necessary. Focus on testing important behaviors, not reaching arbitrary coverage percentages.
Parallel Tests
Tests run sequentially by default. Mark tests as safe to run in parallel:
func TestExpensiveOperation(t *testing.T) {
    t.Parallel()
    result := ExpensiveOperation()
    if result != expected {
        t.Error("mismatch")
    }
}
Parallel tests run concurrently with other parallel tests, speeding up the test suite. Only use t.Parallel() when tests don't share state—no shared global variables, no shared file system resources, no shared databases.
Testing Main
Test the main function by extracting logic into testable functions:
// main.go
package main

func main() {
    if err := run(); err != nil {
        log.Fatal(err)
    }
}

func run() error {
    // Application logic
    return nil
}

// main_test.go
package main

func TestRun(t *testing.T) {
    err := run()
    if err != nil {
        t.Errorf("run() failed: %v", err)
    }
}
The main function becomes a thin wrapper around run(), which contains the logic and is testable.
Example Tests
Example tests demonstrate usage and appear in documentation:
func ExampleAdd() {
    result := Add(2, 3)
    fmt.Println(result)
    // Output: 5
}
The comment // Output: 5 specifies expected output. The test runner captures stdout and compares it. If output doesn't match, the test fails.
Examples appear in generated documentation: tools that render Go docs, such as the godoc server and pkg.go.dev, display ExampleAdd alongside Add's documentation, showing both the code and its expected output.
Use examples to show how functions work. They're executable documentation: they must compile, and their output is verified.
Skipping Tests
Skip tests conditionally:
func TestDatabaseIntegration(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping integration test")
    }
    // Integration test code
}
Run only fast tests:
go test -short
The -short flag sets the short mode flag, and testing.Short() returns true. Use this for expensive tests—integration tests, tests requiring external services, slow benchmarks.
Skip based on environment:
func TestProduction(t *testing.T) {
    if os.Getenv("ENV") != "production" {
        t.Skip("production-only test")
    }
    // ...
}
Test Organization
Keep tests focused. One test function should test one behavior:
// Good: focused tests
func TestUserValidation(t *testing.T) { ... }
func TestUserCreation(t *testing.T) { ... }
func TestUserUpdate(t *testing.T) { ... }
// Bad: one giant test
func TestUser(t *testing.T) {
// Tests validation, creation, updates, everything
}
Focused tests are easier to understand, debug, and maintain. When a test fails, the name tells you what broke.
Group related tests using subtests:
func TestUser(t *testing.T) {
    t.Run("validation", func(t *testing.T) {
        // Validation tests
    })
    t.Run("creation", func(t *testing.T) {
        // Creation tests
    })
}
This groups related tests under a common prefix without creating enormous test functions.
What's Next
Go's testing package treats tests as regular code. Write test functions, use if statements for checks, and run go test. Table-driven tests scale to hundreds of cases, benchmarks measure performance, and coverage reports show what's tested.
The next article tours the standard library. You'll see io.Reader and io.Writer abstract I/O operations, encoding/json handles JSON encoding and decoding, net/http serves and consumes HTTP, and time manages temporal data. These packages form the foundation of most Go programs—understanding them deeply enables building anything.
Ready to explore the standard library and the interfaces that power Go programs?