
pytest Fundamentals

What You'll Learn

How to install pytest, write tests that actually catch bugs, run them, and interpret the results.

Installing pytest

pip install pytest
pip install pytest-cov # for coverage reports

Your First Test

Create a file named test_math.py (test files must start with test_):

# math_utils.py — the code you want to test
def add(a: int, b: int) -> int:
    return a + b

def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

# test_math.py — tests for math_utils.py
from math_utils import add, divide
import pytest

def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-1, -2) == -3

def test_add_zero():
    assert add(0, 0) == 0

def test_divide_normal():
    assert divide(10, 2) == 5.0

def test_divide_by_zero_raises():
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)

Run tests:

pytest # all tests in current directory
pytest test_math.py # specific file
pytest -v # verbose output
pytest -v -k "divide" # only tests with "divide" in the name

Output:

test_math.py::test_add_positive_numbers PASSED
test_math.py::test_add_negative_numbers PASSED
test_math.py::test_add_zero PASSED
test_math.py::test_divide_normal PASSED
test_math.py::test_divide_by_zero_raises PASSED

5 passed in 0.12s

Test File Organization

project/
├── src/
│   └── myapp/
│       ├── utils.py
│       └── processor.py
└── tests/
    ├── conftest.py        ← shared fixtures
    ├── test_utils.py
    └── test_processor.py

pytest finds tests by searching for:

  • Files matching test_*.py or *_test.py
  • Functions starting with test_
  • Classes starting with Test
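
The conftest.py shown in the tree above holds fixtures shared by every test file in that directory — pytest discovers it automatically, with no import needed. A minimal sketch (the fixture name sample_cart is illustrative, not from the example project):

```python
# tests/conftest.py
import pytest

@pytest.fixture
def sample_cart():
    # A fresh list for every test that requests it by parameter name
    return []

# tests/test_utils.py — pytest injects the fixture automatically
def test_add_to_cart(sample_cart):
    sample_cart.append("apple")
    assert sample_cart == ["apple"]
```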

Assertions — What You Can Test

# Equality
assert result == 42
assert name == "Alice"
assert items == [1, 2, 3]

# Truth/falsy
assert is_active
assert not is_banned

# Comparison
assert score >= 90
assert price < 100.0

# Membership
assert "admin" in user["roles"]
assert "error" not in log_output

# Types
assert isinstance(result, dict)
assert isinstance(items, list)

# None
assert user is None
assert config is not None

# Exceptions
with pytest.raises(ValueError):
    risky_function()

with pytest.raises(ValueError, match="specific message"):
    risky_function()

# Approximate floating point
assert abs(result - 3.14159) < 0.001
assert result == pytest.approx(3.14159, rel=1e-3)  # requires import pytest

Parametrize — Test Many Inputs at Once

import pytest
from math_utils import add

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (-1, 1, 0),
    (0, 0, 0),
    (100, -50, 50),
    (1.5, 1.5, 3.0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected

This runs 5 separate tests — each shown individually if one fails.
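
By default each case is identified by its parameter values (e.g. test_add[1-2-3]). You can also name cases explicitly with pytest.param and its id keyword, which makes failures easier to read — a sketch, with a stand-in for math_utils.add:

```python
import pytest

def add(a, b):  # stand-in for math_utils.add
    return a + b

@pytest.mark.parametrize("a, b, expected", [
    pytest.param(1, 2, 3, id="small-positives"),
    pytest.param(-1, 1, 0, id="cancels-out"),
])
def test_add_named(a, b, expected):
    assert add(a, b) == expected
```

With -v, these show up as test_add_named[small-positives] and test_add_named[cancels-out].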

Testing Classes

import pytest

from myapp.models import ShoppingCart, Product

class TestShoppingCart:
    def test_empty_cart_total(self):
        cart = ShoppingCart()
        assert cart.total() == 0.0

    def test_add_item(self):
        cart = ShoppingCart()
        cart.add(Product("apple", price=1.50), quantity=3)
        assert cart.total() == 4.50

    def test_remove_item(self):
        cart = ShoppingCart()
        cart.add(Product("apple", price=1.50), quantity=2)
        cart.remove("apple")
        assert cart.total() == 0.0

    def test_empty_cart_raises_on_checkout(self):
        cart = ShoppingCart()
        with pytest.raises(ValueError, match="cannot checkout empty cart"):
            cart.checkout()
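
Every test above rebuilds its own cart. A fixture can centralize that setup so each test receives a fresh, empty cart. A sketch using a minimal stand-in class (the real ShoppingCart lives in myapp.models; Cart here is hypothetical):

```python
import pytest

class Cart:
    """Minimal stand-in for myapp.models.ShoppingCart."""
    def __init__(self):
        self._items = []

    def add(self, name, price, quantity=1):
        self._items.append((name, price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)

@pytest.fixture
def cart():
    return Cart()  # each test gets a fresh, empty cart

class TestCartWithFixture:
    def test_empty_total(self, cart):
        assert cart.total() == 0

    def test_add_item(self, cart):
        cart.add("apple", 1.50, quantity=3)
        assert cart.total() == 4.50
```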

Code Coverage

# Run tests with coverage
pytest --cov=src --cov-report=term-missing

# Generate HTML report
pytest --cov=src --cov-report=html
open htmlcov/index.html

Coverage output:

Name                      Stmts   Miss  Cover   Missing
-------------------------------------------------------
src/myapp/utils.py           25      3    88%   45-47
src/myapp/processor.py       40      0   100%
-------------------------------------------------------
TOTAL                        65      3    95%

Aim for 80–90% coverage. 100% is rarely worth the effort.
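
To avoid retyping the coverage flags on every run, they can live in pytest's configuration. A sketch using pyproject.toml (assuming your code sits under src/):

```toml
[tool.pytest.ini_options]
addopts = "--cov=src --cov-report=term-missing"
```

With this in place, a bare pytest produces the coverage report automatically.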

Running Only Failed Tests

pytest --lf # run only tests that failed last time
pytest --ff # run failed tests first, then rest
pytest -x # stop after first failure
pytest --tb=short # shorter traceback format

Common Mistakes

Mistake                                  Fix
Test file not named test_*.py            Must start with test_
Test function not named test_*           Must start with test_
Testing implementation details           Test behavior and outputs
One mega-test that tests everything      One focused behavior per test
No test for error cases                  Always test unhappy paths
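
The mega-test mistake is worth seeing side by side: in one big test, the first failing assertion hides all the rest, while separate tests each report independently. A small illustration, with a stand-in add function:

```python
def add(a, b):  # stand-in for the function under test
    return a + b

# Avoid: one mega-test — if the first assert fails, the rest never run
def test_everything():
    assert add(2, 3) == 5
    assert add(-1, -2) == -3
    assert add(0, 0) == 0

# Prefer: one behavior per test — each failure is reported on its own
def test_add_positives():
    assert add(2, 3) == 5

def test_add_negatives():
    assert add(-1, -2) == -3
```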

Quick Reference

# Install
pip install pytest pytest-cov

# Run
pytest # all tests
pytest -v # verbose
pytest test_file.py # specific file
pytest -k "keyword" # filter by name
pytest --lf # failed only
pytest -x # stop on first fail
pytest --cov=src --cov-report=term-missing

# Assert
assert result == expected
assert result == pytest.approx(3.14)
with pytest.raises(ValueError):
    fn()

# Parametrize
@pytest.mark.parametrize("value, expected", [(1, 2), (3, 4)])
def test_fn(value, expected):
    assert fn(value) == expected

What's Next

Lesson 2: Mocking APIs, Files, and Commands