Logging for Humans and Machines

What You'll Learn

How to set up Python's logging module, choose the right log levels, format log output for both humans and log aggregators, and write logs that actually help during incidents.

Why Logging Over print()?

| print() | logging |
| --- | --- |
| Always outputs — no way to silence | Can be filtered by level |
| Goes to stdout only | Can go to file, stdout, stderr, or remote |
| No timestamp, no level, no caller info | Shows level, time, and module automatically |
| Can't be selectively disabled | Turn off DEBUG in production with one env var |

Basic Setup

import logging

# Simplest setup — outputs to stderr
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

log = logging.getLogger(__name__)

log.debug("detail only needed during debugging")
log.info("normal operation — task started")
log.warning("something unexpected but not fatal")
log.error("an operation failed")
log.critical("system-level failure")

Output (the DEBUG line is filtered out because the level is INFO):

2024-01-15 10:30:01,234 INFO __main__: normal operation — task started
2024-01-15 10:30:01,235 WARNING __main__: something unexpected but not fatal

Log Levels — When to Use Each

| Level | Value | When to use |
| --- | --- | --- |
| DEBUG | 10 | Step-by-step details, variable values — too noisy for production |
| INFO | 20 | Normal events: started, processed N items, finished |
| WARNING | 30 | Unexpected but recoverable: file missing, retry needed |
| ERROR | 40 | An operation failed but the program continues |
| CRITICAL | 50 | System-level failure — may need to shut down |

Set the minimum level via environment variable:

import os
import logging

level = os.environ.get("LOG_LEVEL", "INFO").upper()
logging.basicConfig(level=getattr(logging, level, logging.INFO))

Then pick the level at launch:

LOG_LEVEL=DEBUG python3 script.py    # verbose
LOG_LEVEL=WARNING python3 script.py  # quiet

Logger Names and Hierarchy

Use __name__ — it gives the logger the module's dotted path:

# In myapp/utils.py
log = logging.getLogger(__name__)
# logger name: "myapp.utils"

# In myapp/main.py
log = logging.getLogger(__name__)
# logger name: "myapp.main"

This lets you silence one module:

logging.getLogger("myapp.utils").setLevel(logging.WARNING)
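The same hierarchy works in the other direction: a child logger with no level of its own inherits the level of the nearest ancestor that has one set, so configuring the parent "myapp" covers every module under it. A quick check:

```python
import logging

# Children with no level of their own inherit from the nearest
# ancestor that has one set.
logging.getLogger("myapp").setLevel(logging.WARNING)

utils_log = logging.getLogger("myapp.utils")

print(utils_log.getEffectiveLevel() == logging.WARNING)  # True
print(utils_log.isEnabledFor(logging.INFO))              # False
```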

Logging to a File

import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    handlers=[
        logging.FileHandler("app.log", encoding="utf-8"),
        logging.StreamHandler(),  # also print to stderr
    ],
)
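If the process runs long enough for app.log to grow without bound, the stdlib's RotatingFileHandler caps it. A minimal sketch; the size and backup limits here are arbitrary:

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate at ~1 MB, keeping app.log.1 .. app.log.3 as backups.
handler = RotatingFileHandler(
    "app.log", maxBytes=1_000_000, backupCount=3, encoding="utf-8"
)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)
logging.basicConfig(level=logging.INFO, handlers=[handler])
```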

Structured Logging for Log Aggregators

When logs go to tools like Datadog, Splunk, or CloudWatch, machine-readable JSON is far easier to search and alert on:

import logging
import json
import sys
from datetime import datetime, timezone


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        log_obj = {
            # datetime.utcnow() is deprecated since Python 3.12
            "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            log_obj["exception"] = self.formatException(record.exc_info)
        return json.dumps(log_obj)


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.root.addHandler(handler)
logging.root.setLevel(logging.INFO)

log = logging.getLogger(__name__)
log.info("Server started on port 8080")

Output:

{"timestamp": "2024-01-15T10:30:01Z", "level": "INFO", "logger": "__main__", "message": "Server started on port 8080"}

What to Log — Practical Guidelines

Log at the start and end of important operations

log.info("Starting import of %d records from %s", len(records), source)
# ... do work ...
log.info("Import complete: %d processed, %d skipped, %d errors", ok, skipped, errors)

Log errors with enough context to diagnose

try:
    response = api_call(user_id)
except Exception as e:
    log.error("API call failed: user_id=%s error=%s", user_id, e)
    raise

Use lazy formatting (% style, not f-strings)

# ✅ Lazy — string only built if message is logged
log.debug("Processing record %d of %d", i, total)

# ❌ Eager — string always built even if DEBUG is off
log.debug(f"Processing record {i} of {total}")

Add context with extra

log.info("User action", extra={"user_id": 42, "action": "login", "ip": "1.2.3.4"})
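Note that the default text formatter only prints extra fields if the format string names them (e.g. %(user_id)s). A JSON formatter can pick them up generically by comparing a record's attributes against a blank LogRecord. A sketch, assuming this approach; the _STANDARD set and ExtraJsonFormatter name are illustrative, not part of the stdlib:

```python
import json
import logging

# Attributes present on every LogRecord; anything else arrived via extra=.
_STANDARD = set(vars(logging.LogRecord("", 0, "", 0, "", (), None)))
_STANDARD.update({"message", "asctime"})


class ExtraJsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        obj = {"level": record.levelname, "message": record.getMessage()}
        # Merge in whatever was passed through extra={...}
        obj.update({k: v for k, v in vars(record).items() if k not in _STANDARD})
        return json.dumps(obj)


handler = logging.StreamHandler()
handler.setFormatter(ExtraJsonFormatter())
log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("User action", extra={"user_id": 42, "action": "login"})
# emits JSON containing user_id and action alongside level and message
```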

Logging Exceptions

try:
    result = process(data)
except ValueError:
    log.exception("Processing failed for data=%r", data)
    # log.exception() automatically includes the traceback
    raise

log.exception() is equivalent to log.error(..., exc_info=True) and should only be called from inside an except block.

Complete Setup for a Script

import logging
import os
import sys


def setup_logging() -> None:
    log_level = os.environ.get("LOG_LEVEL", "INFO").upper()
    log_format = "%(asctime)s %(levelname)-8s %(name)s: %(message)s"
    date_format = "%Y-%m-%d %H:%M:%S"

    logging.basicConfig(
        level=getattr(logging, log_level, logging.INFO),
        format=log_format,
        datefmt=date_format,
        stream=sys.stderr,
    )


log = logging.getLogger(__name__)


def main() -> int:
    setup_logging()
    log.info("Script started")

    try:
        # ... do work ...
        log.info("Script completed successfully")
        return 0
    except Exception as e:
        log.exception("Script failed: %s", e)
        return 1


if __name__ == "__main__":
sys.exit(main())

Common Mistakes

| Mistake | Fix |
| --- | --- |
| Using print() for status messages | Use log.info() |
| Using print() for errors | Use log.error() |
| Calling logging.debug() directly | Get a named logger: log = logging.getLogger(__name__) |
| f-strings in log messages | Use %s style: log.info("x=%s", x) |
| Logging sensitive data (passwords, tokens) | Redact before logging |
| No logging in production scripts | Add at minimum start/end + errors |
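The sensitive-data row can be enforced centrally with a logging.Filter instead of relying on every call site remembering to redact. A sketch; the pattern below is illustrative and would need to be extended to match your actual secret formats:

```python
import logging
import re

# Illustrative pattern: mask anything that looks like "password=...",
# "token=..." or "secret=..." in the final message text.
_SECRET = re.compile(r"(password|token|secret)=\S+", re.IGNORECASE)


class RedactFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Merge args into the message, scrub it, then let the record through.
        record.msg = _SECRET.sub(r"\1=[REDACTED]", record.getMessage())
        record.args = ()  # args are already merged into msg
        return True


log = logging.getLogger("redacted")
log.addHandler(logging.StreamHandler())
log.addFilter(RedactFilter())
log.warning("login failed for %s token=%s", "alice", "abc123")
# logs: login failed for alice token=[REDACTED]
```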

Quick Reference

import logging
import os

# Setup (call once at startup)
logging.basicConfig(
    level=getattr(logging, os.environ.get("LOG_LEVEL", "INFO").upper(), logging.INFO),
    format="%(asctime)s %(levelname)-8s %(name)s: %(message)s",
)

# Get logger per module
log = logging.getLogger(__name__)

# Log levels
log.debug("detail: %s", value)
log.info("normal: processed %d items", count)
log.warning("unexpected: %s", msg)
log.error("failed: %s", err)
log.exception("exception occurred") # includes traceback

# Log to file + stderr (alternative setup — basicConfig is a no-op once
# handlers are already configured, unless you pass force=True)
logging.basicConfig(handlers=[
    logging.FileHandler("app.log"),
    logging.StreamHandler(),
])

What's Next

Lesson 3: Retries, Timeouts, and Cleanup