Why `PYTHONUNBUFFERED` Exists: Seeing stdout When Python Crashes

Python normally buffers writes to stdout and stderr.
This means some output may remain in memory instead of being written to the OS immediately.
Setting PYTHONUNBUFFERED to any non-empty value (or running python -u) disables this buffering.
In Python ≥3.14, -u and PYTHONUNBUFFERED=1 behave equivalently: they make the standard streams unbuffered.
The difference becomes visible when a program terminates abnormally, such as during a segmentation fault.
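The effect is visible in the types of the standard streams themselves: in default mode the binary layer of stdout is an io.BufferedWriter, while under -u (or PYTHONUNBUFFERED=1) the buffering layer is gone and writes go through a raw io.FileIO. A quick sketch that asks a child interpreter what it sees:

```python
import subprocess
import sys

# Ask a child interpreter what type its binary stdout layer is.
code = "import sys; print(type(sys.stdout.buffer).__name__)"

default = subprocess.run(
    [sys.executable, "-c", code], capture_output=True, text=True
).stdout.strip()
unbuffered = subprocess.run(
    [sys.executable, "-u", "-c", code], capture_output=True, text=True
).stdout.strip()

print(default)     # BufferedWriter — writes are collected in memory first
print(unbuffered)  # FileIO — each write goes straight to the OS
```

Running the second child with env={**os.environ, "PYTHONUNBUFFERED": "1"} instead of -u prints the same thing, which is exactly the equivalence described above.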
Exception vs crash
An uncaught Python exception is not a crash.
When Python exits due to an exception, the interpreter performs normal shutdown steps, including flushing stdout and stderr.
Because of this, buffered output is usually written successfully.
A segmentation fault (or other fatal crash) terminates the process immediately.
In that situation, Python never gets the opportunity to flush its buffers.
Exception and binary stdout
You do not need -u when a Python exception occurs.
❯ python -c "import sys; sys.stdout.buffer.write(b'hello'); raise Exception"
Traceback (most recent call last):
File "<string>", line 1, in <module>
Exception
hello%
The hello output appears because Python flushes stdout during interpreter shutdown after the exception.
Segfault and print
Example:
❯ python -c "print(b'hello'); eval((lambda:0).__code__.replace(co_consts=()))"
b'hello'
[1] segmentation fault (core dumped)
Why does this appear even without -u?
When stdout is connected to a terminal (TTY), it is typically line buffered.
A newline written by print() often triggers an automatic flush.
However, this behavior depends on the environment. If stdout is redirected or piped, the output may remain buffered and be lost during a crash.
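That loss is easy to reproduce by capturing the crashing program's stdout through a pipe. The sketch below is POSIX-only (it delivers SIGSEGV via os.kill rather than the code-object trick, so the crash mechanism is slightly different but the buffering behavior is the same):

```python
import subprocess
import sys

# The child prints a line, then kills itself with SIGSEGV so Python
# never runs its shutdown/flush sequence.
crash = (
    "import os, signal; "
    "print('hello'); "
    "os.kill(os.getpid(), signal.SIGSEGV)"
)

# stdout is a pipe here, so it is block-buffered by default.
lost = subprocess.run([sys.executable, "-c", crash], capture_output=True, text=True)
kept = subprocess.run([sys.executable, "-u", "-c", crash], capture_output=True, text=True)

print(repr(lost.stdout))  # '' — the line was still in Python's buffer
print(repr(kept.stdout))  # 'hello\n' — -u pushed it to the pipe immediately
```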
Segfault and logging
❯ python -c "import logging; logging.warning(b'hello'); eval((lambda:0).__code__.replace(co_consts=()))"
WARNING:root:b'hello'
[1] segmentation fault (core dumped)
This usually works without -u because Python’s default logging.StreamHandler flushes the stream after each log record.
Therefore the output reaches the OS before the crash occurs.
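The per-record flush can be observed directly by handing StreamHandler a stream that counts flush calls. FlushCounter below is an illustrative helper, not part of the logging module:

```python
import io
import logging

class FlushCounter(io.StringIO):
    """In-memory stream that counts how often it is flushed."""
    def __init__(self):
        super().__init__()
        self.flushes = 0

    def flush(self):
        self.flushes += 1
        super().flush()

stream = FlushCounter()
logger = logging.getLogger("flush-demo")
logger.addHandler(logging.StreamHandler(stream))

logger.warning("hello")
print(stream.flushes)  # 1 — StreamHandler.emit() flushes after each record
```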
Segfault and binary stdout
Binary writes using sys.stdout.buffer.write() demonstrate the buffering behavior clearly.
Without -u
❯ python -c "import sys; sys.stdout.buffer.write(b'hello'); eval((lambda:0).__code__.replace(co_consts=()))"
[1] segmentation fault (core dumped)
No output appears.
The write remains in Python’s buffered stream and is lost when the process crashes.
With -u
❯ python -u -c "import sys; sys.stdout.buffer.write(b'hello'); eval((lambda:0).__code__.replace(co_consts=()))"
hello[1] segmentation fault (core dumped)
Now the output is visible.
Because buffering is disabled, the bytes are written directly to the operating system before the crash happens.
Performance
Unbuffered output can be slower because every write may result in a system call.
Example benchmark:
❯ python -m timeit "import sys; sys.stdout.buffer.write(b'')"
1000000 loops, best of 5: 293 nsec per loop
❯ python -u -m timeit "import sys; sys.stdout.buffer.write(b'')"
500000 loops, best of 5: 923 nsec per loop
The slowdown occurs because buffering normally allows multiple writes to be combined into fewer system calls.
Actual performance differences depend on:
- operating system
- terminal vs pipe
- write sizes
- workload
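The same comparison can be made in-process, without involving the interpreter's own stdout, by timing a BufferedWriter against a raw FileIO writing to /dev/null. This is a sketch, not a rigorous benchmark; the absolute numbers and the size of the gap will vary by system:

```python
import io
import os
import timeit

# Write one byte 100k times through a BufferedWriter and through a raw
# FileIO (the type -u gives you for sys.stdout.buffer), both to /dev/null.
buffered = io.BufferedWriter(io.FileIO(os.devnull, "w"))
raw = io.FileIO(os.devnull, "w")

t_buffered = timeit.timeit(lambda: buffered.write(b"x"), number=100_000)
t_raw = timeit.timeit(lambda: raw.write(b"x"), number=100_000)
buffered.flush()

print(f"buffered: {t_buffered:.4f}s")
print(f"raw:      {t_raw:.4f}s")  # typically slower: one write(2) per call
```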
When PYTHONUNBUFFERED=1 is useful
The main use case is programs that may terminate abruptly, such as:
- segmentation faults
- fatal signals
- os._exit()
- native extension crashes
In these situations, buffered stdout/stderr data may be lost.
Disabling buffering ensures that each write reaches the OS immediately.
When it provides little benefit
For many applications, it offers no advantage:
- programs that exit normally
- programs that explicitly flush output
- code using logging handlers that flush automatically
In these cases, enabling unbuffered mode may only reduce I/O performance.
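In particular, targeted flushes give crash-safety for exactly the writes that matter, without paying the per-write cost everywhere:

```python
import sys

# Flush only the writes that must survive an abrupt termination,
# instead of disabling buffering globally with -u.
print("progress: step 1", flush=True)     # flushes the text layer and the
                                          # underlying BufferedWriter

sys.stdout.buffer.write(b"raw bytes\n")   # lands in the BufferedWriter...
sys.stdout.buffer.flush()                 # ...and now reaches the OS
```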
Conclusion
PYTHONUNBUFFERED=1 is mainly useful when a Python process may terminate without running its normal shutdown sequence.
In such cases it ensures that stdout and stderr output is not lost in Python’s internal buffers.
For typical applications that exit normally, buffering already works correctly and disabling it may slightly reduce performance.
Related
- https://github.com/docker-library/python/issues/604
- line buffering vs. block buffering (intermediate) anthony explains #285
A note about “unbuffered” output
Buffering exists at several layers in a typical program:
- Python’s I/O layer (`io.BufferedWriter`, `io.TextIOWrapper`)
- the C stdio layer (`FILE` buffers)
- the operating system (pipe, terminal, or socket buffers)
python -u (or PYTHONUNBUFFERED=1) disables Python’s buffered I/O and configures the standard streams so writes are flushed to the operating system immediately.
However, this does not eliminate buffering entirely.
After a write() system call succeeds, the bytes are stored in the kernel’s buffers (for example, pipe or terminal buffers). The receiving process or terminal may still read them later.
So in practice, “unbuffered” means:
Python sends the output to the operating system immediately, but the operating system may still buffer it before another process reads it.
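The kernel-side buffering is visible with a plain pipe: os.write is a direct write(2) with no user-space buffering at all, yet the bytes sit in the kernel's pipe buffer until the other end reads them:

```python
import os

r, w = os.pipe()

os.write(w, b"hello")   # direct write(2): no user-space buffer involved
# The bytes now live in the kernel's pipe buffer; no reader has them yet.

data = os.read(r, 5)    # pull them back out of the kernel buffer
print(data)             # b'hello'

os.close(r)
os.close(w)
```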