
Testing for Performance Degradation Between Changes


There are many ways to benchmark code in Python. The standard library provides the timeit module, which can run a function a number of times and report how long it took. This example executes add() 100 times and prints the total elapsed time:

import random
import timeit
from functools import reduce

def add(numbers):
    # Sum the numbers by folding them together with the + operator.
    return reduce(lambda x, y: x + y, numbers)

if __name__ == '__main__':
    numbers = [random.random() for _ in range(100_000)]
    # Time 100 calls to add() and print the total elapsed time in seconds.
    print(timeit.timeit(lambda: add(numbers), number=100))
0.5682077080000454
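
timeit.timeit() reports a single total. If you want a distribution of timings instead, the standard library's timeit.repeat() runs the whole measurement several times and returns a list of totals; here is a minimal sketch reusing add() from above:

import random
import statistics
import timeit
from functools import reduce

def add(numbers):
    return reduce(lambda x, y: x + y, numbers)

if __name__ == '__main__':
    numbers = [random.random() for _ in range(100_000)]
    # Five independent measurements, each timing 100 calls to add().
    timings = timeit.repeat(lambda: add(numbers), number=100, repeat=5)
    print(f"min={min(timings):.4f}s mean={statistics.mean(timings):.4f}s")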

Another option, if you decide to use pytest as your test runner, is the pytest-benchmark plugin. It provides a pytest fixture called benchmark. You can pass benchmark() any callable, and it will record the callable's timing in the pytest results.

You can install pytest-benchmark from PyPI using pip:

pip install pytest-benchmark

Then, you can add a test that uses the fixture and passes it the callable to be executed. Here the test benchmarks the add() function from the earlier example, assuming add and random are imported in the test module:

def test_my_function(benchmark):
    # benchmark() calls add() repeatedly and records the timing statistics.
    result = benchmark(add, [random.random() for _ in range(100_000)])
    assert result > 0
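
Because the point is to detect performance degradation between changes, pytest-benchmark can also save runs and compare them. As a sketch using the plugin's command-line options (the 10% threshold is an arbitrary example), save a baseline, make your changes, then fail the run on regression:

pytest --benchmark-autosave
pytest --benchmark-compare --benchmark-compare-fail=mean:10%

The first command stores the results in a .benchmarks directory; the second compares the current run against the most recently saved one and fails any benchmark whose mean time has grown by more than 10%.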

More information is available on the pytest-benchmark documentation website.

See the benchmark folder for a runnable example.