pytest: Expand test inputs using dynamic fixtures

When writing tests in Python, I always choose the pytest test framework. It’s concise, feature-rich, has a great ecosystem of plugins, and is widely used and well supported by the community. One aspect which makes it blend seamlessly with the code under test is how test inputs can be passed to it. Within this part of the usual arrange-act-assert structure of tests, let’s explore one particular feature: parametrization.

Accompanying example

Before we dive into pytest, let’s build a concrete example to eventually write tests for. Consider we are building a recommendation method for day activities with friends: Given an activity, said method shall recommend a friend with whom we enjoy doing that particular activity (and hopefully vice versa, too ;-)). In case we don’t have an idea for a particular activity, the method shall still recommend something reasonable. Let’s quickly sketch such a method:

from typing import NamedTuple, Optional


class Activity(NamedTuple):
    whom: str
    what: str


def recommend(activity: Optional[Activity]) -> str:
    if activity is None:
        return "Take a break and enjoy the day! :)"

    return f"What about {activity.what} with {activity.whom} today?"
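Before testing it, a quick sanity check of both code paths helps make the expected strings concrete (the definitions are repeated here so this snippet runs on its own):

```python
from typing import NamedTuple, Optional


# Definitions from above, repeated so this snippet is self-contained.
class Activity(NamedTuple):
    whom: str
    what: str


def recommend(activity: Optional[Activity]) -> str:
    if activity is None:
        return "Take a break and enjoy the day! :)"

    return f"What about {activity.what} with {activity.whom} today?"


# The regular path and the fallback path:
print(recommend(Activity("Alice", "wind surfing")))
# What about wind surfing with Alice today?
print(recommend(None))
# Take a break and enjoy the day! :)
```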

Naive testing: list all test inputs inline

Our recommendation method is in good shape now, ready to be tested. Probably one (if not the) first attempt to test this method is writing one test with exactly one (friend, activity) pair, asserting that the correct recommendation is built:

from .friendship import Activity, recommend


def test_recommend():
    # arrange
    input = Activity("Alice", "wind surfing")

    # act
    recommendation = recommend(input)

    # assert
    assert input.whom in recommendation
    assert input.what in recommendation

The usual next step is expanding the list of test inputs to be more certain to catch potential bugs. So we continue naively, adding two lists of friends and activities, iterating over their cartesian product, obtaining one recommendation for each input, and finally asserting each one’s correctness:

import itertools

from .friendship import Activity, recommend


def test_recommend():
    friends = ["Alice", "Bob", "Claire"]
    activities = ["wind surfing", "hiking", "board games"]

    inputs = [Activity(friend, activity) for friend, activity in itertools.product(friends, activities)]
    for input in inputs:
        recommendation = recommend(input)

        assert input.whom in recommendation
        assert input.what in recommendation

This approach comes with numerous downsides. A very prominent one: any failure stops the whole test, leaving the remaining examples untested until we fix the preceding erroneous test input. This pattern repeats until all inputs pass. It’s evident how laborious it gets to spot patterns in test failures when, at any given time, you only have a single failing input to inspect once the number of combinations grows.
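To make that first downside concrete, here is a minimal sketch with a simulated bug: a single failing assertion inside the loop leaves all later combinations unchecked.

```python
import itertools

friends = ["Alice", "Bob"]
activities = ["hiking", "board games"]

checked = []
try:
    for friend, activity in itertools.product(friends, activities):
        checked.append((friend, activity))
        # Simulated bug: board games with Alice produce a broken recommendation.
        assert (friend, activity) != ("Alice", "board games")
except AssertionError:
    pass

print(checked)
# [('Alice', 'hiking'), ('Alice', 'board games')]
```

None of Bob’s combinations were ever exercised; we learn about them only after fixing the Alice case and re-running.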

Parametrize: individual runs for each test input

pytest offers a better way to execute our assertions individually for each test input rather than as one block: by extracting our inputs into a pytest.mark.parametrize decorator:

import pytest

from .friendship import Activity, recommend


@pytest.mark.parametrize(
    ["friend", "activity"],
    [
        ("Alice", "wind surfing"),
        ("Alice", "hiking"),
        # ... more activities with Alice...
        ("Bob", "wind surfing"),
        # ... more activities with Bob...
        ("Claire", "wind surfing"),
        # ... more activities with Claire...
    ]
)
def test_recommend(friend, activity):
    recommendation = recommend(Activity(friend, activity))

    assert friend in recommendation
    assert activity in recommendation

Running this test gives us the desired result of one dedicated test run per pair of test inputs:

$ pytest test_friendship_parametrized.py
...
collected 4 items                                                                                                                                                                                        

test_friendship_parametrized.py::test_recommend[Alice-wind surfing] PASSED                                                                                                       [ 25%]
test_friendship_parametrized.py::test_recommend[Alice-hiking] PASSED                                                                                                             [ 50%]
test_friendship_parametrized.py::test_recommend[Bob-wind surfing] PASSED                                                                                                         [ 75%]
test_friendship_parametrized.py::test_recommend[Claire-wind surfing] PASSED

Cartesian product of inputs by stacking decorators

It is easy to envision how enumerating all test inputs becomes unmaintainable even with only a few different input parameters. The way to go is to let pytest do the heavy lifting, building the (cartesian) product of input parameters for us:

import pytest

from .friendship import Activity, recommend


@pytest.mark.parametrize("friend", ["Alice", "Bob", "Claire"])
@pytest.mark.parametrize("activity", ["wind surfing", "hiking", "board games"])
def test_recommend(friend, activity):
    recommendation = recommend(Activity(friend, activity))

    assert friend in recommendation
    assert activity in recommendation

Much cleaner than before.

We can go one step further in separating our test inputs from their actual usage by moving the data generated for friend and activity into dedicated test fixtures. This enables us to reuse these fixtures as data factories in other tests as well. In order to achieve multiple invocations of any test using our new fixtures, we pass our sample data to the params parameter of pytest.fixture. The fixture version of our friend test input then looks as follows:

@pytest.fixture(params=["Alice", "Bob", "Claire"])
# Use pytest's `request` fixture to introspect the current fixture
def friend(request):
    # The `request` fixture in particular contains the `params` data!
    return request.param

A similar refactoring applies to the activity test input. Once we have refactored the test inputs into dedicated fixtures, the pytest.mark.parametrize decorators can be removed, with the test run itself staying as-is.

But there is still one last thing we could do: adding test inputs not generated by building the product of several sub-inputs.

Beyond products: Generating special test inputs

We currently generate the cartesian product of friends and activities. This works as long as all our test inputs are combinations of their individual parts. Keeping this pattern, how could we achieve passing None to the recommend method as a test input? In its current form, our test_recommend function takes its test inputs from two fixtures: friend and activity. One conceivable approach is to combine the two fixtures into an intermediate one, pairing, and use it instead in our test function:

@pytest.fixture()
def pairing(friend, activity) -> Activity:
    return Activity(friend, activity)

Changing our test function to use the above pairing fixture won’t change the generated test inputs, just as expected. So what’s the deal anyway? Well, this artificial-looking fixture paves the way to our final adjustment: permitting None as one additional test input. We used params inside a fixture definition before, so let’s try this right away:

@pytest.fixture(params=[None, ???])
def maybe_pairing(request) -> Optional[Activity]:
    return request.param

Well, but how to pass our pairing fixture? Consulting the pytest documentation leads us to request.getfixturevalue, a method to dynamically retrieve fixtures by name, so we try that:

@pytest.fixture(params=[None, "pairing"])
def maybe_pairing(request) -> Optional[Activity]:
    if request.param:
        return request.getfixturevalue(request.param)
    else:
        return request.param

While this works for None, it sadly fails for our indirectly invoked pairing fixture with the cryptic error message

The requested fixture has no parameter defined for test: ...

The issue: maybe_pairing dynamically requests a parametrized fixture (pairing is parametrized through friend and activity), which plain pytest does not support. Sigh. 😐 We are lucky anyway. What helps us out of this dead end is a little pytest plugin: pytest-lazy-fixture. In its simplest form, this plugin spares us the labor of manually loading dynamic fixtures. In our case, it does even more heavy lifting, which is worth a post of its own. So let’s give maybe_pairing a final rewrite:

@pytest.fixture(params=[None, pytest.lazy_fixture("pairing")])
def maybe_pairing(request) -> Optional[Activity]:
    return request.param

Everything put together

Our tests came a long way from manually iterating over the product of friends and activities to generating fixtures other tests might use as well. Our final version now looks like this:

from typing import Optional

import pytest

from .friendship import Activity, recommend


@pytest.fixture(params=["Alice", "Bob", "Claire"])
def friend(request):
    return request.param


@pytest.fixture(params=["wind surfing", "hiking", "board games"])
def activity(request):
    return request.param


@pytest.fixture()
def pairing(friend, activity) -> Activity:
    return Activity(friend, activity)


@pytest.fixture(params=[None, pytest.lazy_fixture("pairing")])
def maybe_pairing(request) -> Optional[Activity]:
    return request.param


def test_recommend(maybe_pairing):
    recommendation = recommend(maybe_pairing)

    if maybe_pairing is None:
        assert "Take a break" in recommendation
    else:
        assert maybe_pairing.whom in recommendation
        assert maybe_pairing.what in recommendation
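As a back-of-the-envelope check (assuming pytest-lazy-fixture expands the pairing parameter into the full friend × activity product, just as the stacked decorators did earlier), we can predict how many test runs the final version yields:

```python
friends = ["Alice", "Bob", "Claire"]
activities = ["wind surfing", "hiking", "board games"]

# Nine parametrized pairings, plus the single None case from maybe_pairing:
expected_runs = len(friends) * len(activities) + 1
print(expected_runs)
# 10
```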

Outlook

We did use dynamic pytest fixtures but struggled to get them fully working in our example. Rather than digging deeper into the mechanics of how pytest resolves fixtures and generates the values underneath, we quickly moved to the lazy-fixture plugin to do the heavy lifting for us. In one of the next posts we will cover exactly that, dissecting the lazy-fixture plugin.
