Writing Your Own Plugins

Many third-party plugins contain quite a bit of code. That’s one of the reasons we use them: to save ourselves the time of developing all of that code. However, for your specific coding domain, you’ll undoubtedly come up with special fixtures and modifications that help you test. Even a handful of fixtures that you want to share between a couple of projects is reason enough to create a plugin. By developing and distributing your own plugins, you can share those changes with multiple projects, and possibly with the rest of the world. It’s pretty easy to do. In this section, we’ll develop a small modification to pytest behavior, package it as a plugin, test it, and look into how to distribute it.

Plugins can include hook functions that alter pytest’s behavior. Because pytest was designed to let plugins change quite a bit about the way it behaves, a lot of hook functions are available. The hook functions for pytest are specified on the pytest documentation site.[11]
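For instance, a hook function in a conftest.py file as small as the one below is enough to change pytest’s behavior. This is just an illustrative sketch, not part of the Tasks project; it uses the pytest_collection_modifyitems hook to reverse the order in which collected tests run:

 def pytest_collection_modifyitems(items):
     """Run the collected tests in reverse order (demonstration only)."""
     items.reverse()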

For our example, we’ll create a plugin that changes the way the test status looks, adds some text to the output header, and includes a command-line option to enable the new behavior. Specifically, we’ll change all of the FAILED status indicators to “OPPORTUNITY for improvement,” change F to O, and add “Thanks for running the tests” to the header. We’ll use the --nice option to turn the behavior on.

To keep the behavior changes separate from the discussion of plugin mechanics, we’ll make our changes in conftest.py before turning them into a distributable plugin. You don’t have to start plugins this way, but frequently, changes you intended to use in only one project become useful enough to share and grow into a plugin. Therefore, we’ll start by adding the functionality to a conftest.py file, and then, once everything is working, move the code into a package.

Let’s go back to the Tasks project. In Expecting Exceptions, we wrote some tests that made sure exceptions were raised if someone called an API function incorrectly. Looks like we missed at least a few possible error conditions.

Here are a couple more tests:

ch5/a/tasks_proj/tests/func/test_api_exceptions.py
 import pytest
 import tasks
 from tasks import Task


 @pytest.mark.usefixtures('tasks_db')
 class TestAdd():
     """Tests related to tasks.add()."""

     def test_missing_summary(self):
         """Should raise an exception if summary missing."""
         with pytest.raises(ValueError):
             tasks.add(Task(owner='bob'))

     def test_done_not_bool(self):
         """Should raise an exception if done is not a bool."""
         with pytest.raises(ValueError):
             tasks.add(Task(summary='summary', done='True'))

Let’s run them to see if they pass:

 $ cd /path/to/code/ch5/a/tasks_proj
 $ pytest
 ===================== test session starts ======================
 collected 57 items
 
 tests/func/test_add.py ...
 tests/func/test_add_variety.py ............................
 tests/func/test_add_variety2.py ............
 tests/func/test_api_exceptions.py .F.......
 tests/func/test_unique_id.py .
 tests/unit/test_task.py ....
 
 =========================== FAILURES ===========================
 __________________ TestAdd.test_done_not_bool __________________
 
 self = <func.test_api_exceptions.TestAdd object at 0x103a71a20>
 
  def test_done_not_bool(self):
  """Should raise an exception if done is not a bool."""
  with pytest.raises(ValueError):
 > tasks.add(Task(summary='summary', done='True'))
 E Failed: DID NOT RAISE <class 'ValueError'>
 
 tests/func/test_api_exceptions.py:20: Failed
 ============= 1 failed, 56 passed in 0.28 seconds ==============

Let’s run the tests again with -v for verbose. Since you’ve already seen the traceback, you can turn that off with --tb=no.

And now let’s focus on the new tests with -k TestAdd, which works because there aren’t any other tests with names that contain “TestAdd.”

 $ cd /path/to/code/ch5/a/tasks_proj/tests/func
 $ pytest -v --tb=no test_api_exceptions.py -k TestAdd
 ===================== test session starts ======================
 collected 9 items
 
 test_api_exceptions.py::TestAdd::test_missing_summary PASSED
 test_api_exceptions.py::TestAdd::test_done_not_bool FAILED
 
 ====================== 7 tests deselected ======================
 ======= 1 failed, 1 passed, 7 deselected in 0.07 seconds =======

We could go off and try to fix this test (and we should, later), but for now we’re focused on making failures more pleasant for developers.

Let’s start by adding the “thank you” message to the header, which you can do with a pytest hook called pytest_report_header().

ch5/b/tasks_proj/tests/conftest.py
 def pytest_report_header():
     """Thank tester for running tests."""
     return "Thanks for running the tests."

Obviously, printing a thank-you message is rather silly. However, the ability to add information to the header can be extended to show a username, the hardware used, or the versions under test. Really, anything you can convert to a string, you can stuff into the test header.
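For example, a sketch like the following could report who is running the tests and on which Python version. It’s a hypothetical example, not part of the Tasks project; pytest_report_header() may return either a string or a list of strings, and each entry becomes its own header line. (A conftest.py can define only one pytest_report_header() function, so you’d merge this with the version above rather than keeping both.)

 import getpass
 import platform


 def pytest_report_header():
     """Show who is running the tests and on which Python version."""
     return [
         "tester: {}".format(getpass.getuser()),
         "python: {}".format(platform.python_version()),
     ]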

Next, we’ll change the status reporting for tests so that F becomes O and FAILED becomes OPPORTUNITY for improvement. There’s a hook function that allows for this type of shenanigans: pytest_report_teststatus():

ch5/b/tasks_proj/tests/conftest.py
 def pytest_report_teststatus(report):
     """Turn failures into opportunities."""
     if report.when == 'call' and report.failed:
         return (report.outcome, 'O', 'OPPORTUNITY for improvement')

And now we have just the output we were looking for. A test session with no --verbose flag shows an O for failures, er, improvement opportunities:

 $ cd /path/to/code/ch5/b/tasks_proj/tests/func
 $ pytest --tb=no test_api_exceptions.py -k TestAdd
 ===================== test session starts ======================
 Thanks for running the tests.
 collected 9 items
 
 test_api_exceptions.py .O
 
 ====================== 7 tests deselected ======================
 ======= 1 failed, 1 passed, 7 deselected in 0.06 seconds =======

And the output with -v or --verbose is nicer as well:

 $ pytest -v --tb=no test_api_exceptions.py -k TestAdd
 ===================== test session starts ======================
 Thanks for running the tests.
 collected 9 items
 
 test_api_exceptions.py::TestAdd::test_missing_summary PASSED
 test_api_exceptions.py::TestAdd::test_done_not_bool OPPORTUNITY for improvement
 
 ====================== 7 tests deselected ======================
 ======= 1 failed, 1 passed, 7 deselected in 0.07 seconds =======

The last modification we’ll make is to add a command-line option, --nice, so that our status changes occur only when --nice is passed in:

ch5/c/tasks_proj/tests/conftest.py
 def pytest_addoption(parser):
     """Turn nice features on with --nice option."""
     group = parser.getgroup('nice')
     group.addoption("--nice", action="store_true",
                     help="nice: turn failures into opportunities")


 def pytest_report_header(config):
     """Thank tester for running tests."""
     if config.getoption('nice'):
         return "Thanks for running the tests."


 def pytest_report_teststatus(report, config):
     """Turn failures into opportunities."""
     if report.when == 'call':
         if report.failed and config.getoption('nice'):
             return (report.outcome, 'O', 'OPPORTUNITY for improvement')

This is a good place to note that for this plugin, we are using just a couple of hook functions. There are many more, which can be found on the main pytest documentation site.[12]
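To give a feel for what else is possible, here’s one more small sketch (hypothetical, and not part of our plugin) using pytest_terminal_summary, a hook that lets you append text to the summary printed at the end of a test session:

 def pytest_terminal_summary(terminalreporter):
     """Add a closing note after the test results (demonstration only)."""
     terminalreporter.write_line("Remember: every failure is an opportunity.")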

We can manually test our plugin just by running it against our example file. First, run with no --nice option, to make sure the normal FAILED behavior still shows up:

 $ cd /path/to/code/ch5/c/tasks_proj/tests/func
 $ pytest --tb=no test_api_exceptions.py -k TestAdd
 ===================== test session starts ======================
 collected 9 items
 
 test_api_exceptions.py .F
 
 ====================== 7 tests deselected ======================
 ======= 1 failed, 1 passed, 7 deselected in 0.07 seconds =======

Now with --nice:

 $ pytest --nice --tb=no test_api_exceptions.py -k TestAdd
 ===================== test session starts ======================
 Thanks for running the tests.
 collected 9 items
 
 test_api_exceptions.py .O
 
 ====================== 7 tests deselected ======================
 ======= 1 failed, 1 passed, 7 deselected in 0.07 seconds =======

And with --nice and --verbose:

 $ pytest -v --nice --tb=no test_api_exceptions.py -k TestAdd
 ===================== test session starts ======================
 Thanks for running the tests.
 collected 9 items
 
 test_api_exceptions.py::TestAdd::test_missing_summary PASSED
 test_api_exceptions.py::TestAdd::test_done_not_bool OPPORTUNITY for improvement
 
 ====================== 7 tests deselected ======================
 ======= 1 failed, 1 passed, 7 deselected in 0.06 seconds =======

Great! All of the changes we wanted are done with about a dozen lines of code in a conftest.py file. Next, we’ll move this code into a plugin structure.