Is testing hard?
All good engineers validate their work to ensure that it behaves as expected. As
a software engineer, that means writing automated tests which can easily be run
whenever your code changes.
While developers strive to create tests for their code, few enjoy doing so and
even fewer enjoy fixing their tests. This therefore creates a problem, as people
will tend to avoid doing things they don't enjoy.
If we can make creating and fixing tests easier, then it seems likely that
more tests will be written, leading to software whose behaviour is better
validated.
Note: this post is mostly a write-up of a lightning talk
I gave at PyCon UK 2017 at the end of
October, so if you'd rather consume this as a four-minute talk then head over to
the video on
YouTube.
Some bad tests
In the following examples, we assume that we already have a make_request
function defined which makes a web request to a local client, and that it's
that local client we're testing. The tests themselves fail, which is fine: it's
the way that they fail that we're interested in looking at.
These tests don't provide much in the way of useful output, even though we've
added messages to our assertions.
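They might look something like this (a sketch: make_request is assumed to take
a method and a path, and the POST arguments are illustrative):

import unittest

class RequestTests(unittest.TestCase):
    # make_request is assumed to be defined elsewhere (see above).
    def test_page_loads(self):
        response = make_request('GET', '/')
        self.assertTrue(200 == response, "Bad status code")

    def test_valid_post(self):
        response = make_request('POST', '/submit', {'name': 'value'})
        self.assertTrue(200 == response, "Bad status code")

    def test_invalid_post(self):
        response = make_request('POST', '/submit', {})
        self.assertTrue(200 == response, "Bad status code")

Running them gives: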
FFF
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-2-438368b7f979>", line 14, in test_invalid_post
self.assertTrue(200 == response, "Bad status code")
AssertionError: Bad status code
======================================================================
FAIL: test_page_loads (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-2-438368b7f979>", line 6, in test_page_loads
self.assertTrue(200 == response, "Bad status code")
AssertionError: Bad status code
======================================================================
FAIL: test_valid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-2-438368b7f979>", line 10, in test_valid_post
self.assertTrue(200 == response, "Bad status code")
AssertionError: Bad status code
----------------------------------------------------------------------
Ran 3 tests in 0.003s
FAILED (failures=3)
Adding longMessage
The standard unittest library provides a mechanism to get a bit more
information out of assertion messages, in the form of the longMessage
attribute which you can set on your test classes.
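Enabling it is a one-line change to the class (a sketch):

class RequestTests(unittest.TestCase):
    # Append our custom assertion messages to unittest's default
    # failure messages, rather than replacing them.
    longMessage = True

    # ... tests as before ...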
This improves the results slightly, as the default failure message now appears alongside our own:
FFF
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-3-fd71f2c85adb>", line 16, in test_invalid_post
self.assertTrue(200 == response, "Bad status code")
AssertionError: False is not true : Bad status code
======================================================================
FAIL: test_page_loads (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-3-fd71f2c85adb>", line 8, in test_page_loads
self.assertTrue(200 == response, "Bad status code")
AssertionError: False is not true : Bad status code
======================================================================
FAIL: test_valid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-3-fd71f2c85adb>", line 12, in test_valid_post
self.assertTrue(200 == response, "Bad status code")
AssertionError: False is not true : Bad status code
----------------------------------------------------------------------
Ran 3 tests in 0.002s
FAILED (failures=3)
However, due to the way the assertions are constructed, that isn't actually
very helpful just yet.
Using assertEqual
Thankfully unittest provides more useful assertion helpers. These include a
number of more advanced comparisons (in particular for collections), though
for now we'll just look at assertEqual.
By using assertEqual we let unittest do the comparison, which means that it
can also generate a more descriptive error message.
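The change to each test is mechanical (a sketch, showing one test):

class RequestTests(unittest.TestCase):
    longMessage = True

    def test_page_loads(self):
        response = make_request('GET', '/')
        # Let unittest perform the comparison and report both values.
        self.assertEqual(200, response, "Bad status code")

Running the tests again gives: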
FFF
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-4-1e8aaded9316>", line 16, in test_invalid_post
self.assertEqual(200, response, "Bad status code")
AssertionError: 200 != (500, "Oops! Here's a stack trace...") : Bad status code
======================================================================
FAIL: test_page_loads (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-4-1e8aaded9316>", line 8, in test_page_loads
self.assertEqual(200, response, "Bad status code")
AssertionError: 200 != (200, '<h1>This is the good page</h1>') : Bad status code
======================================================================
FAIL: test_valid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-4-1e8aaded9316>", line 12, in test_valid_post
self.assertEqual(200, response, "Bad status code")
AssertionError: 200 != (200, '<h2>Your submission was invalid</h2>') : Bad status code
----------------------------------------------------------------------
Ran 3 tests in 0.004s
FAILED (failures=3)
This failure now shows us that the response is not in the format we expected,
which explains some of the failures we've got. Fixing that leads to our first
passing tests.
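The response is evidently a (status_code, body) pair, so the fix is presumably
to unpack it before asserting (a sketch):

    def test_page_loads(self):
        # The response is a pair; compare against the status code only.
        status_code, body = make_request('GET', '/')
        self.assertEqual(200, status_code, "Bad status code")

With that change in place, two of the three tests pass: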
F..
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-5-5f85b03559e5>", line 16, in test_invalid_post
self.assertEqual(200, status_code, "Bad status code")
AssertionError: 200 != 500 : Bad status code
----------------------------------------------------------------------
Ran 3 tests in 0.004s
FAILED (failures=1)
While that's clearly a step forward, we're still more interested in the
failing test than in the passing ones. As we can see, we now know how
it's failing (the wrong status code), though we don't really understand
why it's failing.
Assert more things
On the way to making our tests provide clearer explanations of what's wrong, we
should aim to have them check everything that would be useful to know about
when something breaks. This is partly because we want to constrain the
behaviour of the system under test, but also because we can usually build a
better understanding of the system if we know more about the failure.
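For example, test_valid_post might check the response body as well as the
status code (a sketch; the POST arguments are illustrative, and the expected
text is taken from the failure output below):

    def test_valid_post(self):
        status_code, body = make_request('POST', '/submit', {'name': 'value'})
        self.assertEqual(200, status_code, "Bad status code")
        # Constrain the body as well as the status code.
        self.assertIn('submission succeeded', body)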
Having done this, we can see that one of the tests which was previously passing now catches a bug:
F.F
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-6-5c8355f13e67>", line 18, in test_invalid_post
self.assertEqual(200, status_code, "Bad status code")
AssertionError: 200 != 500 : Bad status code
======================================================================
FAIL: test_valid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-6-5c8355f13e67>", line 14, in test_valid_post
self.assertIn('submission succeeded', body)
AssertionError: 'submission succeeded' not found in '<h2>Your submission was invalid</h2>'
----------------------------------------------------------------------
Ran 3 tests in 0.003s
FAILED (failures=2)
Extract helper assertions
Having added a large number of assertions to each of your tests, you'll find
that there is quite a lot of duplication between them. We can therefore apply
the usual practice of extracting the common parts into a helper, achieving
improvements in both clarity and space.
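A sketch of such a helper (the method name and expected text are illustrative):

class RequestTests(unittest.TestCase):
    longMessage = True

    def assertResponse(self, response, expected_text):
        # The checks that every request-based test wants to make.
        status_code, body = response
        self.assertEqual(200, status_code, "Bad status code")
        self.assertIn(expected_text, body)

    def test_valid_post(self):
        response = make_request('POST', '/submit', {'name': 'value'})
        self.assertResponse(response, 'submission succeeded')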
This results in the same assertions being run and the same test failures being
found, though now it's much clearer to anyone reading the test code what the
intent of those assertions is.
Summary
There are a number of things which you can do to make it easier to find and
fix issues in your code, and test code is no different. To make it easier to
work with your unittest tests, make use of the longMessage attribute, prefer
the more descriptive assertion helpers such as assertEqual, and extract custom
assertions where they will improve clarity.