Continuing in Python's unittest when an assertion fails
EDIT: switched to a better example, and clarified why this is a real problem.
I'd like to write unit tests in Python that continue executing when an assertion fails, so that I can see multiple failures in a single test. For example:
class Car(object):
    def __init__(self, make, model):
        self.make = make
        self.model = make  # Copy and paste error: should be model.
        self.has_seats = True
        self.wheel_count = 3  # Typo: should be 4.

class CarTest(unittest.TestCase):
    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        self.assertEqual(car.make, make)
        self.assertEqual(car.model, model)  # Failure!
        self.assertTrue(car.has_seats)
        self.assertEqual(car.wheel_count, 4)  # Failure!
Here, the purpose of the test is to ensure that Car's __init__ sets its fields correctly. I could break it up into four methods (and that's often a great idea), but in this case I think it's more readable to keep it as a single method that tests a single concept ("the object is initialized correctly").

If we assume that it's best here not to break up the method, then I have a new problem: I can't see all of the errors at once. When I fix the model error and re-run the test, the wheel_count error appears. It would save me time to see both errors when I first run the test.
For comparison, Google's C++ unit testing framework distinguishes between non-fatal EXPECT_* assertions and fatal ASSERT_* assertions:
The assertions come in pairs that test the same thing but have different effects on the current function. ASSERT_* versions generate fatal failures when they fail, and abort the current function. EXPECT_* versions generate nonfatal failures, which don't abort the current function. Usually EXPECT_* are preferred, as they allow more than one failure to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.
Is there a way to get EXPECT_*-like behavior in Python's unittest? If not in unittest, then is there another Python unit test framework that does support this behavior?
Incidentally, I was curious about how many real-life tests might benefit from non-fatal assertions, so I looked at some code examples (edited 2014-08-19 to use searchcode instead of Google Code Search, RIP). Out of 10 randomly selected results from the first page, all contained tests that made multiple independent assertions in the same test method. All would benefit from non-fatal assertions.
Solution 1:
Another way to have non-fatal assertions is to catch the AssertionError and store each failure in a list, then assert that the list is empty as part of tearDown.
import unittest

class Car(object):
    def __init__(self, make, model):
        self.make = make
        self.model = make  # Copy and paste error: should be model.
        self.has_seats = True
        self.wheel_count = 3  # Typo: should be 4.

class CarTest(unittest.TestCase):
    def setUp(self):
        self.verificationErrors = []

    def tearDown(self):
        self.assertEqual([], self.verificationErrors)

    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        try:
            self.assertEqual(car.make, make)
        except AssertionError as e:
            self.verificationErrors.append(str(e))
        try:
            self.assertEqual(car.model, model)  # Failure!
        except AssertionError as e:
            self.verificationErrors.append(str(e))
        try:
            self.assertTrue(car.has_seats)
        except AssertionError as e:
            self.verificationErrors.append(str(e))
        try:
            self.assertEqual(car.wheel_count, 4)  # Failure!
        except AssertionError as e:
            self.verificationErrors.append(str(e))

if __name__ == "__main__":
    unittest.main()
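The repeated try/except blocks can be factored out with a context manager. Here's a minimal sketch along the same lines, assuming the Car class and setUp/tearDown above; the soft name is my own, not anything from unittest:

import contextlib
import unittest

class CarTest(unittest.TestCase):
    def setUp(self):
        self.verificationErrors = []

    def tearDown(self):
        self.assertEqual([], self.verificationErrors)

    @contextlib.contextmanager
    def soft(self):
        # Record the AssertionError instead of letting it end the test.
        try:
            yield
        except AssertionError as e:
            self.verificationErrors.append(str(e))

    def test_init(self):
        car = Car(make="Ford", model="Model T")
        with self.soft():
            self.assertEqual(car.model, "Model T")  # Recorded; test keeps going.
        with self.soft():
            self.assertEqual(car.wheel_count, 4)    # Also recorded in the same run.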
Solution 2:
One option is to assert on all the values at once, as a tuple.
For example:
class CarTest(unittest.TestCase):
    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        self.assertEqual(
            (car.make, car.model, car.has_seats, car.wheel_count),
            (make, model, True, 4))
The output from this test would be:
======================================================================
FAIL: test_init (test.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\temp\py_mult_assert\test.py", line 17, in test_init
(make, model, True, 4))
AssertionError: Tuples differ: ('Ford', 'Ford', True, 3) != ('Ford', 'Model T', True, 4)
First differing element 1:
Ford
Model T
- ('Ford', 'Ford', True, 3)
? ^ - ^
+ ('Ford', 'Model T', True, 4)
? ^ ++++ ^
This shows that both the model and the wheel count are incorrect.
Solution 3:
Since Python 3.4 you can also use subtests:
def test_init(self):
    make = "Ford"
    model = "Model T"
    car = Car(make=make, model=model)
    with self.subTest(msg='Car.make check'):
        self.assertEqual(car.make, make)
    with self.subTest(msg='Car.model check'):
        self.assertEqual(car.model, model)
    with self.subTest(msg='Car.has_seats check'):
        self.assertTrue(car.has_seats)
    with self.subTest(msg='Car.wheel_count check'):
        self.assertEqual(car.wheel_count, 4)
(The msg parameter makes it easier to determine which check failed.)
Output:
======================================================================
FAIL: test_init (__main__.CarTest) [Car.model check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 23, in test_init
self.assertEqual(car.model, model)
AssertionError: 'Ford' != 'Model T'
- Ford
+ Model T
======================================================================
FAIL: test_init (__main__.CarTest) [Car.wheel_count check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 27, in test_init
self.assertEqual(car.wheel_count, 4)
AssertionError: 3 != 4
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (failures=2)
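If the checks are uniform, as they are here, the subtests can also be driven from data. This is my own variation on the solution above, with each (attribute, expected value) pair reported as its own subtest:

def test_init(self):
    car = Car(make="Ford", model="Model T")
    expected = [
        ("make", "Ford"),
        ("model", "Model T"),
        ("has_seats", True),
        ("wheel_count", 4),
    ]
    for attr, value in expected:
        # Each iteration gets its own pass/fail entry in the report.
        with self.subTest(attr=attr):
            self.assertEqual(getattr(car, attr), value)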
Solution 4:
What you'll probably want to do is derive from unittest.TestCase, since that's the class that throws when an assertion fails. You will have to re-architect your TestCase to not throw (maybe keep a list of failures instead). Re-architecting stuff can cause other issues that you would have to resolve. For example, you may end up needing to derive TestSuite to make changes in support of the changes made to your TestCase.
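Here's a minimal sketch of that idea (without going as far as a custom TestSuite), assuming the Car class from the question; ExpectingTestCase and expect_equal are names I made up, not part of unittest:

import unittest

class ExpectingTestCase(unittest.TestCase):
    # Records expectation failures instead of raising immediately.
    def setUp(self):
        self._expectation_failures = []

    def expect_equal(self, actual, expected, msg=None):
        # Non-fatal counterpart to assertEqual: record the failure and keep going.
        try:
            self.assertEqual(actual, expected, msg)
        except AssertionError as e:
            self._expectation_failures.append(str(e))

    def tearDown(self):
        # Surface all recorded failures as a single test failure.
        if self._expectation_failures:
            self.fail("\n".join(self._expectation_failures))

class CarTest(ExpectingTestCase):
    def test_init(self):
        car = Car(make="Ford", model="Model T")
        self.expect_equal(car.make, "Ford")
        self.expect_equal(car.model, "Model T")  # Recorded, not fatal.
        self.expect_equal(car.wheel_count, 4)    # Also reported in the same run.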