The testscenarios
library takes the test methods you write, along with a set of data scenarios, and produces an individual test case for each combination of (test method × scenario).
For instance:
    import unittest

    import testscenarios

    from .. import system_under_test


    class foo_TestCase(testscenarios.WithScenarios, unittest.TestCase):
        """ Test cases for `foo` function. """

        scenarios = [
            ('purple', {
                'wibble': "purple",
                'expected_count': 2,
                'expected_output': "Purple",
            }),
            ('orange', {
                'wibble': "orange",
                'expected_count': 3,
                'expected_output': "Orange",
            }),
            ('red', {
                'wibble': "red",
                'expected_count': 1,
                'expected_output': "Red",
            }),
        ]

        def test_has_expected_vowel_count(self):
            """ Should give the expected count of vowels. """
            vowel_count = system_under_test.foo(self.wibble)
            self.assertEqual(self.expected_count, vowel_count)

        def test_sets_expected_output(self):
            """ Should set the output to the expected value. """
            system_under_test.foo(self.wibble)
            self.assertEqual(self.expected_output, system_under_test.output)
The foo_TestCase
class will, when test discovery occurs, generate six tests for the test runner. Their names are annotated with the scenario names, so the resulting test report distinguishes each specific test case:
foo_TestCase.test_has_expected_vowel_count (purple)
foo_TestCase.test_has_expected_vowel_count (orange)
foo_TestCase.test_has_expected_vowel_count (red)
foo_TestCase.test_sets_expected_output (purple)
foo_TestCase.test_sets_expected_output (orange)
foo_TestCase.test_sets_expected_output (red)
More flexibility is available if you need it – see the testscenarios
documentation – but the above demonstrates how I use it most of the time.
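One example of that extra flexibility is combining independent scenario lists as a cross product. The library provides a helper along these lines; the self-contained sketch below illustrates the idea without depending on testscenarios (the `multiply` function and the example scenario lists are mine, for illustration only):

```python
import itertools

def multiply(*scenario_lists):
    """ Cross product of scenario lists: join names, merge parameter dicts. """
    result = []
    for combo in itertools.product(*scenario_lists):
        name = ",".join(scenario_name for scenario_name, _ in combo)
        params = {}
        for _, scenario_params in combo:
            params.update(scenario_params)
        result.append((name, params))
    return result

colours = [('purple', {'wibble': "purple"}), ('red', {'wibble': "red"})]
cases = [('upper', {'transform': str.upper}), ('lower', {'transform': str.lower})]

# Multiplying the two lists yields four scenarios, named
# 'purple,upper', 'purple,lower', 'red,upper', 'red,lower',
# each carrying the merged parameters from both lists.
combined = multiply(colours, cases)
```

Each combined scenario sets every attribute from both of its source scenarios on the test case, so one pair of short lists can fan out into a large, systematically named test matrix.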