Question

Let's say I would like to change my NUnit parameterized test method into a theory. As theories go, they should define all assumptions/preconditions under which their assertions will pass. Per the NUnit documentation:

[when comparing theory to parametrized test] A theory, on the other hand, makes a general statement that all of its assertions will pass for all arguments satisfying certain assumptions.

But as I understand it, this means that the code called by the PUT (i.e. the code under test) would basically have to be translated into assumptions. Completely.

What's the point of having theories then? Our algorithm would be written twice: first as the testable code and second as the theory's assumptions. So if we introduced a bug into the algorithm, both our code and our test would likely share the same bug. What's the point then?

Example for better understanding

Let's say we have a checksum method that only supports digits and we'd like to test it using a theory. Let's write one:

static Regex rx = new Regex(@"^\d+$", RegexOptions.Compiled);

[Theory]
public void ChecksumTheory(string value)
{
    Assume.That(!string.IsNullOrWhiteSpace(value));
    Assume.That(value.Length > 1); // one single number + checksum = same number twice
    Assume.That(rx.IsMatch(value));

    var cc = new ChecksumValidator();

    bool result = cc.ValidateValue(value);

    Assert.IsTrue(result); // won't always pass, since the algorithm assumptions are missing
}

This is a pretty nice theory, except that its assertion still won't pass: without re-implementing the tested algorithm and expressing it as a set of assumptions, we have no way of knowing what the outcome of the validation will be for an arbitrary digit string.
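
To make that complaint concrete, here is a hedged sketch of what such an "algorithm assumption" could look like. The check rule is assumed to be Luhn-like purely for illustration (the question does not specify the real algorithm), and LuhnCheckDigit is a hypothetical helper; the point is that the checksum logic now exists twice:

[Theory]
public void ChecksumTheory_WithAlgorithmAssumption(string value)
{
    Assume.That(!string.IsNullOrWhiteSpace(value));
    Assume.That(value.Length > 1);
    Assume.That(rx.IsMatch(value));

    // The "algorithm assumption": the last digit must be the check digit
    // computed from the preceding digits, i.e. the algorithm written twice.
    Assume.That(LuhnCheckDigit(value.Substring(0, value.Length - 1))
                == value[value.Length - 1] - '0');

    var cc = new ChecksumValidator();

    Assert.IsTrue(cc.ValidateValue(value));
}

// Hypothetical stand-in re-implementation, used only by the assumption above.
static int LuhnCheckDigit(string digits)
{
    int sum = 0;
    bool doubleDigit = true;
    for (int i = digits.Length - 1; i >= 0; i--)
    {
        int d = digits[i] - '0';
        if (doubleDigit)
        {
            d *= 2;
            if (d > 9) d -= 9;
        }
        sum += d;
        doubleDigit = !doubleDigit;
    }
    return (10 - sum % 10) % 10;
}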

Additional info

Theories seem rather trivial and concise when we only need to provide assumptions about input state, namely checking that particular values are set correctly or that their combination makes sense:

[Theory]
public void Person_ValidateState(Person input)
{
    Assume.That(input.Age < 110);
    Assume.That(input.Birth < input.Death || input.Death == null);
    ...
}

Questions

  1. Why write unit test theories if one needs to provide enough assumptions for all asserts to pass?
  2. If we don't want to reinvent the wheel by providing all algorithm assumptions, how do we provide correct assumptions?
  3. If that's not the case, how should I rewrite my theory to make it a good example of NUnit theories?
  4. What is the intended use (by their creators) of test theories anyway?

Solution

Theories vs. parameterized tests

I am also aiming to introduce assumptions into my tests instead of using parameterized tests, but I haven't started yet because of similar thoughts.

The goal of assumptions is to describe the given input as a subset of an uncountable (or at least vast but complete) set of values by applying a filter. In that respect your code above is absolutely correct; nevertheless, you would still have to write several similar tests for negative-result testing, e.g. for values where the outcome of cc.ValidateValue(...) is false. Once again, for comprehensibility I would rather rely on a good choice of hand-picked parameters in a parameterized test of such a trivial function.
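
A minimal sketch of that hand-picked parameterized alternative; the concrete values and expected results are made up for illustration, since the actual checksum rule isn't given in the question:

[TestCase("026", true)]   // assumed valid digits plus matching check digit
[TestCase("027", false)]  // assumed mismatching check digit
[TestCase("12a", false)]  // non-digit input
[TestCase("", false)]     // empty input
public void ValidateValue_ReturnsExpectedResult(string value, bool expected)
{
    var cc = new ChecksumValidator();

    Assert.AreEqual(expected, cc.ValidateValue(value));
}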

On the other hand, assumptions may be useful for tests of more complex business logic. Imagine you have a garage full of fancy cars and you feel like flooring the gas on some remote terrain; let's also imagine this is a business requirement, so you need to write tests for it (how cool would that be!). Then you could write a test like this:

[Theory]
public void CarCanDriveOnMuddyGround(Car car)
{
    Assume.That(car.IsFourWheelDrive);
    Assume.That(car.HasMotor);
    Assume.That(car.MaxSpeed > 50);
    Assume.That(car.Color != "white");

    bool result = car.DriveWithGivenSpeedOn<MuddyGround>(50);

    Assert.IsTrue(result);
}

See how this is strongly related to the BDD approach? Like you, I am not that convinced about using assumptions for plain unit tests. But I am certain it's a good idea to pick the testing approach (parameterized tests vs. theories with assumptions) according to the test level (unit, integration, system, user acceptance).

About algorithm details in assumptions

I thought about your specific problem again, and now I get your point. In my words: you would need to assume that a given value will produce a positive result before you can assert that it produces a positive result. Right? I think you found a pretty good example of why theories do not always work.

I tried to solve it anyway with a slightly simpler example (for readability), but I admit it's not very convincing:

using System.Linq;
using NUnit.Framework;

public class TheoryTests
{
    [Datapoints]
    public string[] InvalidValues = new[] { null, string.Empty };

    [Datapoints]
    public string[] PositiveValues = new[] { "good" };

    [Datapoints]
    public string[] NegativeValues = new[] { "Bad" };

    private bool FunctionUnderTest(string value)
    {
        return value.ToLower().Equals(value);
    }

    [Theory]
    public void PositiveTest(string value)
    {
        Assume.That(!string.IsNullOrEmpty(value));

        var result = FunctionUnderTest(value);

        Assert.True(result);
    }

    [Theory]
    public void PassingPositiveTest(string value)
    {
        Assume.That(!string.IsNullOrEmpty(value));
        Assume.That(!NegativeValues.Contains(value));

        var result = FunctionUnderTest(value);

        Assert.True(result);
    }
}

PositiveTest will obviously fail because the algorithm assumption is missing: the datapoint "Bad" satisfies the assumption but fails the assertion. See the second assumption in the body of PassingPositiveTest, which prevents the test from failing. The downside, of course, is that this is effectively an example-based test rather than a pure theory-based test. Better ideas welcome.
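
One possible direction, sketched under the assumption that "lower-cased input always validates" really is part of the contract of FunctionUnderTest: instead of assuming the outcome, construct the input so that it satisfies the property being asserted, without hard-coding datapoints as counterexamples:

[Theory]
public void LowerCasedValueAlwaysPasses(string value)
{
    Assume.That(!string.IsNullOrEmpty(value));

    // Derive an input that is guaranteed to satisfy the property under test,
    // instead of assuming which datapoints will produce a positive result.
    var result = FunctionUnderTest(value.ToLower());

    Assert.True(result);
}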

Licensed under: CC-BY-SA with attribution