Question

In statically typed languages or dynamically typed languages that use a type checking system you can guarantee that the input type is the type that you specified in the contract.

However, in dynamically typed languages without type checking systems in place you don't know how other people will call your function.

In these situations, does it make sense to verify the types and write unit tests that unexpected types are dealt with properly?


Solution

"Does it make sense to verify the types and write unit tests that unexpected types are dealt with properly?"

Sort of - replace "unit tests" with "automated tests", and then the answer becomes yes.

If you have a function A expecting an object B as input, and a component C calls A, in a statically typed language the compiler will complain if C tries to pass something which is not B (or a derivation of B).

In a dynamically typed language, you won't get a compiler error if the passed object is not a valid replacement for B. Such errors can stay undetected until the code is executed. In such a situation, an automated test for C (using A) can indeed work as an alternative to the missing compiler checks. (Whether that test is more a unit test for C or an integration test for C and A may depend on what A and C are in your system, but it does not really matter what you call it.)
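As a minimal sketch of that idea (A, B, and C here are made-up stand-ins, not anything from the question): a test that exercises C through A will blow up at test time if C ever starts passing an object that cannot substitute for B, which is exactly the error a static compiler would have caught.

```python
class B:
    """The kind of object function A expects."""
    def greet(self):
        return "hello"

def a(b):
    # Function A: implicitly requires something with a .greet() method.
    return b.greet().upper()

def c():
    # Component C: calls A. If this passed an object without .greet(),
    # only running the code would reveal the mistake.
    return a(B())

def test_c_calls_a_with_a_compatible_object():
    # Executing C through A stands in for the missing compiler check:
    # an incompatible argument would raise AttributeError here.
    assert c() == "HELLO"
```

Whether you file this under "unit test" or "integration test" is, as said above, beside the point; what matters is that the call path is executed automatically.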

Whether you should do this, and how much effort you should invest in these additional tests, is a different question. It cannot be answered in general and depends heavily

  • on the size and complexity of the system,
  • on the test process for the system in general,
  • on non-functional requirements like maintainability, evolvability, or reliability,
  • on the financial impact of errors which slip into production,

and so on. Maybe there are other, coarser integration tests which cover the execution of the components at stake. Maybe there is a manual testing process which is sufficient to catch most of the bugs. Maybe you can afford it when parts fail in production, because when that happens the dev team gets notified and can fix it early enough. In the end, this is always a trade-off.

Note that you have to deal with the question of tests and errors in production even when using a statically typed language - type safety and/or additional tests can only reduce the probability of production failures, never bring it to zero.

OTHER TIPS

I think in this case it really depends on what you want to do - both options (what you suggest/not doing anything at all) seem sensible to me.

I wouldn't really speak of "verifying the types" in a language that is interpreted/compiled with dynamic type checking. In those cases, all the interpreter (or compiler) cares about is that what you access exists (as someone said in the comments: duck typing).

Hence, any input validation you want to do should answer the question: does this argument come with the things that I expect?
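A small sketch of what that looks like in practice (the names `send_all`, `writer`, and `ListWriter` are invented for illustration): instead of checking for a concrete class, we check only for the capability we actually use.

```python
def send_all(writer, messages):
    # Duck-typed precondition: we don't care what `writer` is,
    # only that it provides a callable .write() - the thing we use.
    if not callable(getattr(writer, "write", None)):
        raise TypeError("writer must provide a callable .write(message)")
    for message in messages:
        writer.write(message)

class ListWriter:
    """Any object with .write() qualifies - no inheritance required."""
    def __init__(self):
        self.seen = []
    def write(self, message):
        self.seen.append(message)

w = ListWriter()
send_all(w, ["a", "b"])
print(w.seen)  # ['a', 'b']
```

Note the check rejects arguments that lack the expected capability while still accepting any object that has it, which is the spirit of "don't worry about types" mentioned above.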

If I were to make this choice, I would leave it up to whoever uses my code - the whole point of such languages is that you shouldn't worry about types!

Such checks serve as executable documentation, aid debugging (by failing early if unexpected inputs are provided) and are super easy to test for. In most cases, they are a great idea. For example:

def system_under_test(name):
    assert isinstance(name, str)  # <-- a check!!
    return "Hello, " + name

import pytest

def test_it_works():
    assert system_under_test("World") == "Hello, World"

def test_it_rejects_invalid_input():
    for name in (None, 42, True, object()):
        with pytest.raises(AssertionError):
            system_under_test(name)

Things are more difficult if the code is very dynamic, for example if it makes use of duck typing. But the solution is simple: then don't check the types. Safety measures don't have to be perfect. They are already valuable when they prevent some problems.
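To illustrate "safety measures don't have to be perfect" (again with made-up names): assert on the argument whose type is fixed and unambiguous, and deliberately leave the duck-typed argument unchecked.

```python
def render(template, data_source):
    # Cheap, unambiguous check: the template really must be a string.
    assert isinstance(template, str)
    # data_source is deliberately NOT checked: any object with a
    # .rows() method works here, and constraining it would defeat
    # the duck typing callers may rely on.
    return "\n".join(template.format(row=row) for row in data_source.rows())

class TwoRows:
    def rows(self):
        return [1, 2]

print(render("row={row}", TwoRows()))  # row=1
                                       # row=2
```

A partial check like this still catches a whole class of mistakes (e.g. swapping the two arguments) without sacrificing flexibility where it matters.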

If you are using strict TDD, you should not write any code, including such type checks, unless it is required by a test. Therefore, adding a small test (which is really easy to write for invalid inputs!) seems sensible. There is the caveat that tests only check examples and are therefore unable to ensure that the function accepts all valid input and rejects all invalid input. But again: partial safety measures are better than none. From a TDD viewpoint, you add more examples to the tests when the need arises – the test above is already excessive, because it loops through multiple examples without clear need.

Licensed under: CC-BY-SA with attribution