Question

This question is probably language-agnostic, but I'll focus on the specified languages.

While working with some legacy code, I often see examples of functions that (to my mind, obviously) do too much work inside them. I'm not talking about 5000-LoC monsters, but about functions that implement prerequisite checks inside themselves.

Here is a small example:

void WorriedFunction(...) {
   // Of course, this is a bit exaggerated, but I guess this helps
   // to understand the idea.
   if (argument1 != null) return;
   if (argument2 + argument3 < 0) return;
   if (stateManager.currentlyDrawing()) return;

   // Actual function implementation starts here.

   // do_what_the_function_is_used_for
}

Now, when this kind of function is called, the caller doesn't have to worry about all those prerequisites being fulfilled and can simply say:

// Call the function.
WorriedFunction(...);

Now, how should one deal with this problem?

Generally speaking, should this function only do what it is asked to do, with the "prerequisite checks" moved to the caller side:

if (argument1 != null && argument2 + argument3 >= 0 && ...) {
   // Now all the checks inside can be removed.
   NotWorriedFunction();
}

Or should it simply throw an exception for every prerequisite mismatch?

if (argument1 == null) throw new NullArgumentException();
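In Java terms, for instance, that throwing variant is usually written as guard clauses at the top of the function. A minimal sketch (the argument names mirror the question; the exception messages are illustrative):

```java
final class Guarded {
    // Throws instead of silently returning when a precondition fails.
    static void worriedFunction(Object argument1, int argument2, int argument3) {
        if (argument1 == null)
            throw new NullPointerException("argument1 must not be null");
        if (argument2 + argument3 < 0)
            throw new IllegalArgumentException("argument2 + argument3 must be >= 0");
        // Actual function implementation starts here.
    }
}
```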

I'm not sure this problem can be generalized, but I still want to hear your thoughts about it; perhaps there is something I can rethink.

If you have alternative solutions, feel free to tell me about them :)

Thank you.


Solution

Every function/method/code block should have a precondition, the precise circumstances under which it is designed to work, and a postcondition, the state of the world when the function returns. These help your fellow programmers understand your intentions.

By definition, the code is not expected to work if the precondition is false, and is considered buggy if the postcondition is false.

Whether you write these down in your head, on paper in a design document, in comments, or in actual code checks is sort of a matter of taste.

But as a practical matter, life is easier in the long term if you code the precondition and postcondition as explicit checks. Because the code is not expected to work with a false precondition, and is buggy with a false postcondition, these checks should cause the program to report an error in a way that makes it easy to discover the point of failure. What the code should NOT do is simply "return" having done nothing, as your example shows, because that implies it has somehow executed correctly. (Code may of course be defined to exit having done nothing, but if that's the case, the pre- and postconditions should reflect it.)

You can obviously write such checks with an if statement (your example comes dangerously close):

if (!precondition) die("Precondition failure in WorriedFunction"); // die never comes back
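In Java terms, the same pattern can be factored into a small helper that aborts loudly and points at the failure site; `require` here is a hypothetical name, not a standard library method:

```java
final class Contract {
    // Abort loudly, identifying the failure site, instead of returning silently.
    static void require(boolean condition, String where) {
        if (!condition)
            throw new IllegalStateException("Precondition failure in " + where);
    }
}
```

A call site would then read `Contract.require(argument1 != null, "WorriedFunction");`.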

But often the presence of a pre- or postcondition is indicated in the code by a special function/macro/statement defined for the language, called an assertion, and such a construct typically causes a program abort and backtrace when the assertion is false.

The way the code should have been written is as follows:

void WorriedFunction(...) {
    assert(argument1 != null);               // fail/abort if false [I think your example had the test backwards]
    assert(argument2 + argument3 >= 0);
    assert(!stateManager.currentlyDrawing());
    /* body of function goes here */
}

Sophisticated functions may be willing to tell their callers that some condition has failed. That's the real purpose of exceptions. If exceptions are present, technically the postcondition should say something to the effect of "the function may exit with an exception under condition xyz".
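A sketch of that idea in Java (the `Account` class is invented for illustration): the contract documents the exceptions as part of the postcondition, and the postcondition itself is still checked internally:

```java
final class Account {
    private int balance;

    Account(int opening) { this.balance = opening; }

    // Precondition: amount >= 0 and amount <= balance.
    // Postcondition: balance decreases by exactly amount, or the method
    // exits with an exception and balance is unchanged.
    int withdraw(int amount) {
        if (amount < 0) throw new IllegalArgumentException("amount must be >= 0");
        if (amount > balance) throw new IllegalStateException("insufficient funds");
        int before = balance;
        balance -= amount;
        assert balance == before - amount; // postcondition (checked when run with -ea)
        return balance;
    }
}
```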

OTHER TIPS

That's an interesting question. Check the concept of "Design By Contract", you may find it helpful.

It depends.

I'd like to separate my answer into cases 1 and 3 on the one hand, and case 2 on the other.

case 1, 3

If you can safely recover from an argument problem, don't throw exceptions. A good example are the TryParse methods: if the input is wrongly formatted, they simply return false. Another example (for the exception case) are the LINQ methods: they throw if the source is null or one of the mandatory Func<> arguments is null. However, if they accept a custom IEqualityComparer<T> or IComparer<T>, they don't throw, and simply fall back to the default implementation via EqualityComparer<T>.Default or Comparer<T>.Default. It all depends on the context in which the argument is used and on whether you can safely recover from the problem.
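TryParse is a .NET idiom; a rough Java analogue of the same "recover instead of throw" shape, using Optional (the `tryParseInt` name is invented for this sketch), might look like:

```java
import java.util.Optional;

final class Numbers {
    // Returns empty on malformed input so callers can recover
    // without any exception handling at the call site.
    static Optional<Integer> tryParseInt(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }
}
```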

case 2

I'd only use this approach if the code lives in an infrastructure-like environment. Recently I started reimplementing the LINQ stack, and you have to implement a couple of interfaces; those implementations will never be used outside my own classes and methods, so you can make assumptions inside them: the outside world always accesses them via the interface and can't create them on its own.

If you make that assumption for API methods, your code will throw all sorts of exceptions on wrong input, and users won't have a clue what is happening, since they don't know the inside of your method.

"Or - should it simply throw exceptions per every prerequisity mismatch?"

Yes.

You should do checks before calling a function, and if you own the function, you should make it throw exceptions if the arguments passed are not as expected.

In your calling code these exceptions should be handled. Of course, arguments should be verified before the call.
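Putting both halves of that advice together in a Java sketch (the names are invented for illustration): the callee enforces its contract with an exception, and the caller verifies first but still handles the exception:

```java
final class Demo {
    // Callee owns its contract and throws on bad arguments.
    static int half(int amount) {
        if (amount < 0)
            throw new IllegalArgumentException("amount must be >= 0");
        return amount / 2;
    }

    // Caller verifies before the call, and still handles the exception.
    static int callSite(int amount) {
        if (amount < 0)
            return 0; // recover without calling
        try {
            return half(amount);
        } catch (IllegalArgumentException e) {
            return 0; // defensive handling, as the answer suggests
        }
    }
}
```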

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow