Question

I've always been one to err on the side of preventing exception conditions by never taking an action unless I am certain that there will be no errors. I learned to program in C and this was the only way to really do things.

Working with C# I frequently see more reactive programming - try to do something and handle exceptions. To me this seems like using exceptions as control statements. The first few times I saw this I dismissed it as bad practice. But over the past few months I've seen it all over the place and just have to wonder - is this accepted/efficient, or just an epidemic?

Update: For a bit of clarification most of the exception handling I am seeing is things like

try
{
    //open file
}
catch
{
    //message box for file not found
}

or even worse

try
{
    //open xml
    //modify xml (100+ lines of code)
}
catch
{
    //message for 'unspecified error'
}

I understand there are times when exception handling is very good to use (such as database connections) but I'm referring to the use of exceptions in place of more 'traditional' control. I asked this because I felt like this programming style was using exceptions as a crutch instead of as a recovery method and wanted to know if this was just something I'd have to learn to expect in the C# world.


Solution

You can certainly misuse exceptions in C#. In my view, you should never get an ArgumentNullException, since you should always test for null first. However, there are also many cases where you can't range-check your way out of an exception. Anything that interacts with "the outside world" (connecting to a web server, database, etc.) may throw an exception.

Do prevent as much as possible, but you still need the ability to react to everything else.
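A minimal sketch of that split (the method and path parameter are illustrative, not from the original): validate what you can up front with a guard clause, and catch only what you cannot rule out:

```csharp
using System;
using System.IO;

static string ReadConfig(string path)
{
    // Preventable condition: test for it instead of catching ArgumentNullException.
    if (path == null)
        throw new ArgumentNullException(nameof(path));

    try
    {
        // "Outside world" condition: the file can vanish or be locked
        // at any moment, so no amount of up-front checking removes the risk.
        return File.ReadAllText(path);
    }
    catch (IOException ex)
    {
        // React to what we could not prevent.
        Console.Error.WriteLine($"Could not read {path}: {ex.Message}");
        return null;
    }
}
```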

OTHER TIPS

As usual, the answer is "it depends", but I subscribe to the "fail fast" philosophy in general.

I prefer to use try/finally (sans catch) unless I can actually do something useful to recover from an exception in a particular block of code. Catching every possible exception isn't worth it. In general, failing fast is preferable to failing silently.
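A sketch of that try/finally (sans catch) pattern: the resource is always released, but the exception itself keeps propagating to a caller that can actually do something with it. The file-appending method here is illustrative:

```csharp
using System.IO;

static void AppendLog(string path, string line)
{
    StreamWriter writer = new StreamWriter(path, append: true);
    try
    {
        writer.WriteLine(line); // may throw; we deliberately don't catch it here
    }
    finally
    {
        // Runs whether or not WriteLine threw: cleanup happens,
        // but the failure still surfaces upstream instead of being swallowed.
        writer.Dispose();
    }
}
```

In C#, a `using` block is shorthand for exactly this try/finally-with-Dispose shape.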

If, on the other hand, you know how to recover from a particular exception, then yes, go do that.

Say you have a file transfer library. It will probably throw an exception if the transfer is interrupted due to a timeout or network failure. That's reasonable. You'll be annoyed if the library just fails silently; checking for a return code is far more error-prone, and not necessarily more readable. But perhaps you have a business rule for sending a bunch of files to a server that you should make at least 3 attempts to transfer the file before giving up and asking for user intervention. In that case, the business logic should handle the exception, try to recover, then do whatever it's supposed to do when the automatic solution fails (alert the user, schedule a later attempt, or whatever).
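The "three attempts, then ask for user intervention" rule might be sketched like this, assuming a transfer routine that signals interruption with an IOException (the routine and its failure mode are hypothetical):

```csharp
using System;
using System.IO;
using System.Threading;

static bool SendWithRetry(string file, Action<string> transferFile, int maxAttempts = 3)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            transferFile(file); // assumed to throw IOException on timeout/network failure
            return true;        // transferred successfully
        }
        catch (IOException)
        {
            if (attempt == maxAttempts)
                return false;   // give up; the caller alerts the user or schedules a retry
            Thread.Sleep(TimeSpan.FromSeconds(attempt)); // brief backoff, then try again
        }
    }
    return false; // not reached; satisfies the compiler's definite-return rule
}
```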

If you find code that does this:

try
{
    // do something that could throw
    // ...
}
catch {} //swallow the exception

or:

catch { return null; }

That's probably broken. Sure, sometimes code that you call can throw an exception that you really don't care about. But I often see people do this just so they don't have to "handle" the exception upstream; the practice makes things harder to debug.

Some people consider allowing exceptions to cascade up the chain of responsibility to be bad because you're just "hoping" someone upstream will "miraculously" know what to do. Those people are wrong. Upstream code is often the only place that can know what to do.

Occasionally, I'll try/catch and throw a different, more appropriate exception. However, when possible, a guard clause is better. E.g. if (argument == null) throw new ArgumentNullException(); is better than allowing a NullReferenceException to propagate up the call stack, because it's clearer what went wrong.
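Both ideas together might look like the following sketch (the method and the choice of XML parsing are illustrative):

```csharp
using System;
using System.Xml;

static XmlDocument LoadOrderDocument(string xml)
{
    // Guard clause: clearer than a NullReferenceException from deep inside LoadXml.
    if (xml == null)
        throw new ArgumentNullException(nameof(xml));

    var doc = new XmlDocument();
    try
    {
        doc.LoadXml(xml);
    }
    catch (XmlException ex)
    {
        // Translate to a more appropriate exception, keeping the
        // original as InnerException so no diagnostic detail is lost.
        throw new InvalidOperationException("Order document is not valid XML.", ex);
    }
    return doc;
}
```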

Conditions that "should never happen" or that make you say "I didn't know that could happen" should probably be logged (see, for example, jboss logging), but can be swallowed before they bring down your application, at least in some cases.

ETA: It is probably broken to catch a specific exception and then display a general, ambiguous error message. For your second example above, that sounds bad to me. For your first, "File not found" may be more reasonable (if you actually catch that specific exception, and not just "everything"), unless you have a better way to deal with that condition somewhere else. Modal message boxes are usually a bad "interaction design smell" to me, but that's mostly beside the point.

There are some cases where you are forced to be reactive: Check and operate doesn't always work.

Accessing a file on a disk : is the disk working? does the file exist? can I get read access to the file?

Accessing a database : is the server accepting connections? Are my credentials good? Do the database objects exist? Are the columns named/typed appropriately?

All of these things can change between the check and the operation.
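This is the classic check-then-act race: a sketch of why the pre-check can never replace the catch (the method is illustrative):

```csharp
using System;
using System.IO;

static FileStream OpenIfPossible(string path)
{
    // Cheap pre-check, but only a hint: another process can delete
    // or lock the file in the instant between this test and the open.
    if (!File.Exists(path))
        return null;

    try
    {
        return File.OpenRead(path);
    }
    catch (IOException)                 // file vanished or is locked after the check
    {
        return null;
    }
    catch (UnauthorizedAccessException) // read permission revoked after the check
    {
        return null;
    }
}
```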

In my C++ COM days, I learned not to bend over backwards to handle "possible" errors - that is, I don't routinely check the return value of every function call. That's because I know I can't succeed at handling unknown conditions:

  • I don't know when it will fail, or what that means.

  • I can't make it happen, so I can't test my error handling code. Code that isn't tested is code that doesn't work.

  • I don't know what the user would want me to do.

  • Routine error handling may let the program continue running but not in a reliable, predictable way that the user would benefit from.

  • Failing quickly & dramatically (instead of slowly and subtly) doesn't lull the user in to a false sense of security, and gives the programmer a better chance at diagnosing the problem.

Obviously, I'm not saying "never handle errors", I'm saying "only handle errors when you can add value".

The same principles apply to C#. I think it's a great idea to catch an exception if you can wrap it in a more relevant exception & throw that instead. And if you know for sure how your user would benefit from you handling an exception, then go for it. But otherwise, leave it alone.

I believe you are correct that using exceptions as control statements is a bad practice. Exceptions in .NET are slow. It's always much better to do your own checks rather than using exceptions to catch "bad" values, such as nulls, as Jon B mentioned.

I found this out first-hand in a .NET 1.1 app that created delimited text files of lab data. I was using exceptions to catch fields that had invalid data, but once I replaced the exceptions with appropriate checking code, the application sped up dramatically.
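The contrast might look like this sketch: the BCL's Try* pattern reports failure through a return value instead of an exception, which is far cheaper when bad fields are common (the field-counting method is illustrative):

```csharp
using System;

static int CountValidFields(string[] fields)
{
    int valid = 0;
    foreach (var field in fields)
    {
        // Slow on bad data: every invalid field raises and unwinds an exception.
        //   try { int.Parse(field); valid++; } catch (FormatException) { }

        // Fast: TryParse signals failure via its bool return value.
        if (int.TryParse(field, out _))
            valid++;
    }
    return valid;
}
```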

The clue is in the term "Exception".

If the argument being null is a valid business condition, then by definition, a null argument is not an exception, and should be tested prior to using it. Using exceptions to control program logic in such business conditions is sloppy coding, and the perpetrator should be slapped repeatedly with a wet haddock until he repents.

However, performance aside, always blindly testing for, in this example, null arguments, is no better than always trapping an exception when one is thrown.

If it should be impossible for an argument to be null, there is no need to test for null - except at the very top layer of the application, where unhandled exceptions are trapped, stored, and emailed to the support team, and hidden from the user, to whom a suitable message should be displayed by way of apology.
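A hedged sketch of that "very top layer" in .NET: one last-chance handler that logs and apologizes. `Log` and `RunApplication` are hypothetical stand-ins, defined here only so the sketch is self-contained:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Last-chance trap for anything nothing else handled.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            var ex = e.ExceptionObject as Exception;
            Log(ex); // store / email to the support team
            Console.Error.WriteLine("Something went wrong; the problem has been reported.");
        };

        RunApplication(); // anything it fails to handle lands in the handler above
    }

    // Hypothetical stand-ins for real logging and application code.
    static void Log(Exception ex) => Console.Error.WriteLine(ex);
    static void RunApplication() { /* application entry point */ }
}
```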

I once had to debug an app where every conceivable damn exception had been handled, and it was literally impossible to debug. It had to be pulled out of production and into a dev environment, along with its database, and all the handlers ripped out. After that it was trivial.

Entering a try-block in C++ can be an expensive operation (in terms of CPU cycles), so you should minimize using them for that reason. In C#, entering a try-block is cheap, so we can use it in a different way.

That said, catching exceptions in C# is expensive, and they should still be used for exceptional cases and not for general program logic, as others have stated here already.

C/C++ can be just as reactive. How many times have you seen a call such as fopen followed by a NULL check? The advantage of exceptions is that they can carry far more information than a NULL or false return value.

For most operations that throw exceptions, in any language that supports them, there is no way to test beforehand that the operation will succeed.

Runtime exceptions are something altogether different. These are normally caused by invalid data or misuse of data (division by zero). In my view, these kinds of exceptions should never be caught as a way of dealing with them.

It's hard to discuss the best use of exceptions without an overall discussion of exception strategy. For instance, your strategy could be any one of the following:

  • handle all exceptions as close to the point of failure as possible
  • log an exception, then rethrow to the caller
  • rollback to a pre-exception state, and try to continue
  • translate the exception to an appropriate error message and display to the user

Exception handling is often complicated by the fact that several developers working on the same project may not share the same strategy - or, worse, may not know that one exists. So it is important that everyone on a team knows and understands what the strategy is.
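The "log an exception, then rethrow to the caller" strategy from the list above can be sketched like this (the unit-of-work delegate is illustrative, and Console.Error stands in for a real logger):

```csharp
using System;

static void ProcessWithLogging(Action processBatch)
{
    try
    {
        processBatch(); // the unit of work
    }
    catch (Exception ex)
    {
        Console.Error.WriteLine(ex); // log close to the failure, where context is richest
        throw; // 'throw;' (not 'throw ex;') rethrows and preserves the original stack trace
    }
}
```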

For a good starting point on exception handling and strategy, see Ned Batchelder's blog post Exceptions in the Rainforest.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow