Question

I have come across the following type of code many times, and I wonder whether it is good practice (from a performance perspective) or not:

try
{
    ... // some code
}
catch (Exception ex)
{
    ... // Do something
    throw new CustomException(ex);
}

Basically, the coder is wrapping the exception in a custom exception and throwing that again.

How does this differ in Performance from the following two:

try
{
    ... // some code
}
catch (Exception ex)
{
    ... // Do something
    throw ex;
}

or

try
{
    ... // some code
}
catch (Exception ex)
{
    ... // Do something
    throw;
}

Putting aside any functional or coding best practice arguments, is there any performance difference between the 3 approaches?

Solution

@Brad Tutterow

The exception is not being lost in the first case; it is being passed into the constructor. I will agree with you on the rest, though: the second approach is a very bad idea because of the loss of the stack trace. When I worked with .NET, I ran into many cases where other programmers did just that, and it frustrated me to no end when I needed to see the true cause of an exception, only to find it rethrown from a huge try block with no indication of where the problem originated.

I also second Brad's comment that you shouldn't worry about the performance. This kind of micro-optimization is a HORRIBLE idea. Unless you are throwing an exception in every iteration of a long-running for loop, you will more than likely not run into performance issues from the way you use exceptions.

Always optimize performance when you have metrics that indicate you NEED to optimize performance, and then hit the spots that are proven to be the culprit.

It is much better to have readable code with easy debugging capabilities (i.e., not hiding the stack trace) than to make something run a nanosecond faster.

A final note about wrapping exceptions in a custom exception: this can be a very useful construct, especially when dealing with UIs. You can wrap every known and reasonable exceptional case in some base custom exception (or one that extends from said base exception), and the UI can then just catch this base exception. When caught, the exception needs to provide a means of displaying information to the user, say a ReadableMessage property or something along those lines. That way, any time the UI misses an exception, it is because of a bug you need to fix, and any time it catches one, it is a known error condition that can and should be handled properly by the UI.
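For illustration, here is a minimal sketch of that pattern. The AppException base class, its ReadableMessage property, and the Orders/OrderScreen types are hypothetical names invented for this example, not part of any framework:

using System;

// Hypothetical base type: every known, user-presentable error condition derives from it.
public class AppException : Exception
{
    public AppException(string readableMessage, Exception inner)
        : base(readableMessage, inner) { }

    // Text that is safe to show directly to the user.
    public string ReadableMessage => Message;
}

// One concrete known condition.
public sealed class OrderSaveException : AppException
{
    public OrderSaveException(Exception inner)
        : base("The order could not be saved. Please try again.", inner) { }
}

public static class Orders
{
    public static void Save()
    {
        try
        {
            throw new TimeoutException("db timeout"); // stand-in for the real work
        }
        catch (TimeoutException ex)
        {
            throw new OrderSaveException(ex); // wrap into the known hierarchy
        }
    }
}

public static class OrderScreen
{
    public static void OnSaveClicked()
    {
        try
        {
            Orders.Save();
        }
        catch (AppException ex)
        {
            Console.Error.WriteLine(ex.ReadableMessage); // known condition: handled by the UI
        }
        // Anything that is not an AppException escapes on purpose:
        // an unhandled exception here is a bug to fix, not one to swallow.
    }
}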

OTHER TIPS

Obviously you incur the penalty of creating a new object (the new exception), so, exactly as with every line of code you add to your program, you must decide whether the better categorization of exceptions pays for the extra work.

As a piece of advice for making that decision: if your new exception does not carry extra information about the error, you can forget about constructing new exceptions.

However, in other circumstances, having a hierarchy of exceptions is very convenient for the users of your classes. Suppose you're implementing the Facade pattern; neither of the scenarios considered so far is good:

  1. It is not good to raise every exception as a plain Exception object, because you are (probably) losing valuable information.
  2. Nor is it good to rethrow every kind of exception you catch, because doing so defeats the purpose of the facade.

In this hypothetical case, the best thing to do is to create a hierarchy of exception classes that abstracts your users away from the inner complexities of the system while still letting them know something about the kind of error produced.
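As a sketch of what such a hierarchy might look like, here is a hypothetical facade over a storage subsystem (the type names and the File.ReadAllText stand-in are purely illustrative):

using System;
using System.IO;

// Facade-level exceptions: callers only ever see these types,
// never the IO or parsing types used internally.
public class StorageException : Exception
{
    public StorageException(string message, Exception inner)
        : base(message, inner) { }
}

public sealed class StorageUnavailableException : StorageException
{
    public StorageUnavailableException(Exception inner)
        : base("The storage backend is unavailable.", inner) { }
}

public class StorageFacade
{
    public string Read(string key)
    {
        try
        {
            return File.ReadAllText(key); // stand-in for the real backend call
        }
        catch (IOException ex)
        {
            // Translate the implementation detail into the facade's vocabulary,
            // keeping the original exception as InnerException for diagnostics.
            throw new StorageUnavailableException(ex);
        }
    }
}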

As a side note:

I personally dislike the use of exceptions (hierarchies of classes derived from the Exception class) to implement logic, as in this case:

try
{
    // something that will raise an exception almost half the time
}
catch (InsufficientFunds e)
{
    // Inform the customer that they are broke
}
catch (UnknownAccount e)
{
    // Ask for a new account number
}
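What I'd prefer is a Try-style API in the spirit of int.TryParse, where the expected outcomes become return values instead of exceptions. A minimal sketch, with hypothetical Account and Bank types:

using System.Collections.Generic;

public class Account
{
    public decimal Balance { get; private set; }

    // Returns false instead of throwing when funds are insufficient.
    public bool TryWithdraw(decimal amount)
    {
        if (amount > Balance) return false;
        Balance -= amount;
        return true;
    }
}

public class Bank
{
    private readonly Dictionary<string, Account> accounts = new();

    public string Withdraw(string accountNumber, decimal amount)
    {
        if (!accounts.TryGetValue(accountNumber, out var account))
            return "Unknown account: ask for a new account number";
        if (!account.TryWithdraw(amount))
            return "Insufficient funds: inform the customer";
        return "OK";
    }
}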

Like David, I suppose that the second and third perform better. But would any one of the three perform poorly enough to be worth spending time worrying about? I think there are larger problems than performance to worry about.

FxCop always recommends the third approach over the second so that the original stack trace is not lost.

Edit: Removed stuff that was just plain wrong, which Mike was kind enough to point out.

Don't do:

try
{
    // some code
}
catch (Exception ex) { throw ex; }

As this will lose the stack trace.

Instead do:

try
{
    // some code
}
catch (Exception) { throw; }

Just the throw will do; you only need the exception variable if you want to pass it as the inner exception of a new custom exception.

As others have stated, the best performance comes from the bottom one, since you are just rethrowing an existing object. The middle one is the least correct because it loses the stack trace.
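To make the difference concrete, here is a small self-contained demo (hypothetical program): the first rethrow discards the frame where the exception originated, while the second keeps it.

using System;

class RethrowDemo
{
    static void Boom() => throw new InvalidOperationException("boom");

    static void LosesTrace()
    {
        try { Boom(); }
        catch (Exception ex) { throw ex; } // resets the trace: Boom's frame disappears
    }

    static void KeepsTrace()
    {
        try { Boom(); }
        catch (Exception) { throw; } // preserves the original trace, including Boom
    }

    static void Main()
    {
        try { LosesTrace(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace); } // trace starts at LosesTrace

        try { KeepsTrace(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace); } // trace still shows Boom
    }
}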

I personally use custom exceptions if I want to decouple certain dependencies in code. For example, I have a method that loads data from an XML file. This can go wrong in many different ways.

It could fail to read from the disk (FileIOException), the user could be trying to access it from somewhere they are not allowed (SecurityException), the file could be corrupt (XmlParseException), or the data could be in the wrong format (DeserialisationException).

In this case, to make it easier for the calling class to make sense of all this, each of these exceptions is rethrown as a single custom exception (FileOperationException). That means the caller does not need references to System.IO or System.Xml, but can still find out what went wrong through an enum and any other important information.
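A rough sketch of that arrangement, mapping the failure kinds described above onto real framework exception types (the enum values and the XmlLoader class are illustrative, not the actual code):

using System;
using System.IO;
using System.Security;
using System.Xml;

// Error categories exposed to the caller; the answer above only says "an enum".
public enum FileOperationError
{
    ReadFailure,
    AccessDenied,
    CorruptFile
}

public sealed class FileOperationException : Exception
{
    public FileOperationException(FileOperationError error, Exception inner)
        : base($"File operation failed: {error}", inner)
    {
        Error = error;
    }

    public FileOperationError Error { get; }
}

public static class XmlLoader
{
    public static XmlDocument Load(string path)
    {
        var doc = new XmlDocument();
        try
        {
            doc.Load(path);
        }
        catch (IOException ex)
        {
            throw new FileOperationException(FileOperationError.ReadFailure, ex);
        }
        catch (SecurityException ex)
        {
            throw new FileOperationException(FileOperationError.AccessDenied, ex);
        }
        catch (XmlException ex)
        {
            throw new FileOperationException(FileOperationError.CorruptFile, ex);
        }
        return doc;
    }
}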

As stated, don't try to micro-optimize something like this; the act of throwing an exception at all is the slowest thing that occurs here. The best improvement you can make is to avoid throwing the exception in the first place.

public bool Load(string filepath)
{
    if (File.Exists(filepath)) // Avoid throwing by checking state first
    {
        // Wrap anyway, in case something changes between the check and the operation
        try { /* ... */ }
        catch (IOException ioFault) { /* ... */ }
        catch (OtherException otherFault) { /* ... */ }
        return true; // Inform caller of success
    }
    else { return false; } // Inform caller of failure due to state
}

The throw in your first example has the overhead of creating a new CustomException object.

The re-throw in your second example throws the same exception object (its runtime type is unchanged), but it resets the stack trace to the point of the rethrow.

The re-throw in your third example rethrows the original exception, preserving both its type and its stack trace.

So the second and third examples use fewer resources.

Wait... why do we care about performance if an exception is thrown? Unless we're using exceptions as part of normal application flow (which is WAYYYY against best practice).

I've only seen performance requirements with regard to success, never with regard to failure.

From a pure performance standpoint, I'd guess that the third case is the most performant. The other two need to capture a fresh stack trace, and the first also constructs a new object; both operations are potentially fairly time-consuming.

Having said that, these three blocks of code have very different (external) behaviors, so comparing them is like asking whether QuickSort is more efficient than adding an item to a red-black tree. It's not as important as selecting the right thing to do.
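If you really want numbers for your own environment, a rough Stopwatch sketch along these lines (not a rigorous benchmark; ApplicationException merely stands in for the question's CustomException) would let you compare all three:

using System;
using System.Diagnostics;

class RethrowBenchmark
{
    const int Iterations = 100_000;

    static void Boom() => throw new InvalidOperationException("boom");

    static void Run(string label, Action body)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            try { body(); } catch (Exception) { /* swallow: we only measure */ }
        }
        sw.Stop();
        Console.WriteLine($"{label}: {sw.ElapsedMilliseconds} ms");
    }

    static void Main()
    {
        Run("wrap    ", () => { try { Boom(); } catch (Exception ex) { throw new ApplicationException("wrapped", ex); } });
        Run("throw ex", () => { try { Boom(); } catch (Exception ex) { throw ex; } });
        Run("throw   ", () => { try { Boom(); } catch (Exception)    { throw; } });
    }
}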
