Question

Almost every piece of software has errors, and those errors need to be assigned levels: grave errors may simply stop your program, while simple notices can be dismissed with a click. I've always assigned them a numeric degree of importance. But is there a "general rule" among programmers on how to choose these degrees?

  • Should a higher degree of importance be represented by a larger number (e.g. 500) or a smaller one (e.g. 5)? Is there a reason why?
  • Should error levels be widely spaced (100, 200, 300, ...) or close together (100, 101, 102)? And again, are there any advantages to either approach?

Solution

Don't add unnecessary levels to your error codes. You could spend ages pointlessly discussing whether a given error should be a level 83 or just a level 82. All the end-user cares about is whether the system is working properly or if it's broken.

So I would use something like INFO (not an error), WARNING (the system is working fine, but might not be doing what you expected), ERROR (something went wrong, but the system recovered) and FATAL (the system is broken, and isn't working any more). You could perhaps split ERROR into MAJOR and MINOR, depending on how the error has affected the output.

Whether error codes increase or decrease with severity is entirely arbitrary. Just make sure it's well documented.
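
To make that concrete, here is a minimal C sketch of such a scheme; the names and numeric values are just one possible convention, not a standard, and the ordering only matters once it's documented.

/* Illustrative only: lower numbers mean higher severity here, but the
 * opposite order works just as well as long as it is documented. */
typedef enum
{
    SEV_FATAL = 0, /* the system is broken and no longer working */
    SEV_ERROR,     /* something went wrong, but the system recovered */
    SEV_WARNING,   /* working, but possibly not doing what you expected */
    SEV_INFO       /* not an error at all */
} severity_t;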

OTHER TIPS

The design question is: What are people going to do with the level numbers?

Mostly: filtering the log messages. For that, five levels suffice. See e.g. Android logging levels. (If you need more selective filtering, it'll most likely be more effective to filter on other criteria like the source module.)
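
As a rough sketch of what level-based filtering amounts to in practice (the level names, threshold variable, and log_msg function below are invented for illustration, not taken from any particular library):

#include <stdio.h>

/* Illustrative severity levels, ordered so a lower number means a
 * more severe message. */
enum { LVL_ERROR, LVL_WARN, LVL_INFO, LVL_DEBUG, LVL_VERBOSE };

/* Global threshold: only messages at this level or more severe are emitted. */
static int g_log_threshold = LVL_INFO;

static void log_msg(int level, const char *module, const char *text)
{
    if (level > g_log_threshold)
        return;                          /* filtered out by level */
    printf("[%s] %s\n", module, text);   /* could also filter on module here */
}

int main(void)
{
    log_msg(LVL_ERROR, "net", "connection lost");      /* emitted */
    log_msg(LVL_DEBUG, "net", "packet hexdump ...");   /* dropped */
    return 0;
}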

Do pick a rule for deciding which level to use when. If the rule isn't simple, you'll spend more time deciding which level to use each time than you'll get back in value.

The Android levels are based on severity (impact), which makes it fairly easy to decide which level to use.

Don't display error level numbers to the software's users unless they provide clear utility, such as enabling a build tool to decide whether to proceed. Otherwise, a usability test would likely show that users are annoyed at having to read past those numbers.

Error levels are used for debugging, i.e. determining what goes into the log. Pick a direction for level of importance and stick with it.

It's often best if you can change what error level the log is reporting at run time.

typedef enum
{
    E_LOG_NONE = 0, ///< never log - unit tests only
    E_LOG_ERROR,    ///< highest priority - always report this error
    E_LOG_WARN,     ///< warnings + errors: not an error, but a strange situation
    E_LOG_INFO,     ///< anything appropriate for production logging
    E_LOG_VERBOSE,  ///< development info - function calls or message receipts
    E_LOG_DEBUG     ///< super noisy - print hex output, etc.
} LOG_VERBOSITY_LEVELS;

So normally the log only reports at level "X", but you can increase the verbosity when you need to. Ideally you'll also be able to set different sections of the code to different verbosity levels, as sketched below.
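
One hedged sketch of per-module, run-time-adjustable verbosity, assuming the LOG_VERBOSITY_LEVELS enum above is in scope (the module table and function names are invented for illustration):

#include <string.h>

struct module_verbosity {
    const char *name;
    int         level;     /* current LOG_VERBOSITY_LEVELS threshold */
};

static struct module_verbosity g_modules[] = {
    { "net",     E_LOG_INFO },
    { "storage", E_LOG_WARN },
};

/* Change a module's verbosity at run time, e.g. from a debug console. */
static void set_verbosity(const char *module, int level)
{
    for (size_t i = 0; i < sizeof g_modules / sizeof g_modules[0]; i++)
        if (strcmp(g_modules[i].name, module) == 0)
            g_modules[i].level = level;
}

/* A message is logged only if its level is within its module's threshold. */
static int should_log(const char *module, int level)
{
    for (size_t i = 0; i < sizeof g_modules / sizeof g_modules[0]; i++)
        if (strcmp(g_modules[i].name, module) == 0)
            return level <= g_modules[i].level;
    return level <= E_LOG_INFO;          /* default for unknown modules */
}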

You can also give each error a specific "error number" for the purpose of data mining the log: for example, "grep [error number string] [log file] | wc -l" will report how many instances of error number X occurred, and this can obviously be automated.
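
For example, a minimal sketch of one possible convention for emitting such grep-able error numbers (the "E0042"-style tag format and the macro name are assumptions, not a standard):

#include <stdio.h>

/* Hypothetical convention: give every distinct error a stable, grep-able
 * tag such as "E0042" so occurrences can be counted from the log, e.g.
 *     grep -c "E0042" application.log */
#define LOG_ERROR(num, msg) fprintf(stderr, "E%04d: %s\n", (num), (msg))

int main(void)
{
    LOG_ERROR(42, "could not open config file");
    return 0;
}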

Licensed under: CC-BY-SA with attribution