Question

Assume you have a function that normally can never fail, for example:

std::string convert_integer_to_string(int x);

In principle, this would be a candidate for noexcept. However, the implementation most likely involves dynamic memory management, so it could always throw a std::bad_alloc when allocating memory with the new operator.

Is it recommended to annotate the function as noexcept?

From a practical point of view, it is extremely difficult to handle out-of-memory situations in a reasonable way. Most programs just assume that there is enough memory available. Calling std::terminate, as it would happen if a noexcept function throws std::bad_alloc, seems to be reasonable in that case.

For me noexcept is some form of documentation. It is a promise that you (or the optimizer) can safely assume that this function will never throw. If you are programming an application that doesn't care about out-of-memory situations, it is still a valid assumption.

I guess the safest recommendation is to never use noexcept if a std::bad_alloc exception could be thrown. On the other hand, I wonder if there are advantages to use noexcept anyway, assuming that you don't care about out-of-memory situations (i.e., if std::terminate is OK).


Solution

If a function can throw an exception for whatever reason, even if it is only std::bad_alloc, you should not declare it as noexcept. There are relatively few functions which really can't throw an exception and where it actually also matters. The primary need for noexcept functions is to allow the detection of the available error-recovery options in case of an exception: for example, std::vector<T, A> can use move construction when inserting an object, assuming that move construction doesn't throw. If move construction can throw, moving objects cannot be used to recover from an exception when implementing strongly exception-safe operations. Thus, if move construction can fail for the type T, an instantiation of std::vector<T, A> cannot move objects but needs to copy them.

In particular, do not use noexcept as false documentation: it is a breach of contract if the function actually can throw. The fact that the system reacts with some level of defined behavior in case of this breach doesn't mean that you should take advantage of it. And while simple programs probably won't recover and will just die when running out of memory, real programs may at least need to store sufficient state to recover from the mess they leave behind when dying, i.e., it is not acceptable for any function to make a decision about killing the program (unless, of course, that is the documented intent of the function).

OTHER TIPS

I am not sure I'd worry about out of memory exceptions much.

Under some OSes (Linux at least) the default behaviour when you run out of memory is to be killed by the OS's OOM killer. This happens when you write to the memory (not when you allocate it), and you won't be given a chance to run any cleanup code. This feature is called memory overcommit.

Even if you do get the information that you've run out of memory, it's pretty hard to deal with those errors properly: you need to make absolutely sure that your exception handler doesn't allocate memory. That includes all the functions you call from that error handler; you also need to make sure that any generic exception handler that might have been triggered along the way (e.g. logging) doesn't use any memory. The best you can usually hope for is some simple cleanup before you shut your program down.

Note that you can also use std::nothrow to check the result of an allocation without using exceptions (provided your OS actually gives you that information at allocation time). It might make sense to do so when you're doing a large allocation that you think might fail. This also has the nice property that instead of dealing with (potentially) uncaught exceptions you'll get a nullptr that will be fairly easy to debug.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow