Question

What negative or undefined behaviour could arise from calling a save function (à la boost-serialize) within a class's destructor?


Solution

You have two concerns, one of which is a consequence of the other:

1) You should not allow any exception to escape the destructor. If you do, and the destructor is being called as part of stack unwinding, then the runtime will call std::terminate() on your program. (Since C++11, destructors are noexcept by default, so an escaping exception terminates the program even outside of unwinding.) This is not undefined behaviour, but it's pretty negative.
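A minimal sketch of that failure mode (the Saver class and messages are invented for illustration):

```cpp
#include <cstdio>
#include <stdexcept>

struct Saver {
    // noexcept(false) opts out of the implicit noexcept that C++11 puts on
    // destructors; even so, throwing while the stack is already unwinding
    // from another exception makes the runtime call std::terminate().
    ~Saver() noexcept(false) {
        throw std::runtime_error("save failed");
    }
};

int main() {
    try {
        Saver s;
        throw std::runtime_error("original error"); // destroys s during unwinding
    } catch (const std::exception& e) {
        std::puts(e.what()); // never reached: the program terminates first
    }
}
```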

Because of this (and also of course because destructors don't return a value):

2) There's no reasonable way for your destructor to indicate success or failure ("reasonable" meaning, without building some kind of separate error-reporting system). Since the user of your class might want to know whether the save happened or not, preferably with a sensible API to do so, this means that destructors can only save data on a "best effort" basis. If the save fails then the object still gets destroyed, and so presumably its data is lost.

There is a strategy for such situations, used for example by file streams. It works like this:

  • have a flush() (or in your case save()) function that saves the data
  • call this function from the destructor if the object has not already been saved/flushed (or, more likely, call it unconditionally and have the function itself know whether there is any real work left to do; for file streams this happens via close()). Catch any exceptions it may throw and ignore any errors; a sketch of this pattern follows the list.
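Here is a minimal sketch of that pattern, assuming a hypothetical Document class (the names, members, and plain-text file format are invented for the example):

```cpp
#include <fstream>
#include <stdexcept>
#include <string>
#include <utility>

class Document {
public:
    explicit Document(std::string path) : path_(std::move(path)) {}

    // Explicit save: reports failure by throwing, and records that the
    // current state has been persisted.
    void save() {
        std::ofstream out(path_);
        if (!out) throw std::runtime_error("cannot open " + path_);
        out << contents_;
        out.flush();
        if (!out) throw std::runtime_error("write failed for " + path_);
        dirty_ = false;
    }

    void set_contents(std::string text) {
        contents_ = std::move(text);
        dirty_ = true;
    }

    ~Document() {
        if (!dirty_) return;  // save() already ran: nothing left to do
        try {
            save();           // last-ditch, best-effort attempt
        } catch (...) {
            // Swallow everything: no exception may escape a destructor.
        }
    }

private:
    std::string path_;
    std::string contents_;
    bool dirty_ = false;
};
```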

That way, users who need to know whether the save succeeded call save() to find out. Users who don't care (or who would still like the save to succeed if possible when the object is destroyed as part of stack unwinding) can let the destructor try.

That is, your destructor can attempt to do something that might fail, as a last-ditch effort, but you should additionally provide a means for users to do that same thing "properly", in a way that informs them of success or failure.
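Continuing the hypothetical Document sketch above, usage might look like this:

```cpp
Document d("report.txt");          // arbitrary example path
d.set_contents("important data");
try {
    d.save();                      // callers who care learn about failure here
} catch (const std::exception& e) {
    // handle or report the error; the object is still alive and usable
}
// If save() succeeded, the destructor sees dirty_ == false and does nothing;
// if the caller skipped save(), the destructor makes a best-effort attempt.
```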

And yes, this does incidentally mean that using streams without flushing them and checking the stream state for failure is not using them "properly", because you have no way of knowing whether the data was ever written or not. But there are situations where that's good enough, and in the same kinds of situation it might be good enough for your class to save in its destructor.
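For comparison, "proper" stream use means flushing and then checking the stream state (the file name here is arbitrary):

```cpp
#include <fstream>
#include <iostream>

int main() {
    std::ofstream out("data.txt");  // arbitrary example file
    out << "important data";
    out.flush();                    // or out.close(), which also flushes
    if (!out) {                     // failbit/badbit: the write may be lost
        std::cerr << "write failed; data may never have reached the file\n";
        return 1;
    }
    return 0;
}
```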

OTHER TIPS

The issue is that boost-serialize can throw an exception. If the destructor is being called because an exception is already propagating and cleaning up the stack as it unwinds, and the destructor then throws another exception, your application will terminate.

So to summarize: you only ever want one exception propagating at a time. If you end up with more than one, your application will terminate, which defeats the purpose of exceptions.
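If you do attempt best-effort work in a destructor, C++17's std::uncaught_exceptions() lets you detect whether the destructor is running as part of stack unwinding. A sketch (the class and its behaviour are invented for illustration):

```cpp
#include <exception>
#include <iostream>

class BestEffortSaver {
public:
    ~BestEffortSaver() {
        // More in-flight exceptions now than at construction means this
        // destructor is running because the stack is unwinding.
        const bool unwinding =
            std::uncaught_exceptions() > initial_exceptions_;
        try {
            // ... attempt the save here (may throw) ...
        } catch (...) {
            if (unwinding) {
                // Some other exception is propagating; swallow silently,
                // since letting ours escape would call std::terminate().
            } else {
                std::cerr << "save failed during normal destruction\n";
            }
        }
    }

private:
    int initial_exceptions_ = std::uncaught_exceptions();
};
```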

It is a bad idea.

  1. A destructor should never throw, and I/O operations are very likely to throw, since whether I/O succeeds is largely out of your control.
  2. To me, at least, it's extremely unintuitive:
    a. for one, it means every object of that type gets serialized on destruction (unless the destructor has checks to prevent that);
    b. destructors have a very clear purpose: to clean up. Storing data is essentially the opposite of cleaning up.

One more point: what do you actually gain by serializing in the destructor?

If you are making use of RAII, you know the serialization attempt will run even if an exception is thrown. But this isn't much of a benefit: even though the destructor runs, you can't guarantee the serialization will succeed, since it can throw (in this case at least). You also lose much of the ability to handle a failure properly.

No, it's not a bad idea, but it isn't a terribly good idea either! Sometimes, though, it's the right thing to do.

As long as you prevent exceptions from escaping your destructor, there is nothing against it.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow