Question

I see a lot of code like the example below in C++, which is usually recommended as a convention:

class Foo
{
public:

    Foo()
    {
        bar = new int[100];
        size = 100;
    }

    // ... copy/assignment stuff ...

    ~Foo()
    {
        if (bar) // <--- needed?
        {
            delete[] bar;
            bar = nullptr; // <--- needed?
        }

        size = 0; // <--- needed?
    }

private:

    int* bar;
    int size;
};

To me, the three statements if (bar) { ... }, bar = nullptr;, and size = 0; are redundant for two reasons:

  1. Calling delete (or delete[]) on a null pointer is perfectly safe; it does nothing.
  2. Once the object is destroyed and its memory released, setting bar to nullptr and size to 0 accomplishes nothing, since the members themselves are about to cease to exist.

Are these justifications correct? Are these statements truly redundant? If so, why do people keep using and suggesting them? I would like to see some potential problems that could be solved by keeping this convention.

Solution

You're right, those aren't needed and some compilers will optimize them away anyway.

However, the reason people usually do this is to help spot problems. For example, say you don't set the pointer to null. The object is destroyed, but later you incorrectly access the (now dangling) pointer. Since the runtime will probably not clear the memory, you may still see something that looks valid there. Nulling the pointer is only valuable for debugging, and accessing it afterwards is still undefined behavior, but it sometimes pays off.

OTHER TIPS

Where is this usually recommended as a convention?

What's usually recommended is to NOT manually manage memory allocation. If you used a std::vector (or smart pointer if you really want) to implement this, all the problems go away completely and you don't need to write a destructor at all.

However, if you really insist on doing this (and have also written the copy constructor and copy assignment operator, which you haven't shown, for correctness), then I find that the extra work in the destructor actually hides what's really happening and provides little value at the expense of code obfuscation.

You should use delete[] (not plain delete) for arrays.

It's true that the if is useless, and the performance impact of the extra assignments can usually be neglected. But when you debug, you are sometimes happy to have things in a safe state (for example, being prevented from staring at freed memory and scratching your head). When the object is buried deep inside a data structure, reinitializing size and bar helps.

By the way, a better way of writing the constructor would be (RAII: http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization):

Foo() :
    bar(new int[100]),
    size(100)
{
}

EDIT :

  • Old habits die hard. For this, you should use a scoped_ptr and be happy with it (or, even better, a vector), and kill plain old dumb pointers.
  • Older developers have written too much C, or switch between too many languages: better safe than sorry when it's cheap.

As mentioned in many other answers, the if(bar) is rather silly.

Setting freed pointers to nullptr and resetting size = 0 has its uses, though. Some of these redundancies pay off in class hierarchies with virtual destructors. During destruction, what happens if a derived class frees a pointer but doesn't set it to nullptr, and then the base class tries to free it as well? Similarly, what happens if a base class assumes that a pointer is valid whenever size > 0?

There are minor bookkeeping details like these that can create subtle bugs that are extremely hard to debug. Might as well be pedantic in some of these cases and let the compiler take away what is not necessary.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow