Question

In this page there's a piece of code:

#include <cstring> // for strlen, strncpy

class MyString
{
private:
    char *m_pchString;
    int m_nLength;

public:
    MyString(const char *pchString="")
    {
        // Find the length of the string
        // Plus one character for a terminator
        m_nLength = strlen(pchString) + 1;

        // Allocate a buffer equal to this length
        m_pchString = new char[m_nLength];

        // Copy the parameter into our internal buffer
        strncpy(m_pchString, pchString, m_nLength);

        // Make sure the string is terminated
        m_pchString[m_nLength-1] = '\0';
    }

    ~MyString() // destructor
    {
        // We need to deallocate our buffer
        delete[] m_pchString;

        // Set m_pchString to null just in case
        m_pchString = 0;
    }

    char* GetString() { return m_pchString; }
    int GetLength() { return m_nLength; }
};

In the destructor, the writer sets m_pchString to null and says "just in case". What can happen if we don't set it to null? We have already deallocated that memory, and the object's members cease to exist once the destructor finishes anyway. What is the benefit of doing so?


Solution

If the pointer is set to NULL, a bug where the deleted object is used, for example through a dangling pointer to the object, might actually be hidden, because the buggy code may check for NULL before trying to use the data. In one way of thinking that might be considered 'safe' behavior, but what's really happening is that you're hiding a defect that has already occurred.

Keep in mind that hiding bugs is not the same as fixing bugs. That dangling pointer might actually be used again after the memory has been reallocated, with a valid pointer placed into that same memory location. At that point the buggy code will start using the new, valid pointer, but for incorrect reasons.
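A minimal sketch of the masking effect described above (the `describe` helper is hypothetical, standing in for any "defensive" code that checks for NULL before use):

```cpp
#include <cassert>
#include <cstring>

// "Defensive" caller code: if a destructor nulled the pointer, this branch
// silently swallows the use-after-delete instead of surfacing it.
const char* describe(const char* p) {
    if (p == nullptr)
        return "(null - bug silently hidden)";
    return p;  // with a dangling or poison pointer, this would misbehave or crash
}
```

The NULL check makes the call "succeed", so the underlying bug (using a deleted object) is never noticed.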

So it actually might be better to set the pointer to something that will cause a crash if it's improperly used:

m_pchString = (char*) 0xdeaddead;

Now if something tries to use the member pointer of the deleted object (which is a bug), it will fail fast and the bug will be caught rather than hidden.
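As a sketch of that idea (the `Buffer` class and the `0xdeaddead` sentinel are illustrative, borrowed from the snippet above), a destructor can leave behind a deliberately invalid pointer so any later dereference faults immediately:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative class whose destructor poisons its pointer instead of
// nulling it. Any dereference after destruction would fault at once
// rather than pass a NULL check.
class Buffer {
public:
    explicit Buffer(std::size_t n) : m_data(new char[n]) {}
    ~Buffer() {
        delete[] m_data;
        // Deliberately invalid address: dereferencing it crashes fast.
        m_data = reinterpret_cast<char*>(0xdeaddead);
    }
    char* data() { return m_data; }
private:
    char* m_data;
};
```

While the object is alive the buffer works normally; only a buggy use after destruction hits the poison value.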

In debug builds with MSVC (and possibly other toolchains) you'll already get that behavior from the debug heap: it fills memory freed via free() or operator delete with the value 0xdd: https://stackoverflow.com/a/370362/12711

OTHER TIPS

Besides the answer Michael Burr provided, with the help of the link Tietbohl provided, this can be a good answer too. I quote it:

It can help catch many references to free'd memory (assuming your platform faults on a deref of a null pointer).

It won't catch all references to free'd memory, for example, if you have a copy of the pointer lying around. But some is better than none.

It will mask a double-delete, but I find those are far less common than accesses to already freed memory.

In many cases the compiler is going to optimize it away. So the argument that it's unnecessary doesn't persuade me.

If you're already using RAII, then there aren't many deletes in your code to begin with, so the argument that the extra assignment causes clutter doesn't persuade me.

It's often convenient, when debugging, to see the null value rather than a stale pointer.

If this still bothers you, use a smart pointer or a reference instead.
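Following that last suggestion, here is a sketch of the same MyString idea using std::unique_ptr (assumes C++11 or later), so there is no manual delete, and therefore no dangling member pointer to worry about:

```cpp
#include <cstring>
#include <memory>

// Sketch: MyString rewritten with a smart pointer. The array is freed
// automatically when the object is destroyed; no explicit destructor needed.
class MyString {
    std::unique_ptr<char[]> m_str;
    int m_len;
public:
    explicit MyString(const char* s = "")
        : m_str(new char[std::strlen(s) + 1]),
          m_len(static_cast<int>(std::strlen(s)) + 1) {
        std::strcpy(m_str.get(), s);  // copy including the terminator
    }
    const char* GetString() const { return m_str.get(); }
    int GetLength() const { return m_len; }
};
```

Because unique_ptr owns the buffer, copy construction is disabled by default, which also avoids the double-delete problem the original class would have.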

There is no harm if you don't set it to NULL. After the object is deleted, you shouldn't be accessing its member data anyway.

When the memory has already been deleted, nulling the pointer in the destructor is simply good programming practice; it only helps in case a hidden bug is present.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow