Question

If we assume that std::shared_ptr stores a reference count (which I realize the standard does not require, but I am unaware of any implementations that don't), that reference count has a limited number of bits, and that means there is a maximum number of references that are supported. That leads to two questions:

  • What is this maximum value?
  • What happens if you try to exceed it (e.g., by copying a std::shared_ptr that refers to an object with the maximum reference count)? Note that std::shared_ptr's copy constructor is declared noexcept.

Does the standard shed any light on either of these questions? How about common implementations, e.g., gcc, MSVC, Boost?


Solution

We can get some information from the shared_ptr::use_count() function. §20.7.2.2.5 says:

long use_count() const noexcept;

Returns: the number of shared_ptr objects, *this included, that share ownership with *this, or 0 when *this is empty.

[Note: use_count() is not necessarily efficient.—end note ]

At first sight, the long return type seems to answer the first question. However, the note seems to imply that shared_ptr is free to use any kind of reference counting it wants, including something like a list of references. If that were the case, then in theory there would be no maximum reference count (although there would certainly be a practical limit).

I could find no other reference in the standard to a limit on the number of references to the same object.

It's interesting to note that use_count() is documented both as non-throwing and (obviously) as reporting the count correctly; unless the implementation actually uses a long member for the count, I don't see how both of these can be guaranteed in theory at all times.
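As a quick sanity check of what the interface itself guarantees, here is a minimal sketch; it relies only on the standard signature and semantics of use_count(), not on any particular counting strategy:

#include <cstdio>
#include <memory>
#include <type_traits>

int main()
{
    std::shared_ptr<int> a = std::make_shared<int>(1);
    std::shared_ptr<int> b = a;     // a and b share ownership
    std::shared_ptr<int> empty;     // owns nothing

    // The standard mandates long as the return type of use_count().
    static_assert(std::is_same<decltype(a.use_count()), long>::value,
                  "use_count() returns long");

    std::printf("a.use_count()     = %ld\n", a.use_count());      // prints 2
    std::printf("empty.use_count() = %ld\n", empty.use_count());  // prints 0
}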

OTHER TIPS

I'm not sure what the standard suggests, but look at it practically:

The reference count is most likely some sort of std::size_t variable. Such a variable can hold values up to 2^32 - 1 in a 32-bit environment and up to 2^64 - 1 in a 64-bit environment.

Now imagine what would have to happen for this variable to reach that value: you would need 2^32 or 2^64 shared_ptr instances. That is a lot. In fact, it is so many that all memory would be exhausted long before you reach that number, since a single shared_ptr is about 8 or 16 bytes in size.

Therefore, you are very unlikely to be able to hit the limit of the reference count, provided the refcount variable is wide enough.
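To make that arithmetic concrete, here is a small sketch; the sizes used are typical for a shared_ptr that holds two pointers, not something the standard mandates:

#include <cstdio>
#include <memory>

int main()
{
    // A shared_ptr is typically two pointers wide: 8 bytes on 32-bit,
    // 16 bytes on 64-bit targets.
    std::printf("sizeof(std::shared_ptr<int>) = %zu bytes\n",
                sizeof(std::shared_ptr<int>));

    // 32-bit case: 2^32 copies of an 8-byte shared_ptr need 32 GiB,
    // far beyond the 4 GiB address space of a 32-bit process.
    const long double bytes32 = 4294967296.0L * 8;
    // 64-bit case: 2^64 copies of a 16-byte shared_ptr need 2^68 bytes.
    const long double bytes64 = 18446744073709551616.0L * 16;

    std::printf("2^32 * 8  bytes ~= %.0Lf GiB\n",
                bytes32 / (1024.0L * 1024 * 1024));
    std::printf("2^64 * 16 bytes ~= %.0Lf EiB\n",
                bytes64 / (1024.0L * 1024 * 1024 * 1024 * 1024 * 1024));
}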

The standard doesn't say; as you say, it doesn't even require reference counting. On the other hand, there is (or was) a statement in the standard (or at least in the C standard) that exceeding implementation limits is undefined behavior. So that's almost certainly the official answer.

In practice, I would expect most implementations to maintain the count as a size_t or a ptrdiff_t. On machines with flat addressing, this pretty much means that you cannot create enough references to cause an overflow. (On such machines, a single object could occupy all of memory, and size_t and ptrdiff_t have the same size as a pointer. Since every reference-counted pointer has a distinct address, there can never be more of them than would fit in a pointer.) On machines with segmented architectures, however, overflow is quite conceivable.

As Jon points out, the standard also requires std::shared_ptr::use_count() to return a long. I'm not sure of the rationale: either size_t or ptrdiff_t would make more sense. But if the implementation uses a different type for the reference count, then presumably the usual rules for conversion to long apply: "the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined." (The C standard makes this somewhat clearer: the "implementation-defined value" can be a signal.)
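To illustrate that conversion rule, here is a small sketch; the counter value is made up purely for demonstration, assuming a hypothetical 64-bit unsigned counter that has grown past LONG_MAX:

#include <climits>
#include <cstdio>

int main()
{
    // Hypothetical: a count kept as an unsigned 64-bit value that has grown
    // past LONG_MAX.  Converting it to long (as use_count() would have to)
    // yields an implementation-defined result in C++11 terms.
    unsigned long long count = static_cast<unsigned long long>(LONG_MAX) + 2;
    long reported = static_cast<long>(count);   // implementation-defined value

    std::printf("raw count = %llu, reported as long = %ld\n", count, reported);
}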

You can find out what will happen by instantiating shared pointers using placement new and never deleting them. You can then hit the 32-bit limit easily.
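Here is a sketch of that experiment, assuming an implementation with a 32-bit counter (as the Boost and libstdc++ answers below suggest). It deliberately leaks every copy, takes a while to run, and whatever it prints once the counter wraps is entirely implementation-specific:

#include <cstdio>
#include <memory>
#include <new>

int main()
{
    std::shared_ptr<int> sp = std::make_shared<int>(42);

    // Raw storage reused for every copy.  Each placement-new ends the
    // lifetime of the previous copy without running its destructor, so the
    // reference count is only ever incremented, never decremented.
    alignas(std::shared_ptr<int>) unsigned char buf[sizeof(std::shared_ptr<int>)];

    for (unsigned long long i = 0; i < (1ULL << 31); ++i)
        ::new (buf) std::shared_ptr<int>(sp);   // copy that is never destroyed

    // With a 32-bit signed counter the count has now overflowed; the value
    // reported here (and any subsequent behaviour) is implementation-specific.
    std::printf("use_count() = %ld\n", sp.use_count());
}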

The C++11 standard specifies long as the return type of the use_count() observer function, but doesn't explicitly say whether an implementation must support up to 2^(sizeof(long)*8-1)-1 shared owners.

It also doesn't specify what happens when the reference counter overflows.

The boost::shared_ptr implementation (e.g. 1.58 on Fedora 23, x86-64) internally uses a 32-bit counter and does not check for overflow.

That means:

  1. the maximum reference count is 2^31-1.
  2. if the counter overflows and you then release ownership, you may end up with use-after-free issues.

Since Boost uses different low-level specializations for different platforms, you can verify the details by setting a breakpoint in *add_ref_lock; on Fedora 23/x86-64 you will stop here:

/usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp
[..]
int use_count_;        // #shared
int weak_count_;       // #weak + (#shared != 0)
[..]
bool add_ref_lock() // true on success
{
    return atomic_conditional_increment( &use_count_ ) != 0;
}

See also:

The GNU STL (libstdc++) shared_ptr implementation is based on the Boost 1.32 one and has the same issue (on Fedora 23/x86-64): there the _Atomic_word type is used for reference counting. It is also 'only' 32 bits wide and is not checked for overflow.

In contrast, the LLVM libc++ shared_ptr implementation uses a long as the reference counter, i.e. on LP64 platforms such as x86-64 an object can be shared among up to 2^63-1 owners.
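If you want to check what your own toolchain uses, one quick (libstdc++-specific, non-portable) sketch is to look at the width of _Atomic_word directly; <ext/atomicity.h> is an internal extension header, so treat this purely as a diagnostic aid:

#include <cstdio>
#include <ext/atomicity.h>   // libstdc++ internal header defining _Atomic_word

int main()
{
    // On common libstdc++ targets _Atomic_word is a plain int, i.e. 32 bits,
    // while libc++ stores its shared_ptr count in a long.
    std::printf("sizeof(_Atomic_word) = %zu\n", sizeof(_Atomic_word));
    std::printf("sizeof(long)         = %zu\n", sizeof(long));
}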

Licensed under: CC-BY-SA with attribution