adding (push_back) new objects may invalidate previous pointers ...
No, this operation doesn't invalidate any previously stored pointers, unless you are referring to addresses inside the vector's internal data management (which clearly isn't your scenario).
If you store raw pointers or `std::shared_ptr`s there, those will simply be copied and won't become invalid.
As mentioned in the comments, a `std::vector` isn't well suited to guarantee thread safety for producer/consumer patterns, for a number of reasons, and neither is storing raw pointers to reference the live instances.
A queue is a much better fit here. From the standard library you can use a `std::deque`, which provides distinct access points (`front()`, `back()`) for the producer and consumer.
To make these access points thread safe (for pushing/popping values), you can easily wrap the container in your own class and use a mutex to secure insertion/deletion operations on the shared queue.
The other (and, judging from your question, major) point is managing ownership and lifetime of the contained/referenced instances. You may also transfer ownership to the consumer, if that suits your use case (thus avoiding the overhead of shared ownership, e.g. with `std::unique_ptr`), see below ...
Additionally you may use a condition variable (as a semaphore surrogate) to notify the consumer thread that new data is available.
'1. Using atomic or mutexes is not enough? If I push back from one thread, another thread handling an object via pointer may end up having an invalid object?'
The lifetime (and thus thread-safe use) of the instances stored in the queue (the shared container) needs to be managed separately, e.g. by storing smart pointers like `std::shared_ptr` or `std::unique_ptr` there.
'2. Is there a library ...'
It can all be achieved well with the existing standard library mechanisms, IMHO.
As for point 3, see what's written above. From what I can tell, it sounds like you're asking for something like a `rw_lock` (read/write) mutex. You may provide a surrogate for this with a suitable condition variable.
Feel free to ask for more clarification ...