Your setting is very DB-ish: you have a table represented by the `Clients` class, and several instances of `CLIENTOBJ`, each acting like a row of the table, with `id` acting as the primary key. From what I understand, however, each client is actually a data queue.
The model used by databases can be described roughly as delegating all access to the data to a dedicated activity (thread or process) within the DB, and sending it commands via SQL. Synchronization issues are handled with transactions and SQL semantics (an update may well affect no row at all if the sought `id` doesn't exist, but that command won't fail; it will just report 0 rows updated). A similar model might be interesting in your case: a single global mutex represents the transaction, and each thread locks the whole data structure, operates on it, and unlocks it. However, this might not be very efficient.
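The global-mutex model could be sketched like this (a minimal illustration; the `clients` map and `store` function are hypothetical names standing in for your `Clients` structure). Note how, as with an SQL `UPDATE`, storing to a nonexistent `id` doesn't fail, it just reports that nothing was touched:

```cpp
#include <map>
#include <mutex>
#include <queue>
#include <string>

// Hypothetical shared state: one data queue per client id.
std::map<int, std::queue<std::string>> clients;
std::mutex db_mutex;  // the single "transaction" mutex

// Like an SQL UPDATE: affects nothing if the id doesn't exist,
// but never fails -- it just reports how many "rows" were touched.
int store(int id, const std::string& data) {
    std::lock_guard<std::mutex> lock(db_mutex);  // the whole structure is locked
    auto it = clients.find(id);
    if (it == clients.end()) return 0;  // "0 rows updated"
    it->second.push(data);
    return 1;
}
```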
The asynchronous equivalent is to have each command return a `std::future` instead of the real result. From then on, the thread only needs to wait on the `future`, and act upon it when it's completed (or broken with an exception).
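A minimal sketch of such a future-returning command (`async_read` is a hypothetical name; here the promise is satisfied by a detached worker thread, where in your design it would be the DB thread):

```cpp
#include <future>
#include <string>
#include <thread>

// Hypothetical async "read" command: returns a future immediately;
// another thread satisfies the promise later (or breaks it by
// calling set_exception instead of set_value).
std::future<std::string> async_read(int id) {
    std::promise<std::string> p;
    std::future<std::string> f = p.get_future();
    std::thread([id, p = std::move(p)]() mutable {
        p.set_value("data for client " + std::to_string(id));
    }).detach();
    return f;  // the caller can wait on it whenever convenient
}
```

The caller simply does `auto f = async_read(1);` and calls `f.get()` when it needs the result; `get()` blocks until the promise is satisfied or rethrows the stored exception.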
Within the `Clients` instance, any method call is transformed into a `future` and a `promise`. The promise is pushed to a promise queue, and the calling thread either gets the future back from the method call, or immediately waits on it.
From the DB process's point of view, the work is sequential: there is one promise queue to which all other threads push commands, each bundled with the client `id` it must go to. The DB thread then satisfies the resulting promises in order:

- create a new client,
- delete a client,
- if it's a store, check whether any read is pending and satisfy it, or simply put the data in the client's queue,
- if it's a read and data is available, pull it from the client's queue and hand it to the thread; otherwise, push the promise to the client's pending-read queue, to be satisfied later when data becomes available.
With the above solution, all the dependencies are separated and the task is streamlined.
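The store/read handling above can be sketched as follows (single-threaded here for clarity; in the real design these handlers run on the DB thread, pulling commands from the locked promise queue; `Client`, `handle_store`, and `handle_read` are illustrative names):

```cpp
#include <deque>
#include <future>
#include <map>
#include <string>

// Per-client state owned by the DB thread.
struct Client {
    std::deque<std::string> data;                   // stored, unread messages
    std::deque<std::promise<std::string>> pending;  // reads waiting for data
};

std::map<int, Client> db;

// A "store" command: satisfy a pending read if any, else queue the data.
void handle_store(int id, std::string msg) {
    Client& c = db[id];
    if (!c.pending.empty()) {
        c.pending.front().set_value(std::move(msg));
        c.pending.pop_front();
    } else {
        c.data.push_back(std::move(msg));
    }
}

// A "read" command: return queued data now, or a future satisfied later.
std::future<std::string> handle_read(int id) {
    Client& c = db[id];
    std::promise<std::string> p;
    std::future<std::string> f = p.get_future();
    if (!c.data.empty()) {
        p.set_value(std::move(c.data.front()));
        c.data.pop_front();
    } else {
        c.pending.push_back(std::move(p));  // satisfied by a later store
    }
    return f;
}
```

Note that a read issued before any store still returns a valid future immediately; it simply completes later, when the matching store arrives.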
You may also dedicate one thread per `CLIENTOBJ`. The DB thread then becomes a triaging thread, which simply distributes the promises to the clients. Each client owns the pending-read queue and data queue of a given `id`, so no lock is involved in the processing of promises.
Each queue must be guarded by a mutex: one mutex for the main promise queue, one for each client's promise queue, and as many condition variables as there are threads using the `Clients` methods.
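Each such guarded queue follows the same mutex + condition variable pattern; a minimal sketch (the `GuardedQueue` name is illustrative):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// One guarded queue, usable for the main promise queue or a
// per-client promise queue alike.
template <typename T>
class GuardedQueue {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;

public:
    void push(T v) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(v));
        }  // unlock before notifying, so the woken thread isn't blocked
        cv_.notify_one();
    }

    T pop() {  // blocks until an item is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });  // handles spurious wakeups
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};
```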
Update:
My answer initially proposed the following:

> In other words, you could replace the future/promise mechanism with a simple condition variable associated with each non-DB thread (`future` and `promise` are probably implemented using a condition variable, but here you would save the creation and destruction overhead).
But that makes implicit assumptions about the way the `CLIENTS` object is being used. The safest road is indeed the `std::future` one.