Question

Consider that you have a list:

#include <boost/ptr_container/ptr_vector.hpp>
#include <boost/thread/mutex.hpp>

class CLIENTS
{
public:

    CLIENTS();
    ~CLIENTS();

    bool addClient();
    bool removeClient();
    bool getDataFromClientObj(unsigned int id);
    bool storeDataInClientObj(unsigned int id);

private:

    // vector that contains all the clients
    boost::ptr_vector<CLIENTOBJ> clients;

    // mutex for the client list
    boost::mutex mutex;
};

Consider further that getDataFromClientObj() acquires a shared lock on the private mutex. In addition, you want to be able to get data from a client via getDataFromClient().

If a client has no data at all in its queue, getDataFromClient() shall wait on a condition variable for that client until there is new data to be read.

Well, here's my problem:
As long as getDataFromClient() waits (since this is a multiple-readers/single-writer list), I am not able to add or delete clients, because getDataFromClient() holds the mutex lock.

How exactly would you solve this scenario: a list that is thread-safe, that waits on a condition variable for a specific client, and that, while waiting on one client, still allows adding or deleting any of the clients held within the list?

So here once again are the facts:

  • Thread-safe list (multiple readers / single writer)
  • Being able to add or delete a client at any time
  • Being able to wait on a (specific) condition for each client individually (one client might have stored new data in its own queue while another hasn't; then getDataFromClient() shall wait until new data is there to be read)

The problem, I think, is that given one condition per client (pseudo-code: if (clientsqueue.isEmpty()) -> wait), there have to be multiple condition variables (am I wrong?).

Further information:
OS: Windows 7
Language: C++
Compiler: VS2010


Solution

Your setting is very DB-ish: you have a table represented by the CLIENTS class, and several instances of CLIENTOBJ, each acting like a row of the table, with id acting as the primary key. From what I understand, however, each client is actually a data queue.

The model used by databases can be described roughly as delegating all access to the data to a dedicated activity (thread or process) within the DB, and sending commands to it using SQL. Synchronization issues are handled with transactions and SQL semantics (an UPDATE may well affect no row whatsoever if the sought id doesn't exist, but the command won't fail; it will just report 0 rows updated). Perhaps a similar model would be interesting in your case: just one global mutex to represent the transaction; each thread locks the whole data structure, operates on it, and unlocks. However, this might not be very efficient.

The asynchronous equivalent is to have each command return a std::future instead of the real result. From then on, the thread only needs to wait on the future, and act upon it when it's completed (or broken with an exception).

Within the CLIENTS instance, each method call is transformed into a future/promise pair. The promise is pushed to a promise queue, and the calling thread either keeps the future returned by the method call for later, or immediately waits on it.

From the DB process point of view, it's sequential work: you have one promise queue to which all other threads push commands bundled with the client id they must go to. The DB thread then satisfies each promise in order:

  • If it's a create, add a new client
  • If it's a delete, remove the client
  • If it's a store, the DB thread checks whether any read is pending and satisfies it, or simply puts the data in the client's queue
  • If it's a read and there is data, it pulls the data from the client's queue and hands it to the calling thread; otherwise it pushes the read to the client's pending-read queue, to be satisfied later when data becomes available

With the above solution, all the dependencies are separated and the task is streamlined.

You may also dedicate one thread per CLIENTOBJ. The DB thread then becomes a triaging thread, which simply distributes the promises to each client. Each client thread owns the pending-read queue and data queue of a given id, so no lock is involved in the processing of promises.

Each queue must be guarded with a mutex, which means one mutex for the main promise queue, one mutex for each client's promise queue, and as many condition variables as there are threads using the CLIENTS methods.

Update:

My answer initially proposed the following:

In other words, you could replace the future/promise mechanism with a simple condition variable associated with each non-DB thread (future and promise are probably implemented using a condition variable anyway, but here you would save the creation and destruction overhead).

But it makes some implicit assumptions to the way the CLIENTS object is being used. The safest road is indeed the std::future one.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow