Question

In the docs (http://atlassian.gridgain.com/wiki/display/GG60/Distributed+Queues), it is mentioned that jobs on a cache queue can be consumed by worker grids, offering an alternative means of load balancing.

I investigated this concept thus:

  1. Implemented a producer that enqueues GridRunnable jobs on a partitioned cache (a distributed blocking queue). I had to use a partitioned queue, since a local queue, by definition, isn't suited to multi-grid access.
  2. Implemented a consumer that, on startup, spawned n listener threads, each of which perpetually blocks on the queue's take() operation, submits the dequeued job to the grid, and then blocks on the next take() (see the sketch after this list).
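For concreteness, here is a minimal sketch of the producer/consumer arrangement above, written against the GridGain 6.x API as I understand it. The configuration path, the cache name ("partitioned"), the queue name, and the `dataStructures().queue(...)` signature are assumptions that may need adjusting for your setup and version, and a plain Serializable Runnable stands in for the GridRunnable jobs mentioned above.

```java
import java.io.Serializable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.gridgain.grid.Grid;
import org.gridgain.grid.GridGain;
import org.gridgain.grid.cache.datastructures.GridCacheQueue;

public class QueueBasedLoadBalancingSketch {
    /** Trivial serializable job; the question uses GridRunnable instead. */
    static class PrintJob implements Runnable, Serializable {
        @Override public void run() {
            System.out.println("Job executed on a grid node.");
        }
    }

    public static void main(String[] args) throws Exception {
        Grid grid = GridGain.start("config/example-cache.xml"); // hypothetical config path

        // Obtain (or create) a queue backed by the partitioned cache.
        // Arguments assumed: name, capacity (0 = unbounded), collocated, create-if-missing.
        GridCacheQueue<Runnable> queue = grid.cache("partitioned").dataStructures()
            .queue("jobQueue", 0, false, true);

        // Producer (step 1): enqueue jobs onto the distributed queue.
        for (int i = 0; i < 10; i++)
            queue.put(new PrintJob());

        // Consumer (step 2): n listener threads block on take() and hand each
        // dequeued job to the grid before blocking on the next take().
        int n = 4;
        ExecutorService listeners = Executors.newFixedThreadPool(n);

        for (int i = 0; i < n; i++) {
            listeners.submit(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        Runnable job = queue.take();  // blocks until a job is available
                        grid.compute().run(job);      // submit the job for grid execution
                    }
                }
                catch (Exception e) {
                    // Interrupted or grid stopped: exit the listener loop.
                }
            });
        }
    }
}
```

This gives pull-based balancing (idle consumers take work as they become free), which is the trade-off questioned in (2) below.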

Questions:

  1. Is (2) the correct way to implement a listener? (An alternative might be to submit listener jobs, rather than spawning listener threads, within the consumer.)
  2. While this architecture offers dynamic load balancing, it doesn't appear to provide many of the benefits of the regular GridGain model, such as failover and affinity collocation (the cache is distributed, so data has to be moved to the computation).

Are my observations correct? Thanks


Solution

It looks like you are on the right path, that is, if you need to use load balancing based on a GridGain distributed queue.

However, it looks like you would be better off simply sending your GridRunnable jobs to the grid using the standard GridCompute API. GridGain will load balance these jobs automatically in the background.
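As a hedged sketch of that suggestion (same assumptions about the configuration path and version-specific API as above), submitting jobs directly through GridCompute looks roughly like this; GridGain then applies its configured load balancing and failover SPIs to each job, with no queue or listener threads needed:

```java
import java.io.Serializable;

import org.gridgain.grid.Grid;
import org.gridgain.grid.GridGain;

public class DirectComputeSketch {
    /** Same trivial job as before; a GridRunnable works equally well. */
    static class PrintJob implements Runnable, Serializable {
        @Override public void run() {
            System.out.println("Job executed on a grid node.");
        }
    }

    public static void main(String[] args) throws Exception {
        Grid grid = GridGain.start("config/example-compute.xml"); // hypothetical config path

        // Each run() call is load balanced across the grid automatically,
        // and failed jobs can be retried on other nodes by the failover SPI.
        for (int i = 0; i < 10; i++)
            grid.compute().run(new PrintJob());
    }
}
```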
