What you are doing is fine as long as you properly count up (maybe using a shared AtomicInteger?) all of the requests done by the different threads.
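As a sketch of the shared-counter idea (with a trivial Runnable standing in for your CassandraReadTask), assuming every worker increments the same AtomicInteger:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCounterDemo {

    // submits numTasks tasks to a pool of numThreads threads; each task
    // bumps a shared AtomicInteger, and we return the final count
    static int countRequests(int numTasks, int numThreads) throws InterruptedException {
        final AtomicInteger requestCount = new AtomicInteger();
        ExecutorService service = Executors.newFixedThreadPool(numThreads);
        for (int i = 0; i < numTasks; i++) {
            service.submit(new Runnable() {
                public void run() {
                    // incrementAndGet() is atomic, so no updates are lost
                    requestCount.incrementAndGet();
                }
            });
        }
        service.shutdown();
        service.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
        return requestCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Requests handled: " + countRequests(1000, 10));
    }
}
```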
However, I would switch your code around a bit: submit 100,000 (or so) random ids and then time how long it takes for your threads to handle all of them. That's a more realistic test since it better captures your task-submission overhead.
Then you just record a startTimeMillis, calculate the difference from the end to the start, and divide 100,000 (or whatever your number was) by the diff to give you your average iterations/millis.
Something like:
long startTimeMillis = System.currentTimeMillis();
int numIterations = 100000;
for (int i = 0; i < numIterations; i++) {
    final String id = generateRandomId(random);
    service.submit(new CassandraReadTask(id, columnFamilyList));
}
service.shutdown();
// block until every submitted task has finished
service.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
long diff = System.currentTimeMillis() - startTimeMillis;
// cast to double so integer division doesn't truncate the result
System.out.println("Average iterations/ms is " + ((double) numIterations / diff));
Then it's easy to play around with the number of threads and the number of iterations to maximize your throughput.
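If it helps, here's a minimal, self-contained harness for that kind of sweep, with a no-op Runnable standing in for your CassandraReadTask (the class and method names are just placeholders):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThroughputTest {

    // runs numIterations no-op tasks on a pool of numThreads threads
    // and returns the measured throughput in iterations per millisecond
    static double measure(int numThreads, int numIterations) throws InterruptedException {
        ExecutorService service = Executors.newFixedThreadPool(numThreads);
        long startTimeMillis = System.currentTimeMillis();
        for (int i = 0; i < numIterations; i++) {
            service.submit(new Runnable() {
                public void run() {
                    // replace with your real per-id work (e.g. a Cassandra read)
                }
            });
        }
        service.shutdown();
        service.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
        // guard against a 0ms elapsed time on very fast runs
        long diff = Math.max(1, System.currentTimeMillis() - startTimeMillis);
        return (double) numIterations / diff;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int threads : new int[] { 1, 2, 4, 8 }) {
            System.out.println(threads + " threads: "
                    + measure(threads, 100000) + " iterations/ms");
        }
    }
}
```

With real tasks doing I/O, the sweet spot is usually well above the core count; with CPU-bound tasks it tends to sit near it, so sweeping the pool size like this shows you where your throughput peaks.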