Question

We are running Redis and performing hundreds of increments per second on keys in a sorted set, while at the same time doing thousands of reads per second on the same sorted set.

This seems to be working well, but during peak load CPU usage gets pretty high, around 80% of a single core. The sorted set itself has a small memory footprint of a few thousand keys.

Is the CPU usage increase likely due to the hundreds of increments per second or the thousands of reads? I understand both impact performance, but which has the larger impact?

Given this, what are some of the best metrics to monitor on my production instance to identify these bottlenecks?


Solution

One point to check is whether the sorted sets are small enough to be stored in the compact serialized representation. For instance, the DEBUG OBJECT command can be applied to a sample of sorted sets to check whether they are encoded as a ziplist.
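
A minimal check from redis-cli might look like this, assuming a sample key named leaderboard (a hypothetical name; substitute one of your own):

```
# Hypothetical key name; substitute one of your own sorted set keys.
redis-cli OBJECT ENCODING leaderboard
# -> "ziplist" for the compact encoding, "skiplist" for the regular one

redis-cli DEBUG OBJECT leaderboard
# -> Value at:0x7f... refcount:1 encoding:ziplist serializedlength:...
```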

The ziplist encoding trades memory against CPU, especially when the size of the sorted set is close to the thresholds (zset-max-ziplist-entries and zset-max-ziplist-value in the configuration file).
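
The current thresholds can be read, and if needed adjusted, at runtime; the values shown below are the historical defaults:

```
# Read both zset ziplist thresholds at once
redis-cli CONFIG GET 'zset-max-ziplist-*'
# Defaults: zset-max-ziplist-entries = 128, zset-max-ziplist-value = 64

# Example: force the regular skiplist encoding by lowering the entry threshold
redis-cli CONFIG SET zset-max-ziplist-entries 0
```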

Supposing the sorted sets are not ziplist encoded, I would say the CPU usage is likely due to the thousands of reads per second rather than the hundreds of updates per second. An update of a zset is an O(log n) operation, which is very fast, and there is no lock-related latency with Redis. A read of all the zset items is an O(n) operation, and may require building a large reply buffer to return to the client.
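
To make the asymmetry concrete, here is a sketch of the two access patterns (key and member names are hypothetical):

```
# Write path: ZINCRBY is O(log N) per call, and the reply is tiny
# (just the new score of the incremented member).
redis-cli ZINCRBY counters 1 item:42

# Read path: fetching the whole zset is O(N), and the reply buffer
# grows with the number of items returned.
redis-cli ZRANGE counters 0 -1 WITHSCORES
```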

To be sure, you may want to generate the read-only traffic alone, check the CPU, then stop it, generate the update traffic alone, check the CPU again, and compare.
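
redis-benchmark can approximate this isolation test, since it accepts an arbitrary command to replay; the key name, request counts, and concurrency below are illustrative only:

```
# Update-only traffic: 100k ZINCRBY calls over 10 connections,
# spread across 1000 random members via the __rand_int__ placeholder.
redis-benchmark -n 100000 -c 10 -r 1000 ZINCRBY counters 1 item:__rand_int__

# Read-only traffic: 100k full-range reads of the same zset.
redis-benchmark -n 100000 -c 10 ZRANGE counters 0 -1 WITHSCORES
```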

The performance of zset read operations should be close to the LRANGE figures you can find in the Redis benchmark. A few thousand TPS for zsets containing a thousand items seems to be in line with typical Redis performance.
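
Regarding metrics to monitor in production, one built-in option is the commandstats section of INFO, which reports calls and cumulative execution time per command and therefore shows directly where Redis spends its CPU:

```
# Reset counters, let the instance run under load, then inspect them.
redis-cli CONFIG RESETSTAT
redis-cli INFO commandstats
# cmdstat_zincrby:calls=...,usec=...,usec_per_call=...
# cmdstat_zrange:calls=...,usec=...,usec_per_call=...
# A high usec total for zrange relative to zincrby would confirm
# that reads dominate the CPU budget.
```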
