Question

Let's assume you've got a NoSQL database - Redis, Cassandra, MongoDB - and you need to check its overall performance: various platforms, operating systems, even the programming languages used for the test. It's not tied to a specific application or schema.

  • What tests would you want to see? Can you please help me form the requirements?
    • How does the database operate in a cluster?
    • In a broken cluster?
    • In a cloud environment?
    • How does it perform queries with 10k connections open?
  • What tools would you use?
    • Something like JMeter -> HTTP server -> database?
    • JMeter -> TCP app -> database?
    • Other?

All the material I've found about database performance testing treats the database as part of some product (a specific schema, a specific environment). Have you thought about database performance testing when the database is the product itself?

Looking forward to your help.

-vova


Solution

In "NoSQL benchmarks and performance evaluations" I've put together a list of benchmarks that are correct in the sense that they clearly define their purpose and compare similar features (apples-to-apples comparisons); there are far too many benchmarks out there that fail at least one of these fundamental requirements. Going through those, you'll be able to extract the bits that are interesting for your own benchmark, learn what tools have been used, and pick up some benchmarking code too.

So far the most generic NoSQL benchmark is YCSB (Yahoo! Cloud Serving Benchmark). Recently the CUBRID blog posted the results of running this benchmark against some of the most popular NoSQL solutions, and that might give you an idea of how to interpret results.
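If you want a feel for what such a workload does under the hood, here is a minimal Python sketch of a YCSB-style mix (roughly workload A: 50% reads, 50% updates over random keys) against an in-memory stand-in store. The record/operation counts are arbitrary; in a real run you would replace read/update with calls to the driver of the database under test, use multiple threads, and use a Zipfian key distribution as YCSB does.

```python
import random
import time

RECORD_COUNT = 10_000      # stand-in for YCSB's recordcount
OPERATION_COUNT = 100_000  # stand-in for YCSB's operationcount
READ_PROPORTION = 0.5      # workload A: 50% reads, 50% updates

# In-memory stand-in store; swap these two functions for real client calls.
store = {f"user{i}": f"value{i}" for i in range(RECORD_COUNT)}

def read(key):
    return store[key]

def update(key, value):
    store[key] = value

latencies = []
start = time.perf_counter()
for _ in range(OPERATION_COUNT):
    key = f"user{random.randrange(RECORD_COUNT)}"
    t0 = time.perf_counter()
    if random.random() < READ_PROPORTION:
        read(key)
    else:
        update(key, "x" * 100)
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

latencies.sort()
print(f"throughput : {OPERATION_COUNT / elapsed:.0f} ops/sec")
print(f"p95 latency: {latencies[int(0.95 * len(latencies))] * 1e6:.1f} us")
print(f"p99 latency: {latencies[int(0.99 * len(latencies))] * 1e6:.1f} us")
```

The value of YCSB is that it already ships workloads like this, plus the client bindings, so you mostly tune the proportions, counts and distributions rather than writing the harness yourself.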

OTHER TIPS

  • check the overall performance for this database

Unless you are doing it for fun, or you just want a benchmark for the sake of having a benchmark, I would recommend tailoring the performance benchmark to the actual problem/requirements.

For example: do you really need extremely fast writes? Are you OK with losing data? Do you mind spending time configuring failover? Do you plan to scale up or out? Are you planning for TBs of data? etc.

The examples you gave - Redis, Cassandra and MongoDB - are quite different:

Redis is mostly a cache, and it is really fast, but being just a cache it will not help you much with medium-complexity aggregation. However, it is currently the best cache out there (my opinion). "Redis + a killer DB" is an ideal combination. It also has a built-in benchmark tool (redis-benchmark) you can try.
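If you want to measure from your own client code rather than via redis-benchmark, a rough sketch with the redis-py client could look like the following; the host, port and key counts are assumptions for a local instance.

```python
import time
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)  # assumed local instance

N = 100_000
payload = "x" * 100

# Write throughput
start = time.perf_counter()
for i in range(N):
    r.set(f"bench:key:{i}", payload)
print(f"SET            : {N / (time.perf_counter() - start):.0f} ops/sec")

# Read throughput
start = time.perf_counter()
for i in range(N):
    r.get(f"bench:key:{i}")
print(f"GET            : {N / (time.perf_counter() - start):.0f} ops/sec")

# Pipelining usually changes the picture dramatically; worth measuring too.
start = time.perf_counter()
pipe = r.pipeline(transaction=False)
for i in range(N):
    pipe.get(f"bench:key:{i}")
pipe.execute()
print(f"GET (pipelined): {N / (time.perf_counter() - start):.0f} ops/sec")
```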

Cassandra is a solid product modelled after Google Bigtable (but I am sure you already know that). It scales writes well if you have lots of nodes, but if you reach TBs of data, for example, it can take days to add nodes. It is also not the simplest one to pick up. But if you are OK with paying, there are excellent people at DataStax who can take all the complexity away. I have a very simple Cassandra Bombardier that may help you get started.
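I won't reproduce the Bombardier here, but a minimal write-throughput sketch with the DataStax Python driver (cassandra-driver) would look roughly like this; the bench keyspace, events table and the single local contact point are hypothetical.

```python
import time
import uuid
from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])  # assumed single local node
session = cluster.connect()

# Hypothetical benchmark keyspace/table; SimpleStrategy is fine for a one-node test.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS bench
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS bench.events (id uuid PRIMARY KEY, payload text)
""")

insert = session.prepare("INSERT INTO bench.events (id, payload) VALUES (?, ?)")

N = 50_000
payload = "x" * 200
start = time.perf_counter()
for _ in range(N):
    # Synchronous calls understate what a cluster can do;
    # session.execute_async plus many client threads will push much harder.
    session.execute(insert, (uuid.uuid4(), payload))
elapsed = time.perf_counter() - start
print(f"writes: {N / elapsed:.0f} ops/sec")

cluster.shutdown()
```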

MongoDB is a great DB for multiple reasons: a very sexy and simple query language, good documentation, a huge community, etc. It is not so great in other aspects: you need to spend time sharding it correctly, and then resharding it again [compare to e.g. Riak, where it is done automatically]. It is very fast for writes if the data [not just the index] fits in RAM, but it starts to slow down very quickly if it does not. There is ongoing speculation that you may lose data (from one of the Basho engineers: "I had personally spent some time finding out ways to demonstrate that MongoDB will lose writes in the face of failure"), and aggregation queries may take a while even on a not-so-large dataset. I have a Mongo Performance Playground that you may find useful.
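Again, this is not the Playground itself, but a rough pymongo sketch of the kind of write/read/aggregation timing you would start with; the connection string and the bench database/collection names are assumptions.

```python
import time
from pymongo import MongoClient  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")  # assumed local mongod
coll = client["bench"]["docs"]                     # hypothetical db/collection
coll.drop()

N = 100_000
BATCH = 1_000
payload = "x" * 200

# Batched insert throughput
start = time.perf_counter()
for i in range(0, N, BATCH):
    coll.insert_many([{"_id": j, "payload": payload} for j in range(i, i + BATCH)])
print(f"insert_many: {N / (time.perf_counter() - start):.0f} docs/sec")

# Point reads by _id
start = time.perf_counter()
for i in range(0, N, 10):
    coll.find_one({"_id": i})
print(f"find_one   : {(N // 10) / (time.perf_counter() - start):.0f} ops/sec")

# A simple aggregation, to watch how it behaves as the collection grows
start = time.perf_counter()
list(coll.aggregate([{"$group": {"_id": None, "n": {"$sum": 1}}}]))
print(f"aggregate  : {time.perf_counter() - start:.2f} s")
```

Rerunning the same script as the dataset grows past RAM is a quick way to see the "fast while it fits in memory" effect described above.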

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow