Question

I need ideas for implementing a (really) high-performance in-memory database/storage mechanism, in the range of 20,000+ objects with each object updated every 5 seconds or so. I would like a FOSS solution.

What is my best option? What are your experiences?

I am working primarily in Java, but the datastore solution need not be Java-centric as long as its performance is good.

I also need to be able to query these objects, and I need to be able to restore all of the objects on program startup.

Solution

SQLite is an open-source self-contained database that supports in-memory databases (just connect to :memory:). It has bindings for many popular programming languages. It's a traditional SQL-based relational database, but you don't run a separate server – just use it as a library in your program. It's pretty quick. Whether it's quick enough, I don't know, but it may be worth an experiment.

A Java driver is available (for example, sqlite-jdbc).
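
A rough sketch of what that looks like from Java, assuming the Xerial sqlite-jdbc driver is on the classpath (the objects table and its columns are made up purely for illustration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SqliteInMemoryExample {
        public static void main(String[] args) throws Exception {
            // ":memory:" tells SQLite to keep the whole database in RAM for this connection.
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, payload TEXT, updated_at INTEGER)");
                }

                // Update path: one prepared statement reused for every object write.
                try (PreparedStatement upsert = conn.prepareStatement(
                        "INSERT OR REPLACE INTO objects (id, payload, updated_at) VALUES (?, ?, ?)")) {
                    upsert.setInt(1, 42);
                    upsert.setString(2, "some serialized state");
                    upsert.setLong(3, System.currentTimeMillis());
                    upsert.executeUpdate();
                }

                // Query path: ordinary SQL over the in-memory table.
                try (PreparedStatement query = conn.prepareStatement(
                        "SELECT payload FROM objects WHERE id = ?")) {
                    query.setInt(1, 42);
                    try (ResultSet rs = query.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("payload"));
                        }
                    }
                }
            }
        }
    }

Keep in mind that a :memory: database disappears when its connection closes, so the restore-on-startup requirement still means persisting a copy (or a change log) somewhere and reloading it.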

OTHER TIPS

Are you updating all 20K objects every 5 seconds, or updating one of the 20K every 5 seconds?

What kind of objects? Why is a traditional RDBMS not sufficient?

Check out HSQLDB and Prevayler. Prevayler is a paradigm shift from traditional RDBMS - one which I have used (the paradigm, that is, not specifically Prevayler) in a number of projects and found it to have real merit.
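
For HSQLDB specifically, an in-memory database is just a mem: JDBC URL; a minimal sketch, assuming the hsqldb jar is on the classpath (the database and table names are illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HsqldbInMemoryExample {
        public static void main(String[] args) throws Exception {
            // "mem:" keeps the whole database in the JVM heap; "SA" with an empty
            // password is HSQLDB's default administrative account.
            try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:objectstore", "SA", "");
                 Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, payload VARCHAR(1024))");
                st.execute("INSERT INTO objects VALUES (1, 'example')");
            }
        }
    }

Prevayler works the other way around: it keeps plain Java objects in memory and journals the commands that change them, replaying the journal at startup to restore state.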

It depends exactly how you need to query it, but have you looked into memcached?

http://www.danga.com/memcached/
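
Note that memcached is a networked cache rather than a queryable store, so lookups are strictly by key. A minimal sketch of talking to it from Java, assuming a memcached server on localhost:11211 and the spymemcached client library (both assumptions, not part of the original suggestion):

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class MemcachedExample {
        public static void main(String[] args) throws Exception {
            // Connect to a local memcached instance (assumed to be running on the default port).
            MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));

            // set(key, expirySeconds, value): the value must be serializable.
            client.set("object:42", 3600, "some serialized state");

            // Lookups are strictly by key; there is no query language.
            Object value = client.get("object:42");
            System.out.println(value);

            client.shutdown();
        }
    }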

Other options include MySQL MEMORY tables, or the APC cache if you're using PHP.

Some more detail about the project/requirements would be helpful.

In-memory storage?

1) A simple C malloc'ed array in which all your structures are indexed.

2) Berkeley DB: http://www.oracle.com/technology/products/berkeley-db/index.html. It is fast because you build your own indexes (secondary databases) and there are no SQL expressions to evaluate.
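
The "build your own indexes" idea in option 2 also translates directly to plain Java. A minimal sketch (the Record type and the category index are invented for illustration) of a primary map plus a hand-maintained secondary index:

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class HandRolledIndexExample {
        // A hypothetical record type standing in for "your objects".
        record Record(int id, String category, String payload) {}

        // Primary "table": id -> record.
        private final Map<Integer, Record> byId = new ConcurrentHashMap<>();
        // Secondary index: category -> ids, maintained by hand on every write.
        private final Map<String, Set<Integer>> byCategory = new ConcurrentHashMap<>();

        public void put(Record r) {
            Record previous = byId.put(r.id(), r);
            // If the indexed field changed, remove the stale secondary-index entry.
            if (previous != null && !previous.category().equals(r.category())) {
                Set<Integer> oldSet = byCategory.get(previous.category());
                if (oldSet != null) {
                    oldSet.remove(r.id());
                }
            }
            byCategory.computeIfAbsent(r.category(), k -> ConcurrentHashMap.newKeySet()).add(r.id());
        }

        public Set<Integer> idsInCategory(String category) {
            return byCategory.getOrDefault(category, Set.of());
        }

        public Record get(int id) {
            return byId.get(id);
        }
    }

Note that the two maps are not updated atomically with respect to readers; a real implementation would need some coordination if queries must see a fully consistent view.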

Look at some of the products listed here: http://en.wikipedia.org/wiki/In-memory_database

What level of durability do you need? 20,000 updates every 5 seconds will probably be difficult for most I/O hardware in terms of transaction count if you write each update back to disc individually.

If you can afford to lose some updates, you could probably flush to disc every 100 ms without a problem on fairly cheap hardware, if your database and OS support doing that.
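
If you roll your own store, that "flush every 100 ms" idea can be approximated with a scheduled background snapshot. A minimal sketch (the snapshot format and file name are arbitrary choices here) that trades the durability of the last few updates for far fewer disc writes:

    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PeriodicSnapshotStore {
        private final Map<Integer, Serializable> data = new ConcurrentHashMap<>();
        private final Path snapshotFile = Path.of("store.snapshot");
        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        public void start() {
            // Flush a full snapshot every 100 ms; anything written after the last
            // snapshot is lost on a crash, which is the durability trade-off described above.
            scheduler.scheduleAtFixedRate(this::snapshot, 100, 100, TimeUnit.MILLISECONDS);
        }

        public void put(int id, Serializable value) {
            data.put(id, value);
        }

        private void snapshot() {
            try {
                // Write to a temp file in the same directory, then rename atomically,
                // so a crash never leaves a half-written snapshot behind.
                Path tmp = Files.createTempFile(snapshotFile.toAbsolutePath().getParent(), "store", ".tmp");
                try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(tmp))) {
                    out.writeObject(new HashMap<>(data)); // copy the map for a stable view
                }
                Files.move(tmp, snapshotFile, StandardCopyOption.REPLACE_EXISTING,
                        StandardCopyOption.ATOMIC_MOVE);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

At startup you would read the latest snapshot back with an ObjectInputStream to restore the objects.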

If it's really an in-memory database that you don't want to flush to disc often, that sounds pretty trivial. I've heard that H2 is pretty good, but SQLite may work as well. A properly tuned MySQL instance could also do it (but may be more convoluted).

Chronicle Map is a pure-Java key-value store:

  • It has really high performance, sustaining 1 million writes/second from a single thread. It's a myth that a fast database cannot be written in Java.
  • It seamlessly stores and loads any serializable Java objects and provides a simple Map interface.
  • LGPLv3

Since you don't have many "tables", a full-blown SQL database could be overkill; indexes and queries could be implemented with a handful of distinct key-value stores, updated manually by vanilla Java code. Chronicle Map provides mechanisms to make such updates concurrently isolated from each other, if you need that.
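
A minimal sketch of that approach with Chronicle Map (the builder calls below match recent versions of the library, but treat the exact method names as an assumption; the key/value types and sizes are illustrative). Using createPersistedTo gives you the restore-on-startup behaviour, because the same memory-mapped file is reopened the next time the program runs:

    import java.io.File;
    import net.openhft.chronicle.map.ChronicleMap;

    public class ChronicleMapExample {
        public static void main(String[] args) throws Exception {
            File file = new File("objects.dat");

            // The builder needs the expected entry count and, for variable-size values,
            // an average size hint so it can lay out its off-heap memory.
            try (ChronicleMap<Integer, String> objects = ChronicleMap
                    .of(Integer.class, String.class)
                    .name("objects")
                    .entries(20_000)
                    .averageValueSize(256)
                    .createPersistedTo(file)) {   // memory-mapped file -> survives restarts

                objects.put(42, "some serialized state");
                System.out.println(objects.get(42));
            }
        }
    }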

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow