Problem

I've been using Python pandas for the last year and I'm really impressed by its performance and functionality; however, pandas is not a database yet. I've been thinking lately about ways to combine the analytical power of pandas with a flat HDF5 file database. Unfortunately, HDF5 is not designed to handle concurrency natively.

I've been looking around for inspiration in locking systems, distributed task queues, parallel HDF5, flat-file database managers and multiprocessing, but I still don't have a clear idea of where to start.

Ultimately, I would like to have a RESTful API to interact with the HDF5 file to create, retrieve, update and delete data. A possible use case would be a time series store where sensors write data and analytical services are built on top of it.

Any ideas about possible paths to follow, similar existing projects, or the pros and cons of the whole idea will be very much appreciated.

PS: I know I could use a SQL/NoSQL database to store the data instead, but I want to use HDF5 because I haven't seen anything faster when it comes to retrieving large volumes of data.

Solution

HDF5 works fine for concurrent read-only access.
For concurrent write access you either have to use parallel HDF5 or have a single worker process that takes care of all writes to the HDF5 store.
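
For the worker-process option, here is a minimal sketch of the single-writer pattern, assuming pandas with PyTables installed; the function name hdf5_writer, the queue protocol and the "sensor_a" key are illustrative, not part of any library:

import multiprocessing as mp

import pandas as pd


def hdf5_writer(queue, path):
    # Single writer process: drains (key, DataFrame) pairs from the
    # queue and appends them to the store, so no other process ever
    # touches the HDF5 file for writing.
    with pd.HDFStore(path) as store:
        while True:
            item = queue.get()
            if item is None:      # sentinel: shut down cleanly
                break
            key, frame = item
            store.append(key, frame)


if __name__ == "__main__":
    queue = mp.Queue()
    writer = mp.Process(target=hdf5_writer, args=(queue, "data.h5"))
    writer.start()

    # Any number of producers can enqueue results concurrently;
    # only the worker process writes to the file.
    queue.put(("sensor_a", pd.DataFrame({"value": [1.0, 2.0]})))

    queue.put(None)               # tell the worker to finish
    writer.join()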

There are some efforts from the HDF Group itself to combine HDF5 with a RESTful API (see the h5serv link at the end of this page for more details). I am not sure how mature it is.

I recommend using a hybrid approach and exposing it via a RESTful API:
store the meta-information in a SQL/NoSQL database and keep the raw data (the time series) in one or more HDF5 files.

A single public REST API gives access to the data, and the user doesn't have to care what happens behind the scenes.
That's also the approach we are taking for storing biological information.
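
To make the hybrid idea concrete, here is a minimal sketch of one such endpoint using Flask and SQLite; the file names meta.db and timeseries.h5, the series table and the URL layout are all assumptions chosen for illustration:

import sqlite3

import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/series/<series_id>")
def get_series(series_id):
    # Metadata (units, HDF5 key) lives in a small relational database ...
    with sqlite3.connect("meta.db") as conn:
        row = conn.execute(
            "SELECT hdf5_key, units FROM series WHERE id = ?", (series_id,)
        ).fetchone()
    if row is None:
        return jsonify(error="unknown series"), 404
    hdf5_key, units = row

    # ... while the bulky time series itself is read from the HDF5 file.
    with pd.HDFStore("timeseries.h5", mode="r") as store:
        frame = store[hdf5_key]
    return jsonify(units=units, values=frame.to_dict(orient="list"))


if __name__ == "__main__":
    app.run()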

Other tips

I know the following is not a good answer to the question, but it is perfect for my needs, and I didn't find it implemented anywhere else:

from pandas import HDFStore
import os
import time

class SafeHDFStore(HDFStore):
    """HDFStore that serializes writers across processes via a lock file."""

    def __init__(self, *args, **kwargs):
        probe_interval = kwargs.pop("probe_interval", 1)
        self._lock = "%s.lock" % args[0]
        # O_CREAT | O_EXCL is atomic: creating the lock file fails if it
        # already exists, so only one process can hold the lock at a time.
        while True:
            try:
                self._flock = os.open(self._lock, os.O_CREAT |
                                                  os.O_EXCL |
                                                  os.O_WRONLY)
                break
            except FileExistsError:
                # Another process holds the lock; wait and probe again.
                time.sleep(probe_interval)

        HDFStore.__init__(self, *args, **kwargs)

    def __exit__(self, *args, **kwargs):
        # Close the store first, then release the lock for the next writer.
        HDFStore.__exit__(self, *args, **kwargs)
        os.close(self._flock)
        os.remove(self._lock)

I use this as follows:

result = do_long_operations()
with SafeHDFStore('example.hdf') as store:
    # Only put inside this block the code which operates on the store
    store['result'] = result

and different processes/threads working on the same store will simply queue up.

Notice that if you instead operate naively on the store from multiple processes, the last one to close the store will "win", and what the others "think they have written" will be lost.

(I know I could instead let a single process manage all writes, but this solution avoids the overhead of pickling.)

EDIT: probe_interval can now be tuned (one second is too long if writes are frequent).
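
For example, following the usage above, a much shorter interval keeps waiting writers responsive when writes are small and frequent:

result = do_long_operations()
# Poll the lock file every 50 ms instead of every second.
with SafeHDFStore('example.hdf', probe_interval=0.05) as store:
    store['result'] = result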

The HDF Group now has a REST service for HDF5 out: http://hdfgroup.org/projects/hdfserver/
