Question

I'm using LevelDB in my application with 5 databases. Each database is opened with the option max_open_files = 64.

ulimit -Sn shows the operating system's soft limit is 1024 open files, and raising it to 2048 fixes the problem. But since I'm distributing this application to people, it should have defaults that work out of the box, without requiring users to reconfigure their operating system.

leveldb::Status status = db_spends_->Get(
    leveldb::ReadOptions(), spent_slice, &raw_spend);
if (!status.ok())
{
    std::cerr << "fetch_spend: " << status.ToString() << std::endl;
    return false;
}

I get lots of these errors and cannot read from the databases at all:

"fetch_spend: IO error: XXXX.sst: Too many open files"

There are 5 databases in one subdirectory called database:

$ ls
addr  block  block_hash  spend  tx
$ du -sh .
16G .
$ du -sh *
2.6G    addr
653M    block
7.2M    block_hash
2.6G    spend
9.4G    tx
$ for i in `ls`; do echo $i; ls $i | wc -l; done
addr
1279
block
333
block_hash
10
spend
1433
tx
5252

I would like to raise LevelDB's 2 MB limit on each .sst file, but it doesn't seem to be adjustable, and the only thing I found on Google is this patch: https://github.com/basho/leveldb/pull/7
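For reference, later LevelDB releases (1.20 and newer) do expose this limit as a field on `leveldb::Options`; if upgrading is possible, the 2 MB default can be raised when opening the databases, which reduces the number of .sst files and therefore the descriptor pressure. A sketch, assuming a recent LevelDB (this field does not exist in the version the question targets):

```cpp
#include <leveldb/options.h>

// Assumes LevelDB >= 1.20, which added Options::max_file_size.
leveldb::Options options;
options.max_file_size = 32 * 1024 * 1024;  // 32 MB .sst files instead of 2 MB
```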

I'm using Ubuntu 13.04 64bit.

Here is the code I use for opening the databases. If I display the open_options.max_open_files before the call to leveldb::DB::Open(), it displays 64 (as expected).

bool open_db(const std::string& prefix, const std::string& db_name,
    std::unique_ptr<leveldb::DB>& db, leveldb::Options open_options)
{
    using boost::filesystem::path;
    path db_path = path(prefix) / db_name;
    leveldb::DB* db_base_ptr = nullptr;
    leveldb::Status status =
        leveldb::DB::Open(open_options, db_path.native(), &db_base_ptr);
    if (!status.ok())
    {
        log_fatal(LOG_BLOCKCHAIN) << "Internal error opening '"
            << db_name << "' database: " << status.ToString();
        return false;
    }
    // The container ensures db_base_ptr is now managed.
    db.reset(db_base_ptr);
    return true;
}

...

// Create comparator for blocks database.
depth_comparator_.reset(new depth_comparator);
// Open LevelDB databases
const size_t cache_size = 1 << 20;
// block_cache, filter_policy and comparator must be deleted after use!
open_options_.block_cache = leveldb::NewLRUCache(cache_size / 2);
open_options_.write_buffer_size = cache_size / 4;
open_options_.filter_policy = leveldb::NewBloomFilterPolicy(10);
open_options_.compression = leveldb::kNoCompression;
open_options_.max_open_files = 64;
open_options_.create_if_missing = true;
// The blocks database also needs its depth comparator.
leveldb::Options blocks_open_options = open_options_;
blocks_open_options.comparator = depth_comparator_.get();
if (!open_db(prefix, "block", db_blocks_, blocks_open_options))
    return false;
if (!open_db(prefix, "block_hash", db_blocks_hash_, open_options_))
    return false;
if (!open_db(prefix, "tx", db_txs_, open_options_))
    return false;
if (!open_db(prefix, "spend", db_spends_, open_options_))
    return false;
if (!open_db(prefix, "addr", db_address_, open_options_))
    return false;

Even if I set max_open_files = 20, I still get the same problem.


Solution

Until recently there could be some pathological behavior when max_open_files was set to a value below 74, where hundreds of file descriptors were used regardless of what the option said. (The latest version of LevelDB clamps the limit to a floor of 74.) Does setting it to ~80 have any effect? If not, and you still see the problem, could you run the appropriate incantation of lsof when the bad behavior occurs? That will tell us where the file descriptors are going.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow