Question

I'm trying to insert ~56,249,000 items into a Berkeley DB JE database. I ran DbCacheSize to get some statistics about my database:

java -jar je-5.0.34.jar  DbCacheSize -records 56248699 -key 8 -data 20 

=== Environment Cache Overhead ===

3,155,957 minimum bytes

To account for JE daemon operation and record locks,
a significantly larger amount is needed in practice.

=== Database Cache Size ===

 Minimum Bytes    Maximum Bytes   Description
---------------  ---------------  -----------
  1,287,110,736    1,614,375,504  Internal nodes only
  4,330,861,264    4,658,126,032  Internal nodes and leaf nodes

=== Internal Node Usage by Btree Level ===

 Minimum Bytes    Maximum Bytes      Nodes    Level
---------------  ---------------  ----------  -----
  1,269,072,064    1,592,660,160     632,008    1
     17,837,712       21,473,424       7,101    2
        198,448          238,896          79    3
          2,512            3,024           1    4

I asked this question two years ago (Optimizing a BerkeleyDB JE Database), but I'm still not sure how I should configure my environment based on those statistics.

While the data is being loaded, I'll be the only user with access to the database: should I use a transaction?

My environment is currently opened as follows:

EnvironmentConfig cfg=(...)
cfg.setTransactional(true);
cfg.setAllowCreate(true);
cfg.setReadOnly(false);
cfg.setCachePercent(80);
cfg.setConfigParam(EnvironmentConfig.LOG_FILE_MAX,"250000000");

and the database:

cfg.setAllowCreate(true);
cfg.setTransactional(true);
cfg.setReadOnly(false);

and I read/insert the items as follows:

Transaction txn = env.beginTransaction(null, null);
//open db with transaction 'txn'
Database db = env.openDatabase(txn, ...);

Transaction txn2=this.getEnvironment().beginTransaction(null, null);
long record_id=0L;
while((item=readNextItem(input))!=null)
    {
    (...)
    ++record_id;

    db.put(...); //insert record_id/item into db
    /** Every 100,000 items, commit and create a new transaction.
        I found it was the only way to avoid an OutOfMemoryError. */
    if(record_id%100000==0)
        {
        txn2.commit();
        System.gc();
        txn2=this.getEnvironment().beginTransaction(null, null);
        }
    }

txn2.commit();
txn.commit();

but things are getting slower and slower. I ran the program from Eclipse without setting any JVM options.

100000 / 56248699 ( 0.2 %).  13694.9 records/seconds.  Time remaining:68.3 m Disk Usage: 23.4 Mb. Expect Disk Usage: 12.8 Gb Free Memory : 318.5 Mb.
200000 / 56248699 ( 0.4 %).  16680.6 records/seconds.  Time remaining:56.0 m Disk Usage: 49.5 Mb. Expect Disk Usage: 13.6 Gb Free Memory : 338.3 Mb.
(...)
6600000 / 56248699 (11.7 %).  9658.2 records/seconds.  Time remaining:85.7 m Disk Usage: 2.9 Gb. Expect Disk Usage: 24.6 Gb Free Memory : 165.0 Mb.
6700000 / 56248699 (11.9 %).  9474.5 records/seconds.  Time remaining:87.2 m Disk Usage: 2.9 Gb. Expect Disk Usage: 24.7 Gb Free Memory : 164.8 Mb.
6800000 / 56248699 (12.1 %).  9322.6 records/seconds.  Time remaining:88.4 m Disk Usage: 3.0 Gb. Expect Disk Usage: 24.8 Gb Free Memory : 164.8 Mb.
(Ctrl-C... abort...)

How can I make things faster?

Update:

MemTotal:        4021708 kB
MemFree:          253580 kB
Buffers:           89360 kB
Cached:          1389272 kB
SwapCached:           56 kB
Active:          2228712 kB
Inactive:        1449096 kB
Active(anon):    1793592 kB
Inactive(anon):   596852 kB
Active(file):     435120 kB
Inactive(file):   852244 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:       3174028 kB
HighFree:          57412 kB
LowTotal:         847680 kB
LowFree:          196168 kB
SwapTotal:       4085756 kB
SwapFree:        4068224 kB
Dirty:             16320 kB
Writeback:             0 kB
AnonPages:       2199056 kB
Mapped:           111280 kB
Shmem:            191272 kB
Slab:              58664 kB
SReclaimable:      41448 kB
SUnreclaim:        17216 kB
KernelStack:        3792 kB
PageTables:        11328 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6096608 kB
Committed_AS:    5069728 kB
VmallocTotal:     122880 kB
VmallocUsed:       18476 kB
VmallocChunk:      81572 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10232 kB
DirectMap2M:      903168 kB

Update 2:

Max. Heap Size (Estimated): 872.94M
Ergonomics Machine Class: server
Using VM: Java HotSpot(TM) Server VM

Update 3:

Using Jerven's advice, I get the following performance:

    (...)
    6800000 / 56248699 (12.1 %).  13144.8 records/seconds.  Time remaining:62.7 m Disk Usage: 1.8 Gb. Expect Disk Usage: 14.6 Gb Free Memory : 95.5 Mb.
    (...)

vs my previous result:

6800000 / 56248699 (12.1 %).  9322.6 records/seconds.  Time remaining:88.4 m Disk Usage: 3.0 Gb. Expect Disk Usage: 24.8 Gb Free Memory : 164.8 Mb.

Solution

The first thing I would do is remove the explicit call to System.gc(). If you notice that the call actually aids performance, consider switching to a different GC algorithm; for example, G1GC will perform better when the BDB JE cache usage is consistently near 70% of the available heap.
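For example, enabling G1GC is just a matter of passing the collector flag when launching the loader (the jar name below is only a placeholder for your own program):

java -XX:+UseG1GC -jar MyLoader.jar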

Secondly, at some point updating the B+ tree index becomes an n log n operation, so insert performance will degrade as the database grows.

Not using transactions will be faster, especially if you can restart the import from scratch should it fail.
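As a rough sketch of what a non-transactional bulk-load setup could look like (the environment path and database name are placeholders; setDeferredWrite is optional but often helps one-shot loads, as long as you sync before closing):

import java.io.File;
import com.sleepycat.je.*;

EnvironmentConfig envCfg = new EnvironmentConfig();
envCfg.setAllowCreate(true);
envCfg.setTransactional(false);           // no transactions during the bulk load

DatabaseConfig dbCfg = new DatabaseConfig();
dbCfg.setAllowCreate(true);
dbCfg.setTransactional(false);
dbCfg.setDeferredWrite(true);             // buffer writes in the cache, flush via db.sync()/env.sync()

Environment env = new Environment(new File("/path/to/env"), envCfg);
Database db = env.openDatabase(null, "mydb", dbCfg);  // null = no transaction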

Just remember to call environment.sync() and run a checkpoint at the end. While doing this import you might want to disable the BDB JE checkpointer and log cleaner (GC) threads:

config.setConfigParam(EnvironmentConfig.ENV_RUN_CLEANER,  "false");
config.setConfigParam(EnvironmentConfig.ENV_RUN_CHECKPOINTER, "false");
config.setConfigParam(EnvironmentConfig.ENV_RUN_IN_COMPRESSOR, "false");

After loading you should then call a method like this:

public void checkpointAndSync()
    throws ObjectStoreException
{
    env.sync();
    CheckpointConfig force = new CheckpointConfig();
    force.setForce(true);
    try
    {
        env.checkpoint(force);
    } catch (DatabaseException e)
    {
        log.error("Can not checkpoint db " + path.getAbsolutePath(), e);
        throw new ObjectStoreException(e);
    }
}

You might consider turning on key prefixing as well.
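Key prefixing is a per-database flag; assuming dbCfg is the DatabaseConfig used to open the database, it is enabled like this:

dbCfg.setKeyPrefixing(true);   // store shared key prefixes once per node instead of per record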

For the rest, your internal-node cache size should be at least 1.6 GB, which means starting with a heap larger than 2 GB.
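As an illustration only (this assumes a 64-bit JVM with enough physical memory; the jar name is a placeholder), launching the loader with a 3 GB heap and your existing setCachePercent(80) would give JE roughly a 2.4 GB cache, comfortably above the 1.6 GB internal-node estimate:

java -Xms3g -Xmx3g -jar MyLoader.jar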

You can also consider merging records. For example, if your keys naturally increment, you can store 16 values under one key. If you think that is an interesting approach, you might start by increasing the B-tree fanout setting.
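The fanout is controlled per database through the maximum number of entries per B-tree node (JE's default is 128); a sketch, again assuming dbCfg is your DatabaseConfig:

dbCfg.setNodeMaxEntries(512);   // wider nodes mean fewer internal nodes to keep in cache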

Licensed under: CC-BY-SA with attribution