Question

I am using HSQLDB 2.3.2 in server memory mode. I am seeing a situation where, after I insert some data into HSQLDB, the heap space it uses (even after GC) is about 3 to 4 times larger than if I kept the same data in a plain HashMap/LinkedList on the Java heap.
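For reference, a rough way to estimate the per-entry heap cost of the plain-Java baseline in that comparison. This is a sketch, not from the original post: the `HeapBaseline` class name is mine, and `Runtime`-based measurement is only approximate (it depends on the JVM and on when GC actually runs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class HeapBaseline {
    // Best-effort heap snapshot; Runtime numbers are approximate.
    static long usedHeap() {
        System.gc();
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long before = usedHeap();
        List<Long> values = new ArrayList<>(n);
        Random random = new Random(42);
        for (int i = 0; i < n; i++) {
            values.add(random.nextLong()); // boxes each long into a Long
        }
        long after = usedHeap();
        System.out.println("entries: " + values.size());
        System.out.println("approx bytes/entry: " + (after - before) / n);
    }
}
```

Even this baseline pays boxing overhead per entry; a database adds row, index, and transaction bookkeeping on top of that.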

Here is code that inserts multiple records into an in-memory HSQLDB server:

    Connection c = DriverManager.getConnection("jdbc:hsqldb:hsql://localhost", "sa", "");
    c.setAutoCommit(false);

    PreparedStatement ps = c.prepareStatement("set database sql syntax ora true");
    ps.execute();
    ps.close();

    ps = c.prepareStatement("create table t (x long)");
    ps.execute();
    ps.close();

    String x = "insert into t values(?)";
    ps = c.prepareStatement(x);
    Random random = new Random();
    // 10^10 iterations; note the original int literal 1000*1000*10000
    // overflows int, so the bound must be a long
    for (long i = 0; i < 10_000_000_000L; i++) {
        ps.setLong(1, random.nextLong());
        ps.addBatch();
        if (i % 1000 == 0) {
            ps.executeBatch();
            ps.clearBatch();
        }
        if (i % 100000 == 0) {
            System.out.println(i); // number of rows inserted so far
            c.commit();
        }
    }

I track the HSQLDB server process using JVisualVM and run the code above until an OutOfMemoryError occurs.

I tried different heap sizes and then stopped the code above. What is disturbing is that I always manage to insert about 13,800,000 rows on a server with 3GB of available heap before the heap is full; after terminating the code above and performing a GC, the heap still occupies 2500MB.

This means every row takes up about 180 bytes, while a single long takes only 8 bytes, so the stored rows are about 22 times heavier than the raw data.
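The arithmetic behind those figures can be sketched as follows (using decimal megabytes, which matches the numbers quoted):

```java
public class RowOverhead {
    public static void main(String[] args) {
        long rows = 13_800_000L;             // rows inserted before OOM
        long heapBytes = 2_500_000_000L;     // ~2500 MB retained after GC
        long bytesPerRow = heapBytes / rows; // ~181 bytes per row
        long overhead = bytesPerRow / 8;     // vs. the 8-byte long payload
        System.out.println(bytesPerRow + " bytes/row, ~" + overhead
                + "x the raw payload"); // prints: 181 bytes/row, ~22x the raw payload
    }
}
```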

This is of course only a test, and real tables don't usually have just one field, but the reason I explored this is that when I tried to copy 1GB of data from Oracle to HSQLDB, HSQLDB ended up holding 4GB! (The table structure is identical.)

Now, the questions:

  1. What is going on? Does my test seem correct?

  2. How can I reduce memory consumption in HSQLDB?

  3. If there is no easy way, what other similar products may have reasonable memory usage? How is H2 in this context?

Thank you


Solution

In the end I chose H2, which takes up about 1.2 times the original size (in my case). It also has additional in-memory compression modes.
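For comparison, a minimal sketch of the same kind of batched insert against an in-memory H2 database. This assumes the H2 driver is on the classpath; the commented `memLZF:` URL prefix selects H2's compressed in-memory file system (whether that prefix is available depends on the H2 version):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Random;

public class H2Insert {
    public static void main(String[] args) throws Exception {
        // Plain in-memory H2. "jdbc:h2:memLZF:test" would select the
        // compressed in-memory file system instead (version-dependent).
        Connection c = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
        c.setAutoCommit(false);
        c.createStatement().execute("create table t (x bigint)");

        PreparedStatement ps = c.prepareStatement("insert into t values(?)");
        Random random = new Random();
        for (int i = 0; i < 100_000; i++) {
            ps.setLong(1, random.nextLong());
            ps.addBatch();
            if (i % 1000 == 0) {
                ps.executeBatch();
            }
        }
        ps.executeBatch(); // flush the final partial batch
        c.commit();
        c.close();
    }
}
```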

Other suggestions

This is not a proper use case for a relational database. You have a table with only one column, storing long (BIGINT) values, and not even a primary key that would let you search for the values you have inserted.

This usage does not even require a HashMap. Just use a HashSet implementation from any library.
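A minimal sketch of that suggestion. Note that `HashSet<Long>` still boxes every value, so it carries several dozen bytes of overhead per entry; a primitive-specialized set (for example fastutil's `LongOpenHashSet` or Eclipse Collections' `LongHashSet`, both third-party libraries) is what actually approaches 8 bytes per element:

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class LongSetDemo {
    public static void main(String[] args) {
        Random random = new Random(42);
        Set<Long> values = new HashSet<>();
        for (int i = 0; i < 1_000_000; i++) {
            values.add(random.nextLong()); // boxes each long into a Long
        }
        // Membership test - the lookup the one-column table could not
        // do efficiently without an index:
        long probe = values.iterator().next();
        System.out.println(values.contains(probe)); // prints: true
    }
}
```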

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow