Question

I am using SQLite as an in-memory database in my Java program via the sqlite-jdbc driver.

The query I am using is rather complex, but it is fast, usually completing in under 10 ms. Unfortunately, iterating over the ResultSet afterwards is rather costly, even with a large fetch size:

Class.forName("org.sqlite.JDBC");
connection = DriverManager.getConnection("jdbc:sqlite::memory:");

long startTime = System.currentTimeMillis();

Statement stmt = connection.createStatement();
stmt.setFetchSize(1000);
ResultSet rs = stmt.executeQuery(query);

System.out.println("query time " + (System.currentTimeMillis() - startTime));

ResultSetMetaData metaData = rs.getMetaData();
int columns = metaData.getColumnCount();

System.out.println("metadata time " + (System.currentTimeMillis() - startTime));

ArrayList<HashMap<String, String>> al = new ArrayList<HashMap<String, String>>();
HashMap<String, String> record = new HashMap<String, String>();
while (rs.next())
{
    System.out.println("get records time " + (System.currentTimeMillis() - startTime));
    for (int i = 1; i <= columns; i++)
    {

        if (metaData.getColumnName(i) != null && rs.getObject(i) != null)
        {
            record.put(metaData.getColumnName(i), rs.getObject(i).toString());
        }
    }
    al.add(record);
}
System.out.println("all time " + (System.currentTimeMillis() - startTime));

The output of this little program shows where the time is consumed:

query time 5
metadata time 5
get records time 5
get records time 5
get records time 6
get records time 6
get records time 15
get records time 15
get records time 26
get records time 26
all time 52

I know that if the fetch size is too small, the ResultSet usually has to go over the network to the database to move the cursor and fetch more data. In my case, however, the fetch size is fairly large (1000), the database is in memory, and the result set contains only a few rows (< 20).

Any ideas what I can do to make it faster? Does anybody know how SQLite handles SELECTs?

Thanks in advance. Sebastian


Solution

SQLite is an embedded database and does not do any network communication; it computes result records on the fly and ignores the fetch size.
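
Because every row is computed inside rs.next(), the "query time" you measured mostly covers statement preparation; the real work happens during iteration. If you want to pay that cost up front and separate computation from iteration, one option is to materialize the result into a temporary table first. A minimal sketch, assuming query is a plain SELECT (result_cache is a hypothetical table name); this does not reduce the total work, it only moves it out of the per-row rs.next() calls:

// Run the complex query once into a temp table, so the subsequent
// iteration is just a sequential scan over already-stored rows.
Statement setup = connection.createStatement();
setup.executeUpdate("CREATE TEMP TABLE result_cache AS " + query);

ResultSet rs = setup.executeQuery("SELECT * FROM result_cache");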

If your query is complex, searching and computing the next record will be relatively slow.
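
To see why each step is expensive, you can ask SQLite for its query plan; "SCAN TABLE" entries in the output are candidates for indexes. A minimal sketch (EXPLAIN QUERY PLAN is standard SQLite, but the exact column layout of its output varies between versions, so this just prints the last, human-readable column):

Statement planStmt = connection.createStatement();
ResultSet plan = planStmt.executeQuery("EXPLAIN QUERY PLAN " + query);
int lastColumn = plan.getMetaData().getColumnCount();
while (plan.next()) {
    // The last column holds the plan description, e.g. "SCAN TABLE foo".
    System.out.println(plan.getString(lastColumn));
}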

OTHER TIPS

The inner for loop is very inefficient: it re-retrieves the column names from the metadata for every cell of every row. Cache them once instead:

ResultSetMetaData metaData = rs.getMetaData();
int columns = metaData.getColumnCount();
ArrayList<String> columnNames = new ArrayList<String>();
for (int i = 1; i <= columns; i++) {
    columnNames.add(metaData.getColumnName(i));
}

Then you can simplify the original inner loop to this (assuming your column names aren't null, which they never should be):

for (int i = 0; i < columns; i++)
{
    Object cellObject = rs.getObject(columnNames.get(i));
    if (cellObject != null)
    {
        record.put(columnNames.get(i), cellObject.toString());
    }
}
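
If you also want to avoid the name lookup inside the driver, fetch by index and only use the cached names as map keys, e.g. Object cellObject = rs.getObject(i + 1); (JDBC column indexes are 1-based).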

Also, I'm not sure why you're calling al.add(record); on every iteration when record always points to the same HashMap, which is overwritten on each pass through the while loop; the list ends up holding many references to a single map. Allocate a new HashMap inside the loop instead, as in the sketch below.
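
For reference, here is a corrected sketch of the collection loop that allocates a fresh map for each row (it reuses the cached columnNames from above and assumes al is declared as a list of maps):

ArrayList<HashMap<String, String>> al = new ArrayList<HashMap<String, String>>();
while (rs.next())
{
    // A new map per row, so each list entry keeps its own values.
    HashMap<String, String> record = new HashMap<String, String>();
    for (int i = 0; i < columns; i++)
    {
        Object cellObject = rs.getObject(columnNames.get(i));
        if (cellObject != null)
        {
            record.put(columnNames.get(i), cellObject.toString());
        }
    }
    al.add(record);
}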

Licensed under: CC-BY-SA with attribution