Question

I have a situation where my HSQL database has stopped working, citing that it has exceeded its size limit:

    org.hsqldb.HsqlException: error in script file line: 41 file input/output error
    org.hsqldb.HsqlException: wrong database file version: requires large database support opening file - file hsql/mydb.data

When I checked the size of the data file with "du -sh" it was only around 730 MB, but "ls -alh" reported a shocking 16 GB, which explains why HSQL probably reports it as a 'large database'. So the data file seems to be a "sparse file".

But nobody changed the data file into a sparse file. Does HSQL maintain the data file as a sparse file, or has the file system marked it as sparse?

How do I work around this to get my HSQL database back without corrupting the data in it? I was thinking of using the hsqldb.cache_file_scale property, but that would only postpone the problem until the file grows to 64 GB.

In case it matters, I am running this on a Debian 3.2 box with Java 7u25.


Solution

You need to perform CHECKPOINT DEFRAG from time to time to compact the file.

When a lot of data is deleted, the space in the .data file is lost. The above command rewrites the current .data file to a much smaller new file.
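
For example, here is a minimal JDBC sketch that issues the statement. It assumes the database file from the question (hsql/mydb) and the default SA account with an empty password; adjust both for your setup. You can equally run CHECKPOINT DEFRAG from any SQL client connected to the database.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DefragExample {
        public static void main(String[] args) throws Exception {
            // File-mode URL; replace hsql/mydb with your database path
            // and SA/"" with your actual credentials.
            String url = "jdbc:hsqldb:file:hsql/mydb";
            try (Connection conn = DriverManager.getConnection(url, "SA", "");
                 Statement st = conn.createStatement()) {
                // Rewrites the .data file into a new, compacted file,
                // reclaiming the space left behind by deleted rows.
                st.execute("CHECKPOINT DEFRAG");
            }
        }
    }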

If the file has already grown very large, or if you need to have a huge database, you can connect with the hsqldb.large_data=true property, which enables support for very large databases.
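
A sketch of opening the database with that property set, again assuming the placeholder path and default credentials from above. The property is appended to the JDBC connection URL:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class LargeDbExample {
        public static void main(String[] args) throws Exception {
            // hsqldb.large_data=true is passed as a URL property;
            // the path and credentials are placeholders.
            String url = "jdbc:hsqldb:file:hsql/mydb;hsqldb.large_data=true";
            try (Connection conn = DriverManager.getConnection(url, "SA", "")) {
                // The database now opens even though the .data file
                // exceeds the normal size limit.
            }
        }
    }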

http://hsqldb.org/doc/2.0/guide/dbproperties-chapt.html

Licensed under: CC-BY-SA with attribution