Question

I have an application written in Java that uses a jar file (it uses more than one, but that's not the point).

The fact is, the jar file I'm using contains files that I absolutely MUST extract to the filesystem.

So far I'm using Class.getResourceAsStream and FileOutputStream, but this method is somewhat slow. Note that some of these files are text-based, but others are simply binary.

So apart from trying to reduce the need to extract files from JARs, are there any optimizations (such as more suitable functions) for these tasks?

Note that my application is Java 6-based and I would like to reduce external dependencies to a minimum.

EDIT: For future reference, my OLD (inefficient) code was:

int c;
while ((c = is.read()) != -1) {
    fos.write(c);
}

For the new, much faster code, see the accepted reply.

Solution

Do you have control over the jar file? If you create it uncompressed, that may make it faster. Obviously it'll make the jar file bigger though...
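
If you go down that route, one option is the jar tool's 0 (store only, no compression) flag when building the jar. As a rough, hedged illustration only, here is a sketch of repacking an existing jar without compression in Java; the class and method names are invented for the example, and it keeps entries DEFLATED at level 0 rather than using STORED entries, which avoids having to precompute sizes and CRCs:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.zip.Deflater;

public class JarRepacker
{
    // Sketch: rewrites sourceJar into targetJar with compression turned off.
    // Entries stay DEFLATED but at level 0, so no CRC/size bookkeeping is needed.
    public static void repackUncompressed(String sourceJar, String targetJar)
        throws IOException
    {
        JarFile in = new JarFile(sourceJar);
        JarOutputStream out = new JarOutputStream(new FileOutputStream(targetJar));
        out.setLevel(Deflater.NO_COMPRESSION);
        try
        {
            Enumeration<JarEntry> entries = in.entries();
            while (entries.hasMoreElements())
            {
                JarEntry entry = entries.nextElement();
                // Copy the name only; let the output stream choose method and level.
                out.putNextEntry(new JarEntry(entry.getName()));
                if (!entry.isDirectory())
                {
                    InputStream data = in.getInputStream(entry);
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = data.read(buffer)) != -1)
                    {
                        out.write(buffer, 0, read);
                    }
                    data.close();
                }
                out.closeEntry();
            }
        }
        finally
        {
            out.close();
            in.close();
        }
    }
}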

Another thing to check - how are you extracting the file? For instance, if you're doing it byte by byte it will be painfully slow. Use something like this:

public static void copyStream(InputStream input, OutputStream output)
     throws IOException
{
    // Reads up to 8K at a time. Try varying this.
    byte[] buffer = new byte[8192];
    int read;

    while ((read = input.read(buffer)) != -1)
    {
        output.write(buffer, 0, read);
    }
}
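
For completeness, here's a hedged sketch of how that method might be wired up to the getResourceAsStream/FileOutputStream approach from the question. It assumes the copyStream method above is available in the same class; the method name and resource path are invented for illustration, and it sticks to plain try/finally since the question is Java 6-based:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Assumes the copyStream method above is in scope.
public static void extractResource(Class<?> owner, String resourcePath, File target)
    throws IOException
{
    // resourcePath is a hypothetical classpath path, e.g. "/data/lookup.bin".
    InputStream input = owner.getResourceAsStream(resourcePath);
    if (input == null)
    {
        throw new IOException("Resource not found: " + resourcePath);
    }
    OutputStream output = new FileOutputStream(target);
    try
    {
        copyStream(input, output);
    }
    finally
    {
        try
        {
            input.close();
        }
        finally
        {
            output.close();
        }
    }
}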

If you're already doing this, could you give us more information? How slow is "somewhat slow"? How does it compare with, say, using the jar utility to extract the jar file?

OTHER TIPS

Err, I'm not sure what you really want to do - but have you thought about using WinZip?

Obviously if you need to extract the files dynamically at run time this won't work - but I'm not sure why you'd need to do this - how often does this jar file change?

Surely you can extract them once and then distribute them with the application?

I concur with Jon. Two things will make extraction faster:

  1. Decreasing the compression level.
  2. Increasing the buffer size used when copying/extracting.

I am supposing the need to extract is due to a requirement to write/re-read from the file. If the files are small enough, memory is large enough, and the persistent nature of files isn't a requirement, you might consider keeping the entire resource in memory instead of using the disk as storage.
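
As a hedged illustration of that last point, something along these lines would read a resource straight into a byte array instead of extracting it to disk; the class and method names are invented for the example, and it's only sensible when the files comfortably fit in the heap:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class InMemoryResource
{
    // Loads an entire classpath resource into memory instead of writing it
    // to a temporary file. Java 6 friendly: plain try/finally.
    public static byte[] readResource(Class<?> owner, String resourcePath)
        throws IOException
    {
        InputStream input = owner.getResourceAsStream(resourcePath);
        if (input == null)
        {
            throw new IOException("Resource not found: " + resourcePath);
        }
        try
        {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = input.read(buffer)) != -1)
            {
                bytes.write(buffer, 0, read);
            }
            return bytes.toByteArray();
        }
        finally
        {
            input.close();
        }
    }
}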

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow