Question

I get some very odd errors when using org.apache.commons.compress to read embedded archive files and I suspect it's my inexperience that is haunting me.

When running my code I get a variety of truncated zip file errors (along with other truncated-file errors). I suspect the cause is my use of ArchiveInputStream:

private final void handleArchive(String fileName, ArchiveInputStream ais) {
   ArchiveEntry archiveEntry = null;

   try {
      while ((archiveEntry = ais.getNextEntry()) != null) {

         byte[] buffer = new byte[1024];

         while (ais.read(buffer) != -1) {
            handleFile(fileName + "/" + archiveEntry.getName(), archiveEntry.getSize(), new ByteArrayInputStream(buffer));
         }
      }
   } catch (IOException ioe) {
      ioe.printStackTrace();
   }
}

When I do this, does archiveEntry = ais.getNextEntry() effectively close my ais, and is there any way to read the bytes of embedded archive files using Commons Compress?

Solution

You're doing some weird stuff, it seems. For each archive entry, while you are still reading the current archive, you recursively call your archive-handling method, which opens the next archive while the parent code is still processing the previous one.

You should loop entirely through an archive entry before handling the next entry in your compressed file. Something like:

ArArchiveEntry entry = (ArArchiveEntry) arInput.getNextEntry();
byte[] content = new byte[(int) entry.getSize()];
int offset = 0;
// keep reading until entry.getSize() bytes have been consumed
while (offset < content.length) {
    int n = arInput.read(content, offset, content.length - offset);
    if (n == -1) {
        break; // stream ended before the entry was fully read
    }
    offset += n;
}

as shown in the examples on the Apache Commons Compress site.
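To illustrate the pattern of fully draining each entry before calling getNextEntry() again, here is a minimal, self-contained sketch. It uses the JDK's java.util.zip.ZipInputStream rather than Commons Compress so it runs without extra dependencies, but the read loop is the same shape you would use with ArchiveInputStream; the class name ReadEntriesFully and the helper readEntry are illustrative, not from the original post.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ReadEntriesFully {

    // Read the *current* entry's bytes completely before the caller
    // advances to the next entry with getNextEntry().
    static byte[] readEntry(ZipInputStream zis) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int n;
        // read() returns -1 at the end of the current entry, not the whole stream
        while ((n = zis.read(buffer)) != -1) {
            out.write(buffer, 0, n); // copy only the n bytes actually read
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Build a small zip in memory with two entries.
        ByteArrayOutputStream zipBytes = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(zipBytes)) {
            zos.putNextEntry(new ZipEntry("a.txt"));
            zos.write("hello".getBytes());
            zos.closeEntry();
            zos.putNextEntry(new ZipEntry("b.txt"));
            zos.write("world!".getBytes());
            zos.closeEntry();
        }

        // Iterate: consume each entry fully, then move to the next.
        try (ZipInputStream zis =
                 new ZipInputStream(new ByteArrayInputStream(zipBytes.toByteArray()))) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                byte[] content = readEntry(zis); // fully drained before getNextEntry()
                System.out.println(entry.getName() + ":" + content.length);
            }
        }
    }
}
```

Note that handing the raw 1024-byte buffer to a handler inside the read loop, as in the question, passes partial and possibly stale data; collecting the entry into its own byte array first (or copying exactly the n bytes returned by each read) avoids that.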

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow