You encode the file in whatever chunks come from the disk I/O:
int step = (int) (bytesLeft > 150000 ? 150000 : bytesLeft);
byte[] buffer = new byte[step];
fileStream.read(buffer);
buffer = cipher.encryptFile(buffer, key);
But you decode the file in whatever chunks come from the network I/O:
bytesRead = input.read(buffer);
if (bytesRead >= 0) {
buffer = cipher.decryptFile(buffer, key);
outStream.write(buffer, 0, bytesRead);
counter += bytesRead;
}
These chunk sizes are likely to disagree. The disk I/O may always hand you full chunks (if you're lucky), but the network I/O will typically hand you packet-sized chunks (roughly 1500 bytes minus headers, for Ethernet).
The cipher should receive an offset into the already encoded/decoded data (or encode/decode everything at once) and use it to rotate the key appropriately; otherwise this may happen:
original: ...LOREM IPSUM...
key : ...abCde abCde...
encoded : ...MQUIR JRVYR...
key : ...abCde Cdeab... <<note the key got shifted
decoded : ...LOREM GNQXP... <<output wrong after the first chunk.
Since the packet payload size is (for Ethernet-sized TCP/IP packets) aligned to four bytes, a key of length four is likely to remain aligned, which can mask the bug.
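A minimal sketch of what an offset-aware cipher could look like, assuming a simple repeating-key XOR scheme (your `cipher.encryptFile`/`decryptFile` are hypothetical here, so this is an illustration of the idea, not your actual cipher):

```java
public class OffsetCipher {
    // Applies the repeating key starting at stream position `offset`,
    // so chunk boundaries no longer matter. XOR is symmetric, so the
    // same method both encodes and decodes.
    public static byte[] apply(byte[] data, int length, byte[] key, long offset) {
        byte[] out = new byte[length];
        for (int i = 0; i < length; i++) {
            out[i] = (byte) (data[i] ^ key[(int) ((offset + i) % key.length)]);
        }
        return out;
    }
}
```

The caller keeps a running byte counter (you already have `counter` on the download side) and passes it as `offset` for each chunk; then a 5-byte encode chunk followed by a 9-byte decode chunk still round-trips correctly.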
Another issue is that you ignore the number of bytes read from disk when uploading the file. While disk I/O will likely always give you full-sized chunks (the file may be memory-mapped, or the underlying native API may provide that guarantee), the read contract guarantees nothing of the sort. Always use the number of bytes actually read: bytesRead = fileStream.read(buffer);
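Putting both fixes together, the read loop on either side could look like this sketch (the cipher step is omitted so only the I/O contract is shown; `150000` is your chunk size from the upload code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkCopy {
    // Reads until EOF and only ever processes the bytes actually read,
    // never assuming the buffer was filled completely.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[150000];
        long counter = 0;
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            // apply the cipher to exactly bytesRead bytes here,
            // passing `counter` as the key offset
            out.write(buffer, 0, bytesRead);
            counter += bytesRead;
        }
        return counter;
    }
}
```

The same loop works for both the file and the socket side, which also removes the `if`-instead-of-loop problem in your download snippet.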