Question

I get this strange error in some of my MapReduce jobs:

java.io.IOException: invalid distance too far back
    at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
    at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
    at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:89)
    at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:83)
    at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:71)
    at java.io.DataInputStream.readByte(DataInputStream.java:248)
    at com.contextin.io.VersionedWritable.readFields(VersionedWritable.java:60)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:73)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:44)
    at org.apache.hadoop.io.SequenceFile$Reader.deserializeValue(SequenceFile.java:2180)
    at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2164)
    at org.apache.hadoop.mapred.SequenceFileRecordReader.getCurrentValue(SequenceFileRecordReader.java:103)
    at org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:78)
    at com.contextin.model.workflow.AggregateFilesMR$CustomKeyValueLineRecordReader.next(AggregateFilesMR.java:632)
    at com.contextin.model.workflow.AggregateFilesMR$CustomKeyValueLineRecordReader.next(AggregateFilesMR.java:595)
    at org.apache.hadoop.mapred.lib.CombineFileRecordReader.next(CombineFileRecordReader.java:61)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:215)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:200)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

Some facts (may or may not be relevant):

  • The job uses a CustomCombineFileInputFormat, meaning each mapper may handle more than one file (see the sketch after this list)

  • The error recurs quite often, but not always (so it doesn't appear to be purely a software bug)

  • Some datasets cause this error more often than others (so it appears to be at least related to the data)

  • But for the same data set the job may sometimes succeed and sometimes fail, so it's NOT strictly a data problem.

  • Some of the jobs that run on the machine (not specifically the failing job) have high memory requirements, which causes some tasks to fail because of memory problems, although this specific error doesn't appear to be memory-related.
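
For illustration, here is a minimal sketch (old mapred API, assuming SequenceFile input) of how a CombineFileInputFormat typically hands several files to one mapper. The class names are made up for the example and are not the job's actual CustomCombineFileInputFormat:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.FileSplit;
    import org.apache.hadoop.mapred.InputSplit;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RecordReader;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.hadoop.mapred.SequenceFileRecordReader;
    import org.apache.hadoop.mapred.lib.CombineFileInputFormat;
    import org.apache.hadoop.mapred.lib.CombineFileRecordReader;
    import org.apache.hadoop.mapred.lib.CombineFileSplit;

    public class CombinedSequenceFileInputFormat<K, V> extends CombineFileInputFormat<K, V> {

        @Override
        @SuppressWarnings({"unchecked", "rawtypes"})
        public RecordReader<K, V> getRecordReader(InputSplit split, JobConf job,
                                                  Reporter reporter) throws IOException {
            // CombineFileRecordReader opens one PerFileSequenceReader per file in the
            // combined split and chains them, so a corrupt compressed block in any
            // single file fails the whole map task.
            return new CombineFileRecordReader(job, (CombineFileSplit) split, reporter,
                    (Class) PerFileSequenceReader.class);
        }

        // Reads one file of the combined split by delegating to SequenceFileRecordReader.
        // The (CombineFileSplit, Configuration, Reporter, Integer) constructor signature
        // is what CombineFileRecordReader requires.
        public static class PerFileSequenceReader<K, V> extends SequenceFileRecordReader<K, V> {
            public PerFileSequenceReader(CombineFileSplit split, Configuration conf,
                                         Reporter reporter, Integer index) throws IOException {
                super(conf, new FileSplit(split.getPath(index), split.getOffset(index),
                                          split.getLength(index), split.getLocations()));
            }
        }
    }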


Solution

From further testing, it appears to be a data problem: reading some of the files individually (outside MapReduce) triggered the error consistently.
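
A standalone check along these lines is enough to reproduce it. This is only a rough sketch, assuming the inputs are SequenceFiles with Writable keys and values; the checker class itself is made up for illustration:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.util.ReflectionUtils;

    public class SequenceFileChecker {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            Path path = new Path(args[0]);  // the suspect file, e.g. an HDFS part file
            FileSystem fs = path.getFileSystem(conf);

            SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
            Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
            Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);

            long records = 0;
            try {
                // Reading every record forces full decompression; a corrupt block shows
                // up here as the same "invalid distance too far back" IOException.
                while (reader.next(key, value)) {
                    records++;
                }
                System.out.println("OK: read " + records + " records from " + path);
            } catch (IOException e) {
                System.err.println("Failed after " + records + " records in " + path + ": " + e);
            } finally {
                reader.close();
            }
        }
    }

Running it over each input file narrows the failure down to specific files.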

I don't have a good explanation for why the number of task failures changes between runs.
