Question

I am using Hive 0.11.0, Hadoop 2.0.3, and MySQL 5.6 for the metastore.

I can successfully run a statement like SELECT * FROM records, which does not create a map/reduce task.

But when I try to run SELECT * FROM records where year='1949', the map/reduce task always fails with an error.

Hadoop gives me these diagnostics:
Application application_1382680612829_0136 failed 1 times due to AM Container for appattempt_1382680612829_0136_000001 exited with exitCode: -1000 due to:
java.io.FileNotFoundException: File /tmp/hadoop-hadoop/nm-local-dir/filecache does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:492)
    at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:996)
    at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:150)
    at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:187)
    at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:730)
    at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:726)
    at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2379)
    at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:726)
    at org.apache.hadoop.yarn.util.FSDownload.createDir(FSDownload.java:88)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:274)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:51)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Failing this attempt. Failing the application.

What should I do? Thanks.


Solution

In summary, the recommended workaround is to create the parent folders yourself; there is also a patch for this, which is supposed to be included in 2.0.3 [1]:

Tom White added a comment - 30/Nov/12 21:36:
"This fixes the problem by creating parent directories if they don't already exist. Without the fix the test would fail about 4 times in 10; with the fix I didn't see a failure."
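If you want to apply the workaround by hand, a minimal sketch looks like this. It assumes the default local-dir layout (/tmp/hadoop-<user>/nm-local-dir, matching the path in your stack trace) and a NodeManager running as the hadoop user; adjust paths and ownership to your setup.

# Create the missing local-dir subdirectory on every NodeManager host.
# The path matches the one in the stack trace; adjust it if
# yarn.nodemanager.local-dirs points elsewhere in your yarn-site.xml.
mkdir -p /tmp/hadoop-hadoop/nm-local-dir/filecache
chown -R hadoop:hadoop /tmp/hadoop-hadoop/nm-local-dir

# Restart the NodeManager so it picks the directory up
# (sbin path assumes a standard Hadoop 2.x tarball install).
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager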

This looks like the most similar issue I could find in the Hadoop bug database.

It is also related to [2] and [3], if you want to take a look.
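One extra note, as an assumption beyond the answers above: the missing directory lives under /tmp, which many systems clean periodically (e.g. tmpwatch), so the filecache can vanish again after you recreate it. A hedged sketch of pointing yarn.nodemanager.local-dirs outside /tmp, where /var/hadoop/nm-local-dir is just an example path:

# In yarn-site.xml on each NodeManager (the property name is real;
# the /var/hadoop path is a hypothetical example):
#   <property>
#     <name>yarn.nodemanager.local-dirs</name>
#     <value>/var/hadoop/nm-local-dir</value>
#   </property>
mkdir -p /var/hadoop/nm-local-dir
chown -R hadoop:hadoop /var/hadoop/nm-local-dir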
