Question

I am following this tutorial https://templates.prediction.io/predictionio/template-scala-parallel-universal-recommendation. When I try to train, I get the following failure message:

[ERROR] [TaskSetManager] Task 0 in stage 23.0 failed 1 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 23.0 failed 1 times, most recent failure: Lost task 0.0 in stage 23.0 (TID 17, localhost): java.lang.NegativeArraySizeException
    at org.apache.mahout.math.DenseVector.<init>(DenseVector.java:57)
    at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:78)
    	at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:77)
    	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
    	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
    	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    	at org.apache.spark.scheduler.Task.run(Task.scala:88)
    	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    	at scala.Option.foreach(Option.scala:236)
    	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
    	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1942)
    	at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1003)
    	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    	at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
    	at org.apache.spark.rdd.RDD.reduce(RDD.scala:985)
    	at org.apache.mahout.sparkbindings.SparkEngine$.numNonZeroElementsPerColumn(SparkEngine.scala:86)
    at org.apache.mahout.math.drm.CheckpointedOps.numNonZeroElementsPerColumn(CheckpointedOps.scala:37)
    at org.apache.mahout.math.cf.SimilarityAnalysis$.sampleDownAndBinarize(SimilarityAnalysis.scala:286)
    	at org.apache.mahout.math.cf.SimilarityAnalysis$.cooccurrences(SimilarityAnalysis.scala:66)
    at org.apache.mahout.math.cf.SimilarityAnalysis$.cooccurrencesIDSs(SimilarityAnalysis.scala:141)
    	at a.test.URAlgorithm.calcAll(URAlgorithm.scala:143)
    	at a.test.URAlgorithm.train(URAlgorithm.scala:117)
    	at a.test.URAlgorithm.train(URAlgorithm.scala:102)
    	at io.prediction.controller.P2LAlgorithm.trainBase(P2LAlgorithm.scala:46)
    	at io.prediction.controller.Engine$$anonfun$18.apply(Engine.scala:688)
    at io.prediction.controller.Engine$$anonfun$18.apply(Engine.scala:688)
    	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    	at scala.collection.immutable.List.foreach(List.scala:318)
    	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    	at io.prediction.controller.Engine$.train(Engine.scala:688)
    	at io.prediction.controller.Engine.train(Engine.scala:174)
    	at io.prediction.workflow.CoreWorkflow$.runTrain(CoreWorkflow.scala:65)
    	at io.prediction.workflow.CreateWorkflow$.main(CreateWorkflow.scala:247)
    	at io.prediction.workflow.CreateWorkflow.main(CreateWorkflow.scala)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:497)
    	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NegativeArraySizeException
    at org.apache.mahout.math.DenseVector.<init>(DenseVector.java:57)
    at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:78)
    	at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:77)
    	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
    	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
    	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    	at org.apache.spark.scheduler.Task.run(Task.scala:88)
    	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

How can I configure HBase correctly?

Solution

This happens to me when I try to train without any events for one of the event types configured in engine.json. Check your engine.json to find out which eventNames it expects and make sure you have imported at least one event for each of them.
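
For reference, here is a minimal Python sketch that prints the eventNames your algorithm expects. The file path and the algorithms[0].params.eventNames layout are assumptions based on the universal recommendation template, so adjust them to your own engine.json:

import json

# Minimal sketch: print the eventNames configured for the algorithm.
# Assumption: engine.json sits in the current directory and keeps its
# eventNames under algorithms[0].params, as in the universal
# recommendation template.
with open("engine.json") as f:
    engine = json.load(f)

event_names = engine["algorithms"][0]["params"]["eventNames"]
print("Configured eventNames:", event_names)
print("Import at least one event for each of these before training.")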

If you are following the template tutorial, import your data with:

python examples/import_handmade.py

Make sure you run this command before you train your model.
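
If you prefer to import a couple of events by hand instead of running the tutorial script, a rough sketch with the PredictionIO Python SDK could look like the following. The access key, the event names ("purchase", "view"), and the entity ids are placeholders, not values from the tutorial:

import predictionio

# Sketch of a minimal hand-rolled import: send one event per eventName so
# that training has data for every configured event type.
# ACCESS_KEY and the event/user/item names below are placeholders.
ACCESS_KEY = "YOUR_APP_ACCESS_KEY"

client = predictionio.EventClient(
    access_key=ACCESS_KEY,
    url="http://localhost:7070",
    threads=1,
    qsize=100)

for event_name in ["purchase", "view"]:  # must match eventNames in engine.json
    client.create_event(
        event=event_name,
        entity_type="user",
        entity_id="u-1",
        target_entity_type="item",
        target_entity_id="i-1")

client.close()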

Some commands I use to debug PIO problems that might be helpful:

  • pio export --appid {your_app_id} --output ./data.out dumps all of the app's events as JSON into the ./data.out directory
  • pio import --appid {your_app_id} --input ./data/test_data.json imports the events from test_data.json in JSON format
  • pio status tells you whether everything is up and running
  • pio app data-delete {your_app_name} deletes all data stored for the app (helpful if you want to debug with a subset of the data)

In your case, try exporting the data to make sure nothing is missing, then delete the data, import the new set, and train again.
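
Independently of pio export, you can also ask the event server directly whether at least one event exists for each eventName. The sketch below assumes the event server's GET /events.json debug endpoint is reachable on localhost:7070; the access key and event names are placeholders:

import requests

# Sanity check before retraining: query the event server for one event of
# each configured eventName. ACCESS_KEY and the event names are placeholders;
# the /events.json query parameters are assumed from the event server's
# debug API.
ACCESS_KEY = "YOUR_APP_ACCESS_KEY"
EVENT_SERVER = "http://localhost:7070/events.json"

for event_name in ["purchase", "view"]:
    resp = requests.get(EVENT_SERVER, params={
        "accessKey": ACCESS_KEY,
        "event": event_name,
        "limit": 1,
    })
    found = resp.ok and len(resp.json()) > 0
    print(event_name, "ok" if found else "NO EVENTS FOUND")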

Licensed under: CC-BY-SA with attribution