
I have an application that has been running successfully for several months. Recently it started failing: a method that takes a PySpark DataFrame, groups it, and returns the result now somehow produces a corrupted DataFrame.

Here is sample code for what I am doing when everything fails:

from pyspark.sql.functions import sum, avg

group_by = pyspark_df_in.groupBy("dimension1", "dimension2", "dimension3")
pyspark_df_out = group_by.agg(sum("metric1").alias("MyMetric1"),
                              sum("metric2").alias("MyMetric2"))
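
For context, here is a minimal, self-contained sketch of the same pattern (the data is made up; the column names mirror my real ones):

from pyspark.sql import SparkSession
from pyspark.sql.functions import sum

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Hypothetical stand-in for my real input DataFrame
pyspark_df_in = spark.createDataFrame(
    [("a", "x", "p", 1.0, 10.0), ("a", "x", "p", 2.0, 20.0)],
    ["dimension1", "dimension2", "dimension3", "metric1", "metric2"],
)

group_by = pyspark_df_in.groupBy("dimension1", "dimension2", "dimension3")
pyspark_df_out = group_by.agg(
    sum("metric1").alias("MyMetric1"),
    sum("metric2").alias("MyMetric2"),
)
pyspark_df_out.show()  # expect one row: (a, x, p, 3.0, 30.0)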

If I run print(pyspark_df_in.head(1)), I correctly get the first row of my dataset. After grouping by a few dimensions and running print(pyspark_df_out.head(2)), I get the error below. I get similar errors when trying to do anything else with this new grouped dataset (and I know the group-by should produce data, because I have verified it).
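
Since the stack trace below goes through ArrowConverters, one debugging step I am considering (a sketch, not a confirmed fix; it only applies if the input DataFrame is built from pandas with Arrow enabled) is disabling Arrow-based conversion to see whether the failure is specific to that path:

# Debugging aid only: with Arrow disabled, pandas <-> Spark conversion
# falls back to the slower non-Arrow path. If the group-by then works,
# the corruption is somewhere in the Arrow conversion.
spark.conf.set("spark.sql.execution.arrow.enabled", "false")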

19/10/08 15:13:00 WARN TaskSetManager: Stage 9 contains a task of very large size (282 KB). The maximum recommended task size is 100 KB.
19/10/08 15:13:00 ERROR Executor: Exception in task 1.0 in stage 9.0 (TID 40)
java.util.NoSuchElementException
    at java.util.ArrayList$Itr.next(ArrayList.java:862)
    at org.apache.arrow.vector.VectorLoader.loadBuffers(VectorLoader.java:76)
    at org.apache.arrow.vector.VectorLoader.load(VectorLoader.java:61)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.nextBatch(ArrowConverters.scala:167)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.<init>(ArrowConverters.scala:144)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$.fromBatchIterator(ArrowConverters.scala:143)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:203)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/10/08 15:13:00 ERROR Executor: Exception in task 2.0 in stage 9.0 (TID 41)
java.util.NoSuchElementException
    at java.util.ArrayList$Itr.next(ArrayList.java:862)
    at org.apache.arrow.vector.VectorLoader.loadBuffers(VectorLoader.java:76)
    at org.apache.arrow.vector.VectorLoader.load(VectorLoader.java:61)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.nextBatch(ArrowConverters.scala:167)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.<init>(ArrowConverters.scala:144)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$.fromBatchIterator(ArrowConverters.scala:143)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:203)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 1.0 in stage 9.0 (TID 40, localhost, executor driver): java.util.NoSuchElementException
    at java.util.ArrayList$Itr.next(ArrayList.java:862)
    at org.apache.arrow.vector.VectorLoader.loadBuffers(VectorLoader.java:76)
    at org.apache.arrow.vector.VectorLoader.load(VectorLoader.java:61)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.nextBatch(ArrowConverters.scala:167)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.<init>(ArrowConverters.scala:144)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$.fromBatchIterator(ArrowConverters.scala:143)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:203)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

19/10/08 15:13:00 ERROR TaskSetManager: Task 1 in stage 9.0 failed 1 times; aborting job
19/10/08 15:13:00 WARN TaskSetManager: Lost task 0.0 in stage 9.0 (TID 39, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 3.0 in stage 9.0 (TID 42, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 10.0 in stage 9.0 (TID 49, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 7.0 in stage 9.0 (TID 46, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 6.0 in stage 9.0 (TID 45, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 11.0 in stage 9.0 (TID 50, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 9.0 in stage 9.0 (TID 48, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 8.0 in stage 9.0 (TID 47, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 4.0 in stage 9.0 (TID 43, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 5.0 in stage 9.0 (TID 44, localhost, executor driver): TaskKilled (Stage cancelled)

Information about my environment:

  • Spark context version = 2.4.3
  • Python version = 3.7
  • OS = Linux CentOS 7
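
Because Arrow shows up in the stack trace, I assume the installed pyarrow version may also matter; it can be checked like this (I have not recorded it above):

import sys
import pyspark
import pyarrow  # used by Spark 2.4 for Arrow-based conversion

print(pyspark.__version__)  # 2.4.3
print(sys.version)          # 3.7.x
print(pyarrow.__version__)  # pyarrow version in use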

Has anyone run into this issue, or have any ideas on how I can debug/fix it?

