
I am running a small job on a cluster where each machine has 15 GB of RAM and 8 GB of disk.

The job always deadlocks, and the last error message is:

java.io.IOException: No space left on device
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:345)
    at org.apache.spark.storage.DiskBlockObjectWriter$TimeTrackingOutputStream$$anonfun$write$3.apply$mcV$sp(BlockObjectWriter.scala:86)
    at org.apache.spark.storage.DiskBlockObjectWriter.org$apache$spark$storage$DiskBlockObjectWriter$$callWithTiming(BlockObjectWriter.scala:221)
    at org.apache.spark.storage.DiskBlockObjectWriter$TimeTrackingOutputStream.write(BlockObjectWriter.scala:86)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at org.xerial.snappy.SnappyOutputStream.dumpOutput(SnappyOutputStream.java:300)
    at org.xerial.snappy.SnappyOutputStream.rawWrite(SnappyOutputStream.java:247)
    at org.xerial.snappy.SnappyOutputStream.write(SnappyOutputStream.java:107)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
    at java.io.ObjectOutputStream$BlockDataOutputStream.writeByte(ObjectOutputStream.java:1914)
    at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1575)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:350)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
    at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:195)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4$$anonfun$apply$2.apply(ExternalSorter.scala:751)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4$$anonfun$apply$2.apply(ExternalSorter.scala:750)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4.apply(ExternalSorter.scala:750)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4.apply(ExternalSorter.scala:746)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:746)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

When it happens, the shuffle write size is 0.0 B and the input size is 3.4 MB. I have no idea what operation could quickly eat up all 5 GB of available disk space.

Also, the storage level throughout the job is restricted to MEMORY_ONLY_SERIALIZED, and checkpointing is completely disabled.
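To be concrete, a minimal sketch of what the job's caching looks like (simplified; identifiers are placeholders, and in the Scala API this storage level is the StorageLevel.MEMORY_ONLY_SER constant):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext(new SparkConf().setAppName("small-job"))

    // Cache serialized objects in memory only: this storage level
    // never writes cached blocks to disk, and no checkpoint
    // directory is set anywhere in the job.
    val cached = sc.parallelize(1 to 1000)
      .map(i => (i % 10, i))
      .persist(StorageLevel.MEMORY_ONLY_SER)

    cached.count()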


1 Answer


If you know your shuffle operations fit in memory, you can try setting spark.shuffle.spill to false (otherwise you will get an OOM). At http://spark.apache.org/docs/latest/configuration.html you can see the options for shuffle behavior and other common configuration options.
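A minimal sketch of that setting (Spark 1.x; the application name is a placeholder):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("no-spill-job") // placeholder
      // Keep shuffle data in memory instead of spilling to local disk.
      // Only safe when the shuffled data fits in executor memory;
      // otherwise tasks will fail with OutOfMemoryError.
      .set("spark.shuffle.spill", "false")

    val sc = new SparkContext(conf)

The same flag can also be passed on the command line with spark-submit --conf spark.shuffle.spill=false.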

MEMORY_ONLY_SERIALIZED applies to RDDs; shuffle files are still written to local disk regardless of the RDD storage level.

answered 2015-02-11T09:32:16.853