
I'm running a spark job on EMR but need to create a checkpoint. I tried using s3 but got this error message

17/02/24 14:34:35 ERROR ApplicationMaster: User class threw exception:
java.lang.IllegalArgumentException: Wrong FS: s3://spark-jobs/checkpoint/31d57e4f-dbd8-4a50-ba60-0ab1d5b7b14d/connected-components-e3210fd6/2, expected: hdfs://ip-172-18-13-18.ec2.internal:8020

Here is my sample code:

...
val sparkConf = new SparkConf().setAppName("spark-job")
  .set("spark.default.parallelism", (CPU * 3).toString)
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Member], classOf[GraphVertex], classOf[GraphEdge]))
  .set("spark.dynamicAllocation.enabled", "true")


implicit val sparkSession = SparkSession.builder().config(sparkConf).getOrCreate()

sparkSession.sparkContext.setCheckpointDir("s3://spark-jobs/checkpoint")
....

How can I checkpoint on AWS EMR?


3 Answers


Spark has a now-fixed bug which means you can only checkpoint to the default FS, not to any other one (such as S3). It's been fixed in master; I don't know about backports.

If it makes you feel any better, the way checkpointing works (write, then rename()) is slow enough on object stores that you may find it better to checkpoint locally and then upload the result to S3 yourself.
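A minimal sketch of that workaround, assuming an EMR cluster where HDFS is the default FS (the paths and the s3-dist-cp step are illustrative):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val sparkConf = new SparkConf().setAppName("spark-job")
implicit val sparkSession = SparkSession.builder().config(sparkConf).getOrCreate()

// Checkpoint to the cluster's default FS (HDFS on EMR), which is what Spark
// expects here, instead of S3
sparkSession.sparkContext.setCheckpointDir("hdfs:///spark-jobs/checkpoint")

// ... run the job ...

// Afterwards, copy the checkpoint data to S3 yourself, e.g. from the master node:
//   s3-dist-cp --src hdfs:///spark-jobs/checkpoint --dest s3://spark-jobs/checkpoint
```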

answered 2017-02-24T23:07:03.677

There is a fix in the master branch that allows checkpointing to S3. I was able to build against it and it works, so this should be part of the next release.

answered 2017-02-27T00:57:15.913

Try setting AWS authentication explicitly, for example:

import org.apache.hadoop.conf.Configuration
import org.apache.spark.streaming.StreamingContext

val hadoopConf: Configuration = new Configuration()
hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
hadoopConf.set("fs.s3n.awsAccessKeyId", "id-1")
hadoopConf.set("fs.s3n.awsSecretAccessKey", "secret-key")

// Use StreamingContext.getOrCreate (not sparkContext.getOrCreate): it recreates
// the context from the checkpoint, reading it with the given Hadoop configuration
val ssc = StreamingContext.getOrCreate(
  checkPointDir,
  () => createStreamingContext(checkPointDir, config),
  hadoopConf)
answered 2017-02-24T15:55:13.160