
I am trying to do an AWS multipart upload from Spark using the AWS SDK, with a file of around 14 GB, but I am getting an out-of-memory error. It fails on this line: val bytes: Array[Byte] = IOUtils.toByteArray(is)

I have tried raising the driver and executor memory to 100 G and tried a few other Spark optimizations.

Here is the code I am trying:

import java.io.{ByteArrayInputStream, InputStream}
import com.amazonaws.services.s3.model.{CannedAccessControlList, ObjectMetadata, PutObjectRequest, SSEAwsKeyManagementParams}
import com.amazonaws.services.s3.transfer.TransferManagerBuilder
import com.amazonaws.util.IOUtils
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val tm = TransferManagerBuilder.standard.withS3Client(s3Client).build
val fs = FileSystem.get(new Configuration())
val filePath = new Path(hdfsFilePath)
val is: InputStream = fs.open(filePath)
val om = new ObjectMetadata()
val bytes: Array[Byte] = IOUtils.toByteArray(is) // OutOfMemoryError is thrown here
om.setContentLength(bytes.length)
val byteArrayInputStream: ByteArrayInputStream = new ByteArrayInputStream(bytes)
val request = new PutObjectRequest(bucketName, keyName, byteArrayInputStream, om)
  .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(kmsKey))
  .withCannedAcl(CannedAccessControlList.BucketOwnerFullControl)
val upload = tm.upload(request)

Here is the exception I get:

java.lang.OutOfMemoryError
                at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
                at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
                at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
                at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
                at com.amazonaws.util.IOUtils.toByteArray(IOUtils.java:45)

1 Answer


PutObjectRequest also accepts a File, which lets TransferManager stream the upload in parts instead of buffering the whole object. Buffering can never work here: a Java byte array is capped at Integer.MAX_VALUE bytes (about 2 GB), so a 14 GB file will not fit no matter how much memory you give the driver.

public PutObjectRequest(String bucketName, String key, File file)

Something like the following should work (though I haven't checked it):

val result = TransferManagerBuilder.standard.withS3Client(s3Client)
  .build
  .upload(
    new PutObjectRequest(
      bucketName,
      keyName,
      new File(hdfsFilePath) // java.io.File takes a String, not a Hadoop Path
    )
    .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(kmsKey))
    .withCannedAcl(CannedAccessControlList.BucketOwnerFullControl)
  )
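Note that java.io.File can only address the local filesystem, so the File-based overload works only if the file is first copied out of HDFS. If the source has to stay on HDFS, a streaming variant of the original code avoids the giant byte array entirely: take the content length from the HDFS file status instead of from buffered bytes, and hand the open stream to TransferManager, which can then upload it in parts. A minimal sketch (untested, reusing s3Client, bucketName, keyName, kmsKey and hdfsFilePath from the question):

import com.amazonaws.services.s3.model.{CannedAccessControlList, ObjectMetadata, PutObjectRequest, SSEAwsKeyManagementParams}
import com.amazonaws.services.s3.transfer.TransferManagerBuilder
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())
val filePath = new Path(hdfsFilePath)

// Read the length from HDFS metadata instead of materializing the bytes.
val om = new ObjectMetadata()
om.setContentLength(fs.getFileStatus(filePath).getLen)

// With a known content length, TransferManager can consume the stream
// directly and split it into multipart chunks without buffering it in full.
val request = new PutObjectRequest(bucketName, keyName, fs.open(filePath), om)
  .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(kmsKey))
  .withCannedAcl(CannedAccessControlList.BucketOwnerFullControl)

val upload = TransferManagerBuilder.standard.withS3Client(s3Client).build.upload(request)
upload.waitForCompletion() // block until all parts have been uploaded

One trade-off to be aware of: with a File, TransferManager can seek and upload parts in parallel, while a bare InputStream is read sequentially, so the stream-based upload is typically slower.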
Answered 2019-06-24T21:35:31.613