
I'm trying to capture depth data using the .builtInDualCamera. I've been following Apple's AVCamFilter sample (which works for me). From what I understood from the WWDC presentation, all you need to do is configure an AVCapturePhotoOutput with depth data delivery enabled and then enable it on the AVCapturePhotoSettings.

When I run my app, I get a generic error (see below).

  • If I remove the settings for capturing depth data, no error occurs and the photo is captured.
  • If I keep the depth data settings but set a breakpoint in the AVCapturePhotoCaptureDelegate method photoOutput(_:didCapturePhotoFor:), then when I take a picture, the app stops at that breakpoint, and I immediately continue, the photo + depth data is captured (no error occurs).
  • The iPhone I'm using has a dual camera, and photoOutput.isDepthDataDeliverySupported is true.
  • iPhone 12 Pro Max; iOS 15.2
  • Xcode 13.2.1
  • macOS 11.6.2

I don't know what I'm missing.

Here is the code where I set up the inputs and outputs on the AVCaptureSession:

guard let videoDevice = discoverDevice(from: [.builtInDualCamera]) else {
    fatalError("No dual camera.")
}
guard let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice) else {
    fatalError("Can't create video input.")
}

self.session.beginConfiguration()

self.session.sessionPreset = .photo

guard self.session.canAddInput(videoDeviceInput) else {
    fatalError("Can't add video input.")
}
self.session.addInput(videoDeviceInput)

guard self.session.canAddOutput(photoOutput) else {
    fatalError("Can't add photo output.")
}
self.session.addOutput(photoOutput)

photoOutput.isHighResolutionCaptureEnabled = true

if photoOutput.isDepthDataDeliverySupported {
    photoOutput.isDepthDataDeliveryEnabled = true
} else {
    fatalError("DepthData is not supported by this camera configuration")
}

self.session.commitConfiguration()

self.videoDeviceInput = videoDeviceInput

Here is the code I call when I want to take the picture (taken from the AVCamFilter sample):

sessionQueue.async {
    let photoSettings = AVCapturePhotoSettings(format: [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)])
    if self.photoOutput.isDepthDataDeliveryEnabled {
        photoSettings.isDepthDataDeliveryEnabled = true
        photoSettings.embedsDepthDataInPhoto = false
    }
        
    self.photoOutput.capturePhoto(with: photoSettings, delegate: self)
}

Here is the error I get in the AVCapturePhotoCaptureDelegate method photoOutput(_:didFinishProcessingPhoto:error:):

Error capturing photo: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-16800), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x2829f9650 {Error Domain=NSOSStatusErrorDomain Code=-16800 "(null)"}}
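
For reference, the delegate callback where this error arrives looks roughly like the following. This is just a minimal sketch (CameraViewController stands in for my actual class, and the prints are illustrative); it shows where the error is reported and where the depth data would be read once capture succeeds:

extension CameraViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        if let error = error {
            // This is where the -11800 error above is reported.
            print("Error capturing photo: \(error)")
            return
        }
        // With depth delivery enabled, the depth map arrives on the AVCapturePhoto.
        if let depthData = photo.depthData {
            print("Depth map width: \(CVPixelBufferGetWidth(depthData.depthDataMap))")
        }
        if let pixelBuffer = photo.pixelBuffer {
            print("Pixel buffer width: \(CVPixelBufferGetWidth(pixelBuffer))")
        }
    }
}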

1 Answer


I think I figured it out. It has to do with the frame rate of the input device versus the rate at which depth data can be captured. While configuring the inputs and outputs, I call this function before committing the session changes:

    private func capFrameRate(videoDevice: AVCaptureDevice) {
        if self.photoOutput.isDepthDataDeliverySupported {
            // Cap the video framerate at the max depth framerate.
            if let frameDuration = videoDevice.activeDepthDataFormat?.videoSupportedFrameRateRanges.first?.minFrameDuration {
                do {
                    try videoDevice.lockForConfiguration()
                    videoDevice.activeVideoMinFrameDuration = frameDuration
                    videoDevice.unlockForConfiguration()
                } catch {
                    print("Could not lock device for configuration: \(error)")
                }
            }
        }
    }
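
For context, the call sits in the session-configuration code from the question, after depth delivery is enabled and before commitConfiguration(). Sketched below; the surrounding lines are the same setup as above, only the capFrameRate call is new:

if photoOutput.isDepthDataDeliverySupported {
    photoOutput.isDepthDataDeliveryEnabled = true
} else {
    fatalError("DepthData is not supported by this camera configuration")
}

// New: cap the video frame rate to the depth data rate before committing.
capFrameRate(videoDevice: videoDevice)

self.session.commitConfiguration()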

Once I did that, I was able to get both the pixel buffer and the depth data for the image.

The breakpoint behavior during capture now makes sense(-ish): the pause must have acted as a kind of frame-rate synchronization. Whether or not that's what actually happened, setting activeVideoMinFrameDuration to the depth data rate was the key.

Answered 2021-12-29T21:13:37.307