
Apple's new CoreML framework has a prediction function that requires a CVPixelBuffer. To classify a UIImage, it has to be converted into one.

The conversion code I got from an Apple engineer:

1   // image has been defined earlier
2
3   var pixelbuffer: CVPixelBuffer? = nil
4
5   CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_OneComponent8, nil, &pixelbuffer)
6   CVPixelBufferLockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))
7
8   let colorspace = CGColorSpaceCreateDeviceGray()
9   let bitmapContext = CGContext(data: CVPixelBufferGetBaseAddress(pixelbuffer!), width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelbuffer!), space: colorspace, bitmapInfo: 0)!
10
11  bitmapContext.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))

This solution is fast and works for grayscale images. Depending on the image type, the changes that have to be made are (an RGB version is sketched after this list):

  • Line 5 | kCVPixelFormatType_OneComponent8 to another OSType (kCVPixelFormatType_32ARGB for RGB)
  • Line 8 | colorspace to another CGColorSpace (CGColorSpaceCreateDeviceRGB for RGB)
  • Line 9 | bitsPerComponent to match the bits per pixel of the new format (32 bits per pixel for ARGB, i.e. four 8-bit components, so bitsPerComponent itself stays 8)
  • Line 9 | bitmapInfo to a nonzero CGBitmapInfo value (kCGBitmapByteOrderDefault is the default)
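
Putting those changes together, a minimal sketch of an RGB variant of the snippet above might look like this. It still assumes `image` is a UIImage defined earlier, uses CGImageAlphaInfo.noneSkipFirst as one possible nonzero bitmapInfo, and adds the unlock call that the grayscale snippet omits:

import UIKit
import CoreVideo

var pixelbuffer: CVPixelBuffer? = nil

// Line 5 change: 32-bit ARGB pixel format instead of a single 8-bit gray component
CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, nil, &pixelbuffer)
CVPixelBufferLockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))

// Line 8 change: device RGB color space instead of device gray
let colorspace = CGColorSpaceCreateDeviceRGB()

// Line 9 changes: still 8 bits per component (4 components = 32 bits per pixel)
// and a nonzero bitmapInfo (noneSkipFirst = ARGB with the alpha byte ignored)
let bitmapContext = CGContext(data: CVPixelBufferGetBaseAddress(pixelbuffer!), width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelbuffer!), space: colorspace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)!

bitmapContext.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))

// Unlock once drawing is finished so the buffer can be handed to Core ML
CVPixelBufferUnlockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))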

1 Answer


You can take a look at this tutorial https://www.hackingwithswift.com/whats-new-in-ios-11; the code is in Swift 4:

func buffer(from image: UIImage) -> CVPixelBuffer? {
  // Create a pixel buffer that is compatible with CGImage / CGBitmapContext rendering
  let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
  var pixelBuffer: CVPixelBuffer?
  let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
  guard status == kCVReturnSuccess else {
    return nil
  }

  // Lock the buffer and wrap its backing memory in a CGContext
  CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
  let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

  let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
  let context = CGContext(data: pixelData, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

  // Flip the coordinate system so UIKit drawing doesn't come out upside down
  context?.translateBy(x: 0, y: image.size.height)
  context?.scaleBy(x: 1.0, y: -1.0)

  // Draw the UIImage directly into the pixel buffer's memory
  UIGraphicsPushContext(context!)
  image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
  UIGraphicsPopContext()
  CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

  return pixelBuffer
}
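
As a rough usage sketch, the returned buffer can then be passed to a Core ML model's prediction call. MyImageClassifier below is a hypothetical class generated by Xcode from an .mlmodel file, and its input and output names (image, classLabel) are assumptions that depend on the actual model:

let uiImage = UIImage(named: "example")!

if let pixelBuffer = buffer(from: uiImage) {
    do {
        // MyImageClassifier is a placeholder for your Xcode-generated model class
        let model = MyImageClassifier()
        let output = try model.prediction(image: pixelBuffer)
        print("Predicted label: \(output.classLabel)")
    } catch {
        print("Prediction failed: \(error)")
    }
}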