How to turn a CVPixelBuffer into a UIImage?
I'm having some problems getting a UIImage from a CVPixelBuffer. This is what I am trying:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
if (attachments)
    CFRelease(attachments);

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if (width && height) { // test to make sure we have valid dimensions
    UIImage *image = [[UIImage alloc] initWithCIImage:ciImage];

    UIImageView *lv = [[UIImageView alloc] initWithFrame:self.view.frame];
    lv.contentMode = UIViewContentModeScaleAspectFill;
    self.lockedView = lv;
    [lv release];
    self.lockedView.image = image;
    [image release];
}
[ciImage release];
height and width are both correctly set to the resolution of the camera. image is created, but it seems to be black (or maybe transparent?). I can't quite understand where the problem is. Any ideas would be appreciated.
First of all, the obvious stuff that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either of the cameras into an independent view, if that's where the data is coming from and you've no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is directly connected to the AVCaptureSession and updates itself.
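If it helps, here's a minimal sketch (in Swift, with a hypothetical CameraPreviewViewController; a configured, running AVCaptureSession is assumed to come from elsewhere) of what that wiring looks like:

import AVFoundation
import UIKit

final class CameraPreviewViewController: UIViewController {
    // Assumption: a configured, running capture session is supplied from elsewhere.
    var captureSession: AVCaptureSession?

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let session = captureSession else { return }

        // The preview layer pulls frames from the session itself; no manual pushing required.
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)
    }
}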
I have to admit to lacking confidence about the central question. There's a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterise it.
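As a small illustration (a Swift sketch, not from the question): building a filter chain on a CIImage computes nothing, and pixels only appear once a CIContext is asked to render a specific rectangle.

import CoreImage

// Building the recipe: nothing is rasterised at this point.
let source = CIImage(cvPixelBuffer: pixelBuffer) // `pixelBuffer` assumed to come from elsewhere
let recipe = source
    .applyingFilter("CISepiaTone", parameters: [kCIInputIntensityKey: 0.8])
    .transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))

// Only here does the system actually compute pixels, and only for the rect we ask for.
let context = CIContext()
let rendered = context.createCGImage(recipe, from: recipe.extent)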
UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so then I can't seem to find where you'd supply the appropriate output rectangle.
I've had success just dodging around the issue with:
// Wrap the pixel buffer; at this point the CIImage is still only a recipe.
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

// Render the recipe into a CGImage over an explicitly specified rectangle.
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
                   createCGImage:ciImage
                   fromRect:CGRectMake(0, 0,
                                       CVPixelBufferGetWidth(pixelBuffer),
                                       CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
CGImageRelease(videoImage);
That gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.
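For what it's worth, one possible CGImage-free route (a sketch only, and not necessarily any better) is to draw the CIImage-backed UIImage into a bitmap context and let UIKit do the rasterisation:

import CoreImage
import UIKit

// Sketch: drawing forces the CIImage recipe to be rendered into the renderer's bitmap.
let ciImage = CIImage(cvPixelBuffer: pixelBuffer) // `pixelBuffer` assumed to come from elsewhere
let size = CGSize(width: CVPixelBufferGetWidth(pixelBuffer),
                  height: CVPixelBufferGetHeight(pixelBuffer))
let rasterised = UIGraphicsImageRenderer(size: size).image { _ in
    UIImage(ciImage: ciImage).draw(in: CGRect(origin: .zero, size: size))
}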
Try this one in Swift.
Swift 4.2:
import UIKit
import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, nil, &cgImage)

        guard let cgImage = cgImage else {
            return nil
        }

        self.init(cgImage: cgImage)
    }
}
Swift 5:
import UIKit
import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)

        guard let cgImage = cgImage else {
            return nil
        }

        self.init(cgImage: cgImage)
    }
}
Note: This only works for RGB pixel buffers, not for grayscale.
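A brief usage sketch (the handle helper is hypothetical; the sample buffer is assumed to come from something like an AVCaptureVideoDataOutput delegate callback):

import CoreMedia
import UIKit

// Hypothetical helper: pull the pixel buffer out of a sample buffer and convert it.
func handle(_ sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    return UIImage(pixelBuffer: pixelBuffer) // uses the initializer defined above
}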