Interpret XMP-Metadata in ALAssetRepresentation
When a user makes changes (cropping, red-eye removal, ...) to photos in the built-in Photos.app on iOS, the changes are not applied to the fullResolutionImage returned by the corresponding ALAssetRepresentation.

However, the changes are applied to the thumbnail and the fullScreenImage returned by the ALAssetRepresentation. Furthermore, information about the applied changes can be found in the ALAssetRepresentation's metadata dictionary via the key @"AdjustmentXMP".
I would like to apply these changes to the fullResolutionImage myself to preserve consistency. I've found out that on iOS 6+, CIFilter's filterArrayFromSerializedXMP:inputImageExtent:error: can convert this XMP metadata to an array of CIFilters:
ALAssetRepresentation *rep; // the asset's default representation, obtained elsewhere

NSString *xmpString = rep.metadata[@"AdjustmentXMP"];
NSData *xmpData = [xmpString dataUsingEncoding:NSUTF8StringEncoding];

CIImage *image = [CIImage imageWithCGImage:rep.fullResolutionImage];

NSError *error = nil;
NSArray *filterArray = [CIFilter filterArrayFromSerializedXMP:xmpData
                                             inputImageExtent:image.extent
                                                        error:&error];
if (error) {
    NSLog(@"Error during CIFilter creation: %@", [error localizedDescription]);
}

CIContext *context = [CIContext contextWithOptions:nil];
for (CIFilter *filter in filterArray) {
    [filter setValue:image forKey:kCIInputImageKey];
    image = [filter outputImage];
}
However, this works only for some filters (cropping, auto-enhance) but not for others like red-eye removal. In those cases, the CIFilters have no visible effect. Therefore, my questions:

- Is anyone aware of a way to create a red-eye removal CIFilter? (In a way consistent with the Photos.app. The filter obtained with the key kCIImageAutoAdjustRedEye is not enough; e.g., it does not take parameters for the position of the eyes.)
- Is there a possibility to generate and apply these filters under iOS 5?
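For reference, the documented route to red-eye correction is CIImage's autoAdjustmentFiltersWithOptions: (available since iOS 5), which positions the correction via built-in face detection rather than explicit eye coordinates. This is the mechanism behind the kCIImageAutoAdjustRedEye key mentioned above, so it illustrates the baseline rather than a solution. A minimal sketch:

// Ask Core Image for auto-adjustment filters, limited to red-eye correction
// by disabling the general enhancement filters (sketch; iOS 5+).
NSDictionary *options = @{ kCIImageAutoAdjustEnhance : @NO };
NSArray *redEyeFilters = [image autoAdjustmentFiltersWithOptions:options];
for (CIFilter *filter in redEyeFilters) {
    [filter setValue:image forKey:kCIInputImageKey];
    image = [filter outputImage];
}

The following snippet loads the raw data of the asset's full-resolution image into a CGImageSource and reads its pixel dimensions: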
ALAssetRepresentation *representation = [[self assetAtIndex:index] defaultRepresentation];

// Create a buffer to hold the data for the asset's image.
uint8_t *buffer = (uint8_t *)malloc(representation.size);

// Copy the data from the asset into the buffer.
NSUInteger length = [representation getBytes:buffer fromOffset:0 length:representation.size error:nil];
if (length == 0) {
    free(buffer);
    return nil;
}

// Wrap the buffer in an NSData object that frees the buffer when deallocated.
NSData *adata = [[NSData alloc] initWithBytesNoCopy:buffer length:representation.size freeWhenDone:YES];
// Set up a dictionary with a UTI hint. The UTI hint identifies the type
// of image we are dealing with (that is, a JPEG, PNG, or possibly a
// RAW file).
NSDictionary *sourceOptionsDict = [NSDictionary dictionaryWithObjectsAndKeys:
                                   (id)[representation UTI], kCGImageSourceTypeIdentifierHint, nil];

// Create a CGImageSource with the NSData. An image source can
// contain any number of thumbnails and full images.
CGImageSourceRef sourceRef = CGImageSourceCreateWithData((CFDataRef)adata, (CFDictionaryRef)sourceOptionsDict);
[adata release];
// Get a copy of the image properties from the CGImageSourceRef.
CFDictionaryRef imagePropertiesDictionary = CGImageSourceCopyPropertiesAtIndex(sourceRef, 0, NULL);

CFNumberRef imageWidth = (CFNumberRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyPixelWidth);
CFNumberRef imageHeight = (CFNumberRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyPixelHeight);
int w = 0;
int h = 0;
CFNumberGetValue(imageWidth, kCFNumberIntType, &w);
CFNumberGetValue(imageHeight, kCFNumberIntType, &h);

// Clean up memory.
CFRelease(imagePropertiesDictionary);
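From here, the same CGImageSource can supply the full-resolution CGImage itself, which can then be run through the CIFilter chain built from the XMP metadata above. A sketch of the glue code (not from the original post; filterArray refers to the array from the first snippet):

// Decode the full-resolution image from the image source, release the
// source, and apply the XMP-derived filters to it (sketch).
CGImageRef fullImage = CGImageSourceCreateImageAtIndex(sourceRef, 0, NULL);
CFRelease(sourceRef);

CIImage *ciImage = [CIImage imageWithCGImage:fullImage];
CGImageRelease(fullImage);

for (CIFilter *filter in filterArray) {
    [filter setValue:ciImage forKey:kCIInputImageKey];
    ciImage = [filter outputImage];
}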