Retrieving a pixel alpha value for a UIImage

Solution 1:

If all you want is the alpha value of a single point, all you need is an alpha-only single-point buffer. I believe this should suffice:

// assume im is a UIImage, point is the CGPoint to test
CGImageRef cgim = im.CGImage;
unsigned char pixel[1] = {0};
// 1x1, 8-bit, alpha-only bitmap context backed by our single-byte buffer
CGContextRef context = CGBitmapContextCreate(pixel,
                                             1, 1, 8, 1, NULL,
                                             kCGImageAlphaOnly);
// draw the image offset so the pixel of interest lands at (0, 0)
CGContextDrawImage(context,
                   CGRectMake(-point.x,
                              -point.y,
                              CGImageGetWidth(cgim),
                              CGImageGetHeight(cgim)),
                   cgim);
CGContextRelease(context);
CGFloat alpha = pixel[0]/255.0;
BOOL transparent = alpha < 0.01;

If the UIImage doesn't have to be recreated every time, this is very efficient.

EDIT December 8 2011:

A commenter points out that under certain circumstances the image may be drawn flipped. I've been thinking about this, and I'm a little sorry I didn't write the code against the UIImage directly, like this (I suspect that at the time I didn't yet understand UIGraphicsPushContext):

// assume im is a UIImage, point is the CGPoint to test
unsigned char pixel[1] = {0};
CGContextRef context = CGBitmapContextCreate(pixel, 
                                             1, 1, 8, 1, NULL,
                                             kCGImageAlphaOnly);
// route UIKit drawing into our bitmap context, draw, then restore
UIGraphicsPushContext(context);
[im drawAtPoint:CGPointMake(-point.x, -point.y)];
UIGraphicsPopContext();
CGContextRelease(context);
CGFloat alpha = pixel[0]/255.0;
BOOL transparent = alpha < 0.01;

I think that would have solved the flipping issue.

Solution 2:

Yes, CGContexts have their y-axis pointing up, while in UIKit it points down. See the docs.
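
If you're drawing into a plain CGBitmapContext and want UIKit-style coordinates, the usual fix is to flip the context before drawing. A minimal sketch, assuming context is your CGContextRef and height is the height of the area you're drawing into:

// flip the CTM so the y-axis matches UIKit (origin at top-left)
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0, -1.0);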

Edit after reading code:

You also want to set the blend mode to copy before drawing the image, since you want the image's alpha value rather than whatever was in the context's buffer before:

CGContextSetBlendMode(context, kCGBlendModeCopy);

Edit after thinking:

You could do the lookup much more efficiently by building the smallest possible CGBitmapContext (1x1 pixel? maybe 8x8? have a try) and translating the context to your desired position before drawing:

CGContextTranslateCTM(context, xOffset, yOffset);
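
Putting those two edits together with Solution 1's one-pixel buffer, a minimal sketch of the whole lookup might look like this (same assumptions as Solution 1: im is the UIImage, point is the CGPoint to test; the flipping caveat from Solution 1's edit still applies):

unsigned char pixel[1] = {0};
CGContextRef context = CGBitmapContextCreate(pixel,
                                             1, 1, 8, 1, NULL,
                                             kCGImageAlphaOnly);
// copy the image's alpha instead of blending with the zeroed buffer
CGContextSetBlendMode(context, kCGBlendModeCopy);
// shift the context so the pixel of interest lands at (0, 0)
CGContextTranslateCTM(context, -point.x, -point.y);
CGContextDrawImage(context,
                   CGRectMake(0, 0,
                              CGImageGetWidth(im.CGImage),
                              CGImageGetHeight(im.CGImage)),
                   im.CGImage);
CGContextRelease(context);
CGFloat alpha = pixel[0] / 255.0;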

Solution 3:

I found this question/answer while researching how to do collision detection between sprites using the alpha value of the image data, rather than a rectangular bounding box. The context is an iPhone app... I am trying to do the 1-pixel draw suggested above, and I am still having problems getting it to work, but I found an easier way of creating a CGContextRef using data from the image itself, together with the helper functions here:

// assume image is a UIImage; allocate a buffer big enough for its bitmap
CGImageRef cgiRef = image.CGImage;
unsigned char *rawData = calloc(CGImageGetHeight(cgiRef) *
                                CGImageGetBytesPerRow(cgiRef), 1);
CGContextRef context = CGBitmapContextCreate(
                 rawData,
                 CGImageGetWidth(cgiRef),
                 CGImageGetHeight(cgiRef),
                 CGImageGetBitsPerComponent(cgiRef),
                 CGImageGetBytesPerRow(cgiRef),
                 CGImageGetColorSpace(cgiRef),
                 kCGImageAlphaPremultipliedLast);
// free(rawData) once you're finished with the context

This bypasses all the ugly hardcoding in the sample above. The last value can be retrieved by calling CGImageGetBitmapInfo(), but in my case it returned a value from the image that caused an error in CGBitmapContextCreate. Only certain combinations are valid, as documented here: http://developer.apple.com/qa/qa2001/qa1037.html
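
For illustration, a hedged sketch of guarding against that error, reusing cgiRef and rawData from above (CGBitmapContextCreate returns NULL when the combination is unsupported):

CGBitmapInfo info = CGImageGetBitmapInfo(cgiRef);
CGContextRef infoContext = CGBitmapContextCreate(
                 rawData,
                 CGImageGetWidth(cgiRef),
                 CGImageGetHeight(cgiRef),
                 CGImageGetBitsPerComponent(cgiRef),
                 CGImageGetBytesPerRow(cgiRef),
                 CGImageGetColorSpace(cgiRef),
                 info);
if (infoContext == NULL) {
    // unsupported combination; fall back to kCGImageAlphaPremultipliedLast
    // as in the block above
}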

Hope this is helpful!

Solution 4:

Do I need to translate the coordinates between UIKit and Core Graphics, i.e., is the y-axis inverted?

It's possible. In a CGImage, the pixel data is in English reading order: left-to-right, top-to-bottom. So the first pixel in the array is the top-left one; the second is the next one to its right on the top row; and so on.

Assuming you have that right, you should also make sure you're looking at the correct component within a pixel. Perhaps you're expecting RGBA but asking for ARGB, or vice versa. Or, maybe you have the byte order wrong (I don't know what the iPhone's endianness is).
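
As a concrete illustration, a sketch assuming an 8-bit-per-component RGBA buffer like the one in Solution 3 (x, y, bytesPerRow, and rawData stand in for your own values):

// 8-bit RGBA: 4 bytes per pixel, alpha is the last component (index 3);
// for ARGB it would be at index 0 instead
size_t bytesPerPixel = 4;
size_t offset = (y * bytesPerRow) + (x * bytesPerPixel);
unsigned char alphaByte = rawData[offset + 3];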

Or have I misunderstood premultiplied alpha values?

It doesn't sound like it.

For those who don't know: Premultiplied means that the color components are premultiplied by the alpha; the alpha component is the same whether the color components are premultiplied by it or not. You can reverse this (unpremultiply) by dividing the color components by the alpha.
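
For example, 50%-opaque pure red stored premultiplied comes out to roughly (127, 0, 0, 127) as RGBA bytes; dividing each color byte by alpha/255 recovers the straight color. A minimal sketch:

// unpremultiply one 8-bit RGBA pixel; guard against division by zero
unsigned char r = 127, g = 0, b = 0, a = 127;   // premultiplied 50% red
if (a > 0) {
    r = (unsigned char)((r * 255) / a);         // 127 * 255 / 127 == 255
    g = (unsigned char)((g * 255) / a);
    b = (unsigned char)((b * 255) / a);
}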