I have a Profile view with a UIImageView where a user can change their picture. I'm keeping both the old and the new image so I can compare them. I would like to know if they're the same, because if they are I don't need to push the new one to my server.

I tried this but it doesn't really work:

+ (NSData*)returnImageAsData:(UIImage *)anImage {
    // Get an NSData representation of our images. We use JPEG for the larger image
    // for better compression and PNG for the thumbnail to keep the corner radius transparency
    float i_width = 400.0f;
    float oldWidth = anImage.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = anImage.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [anImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *imageData = UIImageJPEGRepresentation(newImage, 0.5f);

    return imageData;
}

+ (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{

    NSData *data1 = [self returnImageAsData:image1];
    NSData *data2 = [self returnImageAsData:image2];

    return [data1 isEqual:data2];
}

Any idea how to check whether two images are the same?

End result:

+ (NSData *)returnImageAsData:(UIImage *)anImage {
    // Get an NSData representation of the image. JPEG is used for better
    // compression; the resizing code from the first attempt is gone.
    NSData *imageData = UIImageJPEGRepresentation(anImage, 0.5f);

    return imageData;
}

+ (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{
    CGSize size1 = image1.size;
    CGSize size2 = image2.size;

    // If the dimensions differ, the images can't be identical.
    if (!CGSizeEqualToSize(size1, size2)) {
        return NO;
    }

    NSData *data1 = UIImagePNGRepresentation(image1);
    NSData *data2 = UIImagePNGRepresentation(image2);

    return [data1 isEqual:data2];
}

Solution

If you want to see whether the two images are pixel-identical, it should be pretty easy.

Saving the images to JPEG is likely to cause problems because JPEG is a lossy format.

As others have suggested, first make sure the height and width of both images match. If not, stop. The images are different.

If those match, use a function like UIImagePNGRepresentation() to convert each image to a lossless data format. Then use isEqual: on the NSData objects you get back.

If you want to check if the images LOOK the same (like 2 photographs of the same scene), you have a much, much harder problem on your hands. You might have to resort to a package like OpenCV to compare the images.

EDIT: I don't know if UIImage has a custom implementation of isEqual that you can use to compare two images. I'd try that first.

Looking at the docs, UIImage also conforms to NSCoding, so you could use NSKeyedArchiver's archivedDataWithRootObject: to convert the images to data. That would probably be faster than PNG-encoding them.
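A minimal sketch of that idea, given two UIImage values image1 and image2 (note that archivedDataWithRootObject: is deprecated on modern iOS in favor of the secure-coding variant):

// Archive both images via NSCoding and compare the raw archives.
// This only detects byte-identical archives, not visually similar images.
NSData *archived1 = [NSKeyedArchiver archivedDataWithRootObject:image1];
NSData *archived2 = [NSKeyedArchiver archivedDataWithRootObject:image2];
BOOL identical = [archived1 isEqual:archived2];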

Finally, you could get a pointer to the images' underlying CGImage objects, get their data providers, and compare their byte-streams that way.
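A hedged sketch of that last approach; it assumes both images have the same dimensions, bit depth, and row layout, since otherwise the raw byte streams aren't directly comparable:

// The providers come from a Get function, so we don't own or release them.
CGDataProviderRef provider1 = CGImageGetDataProvider(image1.CGImage);
CGDataProviderRef provider2 = CGImageGetDataProvider(image2.CGImage);
// Copy the raw pixel bytes out of each image's data provider.
CFDataRef pixels1 = CGDataProviderCopyData(provider1);
CFDataRef pixels2 = CGDataProviderCopyData(provider2);
// CFEqual compares both the lengths and the bytes of the CFData objects.
BOOL identical = pixels1 && pixels2 && CFEqual(pixels1, pixels2);
if (pixels1) CFRelease(pixels1);
if (pixels2) CFRelease(pixels2);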

Other tips

Step 1 is to shrink the image. Step 2 is to simplify the colors. Step 3 is to calculate the average. Step 4 is to compare each pixel's gray value against that average. Step 5 is to build the hash value.

Step by step: the first step is to shrink the image. Reduce it to 8x8, for a total of 64 pixels. This step removes the details of the picture and retains only the structure, brightness, and other basic information, discarding differences caused by size and aspect ratio.

-(UIImage *)OriginImage:(UIImage *)image scaleToSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);

    // Redraw the image into the smaller context.
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];

    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
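
Calling it for the 8x8 shrink from step 1 might look like this (imageA and imageB are just illustrative names):

UIImage *small1 = [self OriginImage:imageA scaleToSize:CGSizeMake(8, 8)];
UIImage *small2 = [self OriginImage:imageB scaleToSize:CGSizeMake(8, 8)];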

Step 2, simplify the colors. Convert the shrunken picture to grayscale, so that each of the 64 pixels carries only a single gray value.

-(UIImage *)getGrayImage:(UIImage *)sourceImage
{
    int width = sourceImage.size.width;
    int height = sourceImage.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 0, colorSpace, kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), sourceImage.CGImage);
    // Copy the context into a CGImage and release the CGImage afterwards to avoid leaking it.
    CGImageRef grayImageRef = CGBitmapContextCreateImage(context);
    UIImage *grayImage = [UIImage imageWithCGImage:grayImageRef];
    CGImageRelease(grayImageRef);
    CGContextRelease(context);
    return grayImage;
}

Step 3 is to calculate the average: compute the mean gray value of all 64 pixels.

-(unsigned char *)grayscalePixels:(UIImage *)image
{
    // The number of bits per pixel; for grayscale, 1 byte = 8 bits
#define BITS_PER_PIXEL 8
    // The number of bits per component; the same as bitsPerPixel because one byte represents a pixel
#define BITS_PER_COMPONENT (BITS_PER_PIXEL)
    // The number of bytes per pixel: bitsPerPixel divided by bits per component (1 in this case)
#define BYTES_PER_PIXEL (BITS_PER_PIXEL/BITS_PER_COMPONENT)

    // Define the color space (in this case it's gray)
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();

    // The number of bytes per row is the width times the number of bytes per pixel
    size_t bytesPerRow = image.size.width * BYTES_PER_PIXEL;
    // Allocate the memory backing the bitmap context; the caller is responsible for free()ing the returned buffer
    unsigned char *bitmapData = (unsigned char *)malloc(bytesPerRow * image.size.height);

    // Create the bitmap context; alpha is set to none to tell the bitmap we don't care about alpha values
    CGContextRef context = CGBitmapContextCreate(bitmapData, image.size.width, image.size.height, BITS_PER_COMPONENT, bytesPerRow, colourSpace, kCGImageAlphaNone);

    // We are done with the color space now, so no point in keeping it around
    CGColorSpaceRelease(colourSpace);

    // Create a CGRect covering the pixels we want
    CGRect rect = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    // Draw the image into the bitmap context using the rectangle as the bounds
    CGContextDrawImage(context, rect, image.CGImage);
    // Obtain the pixel data from the bitmap context (this is the malloc'd buffer above, so it survives the context)
    unsigned char *pixelData = (unsigned char *)CGBitmapContextGetData(context);

    // Release the bitmap context because we are done using it
    CGContextRelease(context);

    return pixelData;
#undef BITS_PER_PIXEL
#undef BITS_PER_COMPONENT
#undef BYTES_PER_PIXEL
}

Step 4 compares each pixel's gray value with the average; the return value is a string of 0s and 1s:

-(NSString *)myHash:(UIImage *)img
{
    unsigned char *pixelData = [self grayscalePixels:img];

    // Average the gray values of all 64 pixels (the image is 8x8).
    int total = 0;
    int ave = 0;
    for (int i = 0; i < img.size.height; i++) {
        for (int j = 0; j < img.size.width; j++) {
            total += (int)pixelData[(i * ((int)img.size.width)) + j];
        }
    }
    ave = total / 64;

    // Emit "1" for pixels at or above the average, "0" for those below.
    NSMutableString *result = [[NSMutableString alloc] init];
    for (int i = 0; i < img.size.height; i++) {
        for (int j = 0; j < img.size.width; j++) {
            int a = (int)pixelData[(i * ((int)img.size.width)) + j];
            if (a >= ave) {
                [result appendString:@"1"];
            } else {
                [result appendString:@"0"];
            }
        }
    }

    // grayscalePixels: hands ownership of the malloc'd buffer to us.
    free(pixelData);
    return result;
}

Step 5, calculate the hash value. Combine the results of the previous comparisons into a 64-bit value; this is the fingerprint of the picture. The order of combination is not important, as long as every picture uses the same order. Once you have the fingerprints, you can compare two pictures by counting how many of the 64 bits differ. In theory, this is equivalent to calculating the Hamming distance. If no more than 5 bits differ, the two pictures are very similar; if more than 10 differ, they are two different pictures.

0111111011110011111100111110000111000001100000011110001101111010
1111111111110001111000011110000111000001100000011110000111111011
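
Given two such fingerprints, a minimal sketch of the comparison, built on the methods above (the hammingDistance:with: helper is a hypothetical name):

// Count how many positions differ between two equal-length 0/1 hash strings.
-(NSInteger)hammingDistance:(NSString *)hash1 with:(NSString *)hash2
{
    NSInteger distance = 0;
    NSUInteger length = MIN(hash1.length, hash2.length);
    for (NSUInteger i = 0; i < length; i++) {
        if ([hash1 characterAtIndex:i] != [hash2 characterAtIndex:i]) {
            distance++;
        }
    }
    return distance;
}

Used end to end with the scaled images from step 1:

NSString *hash1 = [self myHash:[self getGrayImage:small1]];
NSString *hash2 = [self myHash:[self getGrayImage:small2]];
BOOL similar = ([self hammingDistance:hash1 with:hash2] <= 5);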
