Question

I am taking a photo using AVCaptureStillImageOutput, AVCaptureSession, and AVCaptureVideoPreviewLayer, and displaying it in a UIImageView. This works properly.

But after displaying the image, I detect faces in it using OpenCV.

The detection works, but it returns a rotated image. I don't use any code to rotate the image; it gets rotated automatically.

I want to stop this rotation.

Here is my code:

+ (UIImage *) opencvFaceDetect:(UIImage *)originalImage  {
    cvSetErrMode(CV_ErrModeParent);

    IplImage *image = [self CreateIplImageFromUIImage:originalImage];

    // Scaling down
    IplImage *small_image = cvCreateImage(cvSize(image->width/2,image->height/2), IPL_DEPTH_8U, 3);
    cvPyrDown(image, small_image, CV_GAUSSIAN_5x5);
    int scale = 2;

    // Load XML
    NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"];
    CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL);
    CvMemStorage* storage = cvCreateMemStorage(0);

    // Detect faces and draw rectangle on them
    CvSeq* faces = cvHaarDetectObjects(small_image, cascade, storage, 1.2f, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(20, 20), cvSize(100, 100));
    cvReleaseImage(&image);
    cvReleaseImage(&small_image);

    NSLog(@"found %d faces in image", faces->total);

    // Create canvas to show the results
    CGImageRef imageRef = originalImage.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef contextRef = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, originalImage.size.width * 4,
                                                    colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 45, originalImage.size.width, originalImage.size.height), imageRef);

    CGContextSetLineWidth(contextRef, 4);
    CGContextSetRGBStrokeColor(contextRef, 0.0, 0.0, 1.0, 0.5);

    // Draw results on the image
    for(int i = 0; i < faces->total; i++) {
        NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

        // Calc the rect of faces
        CvRect cvrect = *(CvRect*)cvGetSeqElem(faces, i);
        CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef, 
                                CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale));
        CGContextStrokeRect(contextRef, face_rect);

        [pool release];
    }

    CGImageRef resultRef = CGBitmapContextCreateImage(contextRef);
    UIImage *returnImage = [UIImage imageWithCGImage:resultRef];
    CGImageRelease(resultRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);

    cvReleaseMemStorage(&storage);
    cvReleaseHaarClassifierCascade(&cascade);

    return returnImage;
}

Here are the screenshots:

1) Image before the face detection method is called

2) Image after the face detection method is called


Solution

Are you using UIImageJPEGRepresentation to get these images out? If so, then you might be interested in this:

http://blog.logichigh.com/2008/06/05/uiimage-fix/

--> "UIImageJPEGRepresentation(), the easiest (only?) way to convert a UIImage to a JPEG uses the underlying CGImage, and ignores your UIImage imageOrientation, so regardless of camera position when the picture was taken the exported JPEG will always be oriented in landscape or right mode, ending up with pictures that need rotation."

This hints that even if you are not using that function, there is an implicit imageOrientation to consider whenever you work with a UIImage's underlying pixel data.
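In other words, the camera stores the pixel buffer in a fixed (landscape) orientation and records the device rotation only in the UIImage's imageOrientation property; anything that reads the raw CGImage (like your CreateIplImageFromUIImage: conversion) will see the unrotated pixels. A common workaround is to normalize the image first by redrawing it, since -drawInRect: honors imageOrientation. A minimal sketch (normalizedImage: is a hypothetical helper name, not part of your code):

```objectivec
// Hypothetical helper: redraws a UIImage so its underlying CGImage pixels
// match the orientation shown on screen. Call it on the captured photo
// before passing it to opencvFaceDetect:, so OpenCV sees upright pixels.
+ (UIImage *)normalizedImage:(UIImage *)image {
    // Already upright: the CGImage needs no correction.
    if (image.imageOrientation == UIImageOrientationUp) {
        return image;
    }
    // -drawInRect: applies imageOrientation while rendering, so the
    // resulting bitmap is physically rotated to the upright position.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}
```

The normalized image's imageOrientation is UIImageOrientationUp, so the CGImage and the on-screen appearance finally agree, and the face rectangles will line up as well.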

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow