Question

I can't seem to get pixel alignment to work with AVFoundation at AVCaptureSessionPresetPhoto resolution. Pixel alignment works fine at a lower resolution like AVCaptureSessionPreset1280x720.

[Screenshots: correctly aligned AVCaptureSessionPreset1280x720 output vs. misaligned AVCaptureSessionPresetPhoto output]

Specifically, when I uncomment these lines:

    if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
        [captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
    } else {
        NSLog(@"Unable to set resolution to AVCaptureSessionPresetPhoto");
    }

I get a misaligned image, as shown in the second screenshot above. Any comments/suggestions are greatly appreciated.

Here's my code to 1) set up the capture session, 2) implement the delegate callback, and 3) save one streaming image to verify pixel alignment.
1. Capture session setup

    - (void)InitCaptureSession {
        captureSession = [[AVCaptureSession alloc] init];
        if ([captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
            [captureSession setSessionPreset:AVCaptureSessionPreset1280x720];
        } else {
            NSLog(@"Unable to set resolution to AVCaptureSessionPreset1280x720");
        }

    //    if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    //        [captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
    //    } else {
    //        NSLog(@"Unable to set resolution to AVCaptureSessionPresetPhoto");
    //    }

        captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        videoInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:nil];

        AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
        captureOutput.alwaysDiscardsLateVideoFrames = YES;

        // Deliver frames to the delegate on a dedicated serial queue
        dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
        [captureOutput setSampleBufferDelegate:self queue:queue];
        dispatch_release(queue);

        // Request bi-planar YUV frames (plane 0 = Y, plane 1 = interleaved CbCr)
        NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
        NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
        [captureOutput setVideoSettings:videoSettings];

        [captureSession addInput:videoInput];
        [captureSession addOutput:captureOutput];
        [captureOutput release];

        previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
        [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

        CALayer *rootLayer = [previewView layer]; // or self.view.layer
        [rootLayer setMasksToBounds:YES];
        [previewLayer setFrame:[rootLayer bounds]];
        [rootLayer addSublayer:previewLayer];
        [captureSession startRunning];
    }


2. Delegate callback

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        // Save the 100th frame to the photo album to inspect alignment
        static int processedImage = 0;
        processedImage++;
        if (processedImage == 100) {
            [self SaveImage:sampleBuffer];
        }

        [pool drain];
    }

3. Saving one streaming image to verify pixel alignment

    // Create a UIImage from a CMSampleBufferRef and save it to verify pixel alignment
    - (void)SaveImage:(CMSampleBufferRef)sampleBuffer
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        CvSize imageSize;
        imageSize.width  = CVPixelBufferGetWidth(imageBuffer);
        imageSize.height = CVPixelBufferGetHeight(imageBuffer);
        IplImage *image = cvCreateImage(imageSize, IPL_DEPTH_8U, 1);

        // Copy the Y plane into the IplImage
        void *y_channel = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        char *tempPointer = image->imageData;
        memcpy(tempPointer, y_channel, image->imageSize);

        // Wrap the grayscale pixels in a CGImage and save to the photo album
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
        CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                            8, 8, image->width,
                                            colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                            provider, NULL, false, kCGRenderingIntentDefault);
        UIImage *saveImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);
        cvReleaseImage(&image); // safe here: NSData already copied the bytes
        UIImageWriteToSavedPhotosAlbum(saveImage, nil, nil, nil);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }

Solution

Within SaveImage, the fifth argument of CGImageCreate is bytesPerRow, and you should not pass image->width, since the number of bytes per row may differ from the width because of memory alignment. This is the case with AVCaptureSessionPresetPhoto, where the width is 852 (with an iPhone 4 camera) while the number of bytes per row for the first plane (Y) is 864, since rows are 16-byte aligned.
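To see the mismatch for yourself, you can log both values from within the delegate callback (a quick diagnostic sketch; it only assumes the same bi-planar pixel format the question already configures):

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
    size_t bpr   = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    // With AVCaptureSessionPresetPhoto on an iPhone 4 this logs width=852, bytesPerRow=864
    NSLog(@"Y plane: width=%zu bytesPerRow=%zu", width, bpr);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);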

1/ You should get the bytes per row as follows:

size_t bpr = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);

2/ Then account for the bytes per row when copying the pixels into your IplImage:

char *y_channel = (char *) CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);

// row by row copy
for (int i = 0; i < image->height; i++)
  memcpy(tempPointer + i*image->widthStep, y_channel + i*bpr, image->width);

You can keep the [NSData dataWithBytes:image->imageData length:image->imageSize]; as is, since image->imageSize already takes the alignment into account (imageSize = height * widthStep).
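For instance, with the photo-preset frame above (width 852, widthStep 864) and an illustrative height of 640, imageSize = 640 × 864 = 552,960 bytes rather than 640 × 852 = 545,280.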

3/ Finally, pass the IplImage width step as the fifth parameter of CGImageCreate:

CGImageCreate(image->width, image->height, 8, 8, image->widthStep, ...);
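Putting the three changes together, a corrected SaveImage might look like the sketch below (same memory-management style as the original code; the only functional changes are the row-wise copy and the widthStep argument):

    - (void)SaveImage:(CMSampleBufferRef)sampleBuffer
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        CvSize imageSize;
        imageSize.width  = CVPixelBufferGetWidth(imageBuffer);
        imageSize.height = CVPixelBufferGetHeight(imageBuffer);
        IplImage *image = cvCreateImage(imageSize, IPL_DEPTH_8U, 1);

        // 1/ query the real stride of the Y plane instead of assuming it equals the width
        size_t bpr = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
        char *y_channel = (char *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);

        // 2/ copy row by row, respecting the stride on both sides
        char *tempPointer = image->imageData;
        for (int i = 0; i < image->height; i++)
            memcpy(tempPointer + i * image->widthStep, y_channel + i * bpr, image->width);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
        // 3/ pass widthStep, not width, as bytesPerRow
        CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                            8, 8, image->widthStep,
                                            colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                            provider, NULL, false, kCGRenderingIntentDefault);
        UIImage *saveImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);
        cvReleaseImage(&image);
        UIImageWriteToSavedPhotosAlbum(saveImage, nil, nil, nil);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }

If you want to avoid the copy entirely, another option (untested here) is to create an image header with cvCreateImageHeader and point it at the plane with cvSetData(image, y_channel, (int)bpr), keeping in mind that the data is only valid while the pixel buffer stays locked.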
Licensed under: CC-BY-SA with attribution