AVCaptureSession only gets one frame on iPhone 3GS
25-09-2019
Question
I have some code that sets up a capture session from the camera, processes the frames with OpenCV, and then sets the image property of a UIImageView with a UIImage generated from each frame. When the app starts, the image view's image is nil and no frames show up until I push another view controller onto the stack and then pop it off. Then the image stays frozen until I do that again. NSLog statements show that the callback is being called at approximately the correct frame rate. Any ideas why nothing shows up? I reduced the frame rate all the way down to 2 frames per second. Is it not processing fast enough?
Here is the code:
- (void)setupCaptureSession {
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    output.alwaysDiscardsLateVideoFrames = YES;
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings =
        [NSDictionary dictionaryWithObject:
                          [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, set minFrameDuration.
    // (Here it is capped to 1 fps.)
    output.minFrameDuration = CMTimeMake(1, 1);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    [self setSession:session];
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace) {
        NSLog(@"CGColorSpaceCreateDeviceRGB failure");
        // Unlock before bailing out so the buffer is not left locked
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        return nil;
    }

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress,
                                                              bufferSize, NULL);

    // Create a bitmap image from data supplied by our data provider
    CGImageRef cgImage =
        CGImageCreate(width,
                      height,
                      8,
                      32,
                      bytesPerRow,
                      colorSpace,
                      kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                      provider,
                      NULL,
                      true,
                      kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    // Create and return an image object representing the specified Quartz image
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    [self.delegate cameraCaptureGotFrame:image];
}
Solution
This could be a threading issue. Try:
[self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:) withObject:image waitUntilDone:NO];
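Equivalently, since the sample buffer delegate already runs on a GCD queue, you could hop back to the main queue with dispatch_async (available on iOS 4, which AVCaptureSession requires anyway). A minimal sketch of the callback rewritten that way; note that the copied block retains image until it has run, which also sidesteps the retain/release dance described below:

// Sketch: dispatch to the main thread with GCD instead of
// performSelectorOnMainThread. The block retains `image` until it runs.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit may only be touched on the main thread
        [self.delegate cameraCaptureGotFrame:image];
    });
}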
Other tips
This looks like a threading issue. You cannot update views from any thread other than the main thread. Your setup is fine, but the delegate function captureOutput:didOutputSampleBuffer: is called on a secondary thread, so you cannot set the image view's image from there. Art Gillespie's answer is one way to solve it, if you can get rid of the bad access error.
Another way is to modify the sample buffer in captureOutput:didOutputSampleBuffer: and have it displayed by adding an AVCaptureVideoPreviewLayer instance to your capture session, as sketched below. That is certainly the preferred way if you only modify a small portion of the image, such as highlighting something.
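A minimal sketch of the preview-layer approach, assuming the view controller keeps the session property shown earlier and hosts the layer in its own view (these ivars are illustrative, not from the question):

// Attach a preview layer so the session renders frames on screen itself
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:self.session];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];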
BTW: your bad access error probably occurs because you don't retain the created image on the secondary thread, so it gets freed before cameraCaptureGotFrame: is called on the main thread.
Update: to properly retain the image, increment its reference count in captureOutput:didOutputSampleBuffer: (on the secondary thread) and decrement it in cameraCaptureGotFrame: (on the main thread):
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // Increment the ref count so the secondary thread's autorelease
    // pool cannot free the image before the main thread has used it
    [image retain];
    [self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:)
                                    withObject:image
                                 waitUntilDone:NO];
}
- (void)cameraCaptureGotFrame:(UIImage *)image
{
    // Whatever this function does, e.g.:
    imageView.image = image;

    // Decrement the ref count taken in the capture callback
    [image release];
}
If you don't increment the reference count, the image is freed by the autorelease pool of the secondary thread before cameraCaptureGotFrame: is called on the main thread. If you don't decrement it on the main thread, the images are never freed and you run out of memory within a few seconds.
Do you do a setNeedsDisplay on the UIImageView after each new image property update?
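That is, something along these lines after each frame arrives on the main thread (imageView as in the code above):

// On the main thread, after receiving a new frame
imageView.image = image;
[imageView setNeedsDisplay];  // force a redraw, per the suggestion above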
Edit:
Where and when do you update the image property of your image view?