Question

I'm capturing live video from the iPhone's back camera with AVCaptureSession, applying some filters with Core Image, and then trying to display the resulting video with OpenGL ES. Most of the code comes from the WWDC 2012 session 'Core Image Techniques'.

Displaying the output of the filter chain with [UIImage imageWithCIImage:...], or by creating a CGImageRef for every frame, works fine. However, when I try to display it with OpenGL ES, all I get is a black screen.
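
For reference, this is roughly what I mean by the CGImageRef path (this variant works); _cpuContext here is a plain CPU-backed CIContext created with [CIContext contextWithOptions:nil] and _imageView is a UIImageView, both names just for illustration (they're not in the code below):

    // Working fallback: render the filtered frame to a CGImage and show it in a UIImageView.
    // The capture delegate below already runs on the main queue, so touching UIKit here is fine.
    CIImage *filtered = _bumpDistortion.outputImage;
    CGImageRef cgImage = [_cpuContext createCGImage:filtered fromRect:[filtered extent]];
    _imageView.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);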

In the session they use a custom view class to display the output, but the code for that class isn't available. My view controller extends GLKViewController, and the class of its view is set to GLKView.

I've searched for and downloaded every GLKit tutorial and example I can find, but nothing has helped. In particular, I can't get any video output when I try to run the example from here either. Can anyone point me in the right direction?

#import "VideoViewController.h"

@interface VideoViewController () <AVCaptureVideoDataOutputSampleBufferDelegate>
{
    AVCaptureSession *_session;

    EAGLContext *_eaglContext;
    CIContext *_ciContext;

    CIFilter *_sepia;
    CIFilter *_bumpDistortion;
}

- (void)setupCamera;
- (void)setupFilters;

@end

@implementation VideoViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    GLKView *view = (GLKView *)self.view;

    _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    [EAGLContext setCurrentContext:_eaglContext];

    view.context = _eaglContext;

    // Configure renderbuffers created by the view
    view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    view.drawableStencilFormat = GLKViewDrawableStencilFormat8;

    [self setupCamera];
    [self setupFilters];
}

- (void)setupCamera {
    _session = [AVCaptureSession new];
    [_session beginConfiguration];

    [_session setSessionPreset:AVCaptureSessionPreset640x480];

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    [_session addInput:input];

    AVCaptureVideoDataOutput *dataOutput = [AVCaptureVideoDataOutput new];
    [dataOutput setAlwaysDiscardsLateVideoFrames:YES];

    NSDictionary *options = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };

    [dataOutput setVideoSettings:options];

    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    [_session addOutput:dataOutput];
    [_session commitConfiguration];
}

#pragma mark Setup Filters
- (void)setupFilters {
    _sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [_sepia setValue:@0.7 forKey:@"inputIntensity"];

    _bumpDistortion = [CIFilter filterWithName:@"CIBumpDistortion"];
    [_bumpDistortion setValue:[CIVector vectorWithX:240 Y:320] forKey:@"inputCenter"];
    [_bumpDistortion setValue:[NSNumber numberWithFloat:200] forKey:@"inputRadius"];
    [_bumpDistortion setValue:[NSNumber numberWithFloat:3.0] forKey:@"inputScale"];
}

#pragma mark Main Loop
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Grab the pixel buffer
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

    // Use a null color space to avoid color matching
    NSDictionary *options = @{ (id)kCIImageColorSpace : (id)kCFNull };
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer options:options];

    image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(-M_PI/2.0)];
    CGPoint origin = [image extent].origin;
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];

    // Pass it through the filter chain
    [_sepia setValue:image forKey:@"inputImage"];
    [_bumpDistortion setValue:_sepia.outputImage forKey:@"inputImage"];

    // Grab the final output image
    image = _bumpDistortion.outputImage;

    // draw to GLES context
    [_ciContext drawImage:image inRect:CGRectMake(0, 0, 480, 640) fromRect:[image extent]];

    // and present to screen
    [_eaglContext presentRenderbuffer:GL_RENDERBUFFER];

    NSLog(@"frame hatched");

    [_sepia setValue:nil forKey:@"inputImage"];
}

- (void)loadView {
    [super loadView];

    // Initialize the CIContext with a null working space
    NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
    _ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];
}

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];

    [_session startRunning];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end

Solution

Wow, actually figured it out myself. This line of work may suit me after all ;)

First, for whatever reason, this code only works with OpenGL ES 2, not ES 3. I've yet to figure out why.
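
Concretely, that's just a one-line change in viewDidLoad:

    // Use the OpenGL ES 2 API instead of ES 3
    _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];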

Second, I was setting up the CIContext in the loadView method, which runs before viewDidLoad and therefore used an EAGLContext that hadn't been initialized yet.
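
So the fix is to drop the loadView override and create the CIContext only once the EAGLContext actually exists, e.g. at the end of viewDidLoad. Roughly (just a sketch of the change; the rest of viewDidLoad stays as above):

    // At the end of viewDidLoad, after _eaglContext has been created:
    NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
    _ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];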
