Question

I have one (or possibly two) CVPixelBufferRef objects that I process on the CPU, and then place the results onto a final CVPixelBufferRef. I would like to do this processing on the GPU using GLSL instead, because the CPU can barely keep up (these are frames of live video). I know this is possible "directly" (i.e. writing my own OpenGL ES code), but from the (absolutely impenetrable) sample code I've looked at, it's an insane amount of work.

Two options seem to be:

1) GPUImage: This is an awesome library, but I'm a little unclear on whether I can do what I want easily. The first thing I tried was requesting OpenGL ES-compatible pixel buffers using this code:

@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
   (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : @YES };
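
For context, here is roughly how that attributes dictionary gets used when the buffer is created. This is only a sketch: the 1280x720 size and the direct CVPixelBufferCreate call are placeholders, and a pixel buffer pool or writer adaptor takes the same dictionary.

#import <CoreVideo/CoreVideo.h>

NSDictionary *pixelBufferAttributes =
    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
       (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : @YES };

CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      1280, 720,                              // example dimensions
                                      kCVPixelFormatType_32BGRA,
                                      (__bridge CFDictionaryRef)pixelBufferAttributes,
                                      &pixelBuffer);
if (status != kCVReturnSuccess) {
    NSLog(@"CVPixelBufferCreate failed: %d", (int)status);
}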

Then transferring data from the CVPixelBufferRef to GPUImageRawDataInput as follows:

// setup:
_foreground = [[GPUImageRawDataInput alloc] initWithBytes:nil size:CGSizeMake(0, 0) pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte];

// call for each frame:
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
                            size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer), CVPixelBufferGetHeight(foregroundPixelBuffer))];

However, my CPU usage goes from 7% to 27% on an iPhone 5S just from that line (no processing or anything else). This suggests some copying is going on on the CPU, or that something else is wrong. Am I missing something?
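
(Side note: CVPixelBufferGetBaseAddress only returns a valid pointer while the buffer is locked, so the per-frame update above is bracketed with lock/unlock calls; a sketch of the same call with locking added:)

// Base address is only valid while the buffer is locked (read-only access here).
CVPixelBufferLockBaseAddress(foregroundPixelBuffer, kCVPixelBufferLock_ReadOnly);
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
                            size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer), CVPixelBufferGetHeight(foregroundPixelBuffer))];
CVPixelBufferUnlockBaseAddress(foregroundPixelBuffer, kCVPixelBufferLock_ReadOnly);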

2) openFrameworks: OF is commonly used for this type of thing, and OF projects can easily be set up to use GLSL. However, two questions remain about this solution: 1. Can I use openFrameworks as a library, or do I have to rejigger my whole app just to use its OpenGL features? I don't see any tutorials or docs that show how I might do this without starting from scratch and creating an OF app. 2. Is it possible to use a CVPixelBufferRef as a texture?
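
For what it's worth on question 2, and independent of openFrameworks: Core Video can expose a compatible CVPixelBufferRef as an OpenGL ES texture without copying, via a texture cache. A rough sketch, where eaglContext and pixelBuffer are placeholders for your EAGLContext and buffer, and the buffer is assumed to be BGRA:

#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// One-time setup: a texture cache tied to your EAGLContext.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Per frame: wrap the pixel buffer in a GL texture (no CPU copy).
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             textureCache,
                                             pixelBuffer,          // your CVPixelBufferRef
                                             NULL,
                                             GL_TEXTURE_2D,
                                             GL_RGBA,
                                             (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                                             (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                                             GL_BGRA,
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));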

I am targeting iOS 7+.

Solution

I was able to get this to work using the GPUImageMovie class. If you look inside this class, you'll see that there's a private method called:

- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime

This method takes a CVPixelBufferRef as input.

To access this method, declare a class extension that exposes it in your own source file:

@interface GPUImageMovie ()
-(void) processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime;
@end

Then create an instance, set up your filters, and pass it your video frame:

GPUImageMovie *gpuMovie = [[GPUImageMovie alloc] initWithAsset:nil];      // <- call initWithAsset even though there's no asset
                                                                          //  to initialize internal data structures

// connect filters...

// Call the method we exposed
[gpuMovie processMovieFrame:myCVPixelBufferRef withSampleTime:kCMTimeZero];
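
The "connect filters" placeholder above could be, for example, a single filter feeding a GPUImageRawDataOutput so the processed pixels can be read back; a sketch, where the sepia filter and the 1280x720 size are arbitrary stand-ins for your own chain:

// Example chain: gpuMovie -> filter -> raw data output.
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
GPUImageRawDataOutput *rawOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(1280, 720)
                                                                resultsInBGRAFormat:YES];
[gpuMovie addTarget:filter];
[filter addTarget:rawOutput];

// After a frame has been processed, the result can be read back on the CPU if needed:
GLubyte *processedBytes = [rawOutput rawBytesForImage];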

One thing: you need to request your pixel buffers with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in order to match what the library expects.
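
If your frames come from the camera, that format would be requested on the capture output; a sketch assuming an AVCaptureVideoDataOutput (other pixel buffer sources take the same key):

#import <AVFoundation/AVFoundation.h>

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };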

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow