Question

I have set up an AVCaptureSession, and in the sample buffer delegate method I use the following code to try to set a GLKBaseEffect texture from the capture output. However, all I get is black. What should I do to make it work?

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

        CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

        effect.texture2d0.name = CVOpenGLESTextureGetName(pixelBuffer);
    }

Solution

TLDR: I don't think you can use camera output with GLKBaseEffect — data from the camera is in biplanar YUV format, so you need a custom fragment shader to convert that to RGB at render time, and GLKBaseEffect doesn't do custom shaders.

It looks like there's some API confusion here, so read on if you'd like more background...

Though the CVOpenGLESTexture type is derived from CVImageBuffer (just as CVPixelBuffer is), the image buffers you get back from CMSampleBufferGetImageBuffer aren't automatically OpenGL ES textures. You need to create a texture from the image buffer each time you get a frame from the camera, using a CVOpenGLESTextureCache.
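The cache itself is one-time setup, tied to your GL context. Here's a minimal sketch; the _context and _videoTextureCache names are placeholders, not from the question or the sample:

    // One-time setup, done after creating the EAGLContext (_context).
    // _videoTextureCache is an ivar of type CVOpenGLESTextureCacheRef.
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault,
                                                NULL,       // cache attributes
                                                _context,   // the EAGLContext the textures will live in
                                                NULL,       // texture attributes
                                                &_videoTextureCache);
    if (err != kCVReturnSuccess) {
        NSLog(@"Error creating texture cache: %d", err);
    }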

Apple's GLCameraRipple sample code illustrates how to do this. Here's a quick overview:

  1. After setting up a GL context, the setupAVCapture method calls CVOpenGLESTextureCacheCreate to create a texture cache tied to that context.
  2. In captureOutput:didOutputSampleBuffer:fromConnection:, they get the image buffer from the sample buffer using CMSampleBufferGetImageBuffer, then create two OpenGL ES textures from it using the texture cache's CVOpenGLESTextureCacheCreateTextureFromImage function (see the sketch after this list). You need two textures because the image buffer has YUV color data in two bitplanes.
  3. Each texture gets bound for rendering right after being created, using glBindTexture. Generally, you could replace a glBindTexture call with setting a GLKBaseEffect's texture2d0.name and telling it to prepareToDraw, but GLKBaseEffect can't render YUV.
  4. Draw with the two bound textures, using a shader program whose fragment shader first makes a YUV vector by combining components from the Y and UV textures, then does a simple matrix multiply to convert it to an RGB pixel color according to the HDTV color space spec.
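Putting steps 2 and 3 together, the per-frame work looks roughly like the sketch below. It assumes the video data output is configured for the biplanar kCVPixelFormatType_420YpCbCr8BiPlanarFullRange pixel format (as GLCameraRipple does) and reuses the hypothetical _videoTextureCache from above; error handling is omitted.

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        size_t width  = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);

        CVOpenGLESTextureRef lumaTexture = NULL;
        CVOpenGLESTextureRef chromaTexture = NULL;

        // Plane 0: 8-bit luminance (Y), full resolution.
        glActiveTexture(GL_TEXTURE0);
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                     _videoTextureCache,
                                                     pixelBuffer, NULL,
                                                     GL_TEXTURE_2D, GL_LUMINANCE,
                                                     (GLsizei)width, (GLsizei)height,
                                                     GL_LUMINANCE, GL_UNSIGNED_BYTE,
                                                     0,    // plane index
                                                     &lumaTexture);
        glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture),
                      CVOpenGLESTextureGetName(lumaTexture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // Plane 1: interleaved 8-bit CbCr, half resolution.
        glActiveTexture(GL_TEXTURE1);
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                     _videoTextureCache,
                                                     pixelBuffer, NULL,
                                                     GL_TEXTURE_2D, GL_LUMINANCE_ALPHA,
                                                     (GLsizei)(width / 2), (GLsizei)(height / 2),
                                                     GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
                                                     1,    // plane index
                                                     &chromaTexture);
        glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture),
                      CVOpenGLESTextureGetName(chromaTexture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // ... draw with the custom YUV-to-RGB shader program here ...

        // Release the textures and flush the cache each frame, or the
        // capture pipeline will eventually stall.
        CFRelease(lumaTexture);
        CFRelease(chromaTexture);
        CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
    }

Note that GLCameraRipple itself uses the GL_RED_EXT/GL_RG_EXT formats from the texture_rg extension; GL_LUMINANCE/GL_LUMINANCE_ALPHA is used here only to keep the sketch extension-free, and it changes which components the chroma lands in (see the shader below).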

Because you need a custom fragment shader for the YUV to RGB conversion, GLKBaseEffect won't help you. However, as the GLCameraRipple sample shows, the custom shaders you'd need to write aren't that scary.
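For reference, a YUV-to-RGB fragment shader along those lines might look like the following, written as a string constant so it can sit next to the Objective-C above. The uniform names are made up for this sketch; the matrix is the BT.709 (HDTV) conversion, and the .ra swizzle matches the GL_LUMINANCE_ALPHA chroma texture in the earlier sketch (use .rg if you follow the sample and use GL_RG_EXT).

    // Hypothetical fragment shader source: samples the Y and CbCr textures,
    // recenters the chroma, and applies the BT.709 YUV-to-RGB matrix.
    static NSString *const kYUVToRGBFragmentShader = @""
        "varying mediump vec2 v_texCoord;\n"
        "uniform sampler2D u_samplerY;\n"   // texture unit 0: luminance
        "uniform sampler2D u_samplerUV;\n"  // texture unit 1: chrominance
        "void main() {\n"
        "    mediump vec3 yuv;\n"
        "    yuv.x  = texture2D(u_samplerY,  v_texCoord).r;\n"
        "    yuv.yz = texture2D(u_samplerUV, v_texCoord).ra - vec2(0.5, 0.5);\n"
        "    lowp vec3 rgb = mat3(1.0,     1.0,      1.0,\n"
        "                         0.0,    -0.18732,  1.8556,\n"
        "                         1.5748, -0.46813,  0.0) * yuv;\n"
        "    gl_FragColor = vec4(rgb, 1.0);\n"
        "}";

Compile this into a program with the usual glCreateShader/glShaderSource/glCompileShader/glLinkProgram steps, point u_samplerY and u_samplerUV at texture units 0 and 1, and draw your quad with it.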

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow