Question

For three full days I have been trying to improve the performance of my AVAssetWriter pipeline, which is currently based on glReadPixels. I have gone through Apple's RosyWriter and Camera Ripple code and Brad Larson's GPUImage, but I am still scratching my head. I've also been trying to use the implementations described in these links:

rendering-to-a-texture-with-ios-5-texture-cache-api

faster-alternative-to-glreadpixels-in-iphone-opengl-es-2-0

...and many more, but no matter what I try, I just can't get it to work. Either the video ends up not being processed, or it comes out black, or I get various errors. I won't go through all of them here.

To simplify my question, I thought I'd focus on just grabbing a snapshot from my on-screen OpenGL preview FBO. If I can get one single implementation of this working, I should be able to work out the rest. I tried the implementation from the first link above, which looks something like this:

// Create a texture cache tied to the GL context that backs the on-screen view
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [glView context], 
                           NULL, &texCacheRef);

CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault,
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);

CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);

CFDictionarySetValue(attrs,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);
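// Note: the empty dictionary set for kCVPixelBufferIOSurfacePropertiesKey is
// what makes the pixel buffer IOSurface-backed; without it, the texture-cache
// call below has nothing to map and fails.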

CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    width,
                    height,
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &renderTarget);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
                                              texCacheRef,
                                              renderTarget,
                                              NULL,
                                              GL_TEXTURE_2D,
                                              GL_RGBA,
                                              width,
                                              height,
                                              GL_BGRA,
                                              GL_UNSIGNED_BYTE,
                                              0,
                                              &renderTexture);

CFRelease(attrs);
CFRelease(empty);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

GLuint renderFrameBuffer;
glGenFramebuffers(1, &renderFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, renderFrameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
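// Anything rendered while renderFrameBuffer is bound now lands directly in
// renderTarget's memory and can be read back via CVPixelBufferGetBaseAddress.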

// Is this really how I pull pixels off my context?
CVPixelBufferLockBaseAddress(renderTarget, 0);
GLubyte *buffer = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
CVPixelBufferUnlockBaseAddress(renderTarget, 0);

What exactly is supposed to happen here? My buffer ends up being all zeros, so I guess I need to do something more to actually pull the pixels from the context? Or what am I missing?

All I want to achieve is a faster equivalent of what I am using today:

int pixelsCount = w * h;
GLubyte *buffer = (GLubyte *)malloc(pixelsCount * 4);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

Solution

As Brad pointed out, I had misunderstood the concept and wasn't doing any actual rendering into the texture-backed FBO, so there was nothing in the pixel buffer to read. It worked fine once I added the rendering step.
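For reference, here is a minimal sketch of that missing step, assuming the renderTarget, renderTexture and renderFrameBuffer set up in the question above; drawMyScene() is a hypothetical placeholder for whatever draw calls actually produce the frame:

// Render into the CV-backed FBO instead of the on-screen one
glBindFramebuffer(GL_FRAMEBUFFER, renderFrameBuffer);
glViewport(0, 0, width, height);

drawMyScene(); // hypothetical: issue the real draw calls here

// Wait for the GPU to finish writing, otherwise the buffer reads back as zeros
glFinish();

// The rendered BGRA pixels are now directly accessible; no glReadPixels copy
CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
GLubyte *pixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget);
// ... consume pixels/bytesPerRow here, or hand renderTarget to an
// AVAssetWriterInputPixelBufferAdaptor ...
CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);

The point of the texture-cache approach is that the FBO's backing store is the pixel buffer itself, so once rendering has finished there is no extra copy to make; the glFinish (or a fence) is still needed so the CPU doesn't read a half-rendered frame.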

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow