Question

I'm using glReadPixels to read data into a CVPixelBufferRef. I use the CVPixelBufferRef as the input into an AVAssetWriter. Unfortunately the pixel formats seem to be mismatched.

I think glReadPixels is returning pixel data in RGBA format while AVAssetWriter wants pixel data in ARGB format. What's the best way to convert RGBA to ARGB?

Here's what I've tried so far:

  • bit manipulation along the lines of argb = (rgba >> 8) | (rgba << 24)
  • using a CGImageRef as an intermediate step

The bit manipulation didn't work because CVPixelBufferRef is an opaque type that doesn't support subscripting. The CGImageRef intermediate step does work... but I'd prefer not to add 50 extra lines of code that could also be a performance hit.

Solution

Better than using the CPU to swap the components would be to write a simple fragment shader to efficiently do it on the GPU as you render the image.
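For illustration, here's a minimal sketch of such a shader, stored as a C string the way the GL samples do; the varying and uniform names are my own, not from any shipping sample:

// Sketch only: GLSL ES fragment shader that samples the scene texture
// and writes the channels reordered, so the readback is already BGRA.
static const char *kSwizzleFragmentShader =
    "varying highp vec2 textureCoordinate;\n"
    "uniform sampler2D inputTexture;\n"
    "void main()\n"
    "{\n"
    "    lowp vec4 color = texture2D(inputTexture, textureCoordinate);\n"
    "    gl_FragColor = color.bgra; // swap red and blue\n"
    "}\n";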

And the best way is to remove the copying stage completely by using an iOS 5 Core Video CVOpenGLESTextureCache, which lets you render straight into the CVPixelBufferRef and eliminates the call to glReadPixels altogether.
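A minimal sketch of that zero-copy path, assuming you already have an EAGLContext and a framebuffer bound (names like eaglContext, width, and height are placeholders):

// Sketch only: render-to-CVPixelBufferRef via CVOpenGLESTextureCache.
// The pixel buffer must be IOSurface-backed for the cache to accept it.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL,
                             &textureCache);

// An empty IOSurface properties dictionary requests IOSurface backing.
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, attrs, &renderTarget);

// Wrap the pixel buffer in a GL texture the GPU can render into.
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
    renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA, width, height,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Attach it to the framebuffer: everything drawn now lands directly in
// renderTarget, so the buffer can go straight to the AVAssetWriter.
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
              CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

RosyWriter demonstrates this pattern end to end, including handing the rendered buffer to an AVAssetWriterInputPixelBufferAdaptor.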

P.S. I'm pretty sure AVAssetWriter wants data in BGRA format (actually it probably wants it in YUV, but that's another story).
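For what it's worth, a sketch of asking the writer for 32BGRA buffers through its pixel buffer adaptor; writerInput stands in for your already-configured AVAssetWriterInput:

// Sketch only: request kCVPixelFormatType_32BGRA from the adaptor's
// pixel buffer pool so it matches a GL_BGRA readback.
NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
    (NSString *)kCVPixelBufferPixelFormatTypeKey, nil];

AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
    sourcePixelBufferAttributes:pixelBufferAttributes];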

UPDATE: as for links, the documentation still seems to be under NDA, but there are two pieces of freely downloadable example code available:

GLCameraRipple and RosyWriter

The header files themselves contain good documentation, and the Mac equivalent (CVOpenGLTextureCache) is very similar, so you should have plenty to get you started.

OTHER TIPS

Regarding the bit manipulation, you can get a pointer to the pixel buffer's raw data:

CVPixelBufferLockBaseAddress(buffer, 0);
size_t width  = CVPixelBufferGetWidth(buffer);
size_t height = CVPixelBufferGetHeight(buffer);
size_t stride = CVPixelBufferGetBytesPerRow(buffer);
char *data = (char *)CVPixelBufferGetBaseAddress(buffer);
for (size_t y = 0; y < height; ++y) {
    // Rows can be padded, so step by the buffer's stride, not width * 4.
    uint32_t *pixels = (uint32_t *)(data + stride * y);
    for (size_t x = 0; x < width; ++x)
        // Rotate each 32-bit pixel left by one byte so the in-memory order
        // goes R,G,B,A -> A,R,G,B. (On little-endian iOS this is the reverse
        // of the big-endian-style formula in the question.)
        pixels[x] = (pixels[x] << 8) | (pixels[x] >> 24);
}
CVPixelBufferUnlockBaseAddress(buffer, 0);

It's kind of a shot in the dark, but have you tried GL_BGRA for glReadPixels with kCVPixelFormatType_32BGRA for CVPixelBufferCreate?

I suggest this because Technical Q&A QA1501 doesn't list any RGBA format as supported.

glReadPixels(0, 0, w*s, h*s, GL_BGRA, GL_UNSIGNED_BYTE, buffer);
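Putting those two together, a sketch of reading straight into a BGRA pixel buffer (this assumes the buffer's bytes-per-row is exactly width * 4; if it's padded, read row by row instead):

// Sketch only: create a 32BGRA pixel buffer and read into it directly.
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// A GL_BGRA readback means no CPU-side swizzle is needed afterwards.
glReadPixels(0, 0, (GLsizei)width, (GLsizei)height, GL_BGRA,
             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);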

Use GL_BGRA in glReadPixels. It works; I just tried it myself.

    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, _buffer);
    CVPixelBufferRef buffer = NULL;
    CVReturn ret = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                _size.width, _size.height,
                                                kCVPixelFormatType_32BGRA,
                                                _buffer, _size.width * 4,
                                                NULL, NULL, NULL, &buffer);