Question

I can successfully create a movie from a single still image. However, I am also given an array of smaller images that I need to superimpose on top of the background image. I've tried just repeating the process of appending frames with the asset writer, but I get errors because you can't append a second buffer at a presentation time you've already written to.

So, I assume you have to compose the entire pixel buffer for each frame completely before you write the frame. But how would you do that?

Here's my code that works for rendering one background image:

    CGSize renderSize = CGSizeMake(320, 568);
    NSUInteger fps = 30;

    self.assetWriter = [[AVAssetWriter alloc] initWithURL:
                                  [NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(self.assetWriter);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:renderSize.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:renderSize.height], AVVideoHeightKey,
                                   nil];

    AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
                                            assetWriterInputWithMediaType:AVMediaTypeVideo
                                            outputSettings:videoSettings];


    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                     sourcePixelBufferAttributes:nil];

    NSParameterAssert(videoWriterInput);
    NSParameterAssert([self.assetWriter canAddInput:videoWriterInput]);
    // NO for offline (non-realtime) encoding; we poll readyForMoreMediaData ourselves
    videoWriterInput.expectsMediaDataInRealTime = NO;
    [self.assetWriter addInput:videoWriterInput];

    //Start a session:
    [self.assetWriter startWriting];
    [self.assetWriter startSessionAtSourceTime:kCMTimeZero];

    CVPixelBufferRef buffer = NULL;

    NSInteger totalFrames = 90; //3 seconds

    //process the bg image
    int frameCount = 0;

    UIImage* resizedImage = [UIImage resizeImage:self.bgImage size:renderSize];
    buffer = [self pixelBufferFromCGImage:[resizedImage CGImage]];

    BOOL append_ok = YES;
    int j = 0;
    while (append_ok && j < totalFrames) {
        if (adaptor.assetWriterInput.readyForMoreMediaData)  {

            CMTime frameTime = CMTimeMake(frameCount,(int32_t) fps);
            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
            if(!append_ok){
                NSError *error = self.assetWriter.error;
                if(error!=nil) {
                    NSLog(@"Unresolved error %@,%@.", error, [error userInfo]);
                }
            }
        }
        else {
            printf("adaptor not ready %d, %d\n", frameCount, j);
            [NSThread sleepForTimeInterval:0.1];
        }
        j++;
        frameCount++;
    }
    if (!append_ok) {
        printf("error appending image %d, frame %d\n", frameCount, j);
    }
    CVPixelBufferRelease(buffer); // the adaptor copies the data; release our reference


    //Finish the session:
    [videoWriterInput markAsFinished];
    [self.assetWriter finishWritingWithCompletionHandler:^() {
        self.assetWriter = nil;
    }];

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {

    CGSize size = CGSizeMake(320,568);

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess || pxbuffer == NULL) {
        NSLog(@"Failed to create pixel buffer");
        return NULL;
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's own bytes-per-row: Core Video may pad rows for alignment
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    if (context == NULL) {
        NSLog(@"Failed to create bitmap context");
        CGColorSpaceRelease(rgbColorSpace);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
        CVPixelBufferRelease(pxbuffer);
        return NULL;
    }

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

Again, the question is how to create a pixel buffer from a background image and an array of N small images layered on top of it. The next step after this will be to superimpose a small video as well.

Solution

You can copy the pixel data from each small image directly over the pixel buffer's data. This example code shows how to add BGRA image data over an ARGB pixel buffer.

// Raw pixel data from a BGRA image (here taken from an OpenCV cv::Mat)
uint8_t* videobuffer = m_imageBGRA.data;


// From image buffer (BGRA) to pixel buffer
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate (NULL, m_width, m_height, kCVPixelFormatType_32ARGB, NULL, &pixelBuffer);
if ((pixelBuffer == NULL) || (status != kCVReturnSuccess))
{
    NSLog(@"Error: CVPixelBufferCreate failed [status=%d]", status);
    return;
}
else
{
    uint8_t *videobuffertmp = videobuffer;
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Add data for all the pixels in the image.
    // Note: this assumes the rows are tightly packed; in general, step each
    // row by CVPixelBufferGetBytesPerRow() instead of a flat pointer walk.
    for( int row=0 ; row<m_height ; ++row )
    {
        for( int col=0 ; col<m_width ; ++col )
        {
            memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t));       // alpha
            memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t));       // red
            memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t));       // green
            memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t));       // blue
            // Move the buffer pointer to the next pixel
            pixelBufferData += 4*sizeof(uint8_t);
            videobuffertmp  += 4*sizeof(uint8_t);
        }
    }


    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

So, in this example, the data from the image (videobuffer) is copied into the pixel buffer. The pixel data is stored row by row, and each pixel takes 4 bytes (each represented as a 'uint8_t' here): first blue, then green, then red, and last the alpha value (remember that the original image is in BGRA format). The pixel buffer is laid out the same way, row by row, but with the bytes in ARGB order (as defined by the 'kCVPixelFormatType_32ARGB' parameter). This piece of code reorders the pixel data to match the pixel buffer's configuration:

memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t));       // alpha
memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t));       // red
memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t));       // green
memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t));       // blue

Once the pixel has been copied, we can move forward to the next one:

// Move the buffer pointer to the next pixel
pixelBufferData += 4*sizeof(uint8_t);
videobuffertmp  += 4*sizeof(uint8_t);

This moves both pointers 4 bytes forward, i.e. one full pixel.

If your images are smaller than the buffer, you can copy them into a smaller region, or add an 'if' that tests the alpha value so that only visible pixels are copied. For example:

// Add data for all the pixels in the image
for( int row=0 ; row<m_height ; ++row )
{
    for( int col=0 ; col<m_width ; ++col )
    {
        if( videobuffertmp[3] > 10 ) // check alpha channel
        {
            memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t));       // alpha
            memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t));       // red
            memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t));       // green
            memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t));       // blue
        }
        // Move the buffer pointer to the next pixel
        pixelBufferData += 4*sizeof(uint8_t);
        videobuffertmp  += 4*sizeof(uint8_t);
    }
}
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow