Question

Strange problem. I read frames from a video file (.mov) and write them with AVAssetWriter to another file, without any explicit processing; I just copy each frame from one memory buffer to another and flush it through the pixel buffer adaptor. Then I take the resulting file, delete the original file, put the resulting file in its place, and repeat the operation. The interesting thing is that the file size grows with every pass! Can somebody explain why?

if (adaptor.assetWriterInput.readyForMoreMediaData == YES) {
    CMTime lastTime = CMTimeMake(fcounter++, 30);
    CMTime presentTime = CMTimeAdd(lastTime, frameTime);
    CMSampleBufferRef framebuffer = NULL;
    CGImageRef frameImg = NULL;
    if ([asr status] == AVAssetReaderStatusReading) {
        framebuffer = [asset_reader_output copyNextSampleBuffer];
        frameImg    = [self imageFromSampleBuffer:framebuffer withColorSpace:rgbColorSpace];
    }
    if (frameImg && screenshot) {
        CVReturn stat = CVPixelBufferLockBaseAddress(screenshot, 0);

        pxdata     = CVPixelBufferGetBaseAddress(screenshot);
        bufferSize = CVPixelBufferGetDataSize(screenshot);
        // Get the number of bytes per row for the pixel buffer.
        bytesPerRow = CVPixelBufferGetBytesPerRow(screenshot);
        // Get the pixel buffer width and height.
        width  = CVPixelBufferGetWidth(screenshot);
        height = CVPixelBufferGetHeight(screenshot);
        // Create a Quartz direct-access data provider that uses data we supply.
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, pxdata, bufferSize, NULL);

        CGImageAlphaInfo ai = CGImageGetAlphaInfo(frameImg);
        size_t bpx = CGImageGetBitsPerPixel(frameImg);
        CGColorSpaceRef fclr = CGImageGetColorSpace(frameImg);

        // Create a bitmap image from data supplied by the data provider.
        CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow,
                                           rgbColorSpace,
                                           kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big,
                                           dataProvider, NULL, true, kCGRenderingIntentDefault);
        CGDataProviderRelease(dataProvider);

        stat = CVPixelBufferLockBaseAddress(finalPixelBuffer, 0);
        pxdata      = CVPixelBufferGetBaseAddress(finalPixelBuffer);
        bytesPerRow = CVPixelBufferGetBytesPerRow(finalPixelBuffer);
        CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width, imgsize.height,
                                                     8, bytesPerRow, rgbColorSpace,
                                                     kCGImageAlphaNoneSkipLast);
        CGContextDrawImage(context,
                           CGRectMake(0, 0, CGImageGetWidth(frameImg), CGImageGetHeight(frameImg)),
                           frameImg);

        // CGImageCreateWithMaskingColors takes CGFloat components, not float.
        const CGFloat myMaskingColors[6] = { 0, 0, 0, 1, 0, 0 };
        CGImageRef myColorMaskedImage = CGImageCreateWithMaskingColors(cgImage, myMaskingColors);
        //CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(myColorMaskedImage), CGImageGetHeight(myColorMaskedImage)), myColorMaskedImage);

        [adaptor appendPixelBuffer:finalPixelBuffer withPresentationTime:presentTime];

        // Balance the base-address locks and release the Core Graphics objects,
        // otherwise every frame leaks memory.
        CVPixelBufferUnlockBaseAddress(finalPixelBuffer, 0);
        CVPixelBufferUnlockBaseAddress(screenshot, 0);
        CGContextRelease(context);
        CGImageRelease(myColorMaskedImage);
        CGImageRelease(cgImage);
        CGImageRelease(frameImg);
    }
    // copyNextSampleBuffer follows the Create rule: the caller must release.
    if (framebuffer) CFRelease(framebuffer);
}
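The growth described above is a symptom of generation loss: every lossy re-encode adds small artifacts to the frames, and noisy data compresses worse than clean data. A toy sketch of that effect (pure Python with zlib standing in for the codec; this is an analogy, not the H.264 pipeline):

```python
import random
import zlib

random.seed(0)
# Start with a highly compressible "frame": a flat block of identical bytes.
frame = bytearray([128] * 10000)

sizes = []
for generation in range(5):
    # Model the artifacts of one lossy re-encode pass as small random
    # perturbations scattered across the frame.
    for _ in range(500):
        i = random.randrange(len(frame))
        frame[i] = max(0, min(255, frame[i] + random.choice((-1, 1))))
    sizes.append(len(zlib.compress(bytes(frame))))

print(sizes)  # compressed size tends to grow with each generation
```

Each pass the accumulated noise raises the entropy of the data, so the compressed output gets larger, which mirrors the growing .mov file.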

Solution

Well, the mystery seems to be solved. The problem was an inappropriate codec configuration. This is the set of configuration options I use now, and it seems to do the job:

NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               [NSNumber numberWithInt:1100000], AVVideoAverageBitRateKey,
                               [NSNumber numberWithInt:5], AVVideoMaxKeyFrameIntervalKey,
                               nil];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.width], AVVideoWidthKey,
                               [NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.height], AVVideoHeightKey,
                               codecSettings, AVVideoCompressionPropertiesKey,
                               nil];

AVAssetWriterInput *writerInput = [AVAssetWriterInput
                                   assetWriterInputWithMediaType:AVMediaTypeVideo
                                   outputSettings:videoSettings];

Now the file size still grows, but at a much slower pace. There is a trade-off between file size and video quality: reducing the size degrades the quality.
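Pinning AVVideoAverageBitRateKey also makes the output size predictable: the video payload is roughly the bit rate times the duration, divided by eight. A back-of-envelope check (plain arithmetic; ignores audio and container overhead):

```python
def expected_video_bytes(avg_bit_rate, duration_seconds):
    """Approximate video payload in bytes for a constant average bit rate.

    AVVideoAverageBitRateKey is specified in bits per second, hence /8.
    """
    return avg_bit_rate * duration_seconds / 8

# The settings above use 1,100,000 bit/s; a 60-second clip:
mb = expected_video_bytes(1_100_000, 60) / 1_000_000
print(f"{mb:.2f} MB")  # prints "8.25 MB"
```

If repeated re-encode passes keep these settings, each pass targets the same average bit rate, which is why the growth slows to artifact-driven drift instead of compounding.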

Licensed under: CC-BY-SA with attribution