Question

I'm trying to convert a sample from Objective-C to MonoTouch, and I have run into some difficulties.

Basically, I want to read a video file and decode the frames one by one into an OpenGL texture.

The key to doing this is AVAssetReader, but I am not sure how to set it up properly in MonoTouch.

This is my code:

    AVUrlAsset asset = new AVUrlAsset (NSUrl.FromFilename (videoFileName), null);
    NSError error;
    assetReader = new AVAssetReader (asset, out error);
    AVAssetTrack videoTrack = asset.Tracks[0];
    NSDictionary videoSettings = new NSDictionary ();

    NSString key = CVPixelBuffer.PixelFormatTypeKey;
    NSNumber val = 0x754b9d0; // NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; - had to hardcode the constant, as it does not seem to be defined in MonoTouch?

    videoSettings.SetNativeField (key, val);

    // The program crashes here:
    AVAssetReaderTrackOutput trackOutput = new AVAssetReaderTrackOutput (videoTrack, videoSettings);

    assetReader.AddOutput (trackOutput);
    assetReader.StartReading ();

The program crashes on the line indicated above with an invalid argument exception, indicating that the content of the NSDictionary is not in the right format. I have checked the video file, and it loads fine; "asset" contains valid information about the video.

This is the original Objective C code:

                NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
                NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
                NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
                AVAssetReaderTrackOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoSettings];

                [_assetReader addOutput:trackOutput];
                [_assetReader startReading];

I'm not that familiar with Objective-C, so any help is appreciated.

EDIT: I used the code suggested below:

    var videoSettings = NSDictionary.FromObjectAndKey (
        new NSNumber ((int) MonoTouch.CoreVideo.CVPixelFormatType.CV32BGRA),
        MonoTouch.CoreVideo.CVPixelBuffer.PixelFormatTypeKey);

And the program no longer crashes. By using the following code:

        CMSampleBuffer buffer = assetReader.Outputs[0].CopyNextSampleBuffer ();
        CVImageBuffer imageBuffer = buffer.GetImageBuffer ();

I get the image buffer, which should contain the next frame of the video file. Inspecting the imageBuffer object, I find it has valid data such as the width and height, matching those of the video file.
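For context, my full read loop looks roughly like the sketch below (assuming the asset reader setup from the EDIT above; `Status`, `AVAssetReaderStatus.Reading`, and disposing each sample buffer are how I understand the MonoTouch bindings to work, so treat the details as assumptions):

    // Pull sample buffers until the reader runs out of frames.
    while (assetReader.Status == AVAssetReaderStatus.Reading) {
        using (CMSampleBuffer sampleBuffer = assetReader.Outputs[0].CopyNextSampleBuffer ()) {
            if (sampleBuffer == null)
                break; // no more frames available
            CVImageBuffer imageBuffer = sampleBuffer.GetImageBuffer ();
            // ... upload the frame to an OpenGL texture here ...
        }
    }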

However, the imageBuffer's BaseAddress is always 0, which suggests the image has no data. As a test, I tried this:

        CVPixelBuffer pixelBuffer = (CVPixelBuffer) imageBuffer;
        CIImage image = CIImage.FromImageBuffer (pixelBuffer);

And image is always returned as null. Does this mean the actual image data is not present, and my imageBuffer object only contains the frame header info?

And if so, is this a bug in Monotouch, or am I setting this up wrong?

I had an idea that I may need to wait for the image data to be ready, but if so, I do not know how to do that either. Pretty stuck now...
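One thing I am going to try (this is a guess on my part, and the exact MonoTouch binding names may differ): as I understand it, a CVPixelBuffer's memory is only guaranteed to be mapped into CPU-accessible memory while the buffer is locked, so BaseAddress may legitimately read as 0 until Lock is called. Something along these lines:

    CVPixelBuffer pixelBuffer = (CVPixelBuffer) imageBuffer;
    pixelBuffer.Lock (CVOptionFlags.None);   // map the pixel data into memory
    IntPtr baseAddress = pixelBuffer.BaseAddress;
    int bytesPerRow = pixelBuffer.BytesPerRow;
    // ... read the BGRA pixels from baseAddress, e.g. into an OpenGL texture ...
    pixelBuffer.Unlock (CVOptionFlags.None);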


Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow