Question

I am using AVCaptureMovieFileOutput to record some video. I display the preview layer with AVLayerVideoGravityResizeAspectFill, which zooms in slightly. The problem I have is that the final video is larger, containing extra image content that didn't fit on the screen during the preview.
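For reference, here is roughly how the preview layer is set up (a minimal sketch; session stands in for my configured AVCaptureSession):

AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
// aspect-fill scales the video to fill the view and crops whatever overflows
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:previewLayer];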

Here are the preview and the resulting video:

[screenshot: camera preview] [screenshot: exported video]

Is there a way I can specify a CGRect that I want to cut from the video using AVAssetExportSession?

EDIT ----

When I apply a CGAffineTransformScale to the AVAssetTrack it zooms into the video, and setting the AVMutableVideoComposition renderSize to view.bounds crops off the ends. Great, there's just one problem left: the width of the video doesn't stretch to the correct width, it just gets filled with black.

EDIT 2 ---- The suggested question/answer is incomplete.

Some of my code:

In my - (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error method I have this to crop and resize the video.

- (void)flipAndSave:(NSURL *)videoURL withCompletionBlock:(void(^)(NSURL *returnURL))completionBlock
{
    AVURLAsset *firstAsset = [AVURLAsset assetWithURL:videoURL];

    // 1 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
    AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
    // 2 - Video track
    AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                        preferredTrackID:kCMPersistentTrackID_Invalid];
    [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration)
                        ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:kCMTimeZero error:nil];

    // 2.1 - Create AVMutableVideoCompositionInstruction
    AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    mainInstruction.timeRange = CMTimeRangeMake(CMTimeMakeWithSeconds(0, 600), firstAsset.duration);

    // 2.2 - Create an AVMutableVideoCompositionLayerInstruction for the first track
    AVMutableVideoCompositionLayerInstruction *firstlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
    AVAssetTrack *firstAssetTrack = [[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    UIImageOrientation firstAssetOrientation_  = UIImageOrientationUp;
    BOOL isFirstAssetPortrait_  = NO;
    CGAffineTransform firstTransform = firstAssetTrack.preferredTransform;
    if (firstTransform.a == 0 && firstTransform.b == 1.0 && firstTransform.c == -1.0 && firstTransform.d == 0) {
        firstAssetOrientation_ = UIImageOrientationRight;
        isFirstAssetPortrait_ = YES;
    }
    if (firstTransform.a == 0 && firstTransform.b == -1.0 && firstTransform.c == 1.0 && firstTransform.d == 0) {
        firstAssetOrientation_ =  UIImageOrientationLeft;
        isFirstAssetPortrait_ = YES;
    }
    if (firstTransform.a == 1.0 && firstTransform.b == 0 && firstTransform.c == 0 && firstTransform.d == 1.0) {
        firstAssetOrientation_ =  UIImageOrientationUp;

    }
    if (firstTransform.a == -1.0 && firstTransform.b == 0 && firstTransform.c == 0 && firstTransform.d == -1.0) {
        firstAssetOrientation_ = UIImageOrientationDown;
    }
//    [firstlayerInstruction setTransform:firstAssetTrack.preferredTransform atTime:kCMTimeZero];

//    [firstlayerInstruction setCropRectangle:self.view.bounds atTime:kCMTimeZero];

    CGFloat scale = [self getScaleFromAsset:firstAssetTrack];

    firstTransform = CGAffineTransformScale(firstTransform, scale, scale);

    [firstlayerInstruction setTransform:firstTransform atTime:kCMTimeZero];

    // 2.4 - Add instructions
    mainInstruction.layerInstructions = [NSArray arrayWithObjects:firstlayerInstruction,nil];
    AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
    mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
    mainCompositionInst.frameDuration = CMTimeMake(1, 30);

//    CGSize videoSize = firstAssetTrack.naturalSize;
    CGSize videoSize = self.view.bounds.size;
    BOOL isPortrait_ = [self isVideoPortrait:firstAsset];
    if(isPortrait_) {
        videoSize = CGSizeMake(videoSize.height, videoSize.width);
    }
    NSLog(@"%@", NSStringFromCGSize(videoSize));
    mainCompositionInst.renderSize = videoSize;

    // 3 - Audio track
    AVMutableCompositionTrack *audioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                        preferredTrackID:kCMPersistentTrackID_Invalid];
    [audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration)
                        ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];

    // 4 - Get path
    NSString *outputPath = [[NSString alloc] initWithFormat:@"%@%@", NSTemporaryDirectory(), @"cutoutput.mov"];
    NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
    NSFileManager *manager = [[NSFileManager alloc] init];
    if ([manager fileExistsAtPath:outputPath])
    {
        [manager removeItemAtPath:outputPath error:nil];
    }
    // 5 - Create exporter
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition
                                                                      presetName:AVAssetExportPresetHighestQuality];
    exporter.outputURL=outputURL;
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    exporter.shouldOptimizeForNetworkUse = YES;
    exporter.videoComposition = mainCompositionInst;
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        switch ([exporter status])
        {
            case AVAssetExportSessionStatusFailed:
                NSLog(@"Export failed: %@ : %@", [[exporter error] localizedDescription], [exporter error]);
                // call the completion block on the main queue, as in the success case
                dispatch_async(dispatch_get_main_queue(), ^{
                    completionBlock(nil);
                });
                break;
            case AVAssetExportSessionStatusCancelled:
                NSLog(@"Export canceled");
                dispatch_async(dispatch_get_main_queue(), ^{
                    completionBlock(nil);
                });
                break;
            default: {
                NSURL *outputURL = exporter.outputURL;
                dispatch_async(dispatch_get_main_queue(), ^{
                    completionBlock(outputURL);
                });

                break;
            }
        }
    }];
}

Solution

Here is my interpretation of your question: you are capturing video on a device with a 4:3 screen, so your AVCaptureVideoPreviewLayer is 4:3, but the video input device captures in 16:9, so the resulting video is 'larger' than what you see in the preview.

If you are simply looking to crop out the extra pixels not shown in the preview, check out this article: http://www.netwalk.be/article/record-square-video-ios. It shows how to crop a video into a square, and you'll only need a few modifications to crop to 4:3 instead. I've tested this; here are the changes I made:

Once you have the AVAssetTrack for the video you will need to calculate a new height.

// convert the captured short side (e.g. 1080) into the long side of a 4:3 frame: 1080 / 3 * 4 = 1440
CGFloat newHeight = clipVideoTrack.naturalSize.height/3*4;

Then modify these two lines, using newHeight.

videoComposition.renderSize = CGSizeMake(clipVideoTrack.naturalSize.height, newHeight);

CGAffineTransform t1 = CGAffineTransformMakeTranslation(clipVideoTrack.naturalSize.height, -(clipVideoTrack.naturalSize.width - newHeight)/2 );

So what we've done here is set the renderSize to a 4:3 ratio (the exact dimensions are based on the input device). We then use a CGAffineTransform to translate the video's position so that what we saw in the AVCaptureVideoPreviewLayer is exactly what is rendered to our file.
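To make the numbers concrete (assuming a 1080p capture, i.e. a naturalSize of 1920×1080): newHeight = 1080 / 3 * 4 = 1440, the renderSize becomes 1080×1440, and the translation shifts the frame by (1920 - 1440) / 2 = 240 pixels so the crop is centered.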

Edit: If you want to put it all together and crop a video to the device's screen ratio (3:2, 4:3, 16:9) while taking the video orientation into account, we need to add a few things.

First here is the modified sample code with a few critical alterations:

// output file
NSString* docFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString* outputPath = [docFolder stringByAppendingPathComponent:@"output2.mov"];
if ([[NSFileManager defaultManager] fileExistsAtPath:outputPath])
    [[NSFileManager defaultManager] removeItemAtPath:outputPath error:nil];

// input file
AVAsset* asset = [AVAsset assetWithURL:outputFileURL];

// input clip
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

// crop clip to screen ratio
UIInterfaceOrientation orientation = [self orientationForTrack:asset];
BOOL isPortrait = (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationPortraitUpsideDown) ? YES: NO;
CGFloat complimentSize = [self getComplimentSize:videoTrack.naturalSize.height];
CGSize videoSize;

if(isPortrait) {
    videoSize = CGSizeMake(videoTrack.naturalSize.height, complimentSize);
} else {
    videoSize = CGSizeMake(complimentSize, videoTrack.naturalSize.height);
}

AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = videoSize;
videoComposition.frameDuration = CMTimeMake(1, 30);

AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// cover the full duration of the clip rather than a hard-coded 60 seconds
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);

// rotate and position video
AVMutableVideoCompositionLayerInstruction* transformer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];

CGFloat tx = (videoTrack.naturalSize.width-complimentSize)/2;
if (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationLandscapeRight) {
    // invert translation
    tx *= -1;
}

// t1: rotate and position video since it may have been cropped to screen ratio
CGAffineTransform t1 = CGAffineTransformTranslate(videoTrack.preferredTransform, tx, 0);
// t2/t3: mirror video horizontally
CGAffineTransform t2 = CGAffineTransformTranslate(t1, isPortrait?0:videoTrack.naturalSize.width, isPortrait?videoTrack.naturalSize.height:0);
CGAffineTransform t3 = CGAffineTransformScale(t2, isPortrait?1:-1, isPortrait?-1:1);

[transformer setTransform:t3 atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject: transformer];
videoComposition.instructions = [NSArray arrayWithObject: instruction];

// export (exporter is assumed to be an ivar/property so it survives until the async export completes)
exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputURL=[NSURL fileURLWithPath:outputPath];
exporter.outputFileType=AVFileTypeQuickTimeMovie;

[exporter exportAsynchronouslyWithCompletionHandler:^(void){
    if (exporter.status != AVAssetExportSessionStatusCompleted) {
        NSLog(@"Export failed: %@", exporter.error);
        return;
    }
    NSLog(@"Exporting done!");

    // added export to library for testing
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:outputPath]]) {
        [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:outputPath]
                                    completionBlock:^(NSURL *assetURL, NSError *error) {
             if (error) {
                 NSLog(@"Error saving to album: %@", error);
             } else {
                 NSLog(@"Saved to album");
             }
         }];
    }
}];

What we added here is a call to get the new render size of the video based on cropping its dimensions to the screen ratio. Once we crop the size down, we need to translate the position to recenter the video, so we grab its orientation to move it in the proper direction. This fixes the off-center issue we saw with UIInterfaceOrientationLandscapeLeft. Finally, the t2 and t3 transforms mirror the video horizontally.

And here are the two new methods that make this happen:

- (CGFloat)getComplimentSize:(CGFloat)size {
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    CGFloat ratio = screenRect.size.height / screenRect.size.width;

    // iPhone 5-class screens (568x320 points) give 568/320 = 1.775, slightly
    // shy of the camera's true 16:9 frame, so we snap the ratio to 16:9 here
    if (ratio == 1.775) ratio = 1.77777777777778;

    return size * ratio;
}

- (UIInterfaceOrientation)orientationForTrack:(AVAsset *)asset {
    UIInterfaceOrientation orientation = UIInterfaceOrientationPortrait;
    NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];

    if([tracks count] > 0) {
        AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
        CGAffineTransform t = videoTrack.preferredTransform;

        // Portrait
        if(t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0) {
            orientation = UIInterfaceOrientationPortrait;
        }
        // PortraitUpsideDown
        if(t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0) {
            orientation = UIInterfaceOrientationPortraitUpsideDown;
        }
        // LandscapeRight
        if(t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0) {
            orientation = UIInterfaceOrientationLandscapeRight;
        }
        // LandscapeLeft
        if(t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0) {
            orientation = UIInterfaceOrientationLandscapeLeft;
        }
    }
    return orientation;
}

These are pretty straightforward. The only thing to note is that in the getComplimentSize: method we have to manually adjust the ratio for 16:9, since the iPhone 5+ screen resolution is mathematically shy of true 16:9.
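For example, an iPhone 5 screen is 568×320 points, so the computed ratio is 568 / 320 = 1.775, while the camera's 16:9 frame is 16 / 9 ≈ 1.7778; without the adjustment the render width would come out a few pixels short (1080 × 1.775 = 1917 instead of 1920).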

Other suggestions

AVCaptureVideoDataOutput is a concrete subclass of AVCaptureOutput you use to process uncompressed frames from the video being captured, or to access compressed frames.

An instance of AVCaptureVideoDataOutput produces video frames you can process using other media APIs. You can access the frames with the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
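As a rough sketch (assuming an already configured AVCaptureSession; the FrameProcessor class and queue name below are illustrative), attaching an AVCaptureVideoDataOutput and receiving frames looks like this:

#import <AVFoundation/AVFoundation.h>

@interface FrameProcessor : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@end

@implementation FrameProcessor

- (void)attachToSession:(AVCaptureSession *)session
{
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    // drop late frames instead of queueing them behind slow processing
    output.alwaysDiscardsLateVideoFrames = YES;
    // frames are delivered on a dedicated serial queue, never the main queue
    dispatch_queue_t queue = dispatch_queue_create("video.frame.queue", DISPATCH_QUEUE_SERIAL);
    [output setSampleBufferDelegate:self queue:queue];
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }
}

// called once per captured frame
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // ... process pixelBuffer with Core Image, OpenGL, vImage, etc. ...
}

@end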

Configuring a Session

You use a preset on the session to specify the image quality and resolution you want. A preset is a constant that identifies one of a number of possible configurations; in some cases the actual configuration is device-specific:

https://developer.apple.com/library/mac/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html

For the actual values these presets represent on various devices, see "Saving to a Movie File" and "Capturing Still Images."

If you want to set a size-specific configuration, you should check whether it is supported before setting it:

if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
}
else {
    // Handle the failure.
}