Question

I am converting an MP4 file to MPEG-TS format, and though my code has started to produce video files, the video and audio run at superspeed. Running avconv -i (same as ffmpeg -i) on the output file I get the following (180 fps!):

Input #0, mpegts, from 'mpegtest_result.ts':
  Duration: 00:01:56.05, start: 0.011111, bitrate: 6356 kb/s
  Program 1 
    Metadata:
      service_name    : Service01
      service_provider: Libav
    Stream #0.0[0x100]: Video: h264 (Main), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 180 fps, 90k tbn, 47.95 tbc
    Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, fltp, 126 kb/s

Currently, my code does not alter the PTS or DTS values of the packets, and I am pretty sure that is what is messing up my video. The only thing I alter is the time_base, through this piece of code (the variable names should speak for themselves):

if (av_q2d(input_codec_context->time_base) * input_codec_context->ticks_per_frame > av_q2d(input_stream->time_base)
        && av_q2d(input_stream->time_base) < 1.0 / 1000) {
    output_codec_context->time_base = input_codec_context->time_base;
    output_codec_context->time_base.num *= input_codec_context->ticks_per_frame;
} else {
    output_codec_context->time_base = input_stream->time_base;
}

I am aware that I should probably be calling packet.pts = av_rescale_q(...), but I am unsure which time_bases / values I should rescale between.

The full code can be seen here http://pastebin.com/CHvrvc3G.

For my input/output (code line 189+190) I get the following output:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testvideo.mp4':
  Metadata:
    major_brand     : M4V 
    minor_version   : 1
    compatible_brands: isomiso2avc1mp41M4A M4V mp42
    encoder         : Lavf54.63.100
  Duration: 00:07:15.41, start: 0.000000, bitrate: 1546 kb/s
    Stream #0.0(eng): Video: h264 (Main), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 1416 kb/s, 23.98 fps, 11988 tbn, 47.95 tbc
    Stream #0.1(und): Audio: aac, 48000 Hz, stereo, fltp, 127 kb/s
    Metadata:
      creation_time   : 2013-05-09 14:37:22
Output #0, mpegts, to 'mpegtest':
    Stream #0.0: Video: libx264, yuv420p, 1280x720, q=2-31, 1416 kb/s, 90k tbn, 23.98 tbc
    Stream #0.1: Audio: libfaac, 48000 Hz, stereo, 127 kb/s

Solution

If you're not doing any rescaling, then it's no wonder the timestamps are messed up.

Timestamps in the packets you send to the muxer must be in the stream timebase (AVStream.time_base). The current API semantics are that you set the codec timebase (AVStream.codec.time_base) before writing the header, and the muxer then chooses the stream timebase. It may or may not use the codec timebase you set.

Timestamps in the packets you get from the demuxer are also in the stream timebase, so you should call av_rescale_q(pts/dts/duration, input_stream->time_base, output_stream->time_base).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow