Question

My understanding of fragmented mp4 is that it is a single file, but internally it is structured as fragments. Can someone explain to me how these fragments can be addressed in the .mpd file for DASH? The .mpd files that I've seen seem to address various segments with separate urls, but a fragmented mp4, I imagine, would have to be addressed by byte offsets into the same url. How does the browser then know what times correspond to what byte ranges?


Solution

Here's an example MPD for the MPEG-DASH main profile. The mp4 file described by this MPD is a fragmented mp4. As you can see:

<SegmentURL media="bunny_15s_200kbit/bunny_200kbit_dashNonSeg.mp4" mediaRange="868-347185"/>
<SegmentURL media="bunny_15s_200kbit/bunny_200kbit_dashNonSeg.mp4" mediaRange="347186-664464"/>

In the <SegmentURL> elements, the fragments are addressed through the same URL, and you can find the byte offsets in the @mediaRange attribute.
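
If it helps, here is a minimal sketch of how a client could pull those byte ranges out of such a manifest. It assumes the MPD uses <SegmentURL> elements with @media and @mediaRange exactly as in the snippet above; mpdUrl and segments are just illustrative names:

  // Fetch the .mpd, collect each <SegmentURL>'s @media and @mediaRange,
  // and build a list of url + byte-range pairs to request later.
  fetch(mpdUrl)
    .then(function (response) { return response.text(); })
    .then(function (text) {
      var mpd = new DOMParser().parseFromString(text, 'application/xml');
      // match SegmentURL regardless of the MPD namespace
      var urls = mpd.getElementsByTagNameNS('*', 'SegmentURL');
      var segments = [];
      for (var i = 0; i < urls.length; i++) {
        segments.push({
          url: urls[i].getAttribute('media'),        // e.g. "bunny_15s_200kbit/bunny_200kbit_dashNonSeg.mp4"
          range: urls[i].getAttribute('mediaRange')  // e.g. "868-347185"
        });
      }
      // each segments[n].range can now be used for an HTTP byte range request
    });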

Other tips

The .mpd file has a list of the segments with their byte ranges, as shown above. To access the segments, you need to parse the mediaRange attribute for each segment and request it with something like XHR, using setRequestHeader to specify the byte range. With this method, there's no server component needed. Here's some code I've been using:

  var xhr = new XMLHttpRequest();

  // Range is in format of 1234-34567
  // url is the .mp4 file path
  if (range && url) { // make sure we've got both params before requesting
    xhr.open('GET', url);
    xhr.setRequestHeader("Range", "bytes=" + range);
    // responseType must be set before the request completes so the response is an ArrayBuffer
    xhr.responseType = 'arraybuffer';
    // watch the ready state
    xhr.addEventListener("readystatechange", function () {
      if (xhr.readyState == 4) { // wait for the byte range to finish loading
        // add response to buffer
        try {
          // videoSource is a SourceBuffer on your MediaSource object.
          videoSource.appendBuffer(new Uint8Array(xhr.response));
          // 'updateend' fires once the appended bytes have been processed
          videoSource.addEventListener('updateend', function () {
            videoElement.play();
          }, { once: true });
        } catch (e) {
          // fail quietly
        }
      }
    }, false);
    xhr.send();
  }

The server has a manifest that can be created by scanning the file for moof boxes. A moof+mdat = one fragment. When a request for a fragment is made, the file offset is looked up in the manifest and the correct boxes are returned.
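
As a rough sketch of what that scan could look like (not tied to any particular server; buffer is assumed to be an ArrayBuffer holding the whole fragmented mp4, and the standard 4-byte big-endian size plus 4-character type box header is assumed):

  // Walk the top-level boxes and record the byte range of every moof+mdat pair.
  function findFragments(buffer) {
    var view = new DataView(buffer);
    var fragments = [];
    var offset = 0;
    var moofStart = null;
    while (offset + 8 <= buffer.byteLength) {
      var size = view.getUint32(offset);               // box size, big-endian
      var type = String.fromCharCode(
        view.getUint8(offset + 4), view.getUint8(offset + 5),
        view.getUint8(offset + 6), view.getUint8(offset + 7));
      if (size < 8) {
        break; // 64-bit "largesize" and to-end-of-file boxes are skipped here for brevity
      }
      if (type === 'moof') {
        moofStart = offset;
      } else if (type === 'mdat' && moofStart !== null) {
        // one fragment = the moof box plus the mdat box that follows it
        fragments.push({ start: moofStart, end: offset + size - 1 });
        moofStart = null;
      }
      offset += size;
    }
    return fragments;
  }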

As far as I understand it, in the case of the DASH 'onDemand' profile it is the job of the DASH packager to create the *.mpd (manifest) and specify which byte ranges map to a segment (which could be a number of fragments). The client then loads the *.mpd and makes HTTP byte range requests for the ranges in the manifest. I think the DASH 'live' profile is more similar to Smooth Streaming in that each segment has its own URL.

If you need to find out the position of the fragments within the mp4 container, I believe this information is in the segment's 'sidx' box.
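
For reference, a rough sketch of reading that box, following the sidx layout in ISO/IEC 14496-12 (buffer is assumed to start at the first byte of the sidx box itself; the 64-bit variant of first_offset is truncated to 32 bits here for brevity):

  // Parse a 'sidx' box and return byte ranges (relative to the start of the sidx box)
  // and durations (in seconds) for each referenced subsegment.
  function parseSidx(buffer) {
    var view = new DataView(buffer);
    var size = view.getUint32(0);          // total size of the sidx box
    var version = view.getUint8(8);        // FullBox version; 3 bytes of flags follow
    var timescale = view.getUint32(16);    // units per second for the durations
    var pos = 20;                          // past size, type, version/flags, reference_ID, timescale
    var firstOffset;
    if (version === 0) {
      pos += 4;                            // earliest_presentation_time (32-bit)
      firstOffset = view.getUint32(pos); pos += 4;
    } else {
      pos += 8;                            // earliest_presentation_time (64-bit)
      firstOffset = view.getUint32(pos + 4); pos += 8;   // low 32 bits only
    }
    pos += 2;                              // reserved
    var referenceCount = view.getUint16(pos); pos += 2;
    var entries = [];
    var offset = size + firstOffset;       // subsegments start right after the sidx box
    for (var i = 0; i < referenceCount; i++) {
      var referencedSize = view.getUint32(pos) & 0x7fffffff;  // drop the reference_type bit
      var duration = view.getUint32(pos + 4);                 // subsegment_duration
      entries.push({ start: offset, end: offset + referencedSize - 1, duration: duration / timescale });
      offset += referencedSize;
      pos += 12;                           // three 32-bit fields per reference
    }
    return entries;                        // add the sidx box's file position for absolute offsets
  }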

It appears that ffmpeg now has support for HLS directly as well.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow