I have been attempting to capture webcam video on a web page. I have noticed that the method I am using is not creating a video which is seekable. I'm not sure if this is because I am doing something wrong, or if there is a bug in Firefox (either in the spec, in MDN, or in both).

Seeking into any MP4 file over HTTP can be done more or less simply. If the file is not fragmented, all the seek information is located in the 'moov' box, so there is no problem. It is easier if the header boxes come first (ftyp, moov); if the data is badly organized, one would have to make multiple HTTP byte-range requests. If the file is fragmented, the seek information is indeed spread out along the file, and the 'mfra' box could be used, but it is not well supported and not reliable. Segmentation (as used in DASH) can also be used to make HTTP-seekable files, if done carefully. If you segment your file, forcing a single segment file, all the data will be in a single file (not in multiple segment files). If you add the 'sidx' box, you will have the indexing information in the file itself (no need for the MPD). To have both, you can segment your file using the DASH ondemand profile with MP4Box: MP4Box -dash 1000 -profile ondemand file. MP4Box can also be used to generate such files in the first place: MP4Box -add file.mp4 output.mp4.

IIRC, it can create the Cues from scratch. IIUC, it's very much the equivalent of mkclean for webm files. A similar informative thing would be to use WebAudio to mix several audio tracks before passing them to MediaRecorder: it's not strictly part of this spec, but it's good to have an informative example detailing this.
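To illustrate why the 'moov'-first layout matters: an MP4 (ISO BMFF) file is a flat sequence of boxes, each beginning with a 4-byte big-endian size and a 4-byte type code, so a player can tell with one pass over the headers whether the seek information precedes the media data. The sketch below is illustrative only; `listTopLevelBoxes` and `makeBox` are hypothetical helpers, not part of MP4Box, and it ignores the rarer 64-bit largesize form.

```javascript
// Sketch: scan the top-level ISO BMFF boxes in a byte buffer and report
// their types, offsets, and sizes, so we can check whether 'moov'
// precedes 'mdat'. Assumes the common 32-bit box-size form.
function listTopLevelBoxes(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const boxes = [];
  let offset = 0;
  while (offset + 8 <= bytes.byteLength) {
    const size = view.getUint32(offset); // big-endian box size
    const type = String.fromCharCode(
      bytes[offset + 4], bytes[offset + 5], bytes[offset + 6], bytes[offset + 7]
    );
    boxes.push({ type, offset, size });
    if (size < 8) break; // malformed or largesize form; stop scanning
    offset += size;
  }
  return boxes;
}

// Helper to build a fake box for demonstration purposes only.
function makeBox(type, payloadLength) {
  const box = new Uint8Array(8 + payloadLength);
  new DataView(box.buffer).setUint32(0, box.byteLength);
  for (let i = 0; i < 4; i++) box[4 + i] = type.charCodeAt(i);
  return box;
}

// A header-first layout: ftyp, then moov, then the media data in mdat.
const file = new Uint8Array([
  ...makeBox('ftyp', 16), ...makeBox('moov', 100), ...makeBox('mdat', 500),
]);
const boxes = listTopLevelBoxes(file);
console.log(boxes.map(b => b.type)); // → [ 'ftyp', 'moov', 'mdat' ]
```

With this layout a client fetching the file over HTTP gets everything it needs to seek from the first bytes; if 'moov' trailed 'mdat', it would need an extra byte-range request to reach it.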
In the case of the polyfill, would this have no official relation to this spec, but could it be included by pages using MediaRecorder to rewrite the results of their recording to contain Cues? Would there be a need for the file to already have Cues written, in the sense that it's a strict move operation, or would it handle writing Cues in files that didn't have any?

Yeah, in this case the polyfill would be a node.js package that would be informatively linked from this very spec, and would consist of a single function call that gets the whole set of recorded Blobs and passes it through the mentioned function (CopyAndMoveCuesBeforeClusters), which tries to "clean up" the webm/mkv so that it has a correct Duration, Cues, and a bunch of other things.

Aside from the concerns already mentioned, adding a finalise()-like function would face some operational issues spec-wise. To add a finalise() method, we would need to specify what data is passed into it: for example, should this method be passed as a parameter the whole bag of Blobs received in ondataavailable, or just some Blobs marked in some particular way? Different container formats might need to rewrite different chunks of the output, so if the answer is "the whole bag", then please read on. Suppose the user doesn't mind the UA holding on to the data for as long as needed, and indicates that by calling start() with no timeslice: at first sight, this situation would allow the implementation to rewrite the Cues/length appropriately, since it holds on to all the data, right? The problem here is that requestData() can be called at any time, flushing any internal memory and dumping us into case 2.

As for the muxer-specific nature: is your concern that a finalise()-style function, or indicating that the data will not be read back until completion, is not enough to allow all muxers to handle this case?
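For concreteness, the "whole bag of Blobs" pattern under discussion might look like the sketch below on a recording page. This is an illustration, not spec text: `assembleRecording` is a made-up helper, and `copyAndMoveCuesBeforeClusters` merely stands in for the clean-up function mentioned above.

```javascript
// Sketch of the "whole bag of Blobs" polyfill pattern: buffer every chunk
// the recorder emits, merge them into one Blob on stop, and hand that to a
// post-processing step that rewrites Duration/Cues.
function assembleRecording(chunks) {
  return new Blob(chunks, { type: 'video/webm' });
}

// Browser-only wiring (guarded so this sketch also loads outside a browser).
if (typeof MediaRecorder !== 'undefined' && typeof navigator !== 'undefined') {
  navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
    const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
    const chunks = [];
    recorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };
    recorder.onstop = async () => {
      const raw = assembleRecording(chunks);
      // Hypothetical clean-up step, standing in for the proposed function:
      // rewrite the Duration and move the Cues ahead of the Clusters so the
      // resulting webm is seekable.
      // const seekable = await copyAndMoveCuesBeforeClusters(raw);
    };
    recorder.start(1000); // timeslice: one dataavailable event per second
  });
}
```

Note that this only works because the page itself kept every chunk; it sidesteps the finalise()/requestData() interaction discussed above, since the UA never needs to hold the data.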