Hi All,
Quick question: I'm encoding a VF DCP as I write, and the behaviour I'm experiencing seems less than efficient (I could understand what it's doing if the circumstances were different, but not in this example)...
Long and short: I've encoded an FTR OV and want to break it into two parts to insert an intermission (I know the playback server, a ShowVault IMB in our case, can do this via the show playlist tools, but I have always preferred this method).
I encoded the OV to contain multiple reels (around 40GB each). To facilitate the intermission, at the point of break, I added a 5 second fade out (for the part 1 VF) and a 5 second fade in (for the part 2 VF).
I assumed that if the break occurred in the middle of reel 3 (for example), the VF for Part 1 would simply reference reels 1 and 2 from the OV and encode a new reel 3 (up to the break, adding the fade out). Likewise, the Part 2 VF would encode a new reel 3 with the fade in, and then simply reference reels 4 and 5 from the OV. (I also appreciate the fade out is only in the picture .mxfs and the audio cuts abruptly; however, a simple fade-out macro for our sound processor fixes this, although it's not ideal.)
What it seems to be doing, however (albeit quicker), is writing totally new .mxf reels, making the VF act more like an OV (as I'm still encoding I can't yet see the full VF generated). It's going a lot quicker as it seems to simply be repackaging the bulk of the image data rather than re-encoding it, but in terms of size the VFs will be huge, mostly containing duplicate data from the OV, which also needs ingesting anyway...
Is the above the intended behaviour? If so, might I suggest altering the overall workflow for cases where VFs are generated from multi-reel OVs?
I get that in the case of single reel OVs you'd need to do everything again.
To stave off questions: the reason for doing it this way, rather than making two OVs for Parts 1 and 2 by trimming the original source material, is that I tried that first. The issue came when the Part 2 OV project gave an error about the trim point not being on a frame boundary of the original source file (not quite sure how that can happen when I can only input whole integer values into the HH:MM:SS:FF of the trim markers) and DCP-o-matic had made an adjustment for me to fix it. All was well until the audio turned out to be wildly out of sync in the encoded output. Part 1 was fine, but not Part 2, and no amount of fiddling could get the original error to go away, so I've had to encode the whole FTR as one OV and split it that way.
As a side note, whilst it's doing this the progress bar of the batch encoder isn't actually moving... it's only by looking at the HDD activity that I can see it's doing anything at all!
Any thoughts on how DCP-o-Matic should handle VFs from multi reel OVs like this example?
Amazing software btw!
Cheers,
Owen.
Re: VF DCP generated from OV with multiple reels
Just to add to this: the encodes for both Part 1 and Part 2 subsequently crashed. A DCP folder was made in both cases, with just the video MXFs in them (no CPL, PKL, Volume Index, or audio .mxfs).
The batch encoder only showed this error:
programming error at ../src/lib/reel_writer.cc:413
Clicking the 'details' button doesn't elaborate much more:
'it is not known what caused this error'
I'm trying again with no changes; here's hoping!
I really don't want to mess with the input source (i.e. manually split it), as that would cause a drop in quality: it gets re-encoded, and then encoded to JPEG 2000 again on top of that!
Owen.
Re: VF DCP generated from OV with multiple reels
I wouldn't like to mislead you into thinking that I know how to do this with DCPom, but you know that a video asset can be split into parts by means of CPL editing, and so can the audio. No such luck with the subtitles, I'm afraid.
Meaning, if you have a single reel with (say) 1000 frames, you can cook up the CPL in such a way that it says:
Reel A: play the movie (for both the video and audio essences) from frame 1 to frame 500.
Reel B: then play the movie from frame 501 to frame 1000.
The server then sees a two-reel movie that uses the same files for both reels; you just fiddle with the IntrinsicDuration, EntryPoint and Duration values in order to present what you want in the manner you prefer.
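Just as a rough sketch (element names as in a SMPTE CPL, placeholder UUID, a 24 fps edit rate assumed, and the other mandatory picture fields left out), the picture entry of the two reels could point at the same asset and differ only in EntryPoint/Duration:
Code:
<!-- Reel A: the first 500 frames of the one picture MXF -->
<MainPicture>
  <Id>urn:uuid:SAME-PICTURE-ASSET-UUID</Id>
  <EditRate>24 1</EditRate>
  <IntrinsicDuration>1000</IntrinsicDuration>
  <EntryPoint>0</EntryPoint>
  <Duration>500</Duration>
</MainPicture>

<!-- Reel B: frames 501 to 1000 of the very same MXF -->
<MainPicture>
  <Id>urn:uuid:SAME-PICTURE-ASSET-UUID</Id>
  <EditRate>24 1</EditRate>
  <IntrinsicDuration>1000</IntrinsicDuration>
  <EntryPoint>500</EntryPoint>
  <Duration>500</Duration>
</MainPicture>
The matching MainSound entries would get the same EntryPoint/Duration treatment so that picture and sound stay in sync.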
While you're at it, you can add the intermediate reel where you want it (between the two you already have, in this case) by adding the corresponding entry to the CPL file and providing the appropriate files in the DCP.
The whole package would then need new UUIDs for the CPL, PKL etc. and, on top of that, updated hash info.
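For reference, the hash that needs updating is just the base64-encoded SHA-1 digest of the whole asset file; each asset entry in the PKL carries one, roughly like this (placeholder values - and note the Type string differs between Interop and SMPTE packages):
Code:
<Asset>
  <Id>urn:uuid:ASSET-UUID-HERE</Id>
  <Hash>BASE64-SHA1-DIGEST-HERE=</Hash>
  <Size>40000000000</Size>
  <Type>application/mxf</Type>
</Asset>
Unchanged MXFs keep their existing digests; only new or modified files (plus the new CPL, which is itself listed in the PKL) need fresh ones.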
Again, I'll remind you that this is not the DCP-o-matic way of creating a version file, but it can provide a playable result, until someone in here shares a way to do it without re-encoding or creating sync problems.
P.S. It goes without saying that you can't get away that easily with what I wrote above, once the DCP is encrypted.
Re: VF DCP generated from OV with multiple reels
Owen - you can't, and shouldn't, reference individual reels/MXFs from an OV with DCP-o-matic. When creating the Part 1 VF, you load the OV first, then reference both video and audio from that OV, then trim so that only everything from the beginning up to the fade/black/silence is kept, then create the DCP. Duplicate this project, rename it to Part 2, then adjust the trimming to keep everything from the black/fade-in/silence to the end, and create the Part 2 DCP. Both DCPs will then only contain a few small metadata files, no MXFs at all; all the assets used by these two VFs are contained in the OV.
You cannot fade out/in referenced assets, though - that is not possible, since the idea of referencing assets is that they remain part of the OV, unaltered. Yes, in theory, DCP-o-matic could automatically metadata-trim referenced assets and add a new segment with the fade out/fade in, but that is not supported currently. You would need to insert that passage manually into the VF project and set appropriate non-visible in/out points. Possible, but cumbersome.
However, in this case creating a VF is unnecessarily complicated; you should simply create two new (OV) DCPs for Part 1 and Part 2 (do the same as above, but don't check the 'Link to OV' checkboxes). DCP-o-matic will only copy the necessary material into each partial DCP (no recompression), no matter if or where reels are split. In this case you can also use the fade out/fade in features, as original assets are created. DCP-o-matic will only recompress the images with the fade out/in applied, and will pass all other frames through.
All of this is valid on the assumption that you are using a version of DCP-o-matic that does it right.
Which version of DCP-o-matic are you using? I just tried all three methods in 2.11.58, splitting a StarWars8 trailer into two parts, and all worked (and play, at least in the DCP-o-matic player) as VFs (or as OVs in the case of the non-referenced partial DCPs). The VF method with fade out/in needed some head scratching, though, to get the in/out points right. It's easy if you have hard cuts shortly before and after the intermission point; otherwise you would have to go strictly by HH:MM:SS:FF numbers, e.g. if you want to confine recompression strictly to the fade-out/fade-in passage.
- Carsten
Re: VF DCP generated from OV with multiple reels
Carsten,
The only reason I bring this up is the 'trim not aligned to frame boundary' error I kept getting for the Part 2 DCP. I'm fairly sure the source (.m2ts) is constant frame rate, but I can't be 100% certain, so I have no idea why this error existed in the first place; it's not like I can put a floating point number in as the frame number in the HH:MM:SS:FF timeline...
I was hoping to save a bit of time, without having to mess with the source material (as I've now had to do).
Here's what I thought would happen. Let's use a simple 7.1 audio VF as an example, as it's a bit easier to get across:
I have a 5.1 OV with 5 .mxf picture reels and 5 .mxf audio reels; the CPL effectively pairs them up and puts them in order. The 7.1 VF only contains 5x 7.1 audio .mxf reels; the CPL of the VF then pairs the picture .mxfs (which it doesn't contain) with the 7.1 audio .mxfs, which it does. To all intents and purposes, the CPL of the 7.1 VF looks identical to that of the 5.1 OV, with both picture .mxfs and audio .mxfs referenced.
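Roughly, one reel of that 7.1 VF's CPL might look like this (heavily simplified, placeholder UUIDs) - the picture Id points at an asset that physically stays in the OV package, while the sound Id points at the new 7.1 MXF that ships inside the VF:
Code:
<Reel>
  <AssetList>
    <MainPicture>
      <Id>urn:uuid:PICTURE-UUID-FROM-OV-REEL-1</Id>
      <!-- picture MXF itself remains in the OV -->
    </MainPicture>
    <MainSound>
      <Id>urn:uuid:NEW-71-AUDIO-UUID</Id>
      <!-- 7.1 audio MXF is carried in the VF -->
    </MainSound>
  </AssetList>
</Reel>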
Now, I see no reason why my use case should be any different...
My full OV has 5 reels of audio and picture. The Part 1 VF needs reels 1 and 2, but has no need to modify them (nothing has changed), thus it can reference them instead of copying them (the VF CPL will look identical to the OV for the first two reels). However, reel 3 has to be trimmed and a fade to black included; therefore DoM should encode a new picture and audio .mxf for its portion of reel 3. Thus the only actual assets the Part 1 VF contains are 1x CPL, 1x PKL, 1x VOLINDEX, 1x audio .mxf and 1x video .mxf (which represents its reel 3, but which is in fact only a portion of the original reel 3 that it replaces).
The Part 2 VF is the same, just with the bit after the trim as a new picture and audio .mxf, and then referencing reels 4 and 5 from the OV.
I would expect DoM to realise it can get away with only encoding a new reel for the affected 'bit'...
CPL of Part 1 VF:
Code:
<ReelList>
  <!-- Reel 1 -->
  <Reel>
    (REEL 1 audio and picture from OV)
  </Reel>
  <!-- Reel 2 -->
  <Reel>
    (REEL 2 audio and picture from OV)
  </Reel>
  <!-- Reel 3 -->
  <Reel>
    (NEW audio and picture .mxf generated by DoM)
  </Reel>
</ReelList>
CPL of Part 2 VF:
Code:
<ReelList>
  <!-- Reel 1 -->
  <Reel>
    (NEW audio and picture .mxf generated by DoM)
  </Reel>
  <!-- Reel 2 -->
  <Reel>
    (REEL 4 audio and picture from OV)
  </Reel>
  <!-- Reel 3 -->
  <Reel>
    (REEL 5 audio and picture from OV)
  </Reel>
</ReelList>
This way, to generate the two parts with a fade out and fade in from the OV, DoM only has to encode a few thousand frames for each part, and each VF is only a few tens of GB, rather than 70 or 80GB (which would simply be the OV size divided by two).
I know this doesn't work for non-burnt-in subs, but I'm not dealing with them, so all's OK.
Does that help or hinder?
Re: VF DCP generated from OV with multiple reels
As an addition: using the latest test version, and doing it the way suggested (i.e. without trying to add the fade in/out), it still failed with the reel writer error (line 380 this time).
I loaded the OV (as it's reel-based, in the DCP settings tab I had to choose the reel-per-content selection). This allowed me to check the 'reference OV' option for picture. When applying the trim, as I would have expected (it occurring in the middle of a picture reel), I was forced to generate new audio MXFs.
The VF started to generate; it made the audio MXFs and computed their digests, then failed with the reel writer error.
Not sure what I did wrong?
To fix this, as it's on a deadline, I've had to edit the source to create a 'new' file that is only the bit for Part 2, so I'm just making a normal DCP from that instead.
Still would be good to work out these questions though.
Re: VF DCP generated from OV with multiple reels
Hi Owen.
While the DCP standard allows you to reference more or less any number of reels from either an OV or a VF package in a CPL, the software creating the VF CPL needs to be able to structure this accordingly and offer a conceptual interface to the task. Currently, DCP-o-matic only allows you to reference assets/reels unaltered, and will not insert splits/trims automatically at e.g. fade-out/fade-in passages. Things like that would need to happen based on a predefined workflow scheme, and DCP-o-matic is not built with a selection of such workflows (I think hardly any DCP mastering tool is). So, you can do many things that the DCP standard allows, but you may have to cater for the details manually, as far as the current GUI allows.
In my example, I (right-click) repeated the trailer to be split in the project, referenced only the first occurrence (both video and audio) to the OV, trimmed it (to allow the addition of the fade-out segment), and included the second occurrence, with the fade-out and trims applied, as VF assets. That way DCP-o-matic references the unaltered first part from the OV without including any assets, and copies just the frames needed for the fade-out as video and audio MXFs from the second occurrence into the VF.
Now, in your case I am not sure I understand the 5.1 and 7.1 VF audio thing in addition to the Part 1/Part 2 segmenting. If you want to create Part 1 and Part 2 versions for both a 5.1 and a 7.1 VF, that will become very tricky. I think no one would try to do this in a commercial distribution environment; they would simply create two separate Part 1 and Part 2 5.1 OVs, and two matching 7.1 VFs. Working with VFs needs a high degree of workflow discipline and a lot of thinking - that's why this thread contains so much text already. In your description, I can follow you until the point where you only reference the image part of the OV, but not the audio. If you don't reference the audio from the OV as well, DCP-o-matic clearly needs to create/copy new audio into the VF.
As for the frame boundary thing - this message will only occur in new versions, as a warning, when you open older projects. In older versions, trims could accidentally be placed between frame boundaries, which caused errors on trims; new versions detect this and shift accordingly. However, I thought you were trying to create a Part 1 and Part 2 DCP from a full OV DCP, and so you shouldn't need to deal with these MTS files again? Maybe you had to recreate the project with the MTS files in order to split them with the fades? That would not have been necessary, as you can re-use the DCP you created, whether as an OV or for a VF.
The batch encoder progress bar not moving SHOULD have been fixed a while ago: https://dcpomatic.com/mantis/view.php?id=1109
- Carsten
Re: VF DCP generated from OV with multiple reels
Hi Carsten,
The whole 5.1/7.1 OV/VF was merely an example of how I know mastering facilities work (my day job involves a lot of work with DCPs, so I know a bit, but hardly enough to write something as good as DoM!).
I kind of agree that DoM cannot do all things for all people, and your main target for VFs is those wanting to add subs; it just seemed odd that for something that programmatically isn't hard (from a program logic standpoint - I'm not saying it's an easy thing to actually implement) it was making a bit of a meal of it (that is to say, making it harder than it otherwise needs to be).
You raise an interesting point about the frame boundary warning... I feel there is a bug...
This was happening when I was attempting to make two OVs by trimming up the source material. The error occurs when saving the project and re-opening it within the same version... although it didn't stop the DCP from being made, the audio was horrendously out of sync (no audio settings other than gain had been touched).
Re: VF DCP generated from OV with multiple reels
Hmm, weird - when reusing DCPs, the frame boundary thing should not happen I think...
The test I did with a trailer admittedly was a one-reeler, but with a non-zero entry point for both video and audio (even with different entry points for video and audio). Maybe I should check with a multiple reel DCP as well. In the past, I have seen the error on reel writing when testing more complex features, but I noticed that Carl has fixed many of these bugs.
I think the commercial DCP mastering tools make it a bit easier to shuffle around individual reels (and modify existing reels) because those workflows usually supply reels with syncpop heads and tails, so while rearranging them you can check sync more easily. If there isn't a reel (pun intended) reason to create reels, I usually avoid creating multiple reels in DCP-o-matic; it also makes VF creation a bit easier. Whenever I need to access parts of reels, I load them into DCP-o-matic, repeat them, then split/treat them separately. When doing this with reels that have been written by DCP-o-matic, it usually creates no sync issues, as all reels written by DCP-o-matic are typically entry-point identical/inherently synced for video and audio.
DCP-o-matic does not actively support a reel-centric workflow as many commercial DCP mastering packages do. The reason is probably that DCP-o-matic was originally conceived as a conversion tool for finished content, while professional film productions very typically use a reel-based workflow. DCP-o-matic will create reels if told to, and will follow reel structures in CPLs, but as long as you don't load individual reels/MXFs, it is not built to, e.g., deal with different entry points or mask syncpop - that would need a completely different GUI. In order not to make a statement here that people may misunderstand when they find this thread on Google: DCP-o-matic can handle and compose reel-based source content into a DCP, and also adjust sync if needed, but it is not a streamlined workflow. DCP-o-matic is not a video editor. It has functions for splitting content, sync shifts, etc., but they are not as accessible as in dedicated tools.
That said - what you tried to achieve should be possible, but unlike the one-reel trailer I tried it with successfully, one should maybe also test it with a typical multi-reel composition.
- Carsten
Re: VF DCP generated from OV with multiple reels
Just to confirm, as I appreciate that with all the text, and it being a technical subject, things can sometimes be unclear...
The main issue with the frame boundary error arose when starting the project from scratch: no DCPs or anything, just the source .m2ts file.
I created a project which I wanted to be Part 1, loaded the .m2ts, found the point where I wanted to create the split, trimmed everything after that point, and saved. I then created a second project, which I wanted to be Part 2, loaded the same .m2ts file, found the point where I had previously split it in the Part 1 project, and trimmed all content before that point. Thus I have two projects: one shows 00:00:00:00 -> 00:58:50:00 of the source .m2ts, and the other shows 00:58:50:00 -> END of the same source .m2ts file. The issue is, when I save the Part 2 project file, close DoM, come back and open the project file (within the same version of DoM), I get the warning relating to the frame boundary... from what you have said, that shouldn't happen, as the version of DoM that saved the project is the same one that opened it...
The error only occurred in the Part 2 project, and although it didn't stop the DCP encode, it did, seemingly (as I can think of no other factors), create a huge audio sync issue...
It's useful to know about the reel thing in DoM. In reality, tools like Clipster prefer reels, and a main reason for doing so is to help with handling large files, quick generation of VFs, plus additional inserts like studio logos, extra dubbing cards in the credits, etc.
I think, in reality, I am pushing DoM beyond what it's really designed for, so it's not DoM's fault per se.
My solution in the end was to manipulate the source .m2ts in other software (XMedia Recode) to create a Part 1 file and a Part 2 file, thus avoiding the need to trim within DoM. As it turns out, I was able (using XMedia Recode) to create the required Part 1 and Part 2 source files without actually transcoding the image or audio, thus preserving the quality of the original source .m2ts.