Question from Daniel Weilich:
Hello, and first of all greetings to everyone here,
For some sleepless nights now I have been plagued by a general editing question about understanding the sequence settings in NLEs. It is about avoiding loss between the source codec/format and the final export.
Yes, yes, the dreaded sequence settings: I simply cannot get it into my head -> With sequence settings that differ from the material, is an additional transcoding of the material unavoidable or not? Put differently: when "playing out", do the editors convert source codec/format 1 directly into export codec/format 2, without the sequence setting as an intermediate step? Or does the NLE usually convert from codec/format 1 to the sequence codec/format, and only then to codec/format 2? In other words, do 1 or 2 steps run in the background when you export your finished sequence to your target format? In node-based compositing programs such as Nuke there is no sequence setting in the usual sense at all, and in your head there is only one step: raw material and export choice. That leads me to suspect that either sequence settings are basically virtual, or those programs simply work differently.
The question comes from my often inconvenient but unavoidable visual way of thinking, which I cannot shake off. Embarrassing but true: I have been cutting for quite a while, yet to this day I have never received a really clear answer. Unfortunately, there are occasionally unavoidable situations in which you have to choose sequence settings that differ from both source and target. One answer I did get was that most NLEs definitely cannot ignore the sequence settings IF effects and attribute changes (aspect ratio / motion / time, etc.) are applied on the sequence, because those effects are adapted to the sequence settings. That makes sense, but the answer is not enough for me: it clearly settles one special case but leaves out many other situations. I also got no information about differences between the popular NLEs, or whether there are any differences at all. I think he was referring to Final Cut Pro at the time.
If anyone here knows the basic principles of how such programs (for example Final Cut Pro, MC, Edius or Premiere) "tick" in this respect, I would be very grateful for any advice. Thank you in advance for your suggestions!
Greetings, Daniel
Reply from Belize:
In Vegas Pro there are project properties, not sequence settings. And there is a clear division: the project properties (resolution, field order, frame rate) fundamentally do not affect the material within the project, not even in connection with filter effects.
For example, 1920x1080i50 video can be imported into a project whose project properties are 2 fps at 8x8 pixels, progressive, and those properties still have no influence when the video is then rendered out, say, as 1920x1080i50 again or into another format. What is decisive here are the render settings, while the project properties are merely the basis for the preview during editing.
Within Vegas Pro the project properties mentioned above only have an additional influence when new media are generated internally (because those are generated to match the project properties exactly), when a frame grab is performed (because that runs via the preview function), and in a few exceptional cases when rendering, if the "custom video frame size of output" option is selected in the render dialog (and the proportions of the input and output material differ).
About other non-linear editing systems I cannot make any statement.
Reply from Daniel Weilich:
Belize, thank you for the notes regarding Sony Vegas, interesting to hear. Of course one might now jump to the conclusion that it is the same in other NLEs? Especially since Premiere also has project settings, although I think they relate more to scratch disks, capturing, resolution etc. In Final Cut Pro the sequence settings are very detailed, from pixel aspect ratio and frame aspect ratio right up to the codec of the sequence. So it seems likely that the behaviour there differs from Vegas. But I would be very grateful for further comments.
Belize, your funny example did give me an idea, though. Just for fun I will now try it out in the various programs lying around here: I will set the sequences so small that one would clearly have to see ARTIFACTS if the whole thing were then rendered out at a large size.
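
Outside an NLE, that comparison can be mimicked with a small script. A minimal sketch, assuming ffmpeg is installed and a 1920x1080 test clip exists; the file names and the tiny 64x36 "sequence" size are only placeholders:

# Compare a direct transcode with one forced through a tiny intermediate file.
# If exports really ran through such a small "sequence", the second result should
# show much worse PSNR against the source than the first.
import subprocess

SRC = "input.mp4"        # hypothetical 1920x1080 test clip
DIRECT = "direct.mkv"    # source -> target in one step
TINY = "tiny.mkv"        # stand-in for a tiny 64x36 "sequence"
VIA = "via_tiny.mkv"     # source -> tiny intermediate -> target

def run(args):
    subprocess.run(args, check=True)

# One step: straight to the target size, lossless FFV1.
run(["ffmpeg", "-y", "-i", SRC, "-vf", "scale=1920:1080", "-c:v", "ffv1", DIRECT])

# Two steps: shrink to the tiny "sequence" first, then blow it back up.
run(["ffmpeg", "-y", "-i", SRC, "-vf", "scale=64:36", "-c:v", "ffv1", TINY])
run(["ffmpeg", "-y", "-i", TINY, "-vf", "scale=1920:1080", "-c:v", "ffv1", VIA])

# PSNR of each result against the source (printed in ffmpeg's log output).
for name in (DIRECT, VIA):
    run(["ffmpeg", "-i", name, "-i", SRC, "-lavfi", "psnr", "-f", "null", "-"])

If an NLE's export behaved like the second path, the upscaled result would show exactly the kind of artifacts described above.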
I look forward to further reports,
Greetings, Daniel
Reply from Axel:
The sequence setting should ideally differ from the footage codec neither in image size nor in frame rate, because that costs real-time performance in the preview (Belize's explanation). Since people rely on real-time these days, that would be a no-go.
The sequence can, of course, pass the footage straight from its codec to the export codec without a single frame ever having been rendered in the sequence codec (what you call a "virtual" setting). That is possible, but not sensible.
The sequence setting should be the project setting: the codec in which you export the finished video at the best quality (or one sufficient for your purposes). Let's say you have DSLR footage that you render straight into the web format H.264, which is technically possible nowadays, yes. As two recent parallel threads ("quality losses") show, that is not a wise procedure. It would be better to have an original at high quality in a lossless or low-loss codec, and to compute the smaller versions from that file.
The sequence codec should either be identical to the footage codec (you are in a hurry, and the codec happens to be something like DV or DVCProHD anyway) or less compressed and more efficient (see above). The compositing programs in the "suites" come with an intra-frame compressed codec installed as the render codec, because everything else is of unacceptable quality. Since post work after the NLE edit is nowadays often linked back to the NLE via dynamic links, it is not very useful to work with different settings there.
Conclusion:
[footage codec] -> [edit-and-export ("master") codec] -> [target codec].
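
A rough illustration of that chain as a small script, assuming ffmpeg is available; all file names and codec choices below are purely hypothetical examples, not a fixed recipe:

# [footage codec] -> [master codec] -> [target codec], sketched with ffmpeg.
import subprocess

def run(args):
    subprocess.run(args, check=True)

FOOTAGE = "dslr_clip.mov"   # hypothetical camera original (e.g. H.264 from a DSLR)
MASTER = "master.mkv"       # low-loss/lossless edit-and-export master
WEB = "web_720p.mp4"        # one of several smaller delivery versions

# Step 1: create the high-quality master once (FFV1 is lossless; a low-loss
# intra-frame codec would serve the same purpose).
run(["ffmpeg", "-y", "-i", FOOTAGE, "-c:v", "ffv1", "-c:a", "flac", MASTER])

# Step 2: derive every delivery format from the master, never from another delivery file.
run(["ffmpeg", "-y", "-i", MASTER, "-vf", "scale=1280:720",
     "-c:v", "libx264", "-crf", "20", "-c:a", "aac", "-b:a", "160k", WEB])

The point of the chain is that the lossy encode happens only once per target, always starting from the same high-quality master.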
Reply from Daniel Weilich:
Axel, if I understand you correctly (and thanks for your answer!), then in the case of mixed footage (to which I could indeed attach a problem example) you recommend converting everything into a lossless (or uncompressed, etc.) codec, so that it is all available in one identical, common, maximally lossless codec/format; the sequence is set to this codec, the project is also "mastered" to this codec, and the different end formats are then created from that master? Then you would indirectly be saying that the answer to my question is "Yes". That is: "Yes, the sequence setting produces an intermediate step if it does not match the footage." With some footage, however, that would mean 3 steps instead of 1 or 2 transcodes, which is exactly what I wanted to get away from. That is why I kept hesitating: what I was really asking about was the behaviour of the sequence setting. It would be quite handy, though, if you could simply have the original material transcoded directly into the specified export format (one step).
"The sequence setting should ideally differ from the footage codec neither in image size nor in frame rate, because that costs real-time performance in the preview (Belize's explanation). Since people rely on real-time these days, that would be a no-go." What if a classic cut is planned, i.e. no effects trickery, no CG in the edit? So if no effects sit on the sequence, but different source footage has to go into the same sequence (alternating pictures from different sources with different aspect ratios, etc.)? Would it not be useful then to know whether the sequence settings produce an intermediate transcoding step on export or not? Because that was still my original question, and not the workflow. Together, for example, Sorenson 3 and DVCPRO PAL WIDESCREEN with 16:9 (anamorphic): frame aspect ratio almost the same, but different pixel aspect ratio. If I now render out to FFV1, Huffyuv, Lagarith, x264 or another lossless codec, I would like to know whether, when my sequence setting corresponds to one of the output codecs, a double step is now generated for the others. But thanks anyway for the workflow recommendation. That is exactly how I have worked until now, because it is precisely the workflow I would recommend whenever I am not sure either way (regarding my question).
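
To make the pixel aspect ratio part of that example concrete, here is a minimal sketch, again assuming ffmpeg and hypothetical file names; 720x576 with a 64:45 pixel aspect ratio (roughly 1024x576 in square pixels) is the commonly used PAL widescreen geometry:

# One resampling pass versus the "double step" to be avoided.
import subprocess

def run(args):
    subprocess.run(args, check=True)

SRC = "dvcpro_pal_16x9.mov"   # hypothetical anamorphic PAL widescreen clip

# Single step: anamorphic frame straight to square pixels and lossless FFV1.
run(["ffmpeg", "-y", "-i", SRC,
     "-vf", "scale=1024:576,setsar=1", "-c:v", "ffv1", "one_step.mkv"])

# Double step: first resample into some other geometry (standing in for a
# mismatched sequence setting), then resample again for the export.
run(["ffmpeg", "-y", "-i", SRC,
     "-vf", "scale=960:540,setsar=1", "-c:v", "ffv1", "intermediate.mkv"])
run(["ffmpeg", "-y", "-i", "intermediate.mkv",
     "-vf", "scale=1024:576", "-c:v", "ffv1", "two_steps.mkv"])

Whether a given NLE's export actually behaves like the single-step or the double-step variant is exactly the open question here.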
Thanks and best regards