News item from slashCAM: a news post with links and images can be found on the slashCAM Magazine pages.
Reply from deti:
The H.264 data stream in the encoded MOV files is only 8-bit, which gives a ratio of 255:1, i.e. 20 log10(255) ≈ 48 dB, corresponding to 48 / 6 = 8 f-stops, provided the signal contains NO noise. But that is impossible in practice.
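The arithmetic can be checked with a short sketch (plain math, nothing camera-specific; the 6-dB-per-stop rule from this thread is taken as given):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Dynamic range of an n-bit linear signal: 20 * log10(2**n - 1)."""
    return 20 * math.log10(2 ** bits - 1)

def stops(bits: int) -> float:
    """One f-stop is a doubling of light, i.e. roughly 6 dB in the signal."""
    return dynamic_range_db(bits) / 6.0

for b in (8, 10, 12):
    print(f"{b:2d} bit: {dynamic_range_db(b):5.1f} dB ~ {stops(b):4.1f} stops")
```

This reproduces the 48 dB / 8 stops, 60 dB / 10 stops and 72 dB / 12 stops figures quoted in this thread.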
In my view, a higher dynamic range is only possible in RAW photo mode.
@WoWu: The thread is yours.
Deti
Reply from ruessel:
The exposure curve doesn't have to be that linear; films can, after all, be burned to a good 8-bit Blu-ray/DVD with the entire aperture range. At least that's how I understood it. Sure, not every exposure point is described in the digital stream and at exactly the right place, but in principle the brightest and darkest points of the exposure fit onto it....... Keyword: HDR.
Reply from deti:
An f-stop is defined as a spacing of 6 dB in the signal. When I do HDR with the usual tonemapping, I reduce the output value range by a non-linear compression, and the 6 dB spacing is lost. So one can no longer speak of f-stops to describe the tonal range. For 10 f-stops I would therefore need 60 dB, i.e. 10 bits, which we do not have with the H.264 encoding in the camera.
If I may add: since the Red ONE records in 12 bit, the result is an effective dynamic range of 72 dB and a theoretical range of 12 f-stops.
Deti
Reply from ruessel:
"So one can no longer speak of f-stops to describe the tonal range." Fine, I accept that.
But we can still cheat and say that the camera can map lighting conditions spanning 10 f-stops non-linearly without the blacks crushing or the highlights blowing out..... which is nothing special; my EX1 can do that too with its knee function.
Reply from deti:
"But we can still cheat and say that the camera can map lighting conditions spanning 10 f-stops non-linearly without the blacks crushing or the highlights blowing out..... which is nothing special; my EX1 can do that too with its knee function." One can claim that, but it is not physically true, because restoring the original dynamic range introduces substantial quantization errors.
Example: we record a gray wedge (brightness from 0-100%) in 10 bit, compress the dynamic range to 8 bit, and then go back to 10 bit, because of course we want to restore the original picture impression. The result: the gray wedge shows banding that the original 10-bit version did not have. So what does the cheat gain us? Wouldn't it have been better, for example, to let the blacks crush and represent the remaining value range at least linearly?
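The gray-wedge example can be reproduced in a few lines (a minimal sketch, using plain bit-shifting for the 10-to-8-to-10-bit round trip):

```python
# A 10-bit gray wedge: every code value 0..1023 exactly once.
wedge_10bit = list(range(1024))

# Compress to 8 bit and expand back to 10 bit (simple linear rescaling).
compressed = [v >> 2 for v in wedge_10bit]   # 10 bit -> 8 bit
restored = [v << 2 for v in compressed]      # 8 bit -> 10 bit again

# The restored wedge contains only 256 distinct levels instead of 1024:
# neighboring values collapse into steps of 4, i.e. visible banding.
print(len(set(wedge_10bit)), "levels before")         # 1024
print(len(set(restored)), "levels after round trip")  # 256
print(restored[0:8])                                  # [0, 0, 0, 0, 4, 4, 4, 4]
```

Three quarters of the gradations are gone for good; re-expanding the value range cannot bring them back.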
Deti
Reply from ruessel:
"Since the Red ONE records in 12 bit, the result is an effective dynamic range of 72 dB and a theoretical range of 12 f-stops." And what good is that to me in practice if the delivery material is an 8-bit Blu-ray? Then the output value range has to be crushed by a non-linear compression anyway.
Reply from deti:
Because on the BD you buy, the original picture impression is restored, and so the range of values is inevitably truncated. Otherwise the BD picture would look much flatter than the original in the cinema.
Deti
Reply from ruessel:
"Wouldn't it have been better, for example, to let the blacks crush and represent the remaining value range at least linearly?" That's a big "if". I don't think so; judging from my tests with my EX1R, I like the picture better when the tonal range is somewhat compressed at the top and bottom. The picture may look a little flatter, but it no longer has that "cheap" video look where the structures are simply clipped away. Besides, one usually shoots grayscale charts for matching anyway, and 10 bit doesn't change the flatness either; we live in an optical 8-bit world. The images can become really dull this way, though; one must not overdo it with the dynamic compression (black stretch + knee).
Reply from deti:
I'm not a cameraman or an editor but a computer scientist, and I comment on these relationships only from a technical perspective. Which creative preferences are destroyed and which desired effects are achieved by all this is beyond my knowledge.
Deti
Reply from ruessel:
Yes, luckily creative taste can't really be expressed in figures. Ahhh, but it can on the bank accounts of some advertising agencies....
Reply from Harald_123:
"The H.264 data stream in the encoded MOV files is only 8-bit, which gives a ratio of 255:1, i.e. 20 log10(255) ≈ 48 dB, corresponding to 48 / 6 = 8 f-stops, provided the signal contains NO noise. But that is impossible in practice." One can increase the dynamic range with dithering.
This has been common in audio for many years. So it is no problem to increase the 96 dB dynamic range, which a simple calculation yields for a 16-bit audio CD, by a further 30 dB in the audible range with dithering.
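The audio claim can be illustrated with a toy quantizer (an illustrative sketch only; UV22 and real dithering algorithms are far more elaborate): a tone whose amplitude stays below one quantization step vanishes completely without dither, but survives, buried in noise, when TPDF dither is added before rounding.

```python
import math
import random

random.seed(0)

def quantize(x: float) -> float:
    """Quantize to integer steps of 1 LSB."""
    return float(round(x))

# A tone with a peak amplitude of only 0.3 LSB, i.e. below the step size.
n = 20000
tone = [0.3 * math.sin(2 * math.pi * 50 * i / n) for i in range(n)]

# Without dither every sample rounds to 0: the tone is gone.
plain = [quantize(s) for s in tone]

# TPDF dither: sum of two uniform variables in [-0.5, 0.5) LSB.
tpdf = [quantize(s + random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5))
        for s in tone]

print("undithered output is pure silence:", all(v == 0.0 for v in plain))
corr = sum(a * b for a, b in zip(tone, tpdf))
print("dithered output still carries the tone (correlation > 0):", corr > 0)
```

The dithered samples toggle between -1, 0 and +1 in a way whose average follows the sine; that is what "increasing the dynamics by dithering" means here: information below the LSB is preserved as a statistical property, at the cost of a constant noise floor.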
What Canon is actually doing here, I don't know.
Reply from deti:
"One can increase the dynamic range with dithering." Aha, so you know your stuff; do you actually understand how modern compression techniques such as H.264 work?
Deti
Reply from Jake the rake:
"The H.264 data stream in the encoded MOV files is only 8-bit ... But that is impossible in practice."
"One can increase the dynamic range with dithering. This has been common in audio for many years ... What Canon is actually doing here, I don't know."
Hi,
do you have a source for that somewhere? Dither was invented to cover quiet passages with noise, or rather to mix noise in so that the signal ultimately doesn't sound "square" (the waveform at 1 bit). To counter the resulting heavy loss of dynamics, "noise shaping" was invented later.
What you probably mean is the difference between a dithered and an undithered signal (in an analog system), where the dithered signal (possibly) delivers a better SNR...
Reply from Harald_123:
"Do you have a source for that somewhere?" For example, here. In H.264, appropriate dithering algorithms are supposed to allow the image data to be reduced further.
Reply from deti:
"Do you have a source for that somewhere?"
"For example, here. In H.264, appropriate dithering algorithms are supposed to allow the image data to be reduced further." The article mentioned is primarily about VP7, and the dithering is applied on the decoder side in post-processing to counter block formation and crushed black areas. Such procedures are in fact used for MPEG-2 or H.264 video alike, to reduce the artifacts caused by the compression algorithms. But this by no means increases the dynamic range.
The PDF above states crystal clearly:
"The downside to this is that consequently there is no lower-order bit dithering within H.264 and so minor fluctuations appear in the large expanse of color, especially on HD material."
Deti
Reply from Harald_123:
"...to counter block formation and crushed black areas. Such procedures are in fact used for MPEG-2 or H.264 video alike..." If "crushed black areas" are to be avoided, the dynamic range has usually been enlarged; that is a core feature of high dynamics. And if at one particular place in H.264 there is "no lower-order bit dithering", there might still be other kinds.
But let's wait and see whether WoWu has something to say about this too. He seems to have specifically "solid" knowledge in such areas.
Reply from deti:
"If 'crushed black areas' are to be avoided, the dynamic range has usually been enlarged; that is a core feature of high dynamics." No. This is comparable to the so-called comfort noise that is used in telephony with Voice Activity Detection (VAD).
Deti
Reply from Harald_123:
@ Deti
Reply from deti:
@ Deti
Reply from carstenkurz:
What has been measured there is the sensor + A/D range in video mode. It's actually understandable that this does not differ substantially from the range in still-image mode.
The difference with the RED is: with RAW, the full range is subsequently available for white balance, exposure and grading.
The Canon, however, even with a flat tone curve, still delivers only 8 bits in the source material, and moreover only 4:2:0.
In other words, analogous to the HDR technique, it effectively manages to accommodate a greater range in 8 bits. In that respect, video mode actually doesn't differ that much from the still-image JPEG mode (if you leave the color coding aside for a moment).
The difference, then, lies precisely in the post.
For those who find it easier to picture: the Canon is like a slide/reversal recording, the RED (or other RAW cameras) like a negative. With the Canon, the image characteristic in video mode is set pretty much finally at recording; the codec then allows grading only within a very narrow range. With RAW, much more is possible than the mere negative analogy suggests.
A slide, when properly exposed, immediately looks crisp, but you can't do much more with it. If on top of that it's badly exposed, there's hardly anything to be done. A negative looks like nothing at all "raw", but can be worked on in a much broader range, even well beyond the optimal exposure "stop".
The Canon certainly gets the best out of "its" technology, but the comparison with the REDs or other RAW cameras is complete nonsense.
- Carsten
Reply from WoWu:
Things are getting pretty mixed up here. Basically, you are all right. Mathematically, of course, only eight stops can be accommodated linearly.
So if one assumed that RAW is created linearly, it could not be more than 8 stops.
But neither film nor the electronics record RAW. Even film has a gamma process and is far away from RAW; how far varies depending on the material. In electronics we have agreed on a gamma curve of 2.2, which the monitor manufacturers essentially work with as well. The curve is such that we can accommodate 10 stops.
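The point that a gamma curve lets roughly 10 stops fit into 8-bit code values can be made concrete with a small sketch (assuming a pure 2.2 power law; real transfer curves add a linear toe segment near black):

```python
def linear_code(lum: float) -> int:
    """Straight linear 8-bit encoding of a relative luminance 0..1."""
    return round(255 * lum)

def gamma_code(lum: float, g: float = 2.2) -> int:
    """Gamma-encoded 8-bit value (pure power law, no linear segment)."""
    return round(255 * lum ** (1 / g))

# Scene luminances one stop apart, spanning 10 stops below white.
for stop in range(11):
    lum = 2.0 ** -stop
    print(f"-{stop:2d} stops: linear code {linear_code(lum):3d}, "
          f"gamma code {gamma_code(lum):3d}")
```

With linear coding, everything below about -8 stops collapses into code 0 or 1, while the gamma curve still assigns distinct codes down to -10 stops and beyond.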
As regards the second issue, the resolution (number of steps) in certain ranges: that is more a question of how the signal is quantized, and today this is anything but linear with n quantization steps at equal spacing. Nowadays manufacturers determine the sampling points non-linearly, according to the capabilities of the hardware and/or the individual requirements. So if you want better resolution in dark areas, those areas are sampled at a higher resolution than bright areas, which the camera cannot transfer in full anyway. Or a manufacturer performs the quantization along the lines of Huffman/Fano, in which the quanta are chosen so that they correspond to equiprobable ranges of signal amplitude, that is: rare brightness = coarse quanta, frequent brightness = fine quanta.
This means that the quantization (in new cameras) can even change dynamically with the image content.
The quantization is also chosen accordingly at high signal densities.
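The "equiprobable ranges" idea can be sketched as a quantile-based quantizer (a toy model, not any manufacturer's actual scheme):

```python
import random

random.seed(1)

# Synthetic brightness samples clustered in the shadows: frequent dark
# values, rare bright values (folded Gaussian, clipped to 0..1).
samples = sorted(min(1.0, abs(random.gauss(0.2, 0.15))) for _ in range(10000))

# Equiprobable quantization: put the bin edges at the quantiles, so every
# quantization level is used equally often. Where values are frequent the
# edges crowd together (fine quanta); where they are rare the edges spread
# out (coarse quanta).
levels = 16
edges = [samples[i * len(samples) // levels] for i in range(1, levels)]

shadow_span = edges[2] - edges[0]        # spacing among the darkest bins
highlight_span = edges[-1] - edges[-3]   # spacing among the brightest bins
print(f"shadow bin spacing    ~ {shadow_span:.3f}")
print(f"highlight bin spacing ~ {highlight_span:.3f}")
```

The edges near the frequent shadow values lie much closer together than those in the sparse highlights: exactly the "frequent brightness = fine quanta" behavior described above.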
And that brings us to the signal-to-noise ratio, the third component, which cannot simply be included linearly here, as the (basic) theory would have it.
Statistically, this results in a lower effective value of the quantization noise, or in other words a larger signal-to-noise ratio. Depending on the type of weighting, the SNR values change, to the benefit or detriment of the additional possibilities that such signal processing provides.
So it can happen that 12- and 14-bit quantized signals have identical signal-to-noise ratios, even though theory says that the possible SNR in a 14-bit environment can be correspondingly greater.
The advantage of such a system over the 12-bit system then lies in the extended possibilities inherent in 16,384 samples versus 4,096 samples, or even only 256 samples in an 8-bit system. In information theory one speaks of removing a measure of uncertainty.
The more samples are received from a source, the more information is obtained, and at the same time the uncertainty about what could have been sent is reduced.
Not least, the fill factor of the sensor also plays a role in this context, because it is by no means irrelevant whether I read out, say, 18,000 electrons with a readout noise of 9 electrons, or 40,000 with 10 electrons. Without knowing the fill factor, the subsequent quantization cannot be assessed at all; anything else is pure speculation.
Higher quantization therefore does not necessarily show up in a better signal-to-noise ratio, but in better image information. By such means, a manufacturer can even target the expected signal-to-noise ratio deliberately. For this they preferably use the extended possibilities of dynamic quantization.
So:
The whole discussion is really moot, because manufacturers give only very limited information about what they actually do.
That is why the picture of a (not even very) cheap camcorder can look significantly better than that of a (perceived as high-quality) RAW recording.
Taste plays a large part in this, too. One simply cannot claim that such images are always better, because the quality of the post is very much involved here. On the other hand, with a top-class camera you can fix defects in editing. It is clearly cheaper and easier for a camera manufacturer to pass on what he gets almost untouched as RAW than to create high-quality algorithms in the processing; but one can also shine with such processing... and one must not forget that either...
Reply from carstenkurz:
"In H.264, appropriate dithering algorithms are supposed to allow the image data to be reduced further."
Oh really ;-)
"The downside to this is that consequently there is no lower-order bit dithering within H.264 and so minor fluctuations appear in the large expanse of color, especially on HD material."
What you probably mean is exactly the opposite: dither is usually applied in post-processing to conceal or suppress quantization artifacts. That does not increase the contrast range, though; it only masks banding & co.
Applying dither during encoding: that goes together like chalk and cheese...
- Carsten
Reply from Harald_123:
"Applying dither during encoding: that goes together like chalk and cheese..."
I've been doing exactly that with audio (quite consciously, in professional use too) for a good 10 years. UV22, UV22 HR... It has to be added when the data is created, during the data reduction. For audio, adding it afterwards achieves, in my opinion, nothing more.
For video I really don't know. As WoWu writes: the manufacturers don't disclose it.
About Microsoft's "Windows Media Encoder Studio Edition" can be
Reply from WoWu:
Harald,
"dithering" has the opposite effect in H.264. In other compression methods it was indeed used to protect areas lacking fine detail from the truncation process, and thus from showing MPEG artifacts (macroblocks).
In H.264 (depending on the case), only one motion vector can be specified for individual areas, which leads to data reduction. If one were to "animate" the surfaces artificially, the number of vectors would increase dramatically again... and with it the required bit rate.
Since last October, the BBC no longer ingests any material originating on 16mm film unless the grain has been removed.
This is precisely the reason.
"VC-1 Advanced tuned 10-bit to 8-bit dithering"
And what is done there is just that the missing gradations of a 10-bit signal represented in 8 bits are replaced by a pixel structure. A common procedure, which in itself yields only a small data reduction; the point is rather that not 10 bits but only 8 bits need to be transmitted.
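What such 10-bit to 8-bit dithering does can be shown on a single value (a minimal sketch; VC-1's actual algorithm is of course more sophisticated): a 10-bit level that falls between two 8-bit codes is pinned to one code by plain rounding, but with dither the average of the transmitted codes still encodes the original level.

```python
import random

random.seed(0)

V10 = 514  # a 10-bit value exactly between the 8-bit codes 128 and 129

def to_8bit_plain(v: int) -> int:
    """Plain rounding: always lands on the same 8-bit code."""
    return round(v / 4)

def to_8bit_dithered(v: int) -> int:
    """Add +-0.5 LSB of dither before rounding."""
    return round(v / 4 + random.uniform(-0.5, 0.5))

plain = [to_8bit_plain(V10) for _ in range(10000)]
dith = [to_8bit_dithered(V10) for _ in range(10000)]

print("plain codes used:", sorted(set(plain)))
print("dithered codes used:", sorted(set(dith)))
print("dithered mean, rescaled to 10 bit:", 4 * sum(dith) / len(dith))
```

The missing 10-bit gradation is turned into a pixel structure whose average is correct, which masks banding but, as noted above, neither reduces the data substantially nor turns the 8 bits back into 10.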
Or do you mean something completely different here: self-dithering A/D converters?