16-bit color improves everything from dynamic range, sharpness, contrast, and color accuracy to color banding, gradations, and smoothness, and everything in between. No single affected item is a "huge" improvement, but the collection of all the things it improves adds up to a large one. Think of it as the difference you see between images from your phone or compact camera and a DSLR: the color pop, the accuracy, the banding in skies, etc. Most phones (and even compacts), last I looked, are still in the 10-bit range.
Imagine you have two buildings, A and B. Dynamic range is analogous to the height of the buildings, i.e., the difference between ground-level and roof-level elevation.
Bit-depth is how many floors you're dividing each building into.
The two measures have zero relationship to each other, unless all buildings have the exact same floor height (which they don't).
E.g., building A has 16 floors, with a floor height of 12 feet. Total height = 192 feet.
Building B only has 14 floors, but with a floor height of 15 feet. Total height = 210 feet.
Building B has less "bit-depth" (# of floors) but higher "DR" (total height).
Similarly, the D800 has less bit-depth (14-bit) than an IQ180 (16-bit), but the D800 still has the higher DR (roughly 14 stops vs. 13).
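To put the analogy's arithmetic in one place, here is a tiny Python sketch (the floor counts and heights are just the made-up numbers from above):

# "Bit depth" ~ number of floors; "DR" ~ total height of the building.
buildings = {
    "A": {"floors": 16, "floor_height_ft": 12},  # more subdivisions
    "B": {"floors": 14, "floor_height_ft": 15},  # fewer, but taller, floors
}
for name, b in buildings.items():
    total_height = b["floors"] * b["floor_height_ft"]
    print(f"Building {name}: {b['floors']} floors, {total_height} ft total")
# Building A: 16 floors, 192 ft total
# Building B: 14 floors, 210 ft total  <- fewer "bits", more "DR"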
Ade, here is my opinion, which is different from yours.

DR and bit depth are related, because DR in the typical sense is defined and measured from the digital picture file, which is stored with a given bit depth. For example, one can never extract 12 bits of DR from a file represented with a 4-bit depth.
Bit depth is typically set slightly higher than the DR. Anything more and it's just recording noise in the last few bits; anything less and it won't be able to record the DR that's available from the sensor.
DR is dependent mostly on the characteristics of the sensor (more later). The bit-depth representation cannot change this DR.
For example, for simplicity's sake, let's consider a camera whose sensor can capture light levels from 0 (minimum, representing black) to 100 (maximum white). Let's see what happens when we represent this sensor's data using 1-bit, 2-bit, and 3-bit depths.
bits = level
1-bit system: (2 levels)
0 = 0 (black)
1 = 100 (white)
DR = (100-0) = 100
2-bit system: (4 levels)
00 = 0 (black)
01 = 33
10 = 66
11 = 100 (white)
DR = (100-0) = 100
3-bit system: (8 levels)
000 = 0 (black)
001 = 14
010 = 29
011 = 43
100 = 57
101 = 71
110 = 86
111 = 100 (white)
DR = (100-0) = 100
As you can see, the DR does not depend on the number of bits used to represent the values. A 1-bit system can only represent black (0) and white (100). A 2-bit system gives you some grayscale, but does not change the min black (0) and max white (100) levels. A 3-bit system gives you even more gray levels, but the min black and max white levels still remain the same.
We can extrapolate the same concept to 8-bit (256 levels), 14-bit (16384 levels) or even 16-bit (65536 levels) and the result will not change: the DR will remain 100. Higher bit depths let you capture finer gradations within the DR you already have -- by giving you more intermediate levels -- but cannot give you more total DR.
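A quick Python sketch of the same idea (purely illustrative; the mid-level rounding may differ by one from the hand-rounded tables above):

# Quantize the 0..100 range at several bit depths and look at the extremes.
for bits in (1, 2, 3, 8):
    n_levels = 2 ** bits
    levels = [round(100 * i / (n_levels - 1)) for i in range(n_levels)]
    print(f"{bits}-bit: {n_levels} levels, min={levels[0]}, max={levels[-1]}")
# The min (0) and max (100) never move; only the number of intermediate
# steps grows, so the total range being captured stays the same.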
P.S. In reality DR is the ratio (not the difference) between the max and min levels. Max is defined as the "saturation" level of the sensor's pixels -- that is, the number of photons/electrons that can be captured before the pixel overflows. Min is defined as the noise floor, which typically means the read noise of the sensor.
Ade, yes, I understand exactly what you are saying.

While your interpretation of DR is valid in an analog world with zero noise, it's no longer valid in the digital world.
First, take the 3-bit example: values like 88 and 90 will all be rounded to the same level, and there is no way to differentiate them, so there is no way to recover the highlights or shadows the way people expect from high-DR data.
Second, if someone comes up with a sensor that throws away 99% of the photons, then under your definition the DR is immediately raised 100x, to 0-10000. That's obviously not the case in the digital world.
Actually, your last paragraph explains it well. DR is the ratio between the max and min values. The min value is the signal resolution, which is pretty much defined by the lowest value that isn't completely dark. In this case, the 3-bit system has a DR of roughly 100/14, which is about 7.
Or, another way of looking at it: by connecting a 3-bit system after your sensor, you have effectively introduced a noise source on the order of 14 (or some value like that), such that you can't differentiate between values like 1 and 5.
Nikon has 74 lenses which will fit DX or FX;
none are very compatible with MF.
My Nikon gear has never let me down;
the same cannot be said for the Pentax gear I have owned.
Actually, if you work it out, it still works exactly the same way in the digital world, with the caveat I explained in the last paragraph. (I only used subtraction instead of ratios to make the math easy for everyone to follow.)
Let's use actual real-world DR ratios to compare the D800 (FX) vs. the Phase One P65+ (MF).
According to sensorgen, the D800 has a max saturation of 44972 e- with a noise floor of 2.6 e-.
(e- = equivalent electron output from photons captured)
1-bit system (2 levels)
0 = 2.6 e-
1 = 44972 e-
DR = ln(44972/2.6)/ln(2) = 14 stops
2-bit system (4 levels)
00 = 2.6 e-
01 = 14992 e-
10 = 29982 e-
11 = 44972 e-
DR = ln(44972/2.6)/ln(2) = 14 stops
So the D800's dynamic range is 14 stops regardless of how many bits you divide the range with.
The Phase One P65+ sensor has a max saturation of 53019 e- and a noise floor of 17.6 e-.
2-bit system (4 levels)
00 = 17.6 e-
01 = 17685 e-
10 = 35352 e-
11 = 53019 e-
DR = ln(53019/17.6)/ln(2) = 11.5 stops
You can see in this example that the D800, even with fewer bits, still has more DR than the P65+.
And this is true in reality, only with 14 vs. 16 bits instead of 1 vs. 2 bits.
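The arithmetic is easy to check. Here is a small Python sketch using the same sensorgen figures quoted above:

from math import log2

# Full-well (saturation) and read-noise figures, in electrons.
sensors = {
    "D800 (14-bit ADC)": (44972, 2.6),
    "P65+ (16-bit ADC)": (53019, 17.6),
}
for name, (saturation, noise_floor) in sensors.items():
    dr_stops = log2(saturation / noise_floor)  # engineering DR in stops
    print(f"{name}: {dr_stops:.1f} stops")
# D800 (14-bit ADC): ~14.1 stops
# P65+ (16-bit ADC): ~11.6 stops
# Note that the ADC bit depth never enters the calculation.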
Ade, nice discussion. I was going to quote the same sensor info to show that DR and bit depth track each other. I think the D800 has a 14-bit depth.

Now I see where our discrepancy comes from.
The 0...0 / 1...1 values are used to represent absolute black/white. Thus 0...0 should always equal 0 (not 2.6 e- as you indicated; by the way, 2.6 e- is just a standard deviation, and at any moment it could just as well be 1 e- or 3 e-), and 1...1 should always equal the maximum. They should not be used to calculate DR (at least not 0...0, because you can't divide by 0). Rather, it should be the ratio of 0...01 to 1...10 or 1...11. From that, you can see that bit depth puts an absolute ceiling on the DR of the output file.
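For what it's worth, the ceiling being described here is easy to put a number on if you assume a linear encoding where code 0...01 is the smallest nonzero value (this is only a sketch of the argument, not a claim about how any particular raw format maps electrons to codes):

from math import log2

for bits in (3, 8, 12, 14, 16):
    max_code = 2 ** bits - 1   # the 1...1 code
    min_code = 1               # the 0...01 code
    ceiling_stops = log2(max_code / min_code)
    print(f"{bits}-bit file: at most ~{ceiling_stops:.1f} stops "
          f"between the smallest nonzero code and the largest code")
# e.g. a 14-bit file tops out just under 14 stops under this assumption.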
If the sensor tops the DxOMark charts far above the D800, D800E, and D800x, I could see adding such a body with two lenses, one for landscapes and one for portraits. No complete multi-lens kit; just a lens, or zoom, in the 24 to 35mm range for landscapes and another in the 75 to 120mm range for portraits. But it had better have a sensor much better than Nikon's highest-MP FX sensor, and lenses at least as sharp as the best from Nikon. Otherwise, I don't see enough advantage to justify the additional cost.

Me too, though my brother just got a Leica M (Typ 240) and is having so much fun with it that I'm green with envy. It's been tempting to get one of those instead of an MF system, especially if I keep my D800E.
@tc88
That 2.6 e- value is not a standard deviation. Noise is usually expressed as the number of electrons RMS. (Technically RMS^2 = mean^2 + stddev^2).
What I think you're referring to (limits of DR due to bit precision) is indeed true for a given sensor, especially at the ADC stage. This is often referred to as the ADC quantization error, which effectively reduces the number of bits the ADC can output.
However, the noise floor & saturation levels are typically measured through the entire system (including the ADC) so any quantization error would have already been accounted for.
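As a rough illustration of why the quantization term barely moves the needle here (this assumes a linear ADC with full scale at saturation and uses the standard step/sqrt(12) approximation for quantization noise, so treat the numbers as order-of-magnitude only):

from math import log2, sqrt

saturation = 44972   # e-, D800 full-well per sensorgen
read_noise = 2.6     # e-, RMS
adc_bits = 14

step = saturation / 2 ** adc_bits   # e- per ADC count (assumed linear)
q_noise = step / sqrt(12)           # quantization noise, ~0.8 e- RMS
total_noise = sqrt(read_noise**2 + q_noise**2)

print(f"DR ignoring quantization: {log2(saturation / read_noise):.2f} stops")
print(f"DR including quantization: {log2(saturation / total_noise):.2f} stops")
# The difference is only a few hundredths of a stop, which is why a
# system-level measurement through the 14-bit ADC still lands at ~14 stops.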
E.g., someone measured the noise floor and saturation of the D800 through its 14-bit ADC and calculated 14 stops of DR. The same measurement on the P65+, through its 16-bit ADC, yielded just 11.5 stops.
So just because camera X has more bits than camera Y does not mean camera X has more DR than camera Y. One measure does not imply the other.