If the JPEG is the end result of the chain of raw development and post-production, I expect one cannot go beyond the limits of a JPEG's colour depth. But 14-bit raw gives me much more latitude to avoid blown highlights, and that becomes visible in the JPEG, too. It was the same in the darkroom era: film always had a bigger tonal scale than paper could reproduce, and that wasn't a bad thing. I'm not out to keep only 5%, but I don't think I bring home that many pictures with a high amount of crap. And if I need to try variations, I keep the best and delete the rest.
There are some activities, though, which result in high output: HDR, panoramas, HDR panoramas, timelapse and focus stacking. But even then the question is: why not take full advantage of everything the camera can give? With up to two memory cards of up to 128GB each, or tethered shooting with a laptop, disk space is a no-brainer, or at least less of a concern than power consumption while taking thousands of pictures.
On the other side: what could possibly be the advantage of using 12 bits instead of 14?
@SnP I don't know if it's the driver; it's just a natural byproduct of the conversation. Nor do I believe that "5%" is any goal or needs to be any number for anyone.
As to the number of prints: in my publishing days, we would 'print' far less but 'keep' them all in the archive. I'm guessing the same applies now to the digital workflow. I shoot about 45,000 shots a year, mostly for a local theater group, but "publish/print" a fraction of those, yet keep all of them. Nearly all go to the theater on DVD for the archive, to the student technical staff and actors for their portfolios, and many are printed for displays in the lobby; of course some are used for publicity. I also do some advertising shoots and still get a few television and movie calls for video work.
For prints, I actually deliver practically none. Perhaps 120 20"x30" display prints per year.
@TTJ RE: 12-bit vs. 14-bit JPEG: once the file has been converted, it's converted. What magic it had is gone. What was there to make it great, made it great. I'm changing your language only to make sure that the difference is in the right tense. It's there because it was there. Why that is important is what happened or could have happened in post or even in capture with the nuance of light. The range of 14bit is a big, big thing.
The there is there.
You can prove this to yourself for just a few bucks, if you have a 12bit and 14bit device (D90, D7000, D600) and Photoshop and a Costco or Costco-like provider.
Take a fairly uncomplicated picture - a portrait with bokeh is really good for this - and blow it up to 20"x30". It costs only $9 and takes 2 hours.
Logic says a D90 at that size should look like poo. But if you shoot RAW, use minimal sharpening, and only process the image slightly in RAW, just to the extent needed to properly prepare it for Photoshop's PSD format for curves, saturation, etc., it should look perfectly normal at viewing distance (2 times the size corner to corner - that's what I learned; it could be different from what you learned, but it just looks good).
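The viewing-distance rule of thumb above is just arithmetic; here is a quick sketch using the 20"x30" print as an example:

```python
# Quick check of the "2x the corner-to-corner size" viewing-distance rule
# of thumb, using the 20"x30" print mentioned above.
import math

width_in, height_in = 20, 30
diagonal = math.hypot(width_in, height_in)  # corner-to-corner, ~36.06 in
print(round(diagonal * 2))                  # suggested viewing distance: 72 in
```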
Such is the power of 14bit and the digital darkroom. ;-)
Thank you all... wow. I really do like the idea of 14-bit lossless compressed, except in a recent shoot at the Henry Ford Museum in Dearborn, Michigan. The lighting is by spotlights focused on parts of the exhibits, and it is almost black between the spots. So, after a few RAW images, I quickly saw these were not good, and I shot +/- 3-stop JPEG Fine HDR brackets in the D4. Then, in LR 4.3, I almost always pull shadows up and highlights down, then brush areas to bring things together even more, and finally bring contrast up for a pleasing image.
And I see that the 14 bit was where I was getting all the phenomenal shadow detail out of the D4.
I think I started this thread with the idea that maybe one could shoot a D800 in 12 bit to save file size. And the answer, at least for me, is no.
My history from the dark ages was to shoot Plus-X at ASA 80 and develop in D-76 1:1 for about 80% of the normal time. I guess this was giving me the "14 bit", whereas at normal ASA and development it would equate to "12 bit".
As Mike said... Ansel Adams... in my case I believe it was Fonville Winans & Gerhard Bakker.
It's there because it was there. Why that is important is what happened or could have happened in post or even in capture with the nuance of light. The range of 14bit is a big, big thing.
The there is there.
Such is the power of 14bit and the digital darkroom. ;-)
I understand what you all are saying about 14 bit being better. My question is, in really low light which handles the noise better, 12 bit or 14 bit? Thanks!
The 14 bit Jpg on a D7000 has more information than a Raw from the D200...
“To photograph is to hold one’s breath, when all faculties converge to capture fleeting reality. It’s at that precise moment that mastering an image becomes a great physical and intellectual joy.” - Bresson
There is no such thing as a 14-bit JPEG (unless you mean "TIFF JPEG"). JPEGs are 8 to 12 bits (the latter only in very recent medical scanners), and that is only at the lowest level of compression.
If I take a good photo it's not my camera's fault.
There is no such thing as a 14bit jpeg. By definition all jpegs are 8bit.
From Thom Hogan,
Nikon uses 16-bit processing for the full 14-bit image data for JPEG and TIFF for the D3 and D300 series. This is true for the D7000 as well. Nikon uses 12-bit processing for D2x, D2xs, and D200.
My best,
Mike
Interesting, he seems to contradict himself... from his D300/D300s review.
First, note that the camera is always 12-bit when you shoot JPEG or TIFF
My interpretation is this: JPEG files themselves are 8- to 12-bit (as of version 2.2, the latest as far as I can tell). The camera uses the 14-bit data to create the JPEG, but the file itself is only 8-bit (up to 16 million colours). That is, unless Nikon has created a new JPEG standard that I cannot find any data about on the net.
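The "16 million colours" figure is just 2^8 per channel across three channels; for comparison, here is the same arithmetic at 14 bits (a quick sketch of the math, nothing Nikon-specific):

```python
# Number of distinct colours at a given per-channel bit depth, 3 channels.
def colour_count(bits_per_channel: int, channels: int = 3) -> int:
    return (2 ** bits_per_channel) ** channels

print(colour_count(8))    # 16,777,216 ("16 million colours")
print(colour_count(14))   # 4,398,046,511,104 (about 4.4 trillion)
```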
If I take a good photo it's not my camera's fault.
I am not that technical on bits... All I know is that my D7000 gives me the opportunity to have 14bit set when shooting jpg and that the images I get from them have more detail than a Raw from the D200. Especially in low kelvin light and with bright highlights...
“To photograph is to hold one’s breath, when all faculties converge to capture fleeting reality. It’s at that precise moment that mastering an image becomes a great physical and intellectual joy.” - Bresson
The D7000 gives you the option to choose 12bit or 14bit under the "NEF (RAW) recording" in the menu.
JPEG settings are split into three categories, like every other modern Nikon DSLR:
1) Under "Image quality", along with NEF+JPEG settings you can choose:
- Fine, Normal, Basic
2) Resolution settings:
Size: Large, Medium or Small:
L - 4928 x 3264 / 16.1 MP
M - 3696 x 2448 / 9.0 MP
S - 2464 x 1632 / 4.0 MP
3) JPEG compression:
- size priority
- optimal quality
There are no other JPEG settings.
I have some experience in programmatically coding image files so might be able to give a bit of insight on the JPEG bit depth.
- Technically speaking, our cameras produce files in accordance with the EXIF standard
- The current EXIF standard is version 2.3 (used by the latest Nikon DSLRs)
- In EXIF 2.3, compressed images are stored using the JPEG format, while uncompressed images are stored in the TIFF format
- EXIF 2.3 Section 4.4.3 specifies using 8-bit image data components
Accordingly, recent Nikon DSLRs will always use 8-bit channels for both JPEG and TIFF files.
Therefore, the only way to output more than 8 bits per channel in a Nikon camera is to use the NEF file format.
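As a concrete illustration of the 8-bit limit: the bit depth of any JPEG can be read straight from its Start-of-Frame (SOF) marker, whose first payload byte is the sample precision. The sketch below (the `jpeg_precision` helper and the synthetic header are my own illustration, not camera firmware) skips segments by their declared length and ignores standalone markers such as restart codes, so treat it as a sketch rather than a robust parser:

```python
# Sketch: read the bits-per-sample recorded in a JPEG's SOF marker.
SOF_MARKERS = {0xC0, 0xC1, 0xC2, 0xC3}  # baseline/extended/progressive SOFs

def jpeg_precision(data: bytes) -> int:
    """Return the sample precision from the first SOF segment found."""
    i = 2                                   # skip the 0xFFD8 SOI marker
    while i + 4 <= len(data):
        assert data[i] == 0xFF, "expected a marker"
        marker = data[i + 1]
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker in SOF_MARKERS:
            return data[i + 4]              # precision byte follows the length
        i += 2 + length                     # skip to the next segment
    raise ValueError("no SOF marker found")

# A synthetic header: SOI, then a minimal baseline SOF0 with precision 8.
sample = bytes([
    0xFF, 0xD8,              # SOI
    0xFF, 0xC0, 0x00, 0x0B,  # SOF0 marker, segment length 11
    8,                       # sample precision: 8 bits
    0x00, 0x10, 0x00, 0x10,  # height 16, width 16
    1, 1, 0x11, 0,           # 1 component: id 1, sampling 0x11, qtable 0
])
print(jpeg_precision(sample))   # 8
```

For any camera-produced JPEG, this byte will read 8.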
I have no knowledge in this area. Interesting facts Ade...
“To photograph is to hold one’s breath, when all faculties converge to capture fleeting reality. It’s at that precise moment that mastering an image becomes a great physical and intellectual joy.” - Bresson
The camera is a computer and processes internally at some higher bit depth before delivering the image at 12-bit. I think that is what Hogan is reporting: the information that Nikon told him (after all, he doesn't operate in a vacuum, nor do I think he has super duper equipment to test his findings). ;-)
IOW, while there is only so much color in a JPG file - the JPEG is 12-bit - how it comes to be birthed is a different process. It goes through a different and more 'colorful' 16-bit processing in the new cameras.
The end JPEG images still have to meet the requirements, but the engine that gets them there is more robust.
You're right that all recent Nikon DSLRs internally process images with 16-bits per channel before JPEG conversion. (1) Unfortunately, the final JPEG output is only 8-bits, not 12-bits.
So regardless of how sophisticated Nikon's internal 16-bit processing might be, a large part of the image color data will be thrown away during the conversion to 8-bit JPEG. In the past Thom estimated that two-thirds of the RAW data may be lost during conversion in a typical camera:
http://www.bythom.com/jpeg.htm
Perceptually, that amount of loss is usually acceptable if the JPEG is to be used as-is, but as you pointed out earlier in the thread, it's not ideal for post-processing.
(1) Note: we know the sensor Analog-to-Digital conversion is outputting 14-bits max, so the internal processing at 16-bits doesn't actually add any more detail. Using 16-bits might help a bit with performance (everything is byte-aligned) and perhaps can minimize any rounding errors.
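The footnote's point about rounding errors can be illustrated with a toy pipeline (made-up adjustments for illustration, not Nikon's actual processing): two chained gains that should cancel exactly (halve, then double), with the intermediate result rounded to either 8 or 16 bits.

```python
# Sketch: why a 16-bit working depth helps even though the output is 8-bit.
# Two chained adjustments that should cancel (halve, then double) are
# applied with the intermediate result rounded to a given bit depth.
# (Toy pipeline for illustration only.)

def chain(value_14bit: int, intermediate_bits: int) -> int:
    """Halve then double a 14-bit value, rounding the intermediate result
    to `intermediate_bits`, and return the final 8-bit level."""
    inter_max = 2 ** intermediate_bits - 1
    inter = round((value_14bit / 16383) * 0.5 * inter_max)
    final = min(1.0, (inter / inter_max) * 2.0)
    return round(final * 255)

# Inputs where the 8-bit intermediate lands on the wrong 8-bit output
# while the 16-bit intermediate lands on the right one:
errors = sum(
    1 for v in range(16384)
    if chain(v, 8) != round(v / 16383 * 255)
    and chain(v, 16) == round(v / 16383 * 255)
)
print(errors > 0)   # True: coarse intermediates introduce rounding errors
```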
You're right that all recent Nikon DSLRs internally process images with 16-bits per channel before JPEG conversion. (1) Unfortunately, the final JPEG output is only 8-bits, not 12-bits.
@Ade, you're absolutely right, of course. I was quoting Hogan from his D7000 guide and stated the bit depth incorrectly (after re-reading, it's obviously incorrect - like saying blue is green).
I wanted to suggest that the sensor and camera just did a great job of sampling the data.
Whether that is accurate can be tested, I suppose, if someone has several cameras and lenses and wants to go through the drudgery of taking pictures and counting colors and measuring color accuracy. Even that might not be good enough. I don't know.
You can't really compare RAW bits to JPEG bits anyway (apples to oranges). A JPEG embeds an ICC color profile, either sRGB or Adobe RGB. Because these color spaces use a non-linear transformation, the 8 bits cover a much larger tonal range than a linear encoding would. So the RAW-to-JPEG conversion is *not* just a linear transform of the 12- or 14-bit colorspace; you really get more like 11 bits out of an 8-bit JPEG. Hope this helps!
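The non-linear encoding mentioned above can be made concrete with the standard sRGB transfer function. The sketch below counts how many of the 256 8-bit codes land in the darkest 10% of linear light, compared with a plain linear encoding:

```python
# Sketch: how sRGB's non-linear transfer curve spends its 256 8-bit codes.
# srgb_encode is the standard sRGB encoding (linear light -> gamma),
# applied here to show that shadows get far more codes than a linear
# 8-bit encoding would give them.

def srgb_encode(linear: float) -> int:
    """Map linear light in [0, 1] to an 8-bit sRGB code value."""
    if linear <= 0.0031308:
        v = 12.92 * linear
    else:
        v = 1.055 * (linear ** (1 / 2.4)) - 0.055
    return round(v * 255)

# How many of the 256 codes fall in the darkest 10% of linear light?
codes_in_shadows = srgb_encode(0.10) - srgb_encode(0.0)
print(codes_in_shadows)     # sRGB devotes ~89 codes to the bottom 10%
print(round(0.10 * 255))    # a linear encoding would devote only ~26
```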
There are two factors at play here: 1) the overall size of the color space; 2) the precision by which we can express different colors within that space.
First, even with AdobeRGB, the JPEG image color space is much smaller than the sensor's space reflected in RAW. So right off the bat we are potentially clipping many colors simply by converting to JPEG, and the gamma (non-linear) transformation can't help with that. The 8-bits can only reflect colors within the smaller AdobeRGB gamut.
Second, with 8 bits there are only 256 levels available per channel, which have to be distributed (non-evenly) throughout the color space. While having a gamma curve helps preserve the perceptually important colors, 256 levels are often simply not sufficient in post-processing, especially if the image data must be "stretched" (e.g., common operations such as fixing an underexposed picture by increasing the exposure in post).
The above is why "banding" (posterization) artifacts are much more likely to occur when post processing 8-bit JPEGs. Even though via non-linear transformation those 8-bits can cover the range (gamut) of the color space, we don't have enough levels within the color space, so many colors which are supposed to be different end up being represented by one color band.
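The banding effect can be demonstrated numerically. The sketch below uses a deliberately simplified linear model (ignoring gamma, compression, and demosaicing): push an underexposed tone up two stops and count the distinct output levels that survive from 8-bit versus 14-bit source data.

```python
# Sketch: why "stretching" underexposed data bands on 8-bit sources but
# not on high-bit-depth raw. A dark tone occupies the bottom quarter of
# the range; we push it up two stops (x4) and count the surviving levels.
# (Simplified linear model for illustration.)

def push_two_stops(samples, max_in):
    """Multiply by 4, clip, and quantize to distinct 8-bit output levels."""
    return {min(255, round(s * 4 * 255 / max_in)) for s in samples}

# The same dark scene captured at 8-bit and at 14-bit precision:
jpeg_samples = range(0, 64)     # bottom quarter of 0..255
raw_samples = range(0, 4096)    # bottom quarter of 0..16383

print(len(push_two_stops(jpeg_samples, 255)))    # 64 levels -> banding
print(len(push_two_stops(raw_samples, 16383)))   # 256 levels -> smooth
```

Only 64 of the 256 possible output levels survive the 8-bit path, which is exactly the gap-between-bands posterization described above.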
The "fix" is to directly address the two factors listed above:
1) Use a large enough color space to preserve (most) of the demosaiced sensor data with minimal clipping, such as by using the ProPhoto RGB color space, which is not an option if we use the in-camera JPEG (limited to sRGB or AdobeRGB only).
2) Use the largest precision (bit-depth) available for processing: 12- or 14-bit NEF in the RAW format, and 16-bits when converting the image into RGB data (e.g., ProPhoto RGB at 16-bits per channel).
In post, the difference between 8-bit JPEG and 12-bit RAW (converted to 16-bit RGB) is massive. Like night vs. day in many situations. The difference between 12-bit RAW and 14-bit RAW is much more subtle, but can be noticeable.