Most Nikon, Canon, or other crop- or full-frame DSLRs are capable of 12-bit raw images. Higher-end Nikons, Canons, etc. will let you shoot 14-bit raw. Mamiya and Hasselblad allow 16-bit raw file recording. (I am not sure whether the Pentax 645Z will do 16-bit.)
I shoot a D700 in 14-bit raw. I was looking at a D810 or a D4S, but I am tired of 14-bit raw; I am looking for 16-, 24-, or 32-bit raw files. We have memory cards on the market that will store 512 GB of data.
To put this in perspective, I can capture a raw image from a flatbed scanner in a "perceived" (I don't know if it really is) 48-bit color TIFF file. In fact, scanners have been able to do 48-bit color for years.
What this means to me as a photographer is that there are that many more shades of grey or color available, to more closely match analog and recapture some of that fidelity that film has over digital.
COOLPIX 5000 ● D70 ● D700 ● D810 ● AF-S NIKKOR 14-24mm f/2.8G ED ● AF Nikkor 20mm f/2.8D ● AF Nikkor 50mm f/1.4D ● AF-S NIKKOR 50mm f/1.4G ● AF Micro-Nikkor 60mm f/2.8D ● AF-S Micro Nikkor 60mm f/2.8G ED ● AF-S VR Zoom-NIKKOR 70-200mm f/2.8G IF-ED (Silver) ● AF-S Teleconverter TC-20E III ● PB-6 Bellows ● EL-NIKKOR 50mm f/2.8
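To put numbers on "that many more shades": each extra bit doubles the number of levels per channel, so 12 to 14 bits is a factor of four, and 14 to 16 another factor of four. A quick sketch (mine, not part of the original post):

```python
# Levels per channel at each bit depth; pure powers of two.
for bits in (8, 12, 14, 16, 24):
    print(f"{bits}-bit: {2 ** bits:,} levels per channel")

# 8-bit: 256 levels per channel
# 12-bit: 4,096 levels per channel
# 14-bit: 16,384 levels per channel
# 16-bit: 65,536 levels per channel
# 24-bit: 16,777,216 levels per channel
```

The scanner's "48-bit" TIFF is the same arithmetic quoted per pixel instead of per channel: 16 bits × 3 channels = 48 bits.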
Comments
... H
Nikon N90s, F100, F, lots of Leica M digital and film stuff.
This is with an untrained ear on non-audiophile equipment (an iPod), because there is that much difference.
I would expect the difference to be similar between a 12-bit, 14-bit, 16-bit, or, say, a 24-bit raw file for a photo.
What's the bit depth of the human eye? 'Cause I sure know my girlfriend sees colors I don't, like peach, lilac...
My wife has the same problem; she sees shoes, hats, coats, and jewellery which I cannot see!
This is the 'Red Book' specification and must be followed to be compliant with the Sony/Philips patent license.
128 kbit/s is one of many MP3 bit rates.
.... H
Nikon N90s, F100, F, lots of Leica M digital and film stuff.
That's 1378 kbit/s; the arithmetic is sketched just below.
Sigma 70-200/2.8, 105/2.8
Nikon 50/1.4G, 18-200, 80-400G
1 10-30, 30-110
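(A minimal sketch of where that 1378 figure comes from, assuming the standard Red Book parameters of 44.1 kHz, 16-bit, two channels; the variable names are mine.)

```python
# Red Book CD audio: 44,100 samples/s, 16 bits/sample, 2 channels.
sample_rate_hz = 44_100
bits_per_sample = 16
channels = 2

cd_bits_per_sec = sample_rate_hz * bits_per_sample * channels
print(cd_bits_per_sec)                 # 1411200 bit/s
print(cd_bits_per_sec / 1024)          # 1378.125 -> the "1378 kbit/s" quoted
print(cd_bits_per_sec / (128 * 1000))  # ~11x the data of a 128 kbit/s MP3
```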
I don't know if the camera internally does 14-bit or 16-bit A/D conversion, but I read somewhere that 14 bits are enough for current Nikon sensors (which is why the 14-bit format was chosen, I'm sure).
Sigma 70-200/2.8, 105/2.8
Nikon 50/1.4G, 18-200, 80-400G
1 10-30, 30-110
Me, I want dynamic range. That's what I seem to wrestle with the most in post-processing. Then resolution, but even that is secondary as long as I don't harbor ambitions beyond web galleries.
http://laurashoe.com/2011/08/09/8-versus-16-bit-what-does-it-really-mean/
Really, though, don't you posterize at some level when you do any manipulation of any recorded set of exposure measurements? That's the consequence of digital recording in any medium: the quantization (or bucketization) of analog measurements into digital approximations. So when you goof around with the exposure, you're just taking the existing buckets of measurements and moving some of them to other buckets. The greater the bit depth, the smaller the buckets, so the posterization is not discernible at 12-, 14-, or 16-bit color (to most of us, I'd guess). I just looked up 'posterization' on Wikipedia; that definition would direct us to 'noticeable' bucketing. Part of my day job is communications engineering, and the whole quantization/bit-error thing is in my face almost daily, thank you Claude Shannon. And to think I took up a hobby that lets me bring that part of work home...
So just arbitrarily converting your raws to JPEG loses reams of color information whose absence you can easily see in post manipulations. But what's the benefit of 14- vs. 12-bit recording with respect to post? I mean, you can easily express the difference in powers of 2, but can you see it, and in which finished products does it make a discernible difference?
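One way to make the 12- vs. 14-bit question concrete is a toy simulation; this is my sketch with NumPy, not anything from the linked article, and it assumes an idealized noiseless, linear capture. Quantize a deep-shadow gradient at each bit depth, push it five stops in post, and count how many of the 256 possible 8-bit output levels survive.

```python
import numpy as np

scene = np.linspace(0.0, 1.0 / 32.0, 1_000_000)  # smooth deep-shadow gradient

for bits in (12, 14):
    steps = 2 ** bits - 1
    captured = np.round(scene * steps) / steps      # quantize at capture
    pushed = np.clip(captured * 32.0, 0.0, 1.0)     # +5 stops in post
    out8 = np.round(pushed * 255).astype(np.uint8)  # 8-bit for display
    print(f"{bits}-bit capture -> {np.unique(out8).size} distinct 8-bit levels")
```

Under those assumptions the 12-bit capture fills only about half of the 256 output levels after the push (visible banding), while the 14-bit capture still fills all of them; with a gentler push, neither shows gaps, which is exactly the "can you see it?" caveat.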
The real answer lies in the final presentation and what it is capable of. The whole reason video folks are excited about 4K is that they can edit in that space (including color, contrast, resolution, etc.) and then resample down to 1080i for broadcast and have it look great. In photo editing we want to work in 16-bit-per-channel (48-bit) space so that when we dumb it down to 8-bit-per-channel (24-bit) space it still looks great.
Remember, those 12/14/16 bits represent saturation, brightness, and hue, so the larger that space is to start with, the better, especially if you are going to edit heavily. Also remember that every additional bit doubles the number of buckets for quantization. Look at the output from a 16-bit-capable sensor and you will see exactly what the benefit is.
Or you won't, if you are only going to look at an 8-bit representation on a 2MP screen.
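Here is the same "edit big, deliver small" idea in still-photo terms; a NumPy sketch of mine, with an arbitrary aggressive shadow-lift curve standing in for heavy editing. The same curve is applied once in an 8-bit working space and once in a 16-bit working space, and both are delivered as 8-bit.

```python
import numpy as np

gradient = np.linspace(0.0, 1.0, 1_000_000)  # a smooth full-range ramp

def deliver_8bit(x):
    # final delivery: clip and round to 8 bits per channel
    return np.round(np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)

def curve(x):
    # arbitrary heavy edit: a strong shadow lift (gamma 0.5)
    return x ** 0.5

# Edit in an 8-bit working space: quantize first, then apply the curve.
in8 = np.round(gradient * 255) / 255
print(np.unique(deliver_8bit(curve(in8))).size)   # < 256: gaps -> banding

# Edit in a 16-bit working space, deliver as 8-bit only at the end.
in16 = np.round(gradient * 65535) / 65535
print(np.unique(deliver_8bit(curve(in16))).size)  # 256: every level survives
```

The 8-bit edit leaves gaps between delivered levels (shadow banding), while the 16-bit edit still lands on all 256 levels, which is the still-photography version of editing in 4K and delivering 1080.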
To retain as much color information as long as possible, I'm opening my raw files in GIMP with the UFRaw plugin, 16-bit, in the Adobe RGB (1998) color space, doing all my adjustments there, and saving the work in GIMP's native format. I just switched from shooting JPEG to raw, so I don't have many images to compare yet, just some test shots in the front yard. On my recent trip to shoot train pictures, I struggled the most with dynamic range, so I'm evaluating whether pulling raw from the D50 will give me any significant improvement in that; a sketch of that kind of raw pipeline follows below.
There's a part of me that wants a newer camera to get more dynamic range; I'd probably try to score a D7100 after Nikon announces whatever follow-on camera it has in the wings. But I've got to weigh that expenditure against being able to afford more train field trips next year. I'd rather be shooting trains with the old D50 than taking pictures of my front yard with a shiny new D7100...
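(Not the poster's GIMP/UFRaw setup, but a sketch of the same "stay 16-bit as long as possible" idea using the rawpy library; rawpy is assumed to be installed, and the file name is hypothetical.)

```python
import rawpy

# Demosaic a NEF into 16-bit-per-channel Adobe RGB data.
with rawpy.imread("train_0001.nef") as raw:
    rgb = raw.postprocess(
        output_bps=16,                        # 16 bits per channel
        output_color=rawpy.ColorSpace.Adobe,  # Adobe RGB working space
        use_camera_wb=True,                   # keep the camera's white balance
    )

print(rgb.dtype, rgb.shape)  # uint16, (height, width, 3)
print(int(rgb.max()))        # up to 65535, vs. 255 in an 8-bit JPEG
```

All subsequent curves and exposure tweaks then happen on the 16-bit data, with the reduction to 8-bit saved for the final export.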
The question is: how/why use 12/14/16-bit color when monitors are only 8-bit?
I think my description of posterization above gives one example of why.