I only shoot RAW, tested some digital photo formats, and put a photo of a duck on PAD. The program I use at the moment is Capture One 20. The maximum reach I have now for the Nikon Z6 is 200mm, so I have to crop. I always crop, and I know what I want when I take the photo.
The .JPG of this duck on PAD is 2820 x 1876px. The original was made with the 24 MP Nikon Z6, cropped to 1880 x 1876px and output at 150% (easy). I also made a 200% .JPG, with a result of 3760 x 2502px. All three were printed, with no sharpening or other adjustments, as perfect high-resolution A3 format photos, and you cannot see a difference.
I did a test a couple of years ago with a photo made with the crop-sensor Nikon D7200 and the 70-200mm f/4 on it, at ISO 100 - 1/1000s - f/5.6: a photo of a turtle dove. The result was a 6000 x 4000px RAW file. In Lightroom I made a 200% .JPG, with a result of 12,000 x 4,000px, and my photo lab made a 120 x 40cm print for me in their best possible quality, which you could pixel peep.
I told the photo lab that I had done this test; they called me two days later and said they had made a 240 x 80cm print, which was perfect. The camera settings for ISO, shutter speed and aperture are the important factors.
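(Not the Capture One or Lightroom workflow described above, just a minimal Python/Pillow sketch of what a plain 200% upscale looks like; the file names and the Lanczos filter are my own assumptions, not what those programs actually use.)

```python
from PIL import Image

# Open the cropped export (hypothetical file name).
img = Image.open("duck_crop.jpg")

# Upscale to 200% of the cropped pixel dimensions.
# Lanczos is a common high-quality resampling filter; Capture One and
# Lightroom use their own (undocumented) algorithms.
scale = 2.0
upscaled = img.resize(
    (int(img.width * scale), int(img.height * scale)),
    resample=Image.LANCZOS,
)

# Save at high JPEG quality for printing.
upscaled.save("duck_crop_200pct.jpg", quality=95)
```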
My two cents: you don't see photos made by a camera anymore, it is just software (which is fun).
The photography world has completely changed, and in the last few years I have seen nothing new, only billions of the same crappy photos, made with a camera or a phone, all heavily edited with software, with all the different "built-in" JPG converters and camera settings. The internet is loaded with "reviewers" who know which camera has the best JPG converter.
Mobile phones I don't even want to talk about; that is software only, and the world loves those photos.
For us, old and very old school, it is different, but that world does not exist anymore; pixels are only used by all kinds of software to build a photo. Want to replace a sky easily? Buy Skylum. Want some weird adjustments? Buy Topaz Labs, Franzis, etc.
I hope, for our sake, that Nikon stays in the photo business, because the CIPA figures have been getting worse and worse over the last two years. There will come a time when we don't talk about DSLRs or mirrorless cameras anymore; technology doesn't stop.
But till that time, we have lots of fun.
Post edited by Ton14: username changed from Ton to Ton14 because the Google sign-in no longer worked.
I once stood on the stage at the Royal Shakespeare Company in Stratford-upon-Avon and pronounced the famous bard's lines: "All the world's a cvnt and all the players on it Fackars", and that is where we are now with photography, Ton14. (There was no audience; I was setting up for a video, but I later found out it went straight through to the dressing rooms!!!)
I have not tried to use software to increase the pixels in an image. Has anyone done this, and if so, how has it worked for you? I have used panorama stitching of 3 or more images (6 or 9 has been my maximum), and that has increased the pixel count with no ill effect, provided the stitching came together well because nothing moved between the exposures and the exposures were on manual rather than some auto setting. I think the largest pano image I have made was about 150 megapixels, from either a 24 or 36 MP sensor and a moderate telephoto lens, to create an image covering the angle of view of a 28 or 35mm lens. My impression is that if you really want megapixels for a super large print, you can use this technique rather than buy a hugely expensive medium format body. By using the moderate telephoto lens you are seeing, and therefore capturing, more detail than the 28 or 35mm lens would be able to see and capture.
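As a rough back-of-the-envelope check on how frame count turns into megapixels, here is a small Python sketch; the single-row layout and the 20% overlap figure are assumptions for illustration only:

```python
# Rough estimate of stitched panorama size for a single row of frames.
# Assumes every neighbouring pair overlaps by the same fraction; real
# stitchers also crop away warped edges, so treat this as an upper bound.

def pano_megapixels(frame_w, frame_h, frames, overlap=0.2):
    # Each extra frame adds (1 - overlap) of a frame width.
    width = frame_w * (1 + (frames - 1) * (1 - overlap))
    return width * frame_h / 1e6

# Example: 6 portrait-orientation frames from a 24 MP (6000 x 4000) sensor.
print(round(pano_megapixels(4000, 6000, 6), 1))  # ~120 megapixels
```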
This is the path from a friend's condo to the beach: 111 megapixels (25335 × 5436) from a 24 MP sensor and a 50mm lens, stitched to make an extreme wide-angle photo. I don't remember how many exposures. You can click through to Flickr and view it at original size.
It was a difficult job because of the light; now it is automated and easy to do.
The easiest way is to use Lightroom, Photoshop or Affinity to stitch photos into a panorama, or even an HDR panorama. When taking the photos, overlap them by about 20%; that's all, and you don't need a tripod anymore.
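For anyone who wants to try the stitch outside Lightroom, Photoshop or Affinity, here is a sketch using OpenCV's high-level stitcher; the folder name is made up, and real results depend on how well the frames overlap:

```python
import cv2
import glob

# Load the overlapping frames (about 20% overlap between neighbours).
frames = [cv2.imread(path) for path in sorted(glob.glob("pano_frames/*.jpg"))]

# OpenCV's high-level stitcher handles feature matching, warping and blending.
stitcher = cv2.Stitcher_create()
status, pano = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print("Stitching failed, status code:", status)
```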
Lots of gimbals have software included to make panoramas with Android and iOS phones: a completely automated process that shoots a couple of rows and stitches them into a panorama. But you need a beast of a computer, and an even bigger one when you have a 48 MP camera.
"Blow Up" from Exposure Software does a perfect job of blowing up photos, but only .JPGs. There is also Gigapixel AI from Topaz: you can upscale by up to 600%!!, preserving image quality, also for prints.
Fine with me. It just seems Nikon would likely jump from 20 to 24 MP in the next iteration. But surely, if there is a next-generation D850 and a Z8 with a 61 MP sensor, Nikon could produce a 30 MP DX sensor.
In the context of your statement, 30 MP DX would only make sense in a Z body with the superior resolving power of the new S lenses. BUT we don't have new super-telephoto S lenses yet, so the F-mount super telephotos have to be able to resolve detail at 30 MP on a DX sensor in order to show more feather detail, for example. I don't know if anyone here can show proof that Nikon's 300, 400, 500 and 600mm super-telephoto lenses are able to resolve more than 30 MP on a DX sensor. If so, 30 MP could produce a sharper image in an F-mount or Z-mount body. If not, it won't improve the detail in the image, because the glass will be the limiting factor.
From Imatest data I have seen on the internet, conducted on the same modern lenses using cameras with notably different MP counts, there appears to be some improvement in resolution output, at least at base ISO, though not proportional to the increase in camera resolving power. Whether it's noticeable in a print I can't say. So an increase in MP generally means an improvement in output. WestEndFoto has spoken to his own experience with differences in lens resolution on the same camera.
There are quite a number of real life comparisons of lens resolving capabilities on the Leica Forum, third party and within the Leica group of cameras. Setting aside forum member biases, they are at least something to read during this isolation period.
Since Nikon is already using fluorite glass and other new tech to improve the resolution and corners of telephotos, it is unclear to me how they could improve further for DX lenses. Nikon tele primes are already excellent, and their newest tele zooms seem to be at the forefront of excellence as well.
I think the DX dream is dead and probably should be buried unless one wants to shift to Fuji which specializes in that format.
"I think the DX dream is dead" In DSLR this is probably correct but not in Z body where Nikon is just getting started. But Nikon may keep Z body DX sensor bodies to a lower level and may not produce a Z DX sensor body equal to the D500. We just have to wait and see. I hope they do produce a Z body like the D500 since that D500 is so good.
Did anyone try my experiment to zoom in on a sharp photograph with tiny details and examine the pixels?
The more pixels, the less extra resolution you gain from every added pixel, but I am sure that there is a lot more resolution to be gained from where we are today. If people don't need or want more pixels, that is another question and not much to argue about.
@donaldejose It is not only the body; it is completely new hardware and software, the body and lens combination. You can test the Nikon Z50 with an F-mount lens compared to a dedicated Z-mount lens, which is made for the body and the Z camera. It works differently (less well) with F-mount; one example is IBIS, 3-axis instead of 5. Take the super F-mount 24-70mm f/2.8G on the Z6 and Z7: the 24-70mm f/4 S gives better results!! It is the combination. Test it on your Z50.
Then there is the camera software. The Z series works with "Expeed 6", which is of course a big improvement over previous versions. The "in-camera software" has become so important for the photo.
Imagine a 12 MP sensor (D700, D3 or D3s) x 6 being able to produce a 72 megapixel file, or a 45 MP D850 sensor x 6 being able to produce a 270 MP file, all for only $100. Even having the ability to double your sensor pixels for such a low price is a great deal.
I have not needed more pixels so I have not tested how Gigapixel AI works. Don't see how I would ever need more pixels than currently found in sensors unless I were printing very large in which case it seems that it could produce a sharper giant print. Thus, I am not going to even try it out. Hope it works great but I just have no use for it at this time.
Folks, how much of an impact does the processor "engine" make when shooting RAW? I can see the impact on JPGs, but its role in RAW is less clear.
Well, considering that the "really raw" data is in a weird mosaic and the values represent linear light, the processor engine, whether in the camera or on your computer, has a significant impact. Both engines have to black subtract, white balance, and demosaic in order to get a "flat RGB" image that you can recognize; after that, the main thing applied is a tone curve to take the image from linear to perceptual. Those curves have a lore of their own, from the in-camera picture control curves to the log and filmic curves from the movie industry.
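To make those steps concrete, here is a deliberately toy NumPy sketch of that pipeline (black subtraction, white balance, a naive demosaic and a gamma-style tone curve); every constant in it is invented for illustration, and real raw converters do each step far more carefully:

```python
import numpy as np

def simple_raw_pipeline(bayer, black=600, white=16383, wb=(2.0, 1.0, 1.5), gamma=2.2):
    """Toy raw pipeline: RGGB mosaic -> small RGB image."""
    # 1. Black subtraction and normalisation to 0..1.
    data = (bayer.astype(np.float32) - black) / (white - black)
    data = np.clip(data, 0.0, 1.0)

    # 2. Naive 'demosaic': collapse each 2x2 RGGB tile to one RGB pixel
    #    (real demosaicing interpolates to full resolution instead).
    r = data[0::2, 0::2]
    g = (data[0::2, 1::2] + data[1::2, 0::2]) / 2
    b = data[1::2, 1::2]
    rgb = np.stack([r, g, b], axis=-1)

    # 3. White balance: scale channels so neutrals come out neutral.
    rgb = np.clip(rgb * np.array(wb, dtype=np.float32), 0.0, 1.0)

    # 4. Tone curve: a simple gamma to go from linear to roughly perceptual.
    return rgb ** (1.0 / gamma)

# Example with a fake 4x4 RGGB mosaic of 14-bit values.
fake_bayer = np.random.randint(600, 16383, size=(4, 4), dtype=np.uint16)
print(simple_raw_pipeline(fake_bayer).shape)  # (2, 2, 3)
```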
Color is the other thing; the camera colorspace is way bigger than any display can handle. The transform from camera to display space can make a big difference in color gradation depending on how it's done, especially if there are extreme colors in the image. The standard transform is based on a 3x3 matrix of numbers per camera that does a simple slope-of-a-line transform, which sometimes results in posterized extreme colors. Better yet is a lookup table transform, which shapes the gradations of extreme colors better. The Adobe DCPs that come with Lightroom do a lookup table transform.
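A tiny sketch of that 3x3 matrix step, with an invented camera-to-sRGB matrix just to show the shape of the operation (real matrices are measured per camera model):

```python
import numpy as np

# Invented camera-RGB -> linear sRGB matrix; rows roughly sum to 1 so
# neutral greys stay neutral. A real matrix comes from the camera maker
# or from profiling the specific camera.
cam_to_srgb = np.array([
    [ 1.80, -0.60, -0.20],
    [-0.25,  1.45, -0.20],
    [ 0.05, -0.55,  1.50],
])

# One saturated camera-space pixel (linear, 0..1).
cam_pixel = np.array([0.95, 0.10, 0.05])

srgb_linear = cam_to_srgb @ cam_pixel
# Out-of-gamut values land outside 0..1 and get clipped, which is where the
# posterized extreme colors come from; a LUT can roll them off more gently.
print(np.clip(srgb_linear, 0.0, 1.0))
```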
Your question resonates with me right now; I'm assembling the equipment to measure camera spectral sensitivity. Making a LUT profile from spectral data is the best way to control how the color transform handles all the color translations from camera to display space. Something to do while in solitary...
@ggbutcher - I was initially wondering how much difference going from an eXpeed4 to an eXpeed6 "in-camera" engine mattered if you were exporting RAW files to a third party converter.
Your project sounds really interesting. The various post processing software makers all claim to have the "best" RAW converters and, while I can see differences, what it means is mostly opaque to me.
When you finish I'm hoping you can do a "for dummies" version.
Good luck - stay well
PS: Just did a little reading/watching on LUTs and color grading. It was eye opening.
I know others here have made panos.
The Jet Propulsion Laboratory (JPL) team at NASA made a 1.8-billion-pixel, 360-degree panorama of the Martian surface. https://mars.nasa.gov/resources/curiositys-1-8-billion-pixel-panorama/?site=msl
There is a lot of hardware involved in making those photos.
@donaldejose: Having had 24 MP DX and FX bodies as well as the 46 MP D850, I would be happy with the performance of a DX-size D850 with 24 MP.
@snakebunk Yep, when you zoom in you can see it.