So pixel shift is distinguished from the focus shift feature currently on the D850 and Z7: instead of changing the focus point between frames, the camera shifts the sensor by a pixel (or sub-pixel) increment and merges two or more images taken consecutively during the run.
If I recall correctly, it was Hasselblad who introduced multi-shot technology in its H4 series cameras many years ago. Those cameras are prohibitively expensive, but the results can be astonishing for detail (e.g. jewelry and artwork). The combined files can be huge. It seems it could be applicable to static subjects of any kind.
I have not used either technology.
Does anyone want it or need it, and should Nikon consider introducing it along with focus shift? Has anyone used both and can provide thoughts about which may be more useful and why? Panasonic introduced it in its S1R camera.
Is there any tech on the horizon that would obviate the need for this? I perhaps incorrectly drew a conclusion from two separate online interviews that Leica is working on technology that would eliminate the need for it.
Sure I would want it. But one needs to be aware of the limitations.
First, if there is movement in the scene it won't work as it will produce artifacts, at least on the objects that are moving. So it will be good for architecture, but likely not landscapes. Certainly not for portraiture.
Second, you are not turning your 46MP camera into a 184MP camera. You are turning it into a 46MP camera with better data at each photosite, because it captures full colour data for every pixel instead of interpolating it. There is an increase in sharpness, but it is subtle, not wow (a rough sketch of the merge is below).
Hmmm... when I convert to black and white I think I must be achieving something similar? Maybe I would need a monochrome sensor for this?
So in short, I will always take a tool that gives me more resolution, even in limited circumstances. But it is a minor effect, not a wow. I would rather have a 46MP sensor than a 36MP sensor with pixel shift in all situations, assuming that the lens benefits from the higher resolution sensor.
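To make the "full colour data for each pixel" point concrete, here is a rough numpy sketch of a 4-shot merge. It is my own illustration under simplifying assumptions (an RGGB Bayer layout, frames already registered to the same scene grid), not any manufacturer's actual pipeline:

```python
# Toy sketch of a 4-shot pixel-shift merge -- illustration only, not any
# manufacturer's actual algorithm. Assumes an RGGB Bayer layout and that the
# four raw frames are already registered to the same scene grid, so the only
# thing the one-photosite shifts change is which colour filter covered each
# scene point.
import numpy as np

def cfa_channel(row, col, dr, dc):
    """Channel (0=R, 1=G, 2=B) covering the scene point at (row, col)
    when the sensor is shifted by (dr, dc) photosites."""
    pattern = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}   # RGGB
    return pattern[((row + dr) % 2, (col + dc) % 2)]

def merge_pixel_shift(frames, shifts):
    """frames: four H x W raw mosaics, one per sensor position.
    shifts: the (dr, dc) sensor offset used for each frame.
    Returns an H x W x 3 image with a measured R, G and B at every pixel,
    instead of values interpolated from neighbours."""
    h, w = frames[0].shape
    out = np.zeros((h, w, 3))
    counts = np.zeros((h, w, 3))
    for frame, (dr, dc) in zip(frames, shifts):
        for row in range(h):
            for col in range(w):
                ch = cfa_channel(row, col, dr, dc)
                out[row, col, ch] += frame[row, col]
                counts[row, col, ch] += 1
    return out / counts   # green is sampled twice per pixel, so this averages it

# The four one-photosite positions of a typical pixel-shift sequence,
# with random data standing in for the raw captures:
shifts = [(0, 0), (0, 1), (1, 1), (1, 0)]
rng = np.random.default_rng(0)
frames = [rng.random((8, 8)) for _ in shifts]
rgb = merge_pixel_shift(frames, shifts)   # shape (8, 8, 3)
```

Note that in this 4-shot sketch the output is still H x W: the gain is a measured colour value per site rather than more pixels, which is exactly the "better data at each pixel, not more pixels" point.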
The main issue is that the four sub-frames are captured at different times. So your 5-second exposure becomes 20 seconds of total capture, and within each merged quad of samples you have one taken during the 0-5, 5-10, 10-15 and 15-20 second windows.
This rules out astrophotography, portrait, and wildlife. But it does give you more on architecture and product photography.
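To put numbers on the timing point, a toy calculation (assuming the common 4-shot sequence and a 5-second exposure per sub-frame, purely as an example):

```python
# Toy arithmetic for the timing issue above: four sequential sub-exposures of
# 5 s each mean 20 s of total capture, and each merged pixel mixes samples
# from four different time windows.
exposure = 5.0      # seconds per sub-frame (example value)
n_frames = 4        # a 4-shot pixel-shift sequence

windows = [(i * exposure, (i + 1) * exposure) for i in range(n_frames)]
print("total capture time:", n_frames * exposure, "s")   # 20.0 s
print("per-sample windows:", windows)                      # [(0.0, 5.0), (5.0, 10.0), ...]
```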
Though I am really questioning the situations where you need over 180MP. It makes me think that if you need that much resolution, you'll know it and could rent a medium format body for the occasion.
I view it as a nice to have feature that helps sell cameras but is of limited practical benefit.
I would probably rather have an image stacking feature that takes multiple shorter exposures and then combines them to reduce noise while keeping more or less the same resolution as a normal shot. I believe smartphones are doing this and possibly some Olympus cameras?
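For what it's worth, that kind of stacking is easy to sketch. This is a toy numpy illustration of the principle, not what any particular phone or Olympus body actually runs: average N registered short exposures and random noise drops by roughly the square root of N while resolution stays the same.

```python
# Toy illustration of exposure stacking for noise reduction (not any
# manufacturer's pipeline). Averaging N aligned short exposures keeps the
# native resolution but cuts random noise by roughly sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(512, 512))   # stand-in for the true image
noise_sigma = 0.05
n_frames = 16

frames = [scene + rng.normal(0, noise_sigma, scene.shape) for _ in range(n_frames)]
stacked = np.mean(frames, axis=0)                # same pixel count as a single frame

print("single-frame noise:", np.std(frames[0] - scene))  # about 0.05
print("stacked noise:     ", np.std(stacked - scene))    # about 0.05 / sqrt(16) = 0.0125
```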
So I was reading an article on the Large Format Photography Forum recently in which various members were comparing notes on the benefits of different film scanning methods and equipment. Several members mentioned using the Sony A7RIV to scan medium and large format film, as well as a Fuji GFX-50R. Using the Sony, a total of 16 pixel-shift images, and an old but venerable enlarging lens, the Sony beat out the Fuji with a macro lens by a bit. They all felt that this method was the next best thing to a full-fledged drum scan, which is saying a lot.
What is interesting, however, is that the Sony user said a combination of 4 pixel-shift images did not in fact show much improvement over a single shot. Dustin Abbott has said the same in an article on the Sony. So the question is whether there is better software out there than what Sony has employed.
But for "static" subjects, yes this may be very useful, particularly if you have the patience to work it out.
I can see this kind of thing working in places like Canyonlands, Yosemite, Red Rocks, Painted Desert etc, where nothing is going to move except for clouds that may be in the images.
I want it. I doubt Nikon would build the version of it I want though. Pixel shifting for added detail or for reduced noise is at an early stage of evolution. So we're not seeing full benefits of it. Google and Apple are telegraphing the direction this could go.
The issue of motion during capture can be calculated out. Exaggeration to make the point: if you can capture 100 frames in 1 second, you can inspect them, compare fixed objects whose content doesn't change across most of those frames to establish a baseline image, then subject-follow the pixels whose content is similar but in a different location (think of how eye detection currently works), calculate what that subject "should" look like from the sum of all those pixels and samples, and add the pixel shift to get even more data. Then calculate the baseline subject and remove the random and spurious noise. You can imagine how computational photography can get to an original that clearly has more detail and less noise than any single capture in those 100.
If you're concerned about the "moment", pick it yourself anywhere in that 1 second, and the computation uses all the other data to build the most "accurate" content for that point in time. I think it will happen, but I have zero confidence Nikon will be a significant player. At best, they may get to be the starting point for capture and then hand those frames to some software (Lightroom or other) that can do the math. Ideally it all gets done in the background, with a recommended best frame aided by supporting data, but also the ability to choose a different frame. There are smarter people than me working on even greater possibilities.
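As a very loose sketch of the "calculate the motion out" idea, here is a toy numpy example of my own (nothing to do with what Google, Apple, or Nikon actually ship): take a burst that has already been roughly registered, then use a robust per-pixel statistic such as the median, so content present in most frames survives while anything that passed through a pixel only briefly is rejected, and random noise averages down as a side effect.

```python
# Toy sketch of a burst merge that rejects moving objects (illustration only).
# Assumes the frames are a numpy stack already roughly registered to frame 0;
# a real pipeline would do sub-pixel alignment and subject tracking first.
import numpy as np

def robust_merge(burst):
    """burst: array of shape (n_frames, H, W).
    The per-pixel median keeps content that is present in most frames
    (the static scene) and discards values from objects that only passed
    through a pixel briefly, while also averaging down random noise."""
    return np.median(burst, axis=0)

# Example: 100 noisy frames of a static scene, with a bright object crossing a few of them.
rng = np.random.default_rng(1)
scene = np.full((100, 100), 0.5)
burst = scene + rng.normal(0, 0.05, (100, 100, 100))
burst[10:15, 40:60, 40:60] = 1.0            # transient object in frames 10-14 only
merged = robust_merge(burst)
print(np.abs(merged - scene).max() < 0.05)  # the transient object is gone -> True
```

A real pipeline would layer sub-pixel alignment, subject tracking, and the pixel-shift data on top of this, but the merge-and-reject step is the core of the idea.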
D7100, D60, 35mm f/1.8 DX, 50mm f/1.4, 18-105mm DX, 18-55mm VR II, Sony RX-100 ii