I recently noted that Lumix are to bring out a 42MP Micro 4/3 body and a 100-400 pro-grade lens. Good, I thought, that would make a great bird combo for more POI. Then I went "oh no", the ISO sensitivity rating is poor at 800 versus 3000 for FX, so that's out. So I took my old D800 (I keep the good cameras in the floor safe) and shot a photo in the garden in FX: 1/80, f/8, ISO 200. Then I switched it to DX: still 1/80. Then I fetched the old D3200: still 1/80, and no doubt if I had used an M4/3 I would still have got 1/80. The light falling on the sensor is just as bright whatever the size of the sensor, right? (I said brightness, not quantity.) So if a camera is just a lens in front of a sensor, what's going on? A DX sensor is half the size of FX, so half the sensitivity, and 4/3 is 25% of FX, so that's a quarter of the sensitivity. Now I know that these ratings are based on noise levels (-30 dB), so are they saying that a 25% crop from my D850, if taken at over ISO 800, will be as bad as an M4/3? My neighbour, who is a 4/3 fanatic, does not follow it either.
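For reference, here is a quick back-of-the-envelope check on the "half" and "quarter" figures, using nominal sensor dimensions (exact sizes vary slightly by model, so treat these as approximate):

```python
# Rough sensor-area comparison using nominal format dimensions (mm).
formats = {
    "FX (full frame)": (36.0, 24.0),
    "DX (APS-C)": (23.5, 15.6),
    "Micro 4/3": (17.3, 13.0),
}

fx_area = 36.0 * 24.0
for name, (w, h) in formats.items():
    area = w * h
    print(f"{name:18s} {area:6.0f} mm^2  ({area / fx_area:.0%} of FX)")

# Prints roughly: FX ~864 mm^2 (100%), DX ~367 mm^2 (~42%), 4/3 ~225 mm^2 (~26%).
```

So DX is closer to 42% of FX by area, while the "25%" figure for Four Thirds is about right.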
There are plenty of reasons to dislike M4/3, but personally noise wasn't the biggest one. The format's 4:3 aspect ratio is the worst aspect, since nobody views images on a screen with a ratio even remotely like it.
If I take a good photo it's not my camera's fault.
Use the same lens across all of those bodies and the same exposure you're seeing makes sense. Next, compare the output: you're getting less information. You're getting a crop of the biggest sensor in the smallest sensors. While all the smaller crops could give you was the light in their small part of town, perhaps at roughly equal quality, the largest sensor could collect more data at the same time. This example holds the lens constant.
You could do another, more common comparison by holding the composition constant: shorter focal length lenses on the smaller-sensor bodies, longer on the larger. In this case the scene would be the same, but pixel peeping would reveal the higher quality of the larger sensor, and, in low light, less noise.
D7100, D60, 35mm f/1.8 DX, 50mm f/1.4, 18-105mm DX, 18-55mm VR II, Sony RX-100 ii
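As a rough sketch of the "hold the composition constant" comparison above, assuming nominal crop factors (1.0 for FX, 1.5 for DX, 2.0 for Micro 4/3) and an arbitrary 50mm FX reference lens:

```python
# Focal length needed on each format to match the field of view of 50mm on FX.
crop_factors = {"FX": 1.0, "DX": 1.5, "Micro 4/3": 2.0}
fx_focal_mm = 50.0

for fmt, crop in crop_factors.items():
    print(f"{fmt:10s} needs ~{fx_focal_mm / crop:.0f}mm for the same framing as 50mm on FX")
```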
The only information I get from that is that "random shot noise" is worse on smaller sensors.. will research that. I also get that in the daytime Thom will use an M4/3 for landscapes, and that using JPEG on M4/3 is a better option because the maker has put in noise reduction. If the 400mm lens in front of my sensor gives me a 10mm-high bird on the sensor, it does that on all sensors. The 4/3 has already cropped it for me, so am I worse off than cropping my D850 down to 25%? That bird framing on the D850 crop is about 12MP; on the 4/3 it will be 42MP (when that camera is released) and at present 20MP.
In this case I think you are better off with the 4/3 because you get higher resolution. There is no reason to have a large sensor if you crop it away. And if you have a zoom lens you can zoom out if you only rarely get a close or large bird that needs more than 4/3 with 400mm.
I use full frame cameras because I only use prime lenses and it allows me to fit large or close birds in the sensor. It is my way of zooming out. It also gives great image quality when I do fill the sensor.
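To put rough numbers on the "pixels on the bird" question above: a sketch only, assuming ~46MP on a 36x24mm D850 sensor, 20MP on a 17.3x13mm Four Thirds sensor, and a bird covering the same ~10x7mm patch of each sensor through the same lens:

```python
# Rough "pixels on subject" comparison when the same lens projects the same
# size of bird onto sensors of different sizes. Numbers are nominal.
def pixel_density(mp, width_mm, height_mm):
    """Megapixels per square millimetre of sensor."""
    return mp / (width_mm * height_mm)

bird_area_mm2 = 10.0 * 7.0  # assumed ~10mm-high bird on the sensor

d850 = pixel_density(45.7, 36.0, 24.0)    # ~0.053 MP/mm^2
m43_20 = pixel_density(20.0, 17.3, 13.0)  # ~0.089 MP/mm^2

print(f"D850 crop puts ~{d850 * bird_area_mm2:.1f} MP on the bird")
print(f"20MP 4/3 puts  ~{m43_20 * bird_area_mm2:.1f} MP on the bird")
# With the same lens, the smaller sensor's higher pixel density puts more
# pixels on the bird, which is the "better off with the 4/3" argument above.
```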
You also have to keep in mind that smaller formats suffer from higher levels of diffraction. At f/4 on M4/3 you are already as diffraction limited as at f/10 on full frame, so you are losing sharpness, making the usable resolution of the 40+MP sensor less useful. When I used 4/3 in the past I noticed this, and that was back when they used 12MP sensors; I'd hate to see what it would be like with more MP.
If I take a good photo it's not my camera's fault.
Well, investigation shows that much of what is going on is based on "equivalence", a term I have not encountered in 40 years in this game. So it must be important... not. In simple terms, equivalence is comparing other formats to FX as if FX were some sort of gold standard. In this context, if the sensor is 25% of the size of FX it must be 25% of the sensitivity, because a quarter of the light falls on it.

There is an excellent article here: https://admiringlight.com/blog/full-frame-equivalence-and-why-it-doesnt-matter/2/ which, as you can see, is called "Full Frame Equivalence and Why It Doesn't Matter". The only thing that does matter, it seems, is shot noise. This is a concept shrouded in integral calculus and I ain't going back there!!

Shot noise seems to be random emissions from pixels which occur all the time but become an increasing percentage of sensor output the darker it gets and the hotter it gets. Hence our astrophoto friends wanting to cool sensors. I would have thought that if you had 46MP you would get more pixels firing off than if you had 20 on an M4/3, but if you apply the equivalence your sensor is equivalent to 80MP, so in the twisted scenario of equivalence you get more. I shall investigate further.
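For what it's worth, the shot-noise idea can be stated without any integral calculus. It is just the standard Poisson counting-statistics result, nothing specific to any particular sensor:

```latex
% If a pixel (or a whole sensor) collects N photons on average,
% the shot-to-shot fluctuation is about sqrt(N), so
\mathrm{SNR}_{\text{shot}} = \frac{N}{\sqrt{N}} = \sqrt{N}
% i.e. collecting 4x the light only doubles the SNR, but it does improve it.
```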
I wasn't talking about noise, I was talking about diffraction. It's an effect caused by the aperture and how light bends within the lens. The smaller the lens and the smaller the aperture, the greater the effect will be, which is why smaller formats will always be more diffraction limited than larger formats. It's a matter of physics, not snake oil.
If I take a good photo it's not my camera's fault.
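A rough way to put numbers on the diffraction point, using the standard Airy-disk approximation for green light (~550nm) and nominal crop factors. With this simple model, f/4 on Micro 4/3 comes out close to f/8 on FX in the final image:

```python
# Approximate diffraction blur (Airy disk diameter) and how it compares
# across formats once images are enlarged to the same output size.
WAVELENGTH_MM = 550e-6  # ~green light, in millimetres

def airy_diameter_um(f_number):
    """First-minimum Airy disk diameter in micrometres: d ~= 2.44 * lambda * N."""
    return 2.44 * WAVELENGTH_MM * f_number * 1000.0

for fmt, crop, f_number in [("FX", 1.0, 8.0), ("Micro 4/3", 2.0, 4.0)]:
    blur_on_sensor = airy_diameter_um(f_number)
    blur_in_print = blur_on_sensor * crop  # the smaller frame is enlarged more
    print(f"{fmt:10s} f/{f_number:.0f}: ~{blur_on_sensor:.1f} um on sensor, "
          f"~{blur_in_print:.1f} um FX-equivalent in the final image")
# Both land around ~10.7 um FX-equivalent: f/4 on Micro 4/3 shows roughly the
# same diffraction softening in the final image as f/8 on FX.
```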
Maybe I am wrong, but this is how I think about it. There is much debate and discussion between Tony Northrup and Ken Wheeler (the Angry Photographer on YouTube) about this topic. Tony says FX "gathers more light" and that is why it is better at higher ISO, while Ken says the same amount of light per square centimeter falls on a sensor no matter the size of that sensor. Both are correct in the statement they make, but Ken Wheeler is incorrect when it comes to considering the total photons gathered per image produced, which is what Tony is talking about.

Think of solar panels. In the same sunlight you will get more electrons out of two solar panels than one. Half the area of light-sensitive material will produce half the electricity. A digital camera sensor is also turning light into electricity. Half the size of the sensor will produce half the electricity when both sensors are exposed to the same amount of light per square centimeter. An FX sensor will gather about two times more photons, and therefore generate about two times more electrons, than a DX sensor for the same image produced. Exposure values do not change because those are based upon photons gathered per square centimeter, but an FX sensor image is constructed from two times more photons turned into electrons than a DX sensor image is. An image generated from fewer electrons will have more noise than an image generated from twice the electrons when generated by the same image-creating software in the camera.

Think about an image with a correctly exposed highlight and shadows which are essentially just underexposed: fewer photons gathered, producing fewer electrons for the image-producing software to work with. Noise shows up first in the shadow area of a photo. The shadow area is like the DX sensor and the highlights are like the FX sensor in terms of the number of photons gathered generating the number of electrons produced to be processed into that part of the image. So how do you reduce the noise in the shadows? You increase the exposure (like doubling the time or opening the aperture one stop), thus increasing the number of photons producing electrons for that part of the image. But you can then blow out the highlights if you don't have enough dynamic range to work with.

You have to compare the noise of DX and FX sensors with the same generation of image processing software. When you do so, the FX sensor will produce about half the noise of the DX sensor in the same image shot with the same exposure values. This can be changed when viewing in-camera JPEGs if Nikon puts more aggressive noise reduction software into a DX body than it does into an FX body, but more aggressive noise reduction also means more loss of detail at the same high ISO. So there can be many complications. Another one is that noise from a smaller pixel pitch can seem to be less because it is made of smaller "grains."

But basically, in the same amount of sunlight, exposure values will be the same across all sensor sizes, while high ISO image quality will basically be two times better from an FX sensor than from a DX sensor when the image fills the sensor. Now, as to a 25% crop from an FX sensor being as bad at high ISO as a Micro 4/3 sensor, the answer is no. Any crop from an FX sensor will retain all the image quality of the FX sensor and its image processing software. You are just looking at one quarter of the same image generated from the light gathered.
Remember, you are looking at only a part of the image, and its characteristics do not change from what is baked into the entire image. You may see more noise because you are essentially "pixel peeping" when you crop. But remember, that is not how you would use an FX body in the first place. You would fill the frame with your intended image, not fill one fourth of the frame only to discard three fourths of the photons the sensor collected in order to produce an image. If we assume that you have enough light to expose at base ISO, so noise is as low as that sensor can produce, then it may be possible that the higher pixel count of a smaller sensor will produce a sharper image if you are using the same lens and cannot get closer to fill the frame. Because in that last hypothetical you are shooting with the same generation of image processing software, which has essentially the same noise characteristics at base ISO, and you are projecting the same size image on the sensor because you cannot get closer, you may be able to produce a better image simply because you have more pixels on the subject, all other things being kept equal. I hope this helps. If I am wrong others here can point out my errors.
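A small simulation of the "total photons gathered" argument above. The photon counts and the ~2.3x FX/DX area ratio are just illustrative assumptions, and real sensors add read noise and processing on top of pure shot noise:

```python
import numpy as np

# Simulate shot noise: photon arrival follows Poisson statistics, so the
# noise on a mean signal of N photons is roughly sqrt(N).
rng = np.random.default_rng(0)

mean_photons_dx = 1000             # assumed photons for a patch of the DX frame
mean_photons_fx = int(1000 * 2.3)  # same scene framed on FX: ~2.3x the total light

for name, mean in [("DX", mean_photons_dx), ("FX", mean_photons_fx)]:
    samples = rng.poisson(mean, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"{name}: mean {samples.mean():.0f} photons, SNR ~ {snr:.1f}")

# The FX case shows roughly sqrt(2.3) ~ 1.5x the SNR: same exposure settings,
# same brightness per square centimetre, but more total light per image.
```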
Excellent, donaldjose.. thanks for going to the trouble of writing it. The solar panel analogy is good, but would not two solar panels produce twice the shot noise of one? IF (and it is an if) shot noise is the problem, I am still struggling with the concept, as all the descriptions are couched in "vague language". With bird photography you often discard 90% of the image area, and I understand what PB PM means on diffraction.. it's all in that equivalence.. diffraction at f/8 on FX is the same as at f/4 on M4/3. PS I have always been very disappointed with M4/3: poor image quality produced by too few pixels and a low-pass filter. I equate an Oly M5 with a D7000, good in its day but no good today.
"The solar panel analogy is good but would not two solar panels produce twice the shot noise of one?" Think of "shot noise" as a percentage of good usable signal. Think of "shot noise" as "signal to noise ratio." The ratio stays the same. Yes, two solar panels produce twice as much "shot noise" or "noise" but two solar panels also produce twice as much signal so the signal to noise ratio remains the same between one and two solar panels. The difference is in the total number of elections you have in the good "signal" with which to create the image. More "signal" electrons equals better image quality which can be obtained by processing those additional electrons. More pixels don't create more total "signal" given the same sensor size because more pixels doesn't mean more light photons fall on that sensor. It just means the photons which do fall are divided into smaller pieces which should produce increased sharpness because you have better and more precise edges on objects in the photo (think eyelashes, the more pixels per single lash the sharper that detail can be rendered by the sensor or think bird feathers, the more pixels per feather the sharper the details of the feather can be rendered). More pixels can also equate so smaller "grain" in the noise so the noise can seem to be less even though it really isn't, just being in smaller "clumps." I first noticed the pattern of DX high ISO being about half as good as FX high ISO years ago when I was looking at DxOMark ISO ratings of the same camera generations and noticed a consistent pattern of DX being about half of FX which I realized must be a factor of the relative sensor size. Hope this helps. If I am wrong others here can point that out. This is just how I think about the issue and why I prefer FX unless there are other more important considerations such as occur when selecting the D500 for sports action and wildlife because of its reach and AF over say a D750 or D800 with more megapixels.
Let me make a correction here in response to @Pistnbroke's question. Noise can be positive or negative, otherwise it wouldn't be called noise. So when you add two solar panels together, the signal will double while the noise partially cancels, so the signal-to-noise ratio will improve.
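In symbols, this is the standard result for combining two equal, uncorrelated sources (it assumes the noise in the two panels is independent):

```latex
% Two panels, each with signal S and uncorrelated noise sigma:
% signals add directly, noise adds in quadrature.
S_{\text{total}} = 2S, \qquad
\sigma_{\text{total}} = \sqrt{\sigma^2 + \sigma^2} = \sqrt{2}\,\sigma, \qquad
\mathrm{SNR}_{\text{total}} = \frac{2S}{\sqrt{2}\,\sigma} = \sqrt{2}\,\frac{S}{\sigma}
```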
The only way I can square the circle on this one is if the noise from a 4/3 sensor is of a similar amplitude to that from an FX. Then the bigger image output from the FX would suppress it to lower light levels...
Another way to think about it is the size of the lens aperture. If you understand diffraction and resolution, you will appreciate that photographic concepts are derived from that. In astronomy, aperture is everything.
Then you will realize that the size of the sensor is not the fundamental quantity (aperture is) and appreciate the confusion that comes from this.
I would expect signal to noise ratio would be about the same per generation of sensors. I do not know if one wafer manufacturer can produce that much better a product than others or if one sensor fabricator can be that much better than others. I would expect quite similar signal to noise ratios across the board with the greatest variance to come from the processing of that signal after it is received from the digital sensor. Thus, Pistnbroke, I believe you can square the circle. Perhaps someone else here knows better.
For the same aperture, yes, FX, DX, etc. will all have the same light density (brightness). That's why the ISO used is the same. But notice the word "density": it means amount per unit area in this case. So FX will still take in about 2.5x the total light compared to DX at the sensor level.
Now let's look at the pixel level. (1) If the pixel density is the same (say between the D850 and D500), then each pixel will take in the same amount of light, so a D850 image can be cropped into a D500 image without losing quality. (2) If the pixel count is the same (say between the D750 and D7200), then a D750 pixel will take in about 2.5x the amount of light compared to a D7200 pixel. That will improve the per-pixel signal-to-noise ratio (SNR), so the D750 picture will be cleaner than the D7200 picture.
As to why a bigger pixel will have better per-pixel SNR, you can imagine a 2x-sized pixel as just measuring the light twice and averaging it out. As to why averaging improves SNR, think about wanting a really accurate reading of your cholesterol level. You can measure it once or you can measure it 10 times (assuming you are willing to be poked). Measuring 10 times and averaging the readings will definitely provide a more accurate result, because some readings will be higher than the real value and some will be lower. Averaging does not reduce signal strength, but it does reduce noise because the noise can partially cancel.
That's also why in astro people take hundreds of pictures and average them to improve SNR. And it's why you can downsample a D850 and get the same quality picture as a native lower-resolution sensor, at least in theory.
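A toy version of the stacking/averaging idea; the frame count and noise level are made-up numbers, and the noise is assumed independent from frame to frame:

```python
import numpy as np

# Averaging N noisy frames of the same scene: the signal stays put while the
# independent noise shrinks by roughly sqrt(N).
rng = np.random.default_rng(1)

true_scene = 100.0   # "true" brightness of a patch of pixels
noise_sigma = 10.0   # per-frame noise level (assumed independent per frame)
n_frames = 100

frames = true_scene + rng.normal(0.0, noise_sigma, size=(n_frames, 10_000))

single_frame_noise = frames[0].std()
stacked_noise = frames.mean(axis=0).std()

print(f"single frame noise ~ {single_frame_noise:.2f}")
print(f"stack of {n_frames} frames noise ~ {stacked_noise:.2f}  (about sqrt(100) = 10x lower)")
```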
Hope this explains.
And this is why people think that 24MP sensors are better at noise than 36MP sensors. They used to be, because the borders around the pixels were wasted; now those borders are very small. While there is a benefit to the 24MP sensor, it is very small. Downsampling the higher-resolution sensor in practice gets you very close, albeit not all the way.
Yes, but there will be a 100mp (maybe 120mp) full frame body in a few years and prices will drop as bodies age. It is just a matter of time; if you have time to wait! I would very much like to see Nikon put a larger sensor into the Z body; as large a sensor as is possible. I do hope Nikon planned for that option when it created the Z mount. I don't care if that larger sensor is "medium format" or "small medium format" as those are just arbitrary prior film sizes. Just make it as large as possible with full image circle coverage by the great Z S line of lenses and that will be great.
As there are a vast number of couples who have put off their weddings, I did consider opening up the business again, but the wife was not keen. So the justification/tax relief for big expenditure is gone, so it's wait and see. The same applies to the M4/3 for the birds.. $4000? No, I don't think so.
http://www.sansmirror.com/articles/equivalence-in-a-nutshell.html