How do Nikon DSLRs process Medium/Small image sizes?

roombarobot Posts: 201 Member
in Nikon DSLR cameras
Nikon DSLRs have three image sizes: Large, Medium, and Small. For instance, the D800/E offers 36Mpix, 20Mpix, and 9Mpix. How does the camera arrive at the pixel values for these sizes? 36Mpix obviously uses the value from each individual pixel the sensor has. But what about the 20 and 9Mpix? Does the camera average pixels to create the values for the smaller sizes when shooting Medium or Small? For instance, does it average 4 of the 36Mpix to create one of the 9Mpix when you are shooting Small?

I ask in particular because, if the pixels are actually averaged, the signal-to-noise ratio at 9Mpix would be higher due to the averaging, though of course the resolution would be lower.
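
For reference, here are the D800's published Large/Medium/Small pixel dimensions (copied from the spec sheet, I hope correctly). Small turns out to be exactly half of Large in each dimension, i.e. one output pixel per 2x2 block, while Medium is roughly 75% per dimension:

```python
# D800 L/M/S pixel dimensions as published in the spec sheet (assumed here)
sizes = {"Large": (7360, 4912), "Medium": (5520, 3680), "Small": (3680, 2456)}
lw, lh = sizes["Large"]
for name, (w, h) in sizes.items():
    print(f"{name}: {w * h / 1e6:.1f} Mpix, {w / lw:.3f} x {h / lh:.3f} of Large")
# Large:  36.2 Mpix, 1.000 x 1.000
# Medium: 20.3 Mpix, 0.750 x 0.749 -> no whole-pixel block maps cleanly
# Small:   9.0 Mpix, 0.500 x 0.500 -> exactly one pixel per 2x2 block
```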

Comments

  • PB_PM Posts: 4,494 Member
    I suspect it works just like down-sampling in most photo editors, not anything fancy like averaging.
  • roombarobot Posts: 201 Member

    How does downsampling work in most photo editors? I really hope it doesn't just drop every Nth value.

    It seems to me that the D800 must do some sort of averaging, particularly since 20Mpix isn't an even factor of 36Mpix, so you couldn't just drop every other pixel.

  • Ironheart Posts: 3,017 Moderator
    There are several methods used to "downsize" an image, but a simple average would be a poor choice (from an image quality perspective, anyway). There are fancier algorithms that are typically used, like bicubic, bilinear, nearest neighbor, quadratic, Gaussian, Hamming, Bessel and box. Good reading if you are having trouble sleeping :-)

    As to which one Nikon uses, it is likely a proprietary combination of some of these, or something completely custom.
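
    To make that list concrete, here's a minimal Pillow sketch (Pillow 9.1+ for the Resampling names; photo.jpg is a hypothetical input file) that downsizes one image with a few of the filters mentioned above:

    ```python
    from PIL import Image

    img = Image.open("photo.jpg")  # hypothetical input file
    w, h = img.size

    # 50% reduction (roughly Large -> Small) with several candidate filters
    for name, f in [("nearest", Image.Resampling.NEAREST),
                    ("bilinear", Image.Resampling.BILINEAR),
                    ("bicubic", Image.Resampling.BICUBIC),
                    ("box", Image.Resampling.BOX)]:
        img.resize((w // 2, h // 2), resample=f).save(f"small_{name}.jpg")
    ```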
  • roombarobot Posts: 201 Member

    The reason I ask what Nikon does in the Expeed processor is that any form of averaging, from simple to fancy, will increase the signal-to-noise ratio. By averaging pixels, SNR should go up by the square root of the number of noise-independent measurements. Thus, if I were to shoot my D800 in 9Mpix mode, the noise should be half of what it would be in 36Mpix mode - at the expense of resolution, which would be 1/4, of course. (A quick simulation at the end of this post illustrates the square-root scaling.)

    This interests me because I often don't take my D800 out when I am just taking snapshots, instead using my S95 or even my phone, because the hard-drive-crushing 36Mpix photos aren't worth it for silly candids in many cases. On the other hand, I want to use my D800 more, since I did shell out serious dough to get it. So the Small image size could be a great compromise for me, particularly if there is also an advantage like this.

    [Sure, I could take them all in Large/Fine or even RAW and then post-process to crop and downsample, but I am not very fond of post-processing as it is.]

    Maybe I am just not using the correct search terms, but I can't seem to find how Nikon does the in-camera downsampling. Does anyone know?
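
    Here's that simulation: a toy numpy model of a flat grey patch with independent Gaussian noise per pixel, reduced with a plain 2x2 average (just an assumption - not necessarily what Expeed actually does):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Flat grey patch: signal = 100, independent Gaussian noise sigma = 10 per pixel
    signal, sigma = 100.0, 10.0
    full = signal + rng.normal(0.0, sigma, size=(2000, 3000))

    # Average each 2x2 block: a crude stand-in for a 36 -> 9 Mpix reduction
    small = full.reshape(1000, 2, 1500, 2).mean(axis=(1, 3))

    print(full.std())   # ~10.0 -> SNR ~ 10
    print(small.std())  # ~5.0  -> SNR ~ 20: sqrt(4) = 2x better, as claimed
    ```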
  • Paperman Posts: 469 Member
    I think you are talking about pixel binning, which no DSLR currently has (the Leica S2 ditched it at the last minute). The $50,000 Phase One backs and the like probably have it. In short, the pixels are not combined when you go for 9Mp - just eliminated (I'm guessing).
  • roombarobot Posts: 201 Member

    Paperman, yes, binning would be one way to do it; averaging after the shot would be another. That is exactly what I want to know: are they averaged/combined in any way, or are the extra pixels just tossed?

    With the Small = 9Mpix, one could just keep every 4th pixel and toss the rest, but that doesn't explain how one gets to the Medium = 20Mpix. I am hoping they are averaged or binned or something, but I can't seem to find the answer.

  • snakebunk Posts: 993 Member
    Good question!

    There should be processing power left to do something better than dropping pixels.

    If you take a photograph of a black curve on white paper (or anything with sharp edges) using the Small setting and post a magnified crop, I think we can draw some conclusions.
  • Paperman Posts: 469 Member
    With the Small = 9Mpix, one could just keep every 4th pixel and toss the rest but that doesn't explain how one gets to the Medium = 20Mpix. I am hoping they are averaged or binned or something, but I can't seem to find the answer.

    Averaging, maybe - but definitely not binning/combining... It would have been a great way to convert the D800 into a 9Mp high-ISO monster, and we would have known about it by now.
  • snakebunk Posts: 993 Member
    Because each pixel is only sensitive to one color (at least on Nikon DSLRs), pixels must always be combined somewhere along the line in order to calculate the colors. I do not know if this has to be done before the raw file is created, but it most certainly happens before a JPEG can be created.
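
    To illustrate the kind of combining involved, here's a textbook bilinear demosaic of an RGGB Bayer mosaic in numpy/scipy - the standard teaching example, certainly not Nikon's actual (and surely fancier) algorithm:

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic):
        """Bilinear demosaic of an RGGB Bayer mosaic (H, W) -> (H, W, 3)."""
        h, w = mosaic.shape
        # Masks marking which photosites carry which color
        r = np.zeros((h, w)); r[0::2, 0::2] = 1
        g = np.zeros((h, w)); g[0::2, 1::2] = 1; g[1::2, 0::2] = 1
        b = np.zeros((h, w)); b[1::2, 1::2] = 1

        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # red/blue kernel
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # green kernel

        out = np.empty((h, w, 3))
        out[..., 0] = convolve(mosaic * r, k_rb)  # fill red everywhere
        out[..., 1] = convolve(mosaic * g, k_g)   # fill green everywhere
        out[..., 2] = convolve(mosaic * b, k_rb)  # fill blue everywhere
        return out

    # Usage: rgb = demosaic_bilinear(raw_mosaic.astype(float))
    ```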
  • spraynpray Posts: 6,545 Moderator

    This interests me because I often don't take my D800 out when I am just taking snapshots, instead using my S95 or even my phone, because the hard-drive-crushing 36Mpix photos aren't worth it for silly candids in many cases. On the other hand, I want to use my D800 more, since I did shell out serious dough to get it. So the Small image size could be a great compromise for me, particularly if there is also an advantage like this.
    That is exactly the reason I didn't buy the D800 (and have said elsewhere that I would like a D600/610 sensor in a D800 body). You said 'hard drive crushing'? If your processor can work with the larger image files, then your solution is simple - take the D800 but shoot a lot fewer images, a lot more carefully. That is a discipline which makes better images too, IME.
  • WestEndBoy Posts: 1,456 Member

    This interests me because I often don't take my D800 out when I am just taking snapshots, instead using my S95 or even my phone, because the hard-drive-crushing 36Mpix photos aren't worth it for silly candids in many cases. On the other hand, I want to use my D800 more, since I did shell out serious dough to get it. So the Small image size could be a great compromise for me, particularly if there is also an advantage like this.
    That is exactly the reason I didn't buy the D800 (and have said elsewhere that I would like a D600/610 sensor in a D800 body). You said 'hard drive crushing'? If your processor can work with the larger image files, then your solution is simple - take the D800 but shoot a lot fewer images, a lot more carefully. That is a discipline which makes better images too, IME.
    Do you shoot RAW, JPEG, or RAW plus JPEG? Hmmm... forgot about TIFF.
  • spraynpray Posts: 6,545 Moderator
    Me? RAW. Always (apart from snapping stuff for eBay).
  • roombarobot Posts: 201 Member
    I mainly shoot JPEG. I really do not enjoy post-processing.

    That is exactly the reason I didn't buy the D800 (and have said elsewhere that I would like a D600/610 sensor in a D800 body). You said 'hard drive crushing'? If your processor can work with the larger image files, then your solution is simple - take the D800 but shoot a lot fewer images, a lot more carefully. That is a discipline which makes better images too, IME.
    Good point on thinking and composing more, spraynpray. True to your username, with digital I've gotten into the habit of just shooting away; I probably could take a bit more time.

    On the other hand, I was joking about the hard-drive-crushing. I did originally buy a D600, but the unending oil spots drove me to the D800E. I can always shoot in 9 or 20Mpix if I want to, but the D600 can't shoot in 36Mpix. I would have liked the D600 to work out, but I couldn't trust it at all.

    If we could find a clear answer to this, I think it would be useful to all Nikon DSLR owners, full/crop, pro/enthusiast, anyone.



  • spraynpray Posts: 6,545 Moderator
    I don't know the details, but I understand Canon can shoot various sizes of RAW (their sRAW/mRAW options) - perhaps Nikon ought to think about that.
  • Ironheart Posts: 3,017 Moderator
    The smaller in-camera JPEGs are downsampled from the RAW data using a method Nikon will likely never disclose, but rest assured they are not "throwing away every nth pixel". It is a downsampling that "averages", or more precisely "combines", data from several pixels, and you will see the corresponding decrease in noise and increase in signal. However, since the final image produced is a JPEG, you will lose dynamic range in the sense that you fix a certain DR into the JPEG at the time it is made. With true pixel binning, like the Phase One does, you wind up with a smaller RAW file that has less noise, more signal, and all of the DR of the "original".

    Many people believe that the in-camera conversion to JPEG is superior to what LR/PS can do, and of course all of the Picture Controls and D-Lighting settings can help make some pretty darn good pics. Remember that Capture NX2 uses the same algorithms as Expeed does. So yes, for family snaps and Facebook photos, go for the Small or Medium JPEGs; they will be of very high quality and just the right size. You can always write the RAWs as well to the other card in case you happen to capture that Pulitzer winner while out with the family :-)
  • Ade Posts: 1,071 Member
    There are several methods used to "downsize" an image, but a simple average would be a poor choice (from an image quality perspective, anyway). There are fancier algorithms that are typically used, like bicubic, bilinear, nearest neighbor, quadratic, Gaussian, Hamming, Bessel and box. Good reading if you are having trouble sleeping :-)

    As to which one Nikon uses, it is likely a proprietary combination of some of these, or something completely custom.
    Actually, bilinear is mathematically equivalent to a simple average for the specific image size reductions we're talking about (L=100%, M=75%, S=50%). Bilinear does a weighted interpolation between two points in each dimension (2x2 = 4 pixels). However, in the case of a 75% (or 50%) reduction, the interpolation is effectively taken from the exact middle of the equidistant pixels -- which is the definition of a simple average.

    Also, nearest neighbor is not a "fancy algorithm": it simply takes the value of the nearest pixel and drops the rest. It is fast but likely to produce the worst results.

    The Expeed (Milbeaut) processor has bilinear, bicubic, nearest neighbor and other algorithms built-in. For JPEG reductions, I would guess that bilinear is used (which again is equivalent to averaging).
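
    A quick numeric check of that equivalence, using a hypothetical 2x2 block of pixel values sampled at its exact center:

    ```python
    def bilinear(p00, p01, p10, p11, tx, ty):
        """Bilinear interpolation of a 2x2 block at fractional offsets tx, ty."""
        top = p00 * (1 - tx) + p01 * tx
        bot = p10 * (1 - tx) + p11 * tx
        return top * (1 - ty) + bot * ty

    a, b, c, d = 10.0, 20.0, 30.0, 40.0
    print(bilinear(a, b, c, d, 0.5, 0.5))  # 25.0 - sampled at the exact center
    print((a + b + c + d) / 4)             # 25.0 - the simple average: identical
    ```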
  • Ironheart Posts: 3,017 Moderator
    Ade, my point was that an actual algorithm is applied, as opposed to either tossing data or taking the average of two pixels (that's what I would call a simple average). I also wanted to point out that there is more than one choice. Computing the midpoint of four pixels might not be considered simple by some folks ;-)

    In any event, I doubt Nikon is using bilinear; more likely at least bicubic, otherwise we would be seeing a lot more artifacts (such as aliasing, blurring, and edge halos) than we already do.

    Also, let's not forget that Expeed will also correct for lens distortion, chromatic aberration, etc., when producing the JPEG. The Nikon engineers spend a lot of time tweaking those JPEGs to get the best possible quality out of the camera. Those of us who like/need to post-process from the RAW are ignoring that entire pipeline.
  • Ade Posts: 1,071 Member
    Bicubic produces more halo artifacts (not fewer!) than bilinear. Halo is a very common, characteristic artifact of bicubic, and the fact that we don't see halos everywhere is what makes bilinear more likely.

    (Also, for 75% and 50% reductions, there's very little advantage to doing bicubic, but the computational cost is increased 4x.)

    Simple average of four equidistant pixels a, b, c, d is just (a+b+c+d)/4

    which is also the bilinear interpolation of the same four numbers. :)
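
    And on the halo point, a tiny overshoot check (a synthetic step edge kept in floating point so clipping can't hide anything; Pillow 9.1+ for the Resampling names):

    ```python
    import numpy as np
    from PIL import Image

    # A hard black-to-white edge, stored as float so overshoot isn't clipped
    edge = np.zeros((32, 32), dtype=np.float32)
    edge[:, 16:] = 255.0
    img = Image.fromarray(edge, mode="F")

    for name, f in [("bilinear", Image.Resampling.BILINEAR),
                    ("bicubic", Image.Resampling.BICUBIC)]:
        big = np.asarray(img.resize((128, 128), resample=f))
        print(f"{name}: min={big.min():.1f}, max={big.max():.1f}")
    # bilinear stays inside [0, 255]; bicubic overshoots past both ends (the halo)
    ```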
  • Ironheart Posts: 3,017 Moderator
    Well, unless we get an Expeed engineer on here, we may never know :-)
    In addition to the resampling/resizing algorithms (simple or otherwise), several others are applied in-camera, including those mentioned above (distortion, aberration) plus noise reduction, sharpening, contrast, white balance, etc. So you are a far cry from a strict downsizing of the raw data. Throw in quality as a variable (Fine, Normal, or Basic) and we will start talking compression algorithms next ;)
    My overall point to the OP was that Nikon goes through a ton of effort to get the camera to produce the best possible JPEG right out of the camera, so he shouldn't be afraid of using those for family snaps. Saving the RAWs as well is just insurance.
  • roombarobot Posts: 201 Member

    This is a great discussion, thank you all!

    Ironheart, I hear you, and I do know that Nikon tries to put out good JPEGs. That's why I got the D800E over the D800: dpreview said the JPEG quality was significantly better. I also wanted to feel like I was at least getting better signal:noise when I shoot Medium or Small JPEGs, and it sounds like I am. Thanks!

  • WestEndBoy Posts: 1,456 Member
    Bicubic produces more halo artifacts (not fewer!) than bilinear. Halo is a very common, characteristic artifact of bicubic, and the fact that we don't see halos everywhere is what makes bilinear more likely.

    (Also, for 75% and 50% reductions, there's very little advantage to doing bicubic, but the computational cost is increased 4x.)

    Simple average of four equidistant pixels a, b, c, d is just (a+b+c+d)/4

    which is also the bilinear interpolation of the same four numbers. :)
    Ade, you seem to be a fountain of technical knowledge. What are some of your sources for this stuff?
  • Ade Posts: 1,071 Member
    On bicubic and halo? The source is actually linked in the above post (from the word "more"), and comes from Wikipedia:

    "(bicubic) preserves fine detail better than the common bilinear algorithm. However, due to the negative lobes on the kernel, it causes overshoot (haloing)."

    http://en.wikipedia.org/wiki/Bicubic_interpolation

    Another good resource is Cambridge In Colour. Here's a link where they explain that bilinear and nearest neighbor are the only two common algorithms which do not exhibit halo artifacts:

    http://www.cambridgeincolour.com/tutorials/digital-photo-enlargement.htm

    The above page also explains that the three common variants of bicubic ("bicubic smoother", "bicubic", and "bicubic sharper") all tend to cause halos, in increasing amounts.
  • KnockKnock Posts: 400 Member
    I've always assumed the worst on this topic: that shooting at a lower resolution just tosses the data. I'm glad roombarobot asked, because I thought there would be someone saying clearly, "here's exactly what Nikon does."

    Should we not be able to tell? Shooting tests and comparing against downsized originals using different methods, then pixel peeping?

    Then the question would become, what are the other makers doing? Canon? Olympus? Fuji etc? Sounds like a good article for one of the major photog magazines!
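
    A rough harness for that test - assuming hypothetical large.jpg and small.jpg written by the camera from the same exposure (in-camera sharpening, lens corrections and JPEG compression will muddy the numbers, but the closest filter should still stand out):

    ```python
    import numpy as np
    from PIL import Image

    # Hypothetical files: the in-camera Large and Small JPEGs of the same scene
    large = Image.open("large.jpg")
    small_cam = Image.open("small.jpg")
    target = np.asarray(small_cam, dtype=float)

    # Downsize Large with each candidate filter; the lowest RMSE is the best match
    for name, f in [("nearest", Image.Resampling.NEAREST),
                    ("bilinear", Image.Resampling.BILINEAR),
                    ("bicubic", Image.Resampling.BICUBIC),
                    ("box/average", Image.Resampling.BOX)]:
        guess = np.asarray(large.resize(small_cam.size, resample=f), dtype=float)
        print(f"{name}: RMSE = {np.sqrt(((guess - target) ** 2).mean()):.2f}")
    ```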

  • roombarobot Posts: 201 Member
    I've always assumed the worst on this topic: that shooting at a lower resolution just tosses the data. I'm glad roombarobot asked, because I thought there would be someone saying clearly, "here's exactly what Nikon does."

    Should we not be able to tell? Shooting tests and comparing against downsized originals using different methods, then pixel peeping?

    Then the question would become, what are the other makers doing? Canon? Olympus? Fuji etc? Sounds like a good article for one of the major photog magazines!

    That does sound like a good exposé of the algorithms each manufacturer uses! I too had assumed/worried the worst. I am glad to hear that I get some benefit from the smaller sizes.

    I don't know if we could tell too easily by looking at shooting tests. As noted above, there are lots of other factors going on beyond just the averaging - JPEG smoothing, lens corrections, etc. - and that would complicate the analysis.


  • tc88 Posts: 537 Member
    While the D800 has 36MP, I believe in reality it has 9M each of red and blue sites and 18M green sites; it doesn't have 36M each of red, blue, and green. So rather than downsampling, the full 36M output is upsampled (interpolated) from mostly 9M per-color data. The 20M is probably produced by the same algorithm with different weighting parameters.