Z8 on the way with 60 MP

Comments

  • FreezeAction Posts: 893 Member

    Ok, so if we assume the wafer is 300 mm in diameter, we can each draw one on paper, play with fitting 24x36 rectangles or 36x36 squares inside that 300 mm circle, and judge for ourselves how many of each you can get on a wafer and which format wastes more of it. If you can get twice as many rectangles as squares, we could assume the square sensor would cost twice whatever the rectangular one does. If you can get one third more rectangles than squares, a square sensor would cost one third more (there's a rough sketch of that packing comparison at the end of this post).

    I have to plead some ignorance with this technical stuff, but can sensors of varied sizes come from one wafer without creating production problems? If so, then there is a possibility for a square sensor. What I'm wondering is how to match the sensor sizes per wafer to market demand and waste as little of each wafer as possible. In a previous life I remember solving problems like this in applied calculus, which is just a faint memory now...
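
    A rough back-of-the-envelope sketch of that packing comparison, for anyone who would rather not draw it on paper. The 3 mm edge exclusion and 0.2 mm saw streets are illustrative assumptions, and real fabs place dice by reticle field rather than on a simple grid:

        import math

        # Count how many die_w x die_h rectangles fit on a wafer, using a plain
        # rectangular grid. Edge exclusion and saw-street widths are assumptions.
        def dies_per_wafer(die_w, die_h, wafer_d=300.0, edge=3.0, street=0.2):
            r = wafer_d / 2.0 - edge                  # usable radius in mm
            pitch_w, pitch_h = die_w + street, die_h + street
            n_w = int(wafer_d // pitch_w) + 2
            n_h = int(wafer_d // pitch_h) + 2
            count = 0
            for i in range(-n_w, n_w):
                for j in range(-n_h, n_h):
                    x0, y0 = i * pitch_w, j * pitch_h
                    corners = [(x0, y0), (x0 + die_w, y0),
                               (x0, y0 + die_h), (x0 + die_w, y0 + die_h)]
                    # Keep the die only if all four corners sit on the usable wafer.
                    if all(math.hypot(x, y) <= r for x, y in corners):
                        count += 1
            return count

        rect = dies_per_wafer(36.0, 24.0)    # 24x36 "full frame" rectangles
        square = dies_per_wafer(36.0, 36.0)  # hypothetical 36x36 squares
        print(rect, square, round(rect / square, 2))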
  • WestEndFoto Posts: 3,742 Member
    Yeah, but there are defects. If the sensor is the size of the wafer and you have two or three defects per wafer, then you might have to produce a hundred wafers to get one sensor. If the sensors are small, you will get few rejects. MHedges, do you know anything about this?
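
    For what it's worth, the textbook way to see that effect is a simple Poisson yield model, yield = exp(-defect density x die area). The defect density below is a made-up number purely for illustration:

        import math

        D = 0.1  # assumed defect density, defects per cm^2 (illustrative only)

        # Poisson yield model: the chance a die has zero defects is exp(-D * A).
        def poisson_yield(die_w_mm, die_h_mm, defect_density=D):
            area_cm2 = (die_w_mm * die_h_mm) / 100.0
            return math.exp(-defect_density * area_cm2)

        for name, w, h in [("APS-C 24x16", 24, 16), ("FF 36x24", 36, 24),
                           ("square 36x36", 36, 36), ("wafer-scale 200x200", 200, 200)]:
            print(f"{name}: {poisson_yield(w, h):.1%} defect-free")

    Small dice shrug off a given defect density; a wafer-sized die almost never comes out clean.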
  • snakebunk Posts: 993 Member
    Do we know that the wafer is the expensive part of creating sensors? I can imagine adding millions of pixels must cost something too.

    PS. I found a 300 mm wafer on eBay for $500. It should be enough for all of us in this thread :).
  • PB_PM Posts: 4,494 Member
    edited May 2019
    There is more to it than just the size of the wafer used. It's also the process, the size of the transistors used, and how many layers thick the wafer is. The image sensor is just a processor, but one that converts light into numbers via diodes. More MP = more diodes = more complex and smaller circuitry to gather data from them all. Thus more layers of circuitry are needed, and prices go up, just as they do for a regular (non-silicon) circuit board. The smaller the transistors and diodes used, the more expensive the manufacturing process becomes, since an increasingly higher level of precision is required.

    As an example with made-up numbers, based on real-world examples I've talked about with developers: a simple board (10 cm²) with 2-3 layers might cost $10 for 10 units, but the same sized board with 10 layers could cost $50 for 10 units. So as the sensors become more complex in design, more layers are required and thus thicker wafers are needed. Not a huge price increase, mind you, since we are likely talking about differences in nanometres.

    Other factors would also drive up the cost of the camera: a larger sensor would require more powerful processors and larger, faster data pathways between the sensor and the processors, which would require more power (less battery life per mAh), which might mean bigger and heavier batteries. There are upsides and downsides to everything.
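
    Pulling the wafer-size and yield threads of this discussion together, a toy cost-per-good-die calculation; every number here is assumed for illustration, not a real fab quote:

        # Toy model: cost per good die = processed wafer cost / (dies per wafer * yield).
        # All inputs below are assumptions for illustration only.
        def cost_per_good_die(wafer_cost, dies_per_wafer, yield_fraction):
            return wafer_cost / (dies_per_wafer * yield_fraction)

        # e.g. an assumed $3,000 processed wafer, 70 FF dice at 60% yield
        print(f"${cost_per_good_die(3000, 70, 0.60):.0f} per good FF-sized die")
        # versus an assumed 40 square 36x36 dice at 50% yield on the same wafer
        print(f"${cost_per_good_die(3000, 40, 0.50):.0f} per good 36x36 die")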
    Post edited by PB_PM on
    If I take a good photo it's not my camera's fault.
  • snakebunk Posts: 993 Member
    Maybe a rough estimate is that a 36 by 36 mm sensor would cost slightly less than a 44 by 33 mm one? The silicon area is a bit smaller: 36 x 36 = 1,296 mm² versus 44 x 33 = 1,452 mm², about 11% less.
  • donaldejose Posts: 3,675 Member
    Good point, snakebunk. Nikon should be able to produce a 36 mm square sensor for no more than the cost of a GFX 50S, which sells for $5,500.
  • mhedges Posts: 2,881 Member
    I suppose it is technically possible to fill in the corners with smaller sensors, but I don't think it's practical. They would have to do a different exposure with a different mask in the stepper, and that would increase the time each wafer spends in the machine. A modern high-end stepper is the single most expensive piece of equipment in the wafer fab, and they wouldn't want to do anything that decreases throughput, especially for a relatively small increase in wafer utilization.

    There are also issues with process compatibility, as @PB_PM mentions. Now, I'm guessing most modern Sony BSI sensors use very similar process flows, but there could still be problems there.

    The other hurdle that immediately comes to mind is wafer dicing. When you are done you need to cut the wafer up into individual sensors. If you put smaller sensors in the corners then you can no longer do traditional mechanical dicing, which cuts like a table saw in that the cuts need to be straight and continuous. There are ways around it, using (for example) water-guided laser dicing, which cuts more like a jigsaw, but again that's a specialty process.

    The other thing that I haven't seen mentioned that impacts cost is the stepper field size, and the need to "stitch" multiple fields together to make very large die. Most steppers have a field size of 22x22 mm, which is big enough to make an APS-C sensor in one stepping if you convert that to rectangular coverage, but is not big enough to make FF sensors. That is one of the reasons (I think) that FF sensors cost more per unit area than APS-C sensors. I'm not sure how they get around that; there are some steppers with bigger fields, although even then 24x36 mm is a challenge. Or they could be doing multiple masks per die (i.e. expose one half, then the other half), which is tricky but can be done. It would absolutely have to be done for the MF-sized sensors.
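
    As a toy illustration of that stitching point, here is a naive count of how many exposures a die would need for a given field, trying both die orientations. The 26x33 mm case is just an assumed "bigger field" for comparison, and real stitching needs overlap regions and special masks:

        import math

        # Naive tiling: exposures needed to cover a die with a rectangular field.
        def exposures_per_die(die_w, die_h, field_w, field_h):
            a = math.ceil(die_w / field_w) * math.ceil(die_h / field_h)
            b = math.ceil(die_h / field_w) * math.ceil(die_w / field_h)
            return min(a, b)   # pick the cheaper die orientation

        for fname, fw, fh in [("22x22 field", 22, 22), ("26x33 field", 26, 33)]:
            for dname, dw, dh in [("FF 36x24", 36, 24), ("MF 44x33", 44, 33)]:
                print(f"{fname}: {dname} -> {exposures_per_die(dw, dh, fw, fh)} exposure(s)")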

  • ggbutcher Posts: 390 Member
    edited May 2019
    Thinking aloud, how about the downstream challenges:
    • Readout has to be clocked differently for each stride.
    • Since we don't know what part of the sensor will be used, all of the data needs to be read.
    • Until the image-rectangle data is skived off, the data needs to be formatted in a way that represents the location AND the length of each stride. You don't want to take such a data structure too far downstream.
    And now, idle rumination of a non-authority: I'm not so sure I want the data from the periphery of the image circle of most lenses... comments?
    ___________________________________________
    Edit: I probably used terms not universally understood...
    Readout: Referring to the circuitry that reads each row of the sensor into the ADC to make binary values. For a circular sensor, readout circuitry would require logic to clock through a different number of pixels for each stride. Complicated...
    Stride: A row of an image. A circular sensor will have a different row length for each stride 0 through height/2, then the same progression in reverse for stride (height/2)+1 to height.
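
    To make the stride point concrete, here is a toy calculation of the row lengths a circular sensor's readout would have to clock through; the 16-row "sensor" is just small enough to print:

        import math

        # Per-row ("stride") pixel counts for a hypothetical circular sensor.
        def stride_lengths(diameter_px):
            r = diameter_px / 2.0
            lengths = []
            for row in range(diameter_px):
                y = abs(row + 0.5 - r)                      # row centre's distance from the midline
                half_chord = math.sqrt(max(r * r - y * y, 0.0))
                lengths.append(int(round(2 * half_chord)))  # pixels to clock in this stride
            return lengths

        rows = stride_lengths(16)     # a real sensor would have thousands of rows
        print(rows)                   # short strides at the edges, longest near the middle
        print(rows == rows[::-1])     # True: the same progression mirrored about the midline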

    Post edited by ggbutcher on
  • FreezeAction Posts: 893 Member
    edited May 2019
    While there are a lot of different wishes for the new 60MP body, I'm just thinking out loud about a model like the D810A for working with the night sky.... So far there is no D850A, so maybe there is a chance? B&H has listed the D810A as no longer available, so just maybe something else will come along. At least there is still a rental available.
    Post edited by FreezeAction on
  • rmp Posts: 586 Member
    I like my Z7. I liked my D800. I have a hard time visualizing why 60MP would be that much better than a 47MP image. I read the discussions about circular, square, and rectangular sensors and images. I just do not see a big enough advantage to care. What advantage would make you upgrade?
    Robert M. Poston: D4, D810, V3, 14-24 F2.8, 24-70 f2.8, 70-200 f2.8, 80-400, 105 macro.
  • FreezeAction Posts: 893 Member
    rmp said:

    I like my Z7. I liked my D800. I have a hard time visualizing why 60MP would be that much better than a 47MP image. I read the discussions about circular, square, and rectangular sensors and images. I just do not see a big enough advantage to care. What advantage would make you upgrade?

    I passed my D810, which I dearly loved, on to a son as an early inheritance, so I'm not really upgrading myself. It is true that there is only a small increase in crop room going from 47 to 60, but I believe there will be more to the story than just the increase in pixels. When shooting for high-resolution stitched panoramas with minimal upsizing, a few more pixels will not hurt as long as they don't get noisy. There may be just enough more pixels to eliminate one or two frames in a set to be stitched, and sometimes even eliminate the need to stitch for some large prints. The D810 at 36 was good for a 40x60, so I believe a 60MP sensor should yield something like a 54x81" print. I haven't run the math to the pixel, but in my head it sounds close. I've seen 32"x80" prints that were very good panoramas, and a Z8/D860 with good software should make the same.
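
    A quick check of that scaling, assuming both prints use the same ppi: linear print dimensions grow with the square root of the megapixel ratio, so the 40x60 baseline works out a little smaller than the guess above.

        import math

        # Linear print dimensions at constant ppi scale with sqrt(new_mp / base_mp).
        def scaled_print(base_w_in, base_h_in, base_mp, new_mp):
            k = math.sqrt(new_mp / base_mp)
            return base_w_in * k, base_h_in * k

        w, h = scaled_print(40, 60, 36, 60)   # the 40x60" D810 baseline, scaled to 60 MP
        print(f"{w:.0f} x {h:.0f} in")        # roughly 52 x 77 in at the same ppi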
  • donaldejose Posts: 3,675 Member
    I remember reading that it takes about a 20% increase in pixels to yield a noticeably better image to the average person. But noticeable at what size (not counting pixel peeping, because that is not how we view images)? Surely moving from 47 to 60 megapixels is not noticeable to the average person below poster size, I would think.
  • WestEndFoto Posts: 3,742 Member
    I would not base a buying decision on 13 megapixels. But cumulatively, that is a 24 MP improvement over my D800s. That is a material improvement.

    I think the real story here is continuous small improvements that add up to big improvements over time.
  • FreezeAction Posts: 893 Member

    I remember reading that it takes about a 20% increase in pixels to yield a noticeably better image to the average person. But noticeable at what size (not counting pixel peeping, because that is not how we view images)? Surely moving from 47 to 60 megapixels is not noticeable to the average person below poster size, I would think.



    I wholeheartedly agree that there's probably not a noticeable difference with 24x36 poster-size prints, but at 30x72 and even 32x80 all bets are off. That extra 25% or so increase may make all the difference. If I only wanted to work with poster size and smaller, I'd be very happy with a D810 if it only had focus stacking built in. I'd opt for a D750 too if it had focus stacking.

    The Bonneville Salt Flats serve a purpose for speed lovers to test their wares, and an Epson SureColor P20000 is there for a few of us to test our pixels with. One thing those of us who relish speed and those who relish print size have in common is doing it for no other reason than that it is there to do. I know I have friends and family who think I am loony for liking ultra-large-format print making while they sit on the lake in a 50k boat that was towed to the boat ramp with another 40-50k in tow vehicle. Whether it's drag boats, offshore racing, or printing images so large that when you walk up to them you feel as if you are in the image and could walk to the back side of what you see, the appeal is the same. In reality a D3300 used correctly can make stunning poster prints when all conditions are right.

    Clyde Butcher made a lasting impression on me with his darkroom black-and-white landscapes printed at 60"x108". I never had the privilege of meeting Lee Mann, but a trip to see some of his gallery work is on my bucket list, if not this summer then next. For those interested in large print making and marketing, take a peek at Lee Mann's work here. He went from 4x5 to an 11MP Canon FF 1Ds when I first started paying attention to his work. I've observed that very large gallery showings lead to a steady ringing of the cash register for smaller-scale prints; it is my belief that many people purchase smaller prints while remembering what they saw at the gallery.

    We have a big, beautiful world out there for landscape lovers to turn into their playgrounds, and that's how I want to live out my days, with the most and best pixels available within a reasonable budget. One thing that also interests me is the idea of putting 60MP on a microscope for such things as just the eye of a fly... Now if only focus shift would work on such fine subjects. A grain of pollen might be most interesting too, along with many more. Pixels can reveal things photographers have never really seen before when used to the extreme.
  • FreezeAction Posts: 893 Member
    If memory and math are right, when pixels are doubled the print size can increase by about 40%. That's from my head and not from a spreadsheet at the moment. At some point won't IQ suffer from increased pixels because of the smaller pixel pitch? Just where that point is, is what I'd like to know. Of course, the larger the sensor, the more can be crammed in before the pixel scales are tilted too far and IQ degrades. Is it not true that when the pixel pitch on a FF sensor is the same as that of a DX sensor, the high ISO noise level will be the same if the firmware is constant? If the proposed Z8 becomes reality at 60MP, then let's hope the Expeed processor can at least keep the ISO clarity equal to the D850's.

    For my personal use, ISO noise level is not so much of an issue as it would have been a few years ago, when I depended on night arena lights; most of my shots are taken on at least partly sunny days now. At events I had to go by the clock instead of picking my times and places to suit my needs. Right now I am as impatient for Nikon to deliver a body with that 60MP sensor as I was for them to deliver a pro-grade DX body for action. Soon B&H will have to list an SUV as an accessory for new gear, along with suggestions on how to make sure we can get a tax break on all the gear besides the sales tax aid. :smiley:
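
    Two quick checks on those rules of thumb, assuming a hypothetical 9504-pixel-wide 60 MP full-frame layout (not an announced Nikon spec):

        import math

        # Doubling the pixel count scales linear print size by sqrt(2), about 41%.
        print(f"2x pixels -> linear size x{math.sqrt(2):.2f}")

        # Pixel pitch is just sensor width divided by the horizontal pixel count.
        def pitch_um(sensor_width_mm, pixels_across):
            return sensor_width_mm * 1000.0 / pixels_across

        print(f"D850, 8256 px across 35.9 mm: {pitch_um(35.9, 8256):.2f} um")
        print(f"60 MP FF, assumed 9504 px across 35.9 mm: {pitch_um(35.9, 9504):.2f} um")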
  • FreezeAction Posts: 893 Member
    rmp said:

    I'm so old, it is hard for me to imagine a camera/lens better than the Z7/24-70 f4. But when new toys arrive, boys will play.

    retread said:

    D850, Z7 = 47 MP; D860, Z8 = 60 MP. Interesting thought. Do you think there will be a D850 replacement? Maybe a crown jewel for the last Nikon DSLR.

    At an older age I still want to upgrade but not to a whole new system. Just don't see spending that much.

    New technology shouldn't slow us down; rather, it should keep us oldies going.
