The Future of Optics?

Golf007sdGolf007sd Posts: 2,840Moderator
edited October 2013 in General Discussions
Good day my fellow NRF members...hope all is well.

I came across this very intriguing article tonight while browsing one of my favorite web sites, and thought it would make for a good conversation amongst us. So here it is.

High-res PDF file: High-Quality Computational Imaging Through Simple Lenses

The topic is open for your input....
Post edited by Golf007sd on
D4 & D7000 | Nikon Holy Trinity Set + 105 2.8 Mico + 200 F2 VR II | 300 2.8G VR II, 10.5 Fish-eye, 24 & 50 1.4G, 35 & 85 1.8G, 18-200 3.5-5.6 VR I SB-400 & 700 | TC 1.4E III, 1.7 & 2.0E III, 1.7 | Sigma 35 & 50 1.4 DG HSM | RRS Ballhead & Tripods Gear | Gitzo Monopod | Lowepro Gear | HDR via Promote Control System |

Comments

  • spraynprayspraynpray Posts: 6,545Moderator
    I wonder if the intention is to include the computational part of the image capture in the camera (slow fps?) or in post, which would be risky as you wouldn't know whether you had got the image while on location. Maybe one of the mathematicians on here would have an idea of what's possible?
    Always learning.
  • heartyfisherheartyfisher Posts: 3,186Member
    edited October 2013
    Interesting :-) It certainly argues for sensors with even more MP. Zooms would be a fun problem to solve, as there would need to be a different formula at every mm. I also wonder how complex the PSFs (point spread functions) can be.

    It could probably be done right now with data collected for current lenses and current sensors, a bit like what DxO does but applying this new maths. My 18-200 VR could get pretty sharp :-) ! It looks like we could soon get 12-500mm f/3.5-f/8 lenses the same size as my 18-200!
    Post edited by heartyfisher on
    Moments of Light - D610 D7K S5pro 70-200f4 18-200 150f2.8 12-24 18-70 35-70f2.8 : C&C very welcome!
    Being a photographer is a lot like being a Christian: Some people look at you funny but do not see the amazing beauty all around them - heartyfisher.

  • sevencrossingsevencrossing Posts: 2,800Member
    edited October 2013
    Lightroom can already "correct" distortion and vignetting for a given lens, so yes, it could "correct" other things as well.

    If the lens could scan, in the same way the eye does, and build up an image, then yes, this could be the future.
    Post edited by sevencrossing on
  • MsmotoMsmoto Posts: 5,398Moderator
    It would appear this is doing what our brain does: computationally filling in the blanks, or interpolation. And for many situations this will be a very powerful tool.

    The drawback is that the algorithms on which all the computations are based cannot recover details that the lens's lack of focus has rendered invisible. The real test would be to take a large printed image, maybe 10' x 15', where one can see individual pixels, photograph it with a simple-lens camera, and examine the results pixel by pixel. My guess is there would be some errors, and it is these errors that would make this useful only for limited subjects.
    Msmoto, mod
  • heartyfisherheartyfisher Posts: 3,186Member
    edited October 2013
    I don't think it's "filling in the blanks". It seems to me that it treats each area of the image as made up of several overlapping PSFs for that region, and then uses the PSF to recompose the image by solving an equation.

    E.g., if you know the PSF turns a sharp 0,2,9,0,0 into the blurred 2,4,6,3,2, then a blurred section with values 2,4,8,7,8,3,2 would solve back to 0,2,9,2,9,0,0. I.e., instead of seeing a blurry mess, you would be able to solve the equation back to two points of light. The issue is then finding the PSF for every part of the image, i.e. getting the whole PSF profile of the lens at every region of the image. They also state that each colour has its own PSF, so there is quite a lot of computation needed.
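    The idea above can be sketched numerically. This is a minimal, hypothetical illustration (the scene, PSF, and numbers are made up, not taken from the paper or the post above): if the PSF is known, blurring is a linear system, and noise-free deconvolution is just solving that system.

```python
import numpy as np

# Hypothetical 1D "scene": two points of light on a dark background.
scene = np.array([0.0, 3.0, 0.0, 5.0, 0.0])

# Assumed PSF of a blurry lens, normalized to sum to 1.
psf = np.array([0.25, 0.5, 0.25])

# Blurring is convolution: each point of light is smeared by the PSF.
blurred = np.convolve(scene, psf)  # full convolution, length 5 + 3 - 1 = 7

# With a known PSF, blurring is a linear system y = A @ x,
# where column j of A is the PSF shifted to position j.
n, m = len(scene), len(psf)
A = np.zeros((n + m - 1, n))
for j in range(n):
    A[j:j + m, j] = psf

# Deconvolution = solving the system (least squares; noise-free here).
restored, *_ = np.linalg.lstsq(A, blurred, rcond=None)
print(np.round(restored, 6))  # ≈ [0, 3, 0, 5, 0], the two points recovered
```

    With noise or an imperfectly known PSF the system becomes ill-conditioned, which is why real deconvolution methods (like the paper's) need regularization rather than a plain solve.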
    Post edited by heartyfisher on
    Moments of Light - D610 D7K S5pro 70-200f4 18-200 150f2.8 12-24 18-70 35-70f2.8 : C&C very welcome!
    Being a photographer is a lot like being a Christian: Some people look at you funny but do not see the amazing beauty all around them - heartyfisher.

  • haroldpharoldp Posts: 984Member
    We are already seeing some benefits in current lens designs. Since all corrections compete with one another, modern designers are letting easily predictable and digitally correctable issues, like distortion and some chromatic aberrations, go to pot, and are concentrating on optically correcting the factors that are not easily fixed digitally.

    ..... H
    D810, D3x, 14-24/2.8, 50/1.4D, 24-70/2.8, 24-120/4 VR, 70-200/2.8 VR1, 80-400 G, 200-400/4 VR1, 400/2.8 ED VR G, 105/2 DC, 17-55/2.8.
    Nikon N90s, F100, F, lots of Leica M digital and film stuff.

  • spraynprayspraynpray Posts: 6,545Moderator
    Although I welcome anything that results in top quality at lower prices, I remain unconvinced until I actually see the results from a production sample with my own eyes. It would be great if they are as good, and I don't care where the corrections come from (in camera or post) as long as they are up to snuff.
    Always learning.
  • WestEndBoyWestEndBoy Posts: 1,456Member
    If the corrections are in post, they are not really accessible to the masses, and they are a pain even for professionals and serious amateurs.
  • spraynprayspraynpray Posts: 6,545Moderator
    edited December 2013
    Depends on the extent. If the necessary corrections are small, up to and a little beyond those of a poor conventional lens, then setting up an import preset in Lightroom to correct them is fine; but if the image is incomprehensible before correction, then it has to be done in camera. For the click-and-post social media types it has to be in camera, but who cares? Those images are usually sub-standard anyway. :P
    Post edited by spraynpray on
    Always learning.
  • TaoTeJaredTaoTeJared Posts: 1,306Member
    I saw something similar in one of NASA's videos showing what they did to help Hubble and other telescopes. It is interesting and fun to see, and it is close to the Lytro system, as well as Adobe's own trials at correcting camera shake. I find it all amazing what they are doing with software, and I look forward to seeing what additional shots it can save. That said, you can't replace great optics.
    D800, D300, D50(ir converted), FujiX100, Canon G11, Olympus TG2. Nikon lenses - 24mm 2.8, 35mm 1.8, (5 in all)50mm, 60mm, 85mm 1.8, 105vr, 105 f2.5, 180mm 2.8, 70-200vr1, 24-120vr f4. Tokina 12-24mm, 16-28mm, 28-70mm (angenieux design), 300mm f2.8. Sigma 15mm fisheye. Voigtlander R2 (olive) & R2a, Voigt 35mm 2.5, Zeiss 50mm f/2, Leica 90mm f/4. I know I missed something...
  • WestEndBoyWestEndBoy Posts: 1,456Member
    If you have to do anything in any program, it is not really accessible to the masses. They expect a great picture right off the camera just before they upload it to Instagram, and any camera manufacturer that ignores this is silly. It may still be a bit of a pain for serious amateurs and professionals, but once you establish a workflow, less so, I acknowledge.
  • henrik1963henrik1963 Posts: 567Member
    As I understand it, they start with a very poor lens and end up with a good image. Imagine what could be done if you started with an OK lens. If I understand the possibilities of this tech, you will be able to do a lot with smaller sensors and OK lenses. What if an N1 V5 could compete with a D800?
  • AdeAde Posts: 1,071Member
    They are starting with a simple lens, not necessarily a poor one.

    By definition, a simple lens is a lens consisting of just one element. In the paper, they used one plano-convex lens, a simple lens that is flat on one side and convex on the other side.

    [Image: Types of simple lenses, from Wikimedia]

    All simple lenses, no matter how finely made, will exhibit aberrations. I.e., even the most perfectly ground plano-convex lens in the world will have aberrations. This is a physical limitation, not a manufacturing defect or a flaw in the lens.

    Aberrations, combined with other phenomena such as noise and diffraction, produce blur. A goal of optical design is to minimize blur, using techniques such as:

    - Combining simple lens elements to create more complex compound lenses
    - Shaping lens surfaces in a non-spherical form (i.e., using aspheric lenses)
    - Computationally correcting the blur ("in software") using "deconvolution" algorithms

    All three are in common use today. The paper cited above describes an improvement to the last technique (deconvolution).
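    The deconvolution idea can be sketched in a few lines. This is a toy, assumption-laden illustration, not the paper's method: the paper handles spatially varying PSFs and real noise, while this sketch assumes a single known Gaussian PSF, a noise-free image, and inverts the blur in the Fourier domain with a Wiener-style regularizer.

```python
import numpy as np

# Hypothetical 16x16 "scene": two point lights on a black background.
scene = np.zeros((16, 16))
scene[4, 5] = 1.0
scene[10, 12] = 0.8

# Assumed Gaussian PSF, a stand-in for a simple lens's aberration blur.
yy, xx = np.mgrid[-8:8, -8:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.0**2))
psf /= psf.sum()

# Blur = circular convolution, done as multiplication in the Fourier domain.
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

# Wiener-style deconvolution: divide by H, damped by a small regularizer
# eps so frequencies the PSF nearly wiped out don't blow up.
eps = 1e-12
H_inv = np.conj(H) / (np.abs(H)**2 + eps)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * H_inv))
```

    With no noise the restoration is essentially exact; add noise, and eps must grow, trading sharpness for stability. The paper's contribution is, roughly, doing better than this classical trade-off for the blur of real simple lenses.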