Good day my fellow NRF members...hope all is well.
I came across this very intriguing article tonight while surfing one of my favorite web sites and thought it would make for a good conversation amongst us. So here it is.
I wonder if the intention is to include the computational part of the image capture in the camera (slow fps?) or in post, which would be risky as you wouldn't know if you got the image on location. Maybe one of the mathematicians on here would have an idea of what's possible?
Interesting... :-) It sure argues for sensors with even more MP. Zooms would be a fun problem to solve, as there would need to be a different formula at every mm. I also wonder how complex the PSFs (point spread functions) can be.
Could probably be done right now with data collected for current lenses + current sensors. A bit like what DxO does, but applying this new maths. My 18-200 VR could get pretty sharp :-) ! Looks like we could soon get 12-500mm f/3.5-f/8 lenses the same size as my 18-200!
Moments of Light - D610 D7K S5pro 70-200f4 18-200 150f2.8 12-24 18-70 35-70f2.8 : C&C very welcome! Being a photographer is a lot like being a Christian: Some people look at you funny but do not see the amazing beauty all around them - heartyfisher.
It would appear this is doing what our brain does: computationally filling in the blanks, or interpolation. And for many situations this will be a very powerful tool.
The drawback is that the algorithms upon which all the computations are based cannot recover details that the lack of focus has rendered invisible. The real test would be to take a large printed image, maybe 10' x 15', where one can see the pixels, photograph it with a simple-lens camera, and examine the results pixel by pixel. My guess is there would be some errors, and it is these errors that make this useful only for limited subjects.
I don't think it's "filling in the blanks". It seems to me that it treats any area of the image as made up of several PSFs for that region, then uses the PSF to recomposite the image by solving the equation.
E.g., if you know that the PSF gives 2,4,6,3,2 => 0,2,9,0,0, then a section with values 2,4,8,7,8,3,2 would solve to 0,2,9,2,9,0,0. I.e., instead of seeing a blurred mess, you will be able to solve the equation down to two points of light. The issue is then to find out the PSF for every part of the image, i.e., get the whole PSF profile of the lens at every region of the image. They also state that each color has its own PSF, so there is quite a lot of computation needed.
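To make that concrete, here is a rough sketch of the idea in Python/numpy. This is only an illustration of deconvolution as a linear solve, not the paper's actual algorithm, and the PSF and scene numbers are invented: the blur is written as blurred = A @ scene, where each column of A holds the PSF centred on one pixel, and solving that system recovers the points of light.

# Toy 1-D deconvolution: recover points of light given a known PSF.
# Illustrative only -- the kernel and intensities are made-up numbers.
import numpy as np

psf = np.array([1.0, 2.0, 7.0, 2.0, 1.0])
psf /= psf.sum()                    # normalize so total light is preserved

scene = np.zeros(9)
scene[3], scene[5] = 9.0, 9.0       # two points of light

blurred = np.convolve(scene, psf, mode="same")   # what the sensor records

# Each column of A is the PSF centred on one pixel, so blurred = A @ scene.
n = scene.size
A = np.column_stack(
    [np.convolve(np.eye(n)[i], psf, mode="same") for i in range(n)]
)

recovered, *_ = np.linalg.lstsq(A, blurred, rcond=None)
print(np.round(recovered, 3))       # ~[0 0 0 9 0 9 0 0 0]

With a real lens the same solve has to cope with sensor noise, and with a PSF that changes across the frame and per colour channel, which is where the heavy computation comes in.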
We are already seeing some benefits in current lens designs. Since all corrections compete with one another, modern designers are letting easily predictable and digitally correctable issues, like distortion and some chromatic aberrations, go to pot, and are concentrating on optically correcting the factors that are not easily correctable digitally.
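As a sketch of what that kind of digital correction looks like (a generic radial-distortion remap, not any particular manufacturer's profile; the coefficient k1 is an invented value):

# Sketch of digital distortion correction: for each output pixel, sample
# the captured frame where the lens bent that ray, using a simple radial
# model r' = r * (1 + k1 * r^2). In practice k1 comes from a lens profile.
import numpy as np

def undistort(img, k1=-0.05):       # k1 is an invented, illustrative value
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    x = (xx - w / 2) / (w / 2)      # normalized coords, origin at centre
    y = (yy - h / 2) / (h / 2)
    r2 = x * x + y * y
    scale = 1 + k1 * r2
    xs = np.clip(x * scale * (w / 2) + w / 2, 0, w - 1).astype(int)
    ys = np.clip(y * scale * (h / 2) + h / 2, 0, h - 1).astype(int)
    return img[ys, xs]              # nearest-neighbour remap, for brevity

Lateral chromatic aberration can be handled the same way, with a slightly different scale per colour channel, which is why these count as the "easy" corrections.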
..... H
D810, D3x, 14-24/2.8, 50/1.4D, 24-70/2.8, 24-120/4 VR, 70-200/2.8 VR1, 80-400 G, 200-400/4 VR1, 400/2.8 ED VR G, 105/2 DC, 17-55/2.8. Nikon N90s, F100, F, lots of Leica M digital and film stuff.
Although I welcome anything that results in top quality at lower prices, I remain unconvinced until I actually see the results from a production sample with my own eyes. It would be great if they are as good, and I don't care where the corrections come from (in camera or post) as long as they are up to snuff.
Depends on the extent. If the necessary corrections are small, up to and a little beyond those of a poor conventional lens, then setting up an import preset in Lightroom to correct them is fine; but if the image is incomprehensible before correction, then it has to be in camera. For click-and-post social media types it has to be in camera, but who cares? Those images are usually sub-standard anyway. :P
I saw something similar in one of NASA's videos, where they showed what they did to help Hubble and other telescopes. It is interesting and fun to see, and is close to the Lytro system as well as Adobe's own trials at correcting camera shake. I find it amazing what they are doing with software, and I look forward to seeing what additional shots it can save. That said, you can't replace great optics.
If you have to do anything in any program, it is not really accessible to the masses. They expect a great picture right out of the camera, just before they upload it to Instagram, and any camera manufacturer that ignores this is silly. It may still be a bit of a pain for serious amateurs and professionals, but less so once you establish a workflow, I acknowledge.
As I understand it, they start with a very poor lens and end up with a good image. Imagine what can be done if you start with an OK lens. If I understand the possibilities of this tech, you will be able to do a lot with smaller sensors and OK lenses. What if an N1 V5 could compete with a D800?
They are starting with a simple lens, not necessarily a poor one.
By definition, a simple lens is a lens consisting of just one element. In the paper, they used one plano-convex lens, a simple lens that is flat on one side and convex on the other side.
[Image: Types of simple lenses, from Wikimedia]
All simple lenses, no matter how finely made, will exhibit aberrations; even the most perfectly ground plano-convex lens in the world will have them. This is a physical limitation, not a manufacturing defect or a flaw in the lens.
Aberrations, combined with other phenomena such as noise and diffraction, produce blur. A goal of optical design is to minimize blur, using techniques such as:
- Combining simple lens elements to create more complex compound lenses
- Shaping lens surfaces in a non-spherical form (i.e., using aspheric lenses)
- Computationally correcting the blur ("in software") using "deconvolution" algorithms
All three are in common use today. The paper cited above describes an improvement to the last technique (deconvolution).
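For a feel of what a deconvolution algorithm actually does, here is a minimal frequency-domain (Wiener) sketch. This is a standard textbook method, not the specific approach from the paper, and all the numbers are invented. Naive inversion would divide the image spectrum by the PSF's spectrum and blow up wherever that spectrum is small; the Wiener filter damps those frequencies according to an assumed noise level.

# Generic 1-D Wiener deconvolution -- a textbook method, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n = 256

scene = np.zeros(n)
scene[100], scene[140] = 1.0, 0.5            # two points of light

psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
psf /= psf.sum()                             # Gaussian-ish blur, made up

H = np.fft.fft(np.fft.ifftshift(psf))        # PSF spectrum, kernel centred at 0
blurred = np.fft.ifft(np.fft.fft(scene) * H).real
noisy = blurred + 1e-3 * rng.standard_normal(n)   # sensor noise

# Wiener filter: behaves like 1/H, but is damped where |H|^2 is small
# relative to k, an assumed noise-to-signal power ratio.
k = 1e-3
restored = np.fft.ifft(np.fft.fft(noisy) * np.conj(H) / (np.abs(H) ** 2 + k)).real

The trade-off in the filter is also why noise and diffraction are listed alongside aberrations above: the more the PSF suppresses a spatial frequency, the more any noise at that frequency is amplified when you try to undo the blur.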
If the lens could scan in the same way as the eye and build up an image, then yes, this could be the future.