


Vintage lenses: APS-C troublesome, FF not


Posted: Wed Feb 03, 2021 12:00 am

Raw is raw uncorrected sensor data. https://en.wikipedia.org/wiki/Raw_image_format


Posted: Wed Feb 03, 2021 12:27 am

That was also my understanding. Still, I read some passages of the article as saying that RAWs are preprocessed to eliminate CAs. Was that a misreading on my part, drawn from what the text leaves unsaid?

Even so, the Wikipedia article does point out that some minimal processing happens even in the RAW case. For example: "This RGB raw data still needs to be processed to make an image file, because the raw RGB values correspond to the responses of the sensors, not to a standard color space like sRGB." And even more pointedly: "for example, Nikon has been criticized by astrophotographers for applying noise reduction before saving the raw file."
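To make that concrete for myself: even a minimal "development" of a raw file already demosaics the sensor data and maps it into a standard colour space. A tiny sketch with the rawpy library (the file name is just a placeholder, and this is only the simplest possible pipeline, not what any particular camera maker does):

Code:
import rawpy
import imageio

# Minimal raw "development": demosaic the per-photosite responses and
# map them into a standard colour space, as the Wikipedia article describes.
with rawpy.imread("DSC0001.ARW") as raw:
    rgb = raw.postprocess(no_auto_bright=True, output_bps=8)
imageio.imwrite("developed.jpg", rgb)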


Posted: Wed Feb 03, 2021 6:44 am

alex ph wrote:
Blazer0ne wrote:
Quote:
Wipe your tears off the sensor and move on...


Laugh 1

Stevemark, Kymarto, thank you for your contributions. It's genuinely fascinating to follow your back-and-forth, which clarifies some of the more technical aspects of using manual optics.

At the same time, it's striking that you consider only the hardware side of the digital image, treating the sensor as its sole source. Sensors evolve, including their pixel density; that's a fact. But camera processors evolve too. Since the image is produced by a sensor-plus-processor pipeline, there is every reason to ask how newer processors handle the growing pixel density and manage CAs, including in RAW files.

Here is an example of a good popular-science article aimed precisely at this topic, which starts from the same basic problem of denser-pixel sensors that Stevemark pointed out.

Would you comment on that?


Computational photography is the next big step, I think. Smartphones already use a huge amount of in-camera processing and new ways of combining or computing pixels.

First we had DxO, which created corrective algorithms for specific lens/body combinations to fix CA, vignetting, and optical distortion at different apertures and, for zooms, focal lengths. Now more and more makers include algorithms to eliminate CA, straighten lines, and remove vignetting for their own lenses on their own bodies. This has allowed makers to make their optics considerably cheaper and simpler by NOT correcting for deficiencies in the glass, and handling them afterwards instead.

If you are familiar with Topaz, they have made some amazing plugins for noise reduction and sharpening that keep detail without adding (many) artifacts. A discontinued plugin called Piccure Plus used deconvolution algorithms to recover detail. The main problem up to now is that these methods are computationally complex and need a lot of time and CPU power. I'm certain we will see more of this built into larger cameras instead of just phones. Sony and Pentax already have pixel-shift features for extremely high resolution (and very big files).
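To give a concrete (and heavily simplified) picture of what such a profile correction boils down to, here is a minimal sketch in Python with OpenCV: lateral CA is reduced by rescaling the red and blue channels radially relative to green. The scale factors and file name are invented for illustration; real profiles are measured per lens, aperture and focal length, and the makers' pipelines are far more sophisticated than this.

Code:
import cv2

def correct_lateral_ca(img, r_scale=1.0005, b_scale=0.9995):
    """Crudely reduce lateral CA by rescaling the R and B channels
    about the image centre. The factors are illustrative only; real
    lens profiles vary with radius, aperture and focal length."""
    h, w = img.shape[:2]
    centre = (w / 2.0, h / 2.0)
    out = img.copy()
    for ch, s in ((2, r_scale), (0, b_scale)):      # OpenCV stores BGR
        M = cv2.getRotationMatrix2D(centre, 0, s)   # pure scaling about the centre
        out[:, :, ch] = cv2.warpAffine(img[:, :, ch], M, (w, h),
                                       flags=cv2.INTER_LANCZOS4)
    return out

corrected = correct_lateral_ca(cv2.imread("fringed_photo.jpg"))
cv2.imwrite("corrected.jpg", corrected)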

However, almost all of these features, AFAIK, require converting the raw data into an image file, with the attendant loss of the processing latitude of a raw file. In addition, as you can see from the images in the article showing corrected CA, the color fringing can be removed very effectively, but the contrast and sharpness lost at the edges where CA occurs cannot be restored.

In terms of vintage lenses, I'd venture that nobody is going to do the computations necessary to reduce CA and distortion for them, not even for PCs and certainly not in cameras. Each lens is unique, and I don't believe it is, or soon will be, possible to write general algorithms that auto-detect CA and distortion in an image file and correct them. Who knows, though. The possibility of pixel-level correction certainly exists; we just need some AI program to figure out how to write code that detects and corrects it Smile

Here is a picture I am working on right now, shot with the FJW90mm f1 lens. It is about 10% of the original FF frame. I ran it through Topaz DeNoise AI and then Piccure Plus, just to show how nice the results can be with vintage lenses, at least to my eye. There was not much noise, so you cannot really see the power of the noise reduction; it is mostly there to demonstrate deconvolution. Yes, there is a small amount of artifacting, but for casual viewing it increases apparent sharpness quite a lot. However, it takes about 10 minutes to do its work, and the parameters have to be set manually, so it's not going to be built into cameras any time soon, I think. If you click on it and enlarge, you are looking at roughly 200% magnification.

#1
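If anyone wants to experiment with the deconvolution idea without hunting down Piccure Plus, here is a minimal sketch using the Richardson-Lucy routine from scikit-image. This shows only the general principle, not what Piccure Plus did internally: the Gaussian PSF and the file name are assumptions, and real tools estimate the point spread function from the image itself, which is a big part of why they are so slow.

Code:
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

# Load the soft crop and drop any alpha channel.
image = img_as_float(io.imread("soft_crop.png"))[..., :3]

# Assumed point spread function: a small Gaussian blur kernel.
x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
psf = np.outer(g, g)
psf /= psf.sum()

# Deconvolve each colour channel separately (30 iterations).
sharpened = np.dstack([richardson_lucy(image[..., c], psf, 30)
                       for c in range(3)])
io.imsave("sharpened.png", (np.clip(sharpened, 0, 1) * 255).astype(np.uint8))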


Posted: Wed Feb 03, 2021 5:15 pm

stevemark wrote:

I've never claimed that the sensor is creating these problems!
You may have misunderstood, or - more probably - my English is too poor to clearly express what I know.


I think this is the issue here. Look at the title of your post:
Vintage lenses: APS-C troublesome, FF not
You are saying APS-C is troublesome but FF is not. APS-C/FF are sensor sizes. This implies you are blaming the sensor for these problems.

The rest of what you wrote is well known. The smaller the pixel, the more pixels are affected by uncorrected color aberrations. However, that is more an argument against high-resolution sensors with small pixels than it is APS-C vs FF. Take that 24MP FF sensor to 36MP and you will increase the number of pixels affected by the aberration.
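To put rough numbers on that (my own back-of-the-envelope figures, and the 20 micron fringe width is just an assumption):

Code:
# Approximate pixel pitch, and how many pixels a CA fringe smears across.
sensors = {
    "24MP FF":    (36.0, 6000),   # sensor width in mm, horizontal pixels
    "36MP FF":    (36.0, 7360),
    "24MP APS-C": (23.5, 6000),
}
fringe_um = 20.0                  # assumed lateral CA fringe width in microns

for name, (width_mm, px) in sensors.items():
    pitch_um = width_mm * 1000.0 / px
    print(f"{name}: pitch {pitch_um:.1f} um, "
          f"fringe covers {fringe_um / pitch_um:.1f} px")

The same fringe covers roughly 3.3 pixels on the 24MP FF sensor, 4.1 on the 36MP FF sensor and 5.1 on the 24MP APS-C sensor: it is the pixel pitch doing the damage, not the sensor format as such.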

The other issue I have is that you picked an exceptionally bad example. Although vintage lenses are generally less well corrected for color than modern lenses, this varies. I have a few lenses from the 1960s-1970s that are as well or even slightly better corrected for color aberrations than my equivalent modern Fuji prime. Amazing, and not typical, but it does happen. The lens here is more to blame than the size of the sensor. Most vintage lenses do not perform as poorly as your example, so blaming vintage lenses as a class is misleading.

My next issue is framing: 24MP FF vs 24MP APS-C with a full-frame lens. APS-C will have the smaller pixel size. However, the FOV with the same lens will differ between APS-C and FF. So if you stand in the same place, on APS-C more pixels will be used to render the same objects in the frame, because it is cropped 1.5x, and more pixels will then be affected by color aberrations. Now step back with your APS-C camera to match the framing. Now roughly the same number of pixels will be used to render the same object. Are the color aberrations still as bad, or are they less? In theory you should now have the same number of pixels per object.
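The same point in back-of-the-envelope form (the 20% subject width is just an illustrative number):

Code:
# Same lens, same subject, 24MP (6000 px wide) on both bodies.
crop = 1.5
subject_fraction_ff = 0.20        # subject spans 20% of the FF frame width

same_spot_ff   = 6000 * subject_fraction_ff          # 1200 px across the subject
same_spot_apsc = 6000 * subject_fraction_ff * crop   # 1800 px: more pixels hit by any fringe
reframed_apsc  = 6000 * subject_fraction_ff          # step back 1.5x: back to 1200 px

print(same_spot_ff, same_spot_apsc, reframed_apsc)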

Now go back and read my post about the sensor stack and focal reducer. Vintage film lenses were designed without a sensor stack in mind, almost all digital full-frame cameras have a filter stack that negatively affects performance below f/2.8, and a focal reducer's optics can compensate for filter-stack issues. Knowing that, which is now the better platform for vintage lenses: APS-C with a focal reducer, or FF?
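For reference, the usual rough arithmetic for a 0.71x focal reducer on a 1.5x crop body (illustrative values, not measurements):

Code:
# Effective focal length, aperture and FF-equivalent field of view
# for a 50mm f/2 lens on a 0.71x focal reducer and a 1.5x crop sensor.
focal_mm, f_stop, reducer, crop = 50.0, 2.0, 0.71, 1.5

eff_focal = focal_mm * reducer        # 35.5 mm projected onto the sensor
eff_fstop = f_stop * reducer          # ~f/1.4, about one stop brighter
ff_equiv  = eff_focal * crop          # ~53 mm full-frame equivalent FOV
print(eff_focal, round(eff_fstop, 2), round(ff_equiv, 1))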

stevemark wrote:

You may - or may not - know that, by formation, I'm a scientist.

I have been working in a field between chemistry and physics for years before changing to other fields like traditional medicine and photography. There's a solid scientific background here.


I have been staying at a Holiday Inn for the last week, and I scored in the 98th percentile on my standardized science exam in 8th grade. Sorry, I couldn't resist. That was in good fun, and hopefully you don't take offense.


Posted: Wed Feb 03, 2021 10:22 pm

Thank you, Kymarto, for the extended addendum. It made me rethink a statement I often see in photo discussions, along the lines of "These issues are easily/automatically corrected in Photoshop/Lightroom." That always made me assume a high degree of automation had already been reached. That is not the case, and human intervention is still required. Going by your description, the Topaz software is similar to some elements included in the Nik Efex package.

I presume an AI algorithm will soon appear that can correct any type of CA based on patterns of human action, similar to Google's and other human-assisted AI translators, which have lately improved to a startling degree. That seems more likely to me than an algorithm based on pure reinterpretation/correction of sensor data.

Cbass, thank you for your input to the discussion, which adds a great deal of good sense! Indeed, the distance to the object, which you add as a variable, and the resulting size of the image on the sensor seem to be an important factor. I had skipped the link in your first post, but I have now read the filter-stack experiment and revised the idea I had about the lens-sensor-processor workflow generating the digital image. Your document shows very well that the real workflow looks like lens-filter-sensor-processor. That is completely reasonable and worth noting. An important broadening of the perspective.


Posted: Wed Feb 03, 2021 11:21 pm

...