visualopsins
Joined: 05 Mar 2009 Posts: 11059 Location: California
Posted: Wed Feb 03, 2021 12:00 am Post subject:
visualopsins wrote:
Raw is raw, uncorrected sensor data. https://en.wikipedia.org/wiki/Raw_image_format
_________________
☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮ like attracts like! ☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮☮
Cameras: Sony ILCE-7RM2, Spotmatics II, F, and ESII, Nikon P4
Lenses:
M42 Asahi Optical Co., Takumar 1:4 f=35mm, 1:2 f=58mm (Sonnar), 1:2.4 f=58mm (Heliar), 1:2.2 f=55mm (Gaussian), 1:2.8 f=105mm (Model I), 1:2.8/105 (Model II), 1:5.6/200, Tele-Takumar 1:5.6/200, 1:6.3/300, Macro-Takumar 1:4/50, Auto-Takumar 1:2.3 f=35, 1:1.8 f=55mm, 1:2.2 f=55mm, Super-TAKUMAR 1:3.5/28 (fat), 1:2/35 (Fat), 1:1.4/50 (8-element), Super-Multi-Coated Fisheye-TAKUMAR 1:4/17, Super-Multi-Coated TAKUMAR 1:4.5/20, 1:3.5/24, 1:3.5/28, 1:2/35, 1:3.5/35, 1:1.8/85, 1:1.9/85 1:2.8/105, 1:3.5/135, 1:2.5/135 (II), 1:4/150, 1:4/200, 1:4/300, 1:4.5/500, Super-Multi-Coated Macro-TAKUMAR 1:4/50, 1:4/100, Super-Multi-Coated Bellows-TAKUMAR 1:4/100, SMC TAKUMAR 1:1.4/50, 1:1.8/55
M42 Carl Zeiss Jena Flektogon 2.4/35
Contax Carl Zeiss Vario-Sonnar T* 28-70mm F3.5-4.5
Pentax K-mount SMC PENTAX-A ZOOM 1:3.5 35~105mm, SMC PENTAX ZOOM 1:4 45~125mm
Nikon Micro-NIKKOR-P-C Auto 1:3.5 f=55mm, NIKKOR-P Auto 105mm f/2.5 Pre-AI (Sonnar), Micro-NIKKOR 105mm 1:4 AI, NIKKOR AI-S 35-135mm f/3,5-4,5
Tamron SP 17mm f/3.5 (51B), Tamron SP 17mm f/3.5 (151B), SP 500mm f/8 (55BB), SP 70-210mm f/3.5 (19AH)
Vivitar 100mm 1:2.8 MC 1:1 Macro Telephoto (Kiron)
alex ph
Joined: 16 Mar 2013 Posts: 1674
Posted: Wed Feb 03, 2021 12:27 am Post subject:
alex ph wrote:
That was also my understanding. Meanwhile, I read some phrases of the article as implying that RAWs are preprocessed to eliminate CAs. A wrong reading on my part, due to the text's shortcomings?
That said, the Wikipedia article does indicate that some minimal processing of the image happens even in the RAW case. For example: "This RGB raw data still needs to be processed to make an image file, because the raw RGB values correspond to the responses of the sensors, not to a standard color space like sRGB." And even more here: "for example, Nikon has been criticized by astrophotographers for applying noise reduction before saving the raw file."
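For readers wondering what that quoted processing actually involves, here is a minimal sketch in Python of the kind of steps a raw converter applies after demosaicing; the white-balance gains and the camera-to-sRGB matrix below are made-up illustrative values (real ones come from the raw file's metadata and the camera's characterization):

Code:
import numpy as np

# Demosaiced camera RGB, linear, normalized to 0..1 (placeholder data).
cam_rgb = np.random.rand(4, 6, 3)

# Per-camera values normally read from raw metadata; these are invented.
wb_gains = np.array([2.0, 1.0, 1.5])            # R, G, B multipliers
cam_to_srgb = np.array([[ 1.6, -0.5, -0.1],
                        [-0.2,  1.5, -0.3],
                        [ 0.0, -0.6,  1.6]])    # rows roughly sum to 1

balanced = cam_rgb * wb_gains                          # white balance
linear = np.clip(balanced @ cam_to_srgb.T, 0.0, 1.0)   # camera RGB -> linear sRGB

# Standard piecewise sRGB gamma encoding.
srgb = np.where(linear <= 0.0031308,
                12.92 * linear,
                1.055 * linear ** (1 / 2.4) - 0.055)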
kymarto
Joined: 30 Nov 2016 Posts: 409 Location: Portland, OR and Milan, Italy
Posted: Wed Feb 03, 2021 6:44 am Post subject:
kymarto wrote:
alex ph wrote:
Blazer0ne wrote:
Wipe your tears off the sensor and move on...
Stevemark, Kymarto, thank you for your contributions. It's fascinating to follow your competitive exchange, which makes some of the more technical dimensions of using manual optics clearer.
Meanwhile, it's striking to observe that you consider only the material basis of the digital image, taking the sensor as its only source. Sensors evolve, including their pixel density; that's a fact. But camera processors evolve too. Since the image is produced in a sensor-processor workflow, there is every reason to ask how newer processors handle the growing pixel density and manage CAs, including in RAW files.
Here is an example of a good popular-science article aimed precisely at this topic, which starts from the same basic problem of denser-pixel sensors pointed out by Stevemark.
Would you comment on that?
Computational photography is the next big step, I think. Smartphones already use a huge amount of in-camera processing and new ways of combining or computing pixels.
First we had DxO, which created corrective algorithms for lens/body combinations to correct the CA, vignetting, and optical distortion of various lenses used on different bodies, at different apertures, and at different focal lengths for zooms. Now more and more makers are including algorithms that eliminate CA and vignetting and correct distortion for their own lenses on their own bodies. This has allowed makers to considerably cheapen their optics and simplify designs by NOT correcting for deficiencies in the optics, but rather handling them afterwards. If you are familiar with Topaz, they have made some amazing plugins for noise reduction and sharpening without losing detail and without adding (many) artifacts. A discontinued plugin called Piccure Plus uses deconvolution algorithms to recover detail. The main problem up until now is that these are computationally complex and need a lot of time and CPU power. I'm certain we will see more of this being included in larger cameras instead of just phones. Sony and Pentax have pixel-shift features for extremely high resolution and very big files.
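For illustration, a minimal Python sketch of the simplest kind of profile-based correction mentioned above: removing lateral CA by rescaling the red and blue channels against the green one. The scale factors here are invented; real correction profiles are measured per lens, body, aperture and focal length.

Code:
import numpy as np
from scipy.ndimage import zoom

def correct_lateral_ca(img, r_scale=1.0015, b_scale=0.9985):
    """Rescale the R and B channels about the frame centre so their
    magnification matches G (scale factors are illustrative only)."""
    h, w, _ = img.shape
    out = img.copy()
    for ch, s in ((0, r_scale), (2, b_scale)):
        scaled = zoom(img[..., ch], s, order=1)        # bilinear resample
        sh, sw = scaled.shape
        canvas = np.zeros((h, w), dtype=img.dtype)
        # centre-crop (s > 1) or centre-pad (s < 1) back to the original size
        ct, cl = max((sh - h) // 2, 0), max((sw - w) // 2, 0)
        pt, pl = max((h - sh) // 2, 0), max((w - sw) // 2, 0)
        hh, ww = min(h, sh), min(w, sw)
        canvas[pt:pt + hh, pl:pl + ww] = scaled[ct:ct + hh, cl:cl + ww]
        out[..., ch] = canvas
    return out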
However, almost all of these features, AFAIK, require conversion of the raw data into an image file, with the attendant loss of the processing potential of a raw file. In addition, as you can see from the images in the article showing corrected CA, it is possible to remove the color fringing very effectively, but it is not possible to restore the contrast and sharpness lost at the edges where CA occurs.
In terms of vintage lenses, I'd venture to say that nobody is going to do the computations necessary to reduce CA and distortion in vintage lenses, not even for PCs and certainly not for cameras. Each lens is unique, and I don't believe it is possible, or will be any time soon, to write general algorithms that auto-detect CA and distortion in the image file and correct them. Who knows, though. Certainly the possibility exists for pixel-level correction, and we just need some AI program to figure out how to write code that will detect and correct it.
Here is a picture I am just working on, shot with the FJW90mm f1 lens. It is about 10% of the original FF image. I have run it through Topaz DeNoise AI and then Piccure Plus, just to demonstrate how nice it can be with vintage lenses, at least to my eye. There was not much noise, so you cannot really see the power of the noise reduction; it is mostly to demonstrate deconvolution. Yes, there is a small amount of artifacting, but for casual viewing it increases apparent sharpness quite a lot. However, it takes about 10 minutes to do its work, and one has to set the parameters manually, so it is not going to be included in cameras any time soon, I think. If you click on it and enlarge, you are looking at ~200% magnification.
#1
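For anyone curious what deconvolution sharpening of the Piccure Plus kind is doing conceptually, here is a bare-bones Richardson-Lucy sketch in Python. The blur kernel is an assumed stand-in for the lens blur; commercial tools estimate the kernel from the image and handle noise, which this toy version does not.

Code:
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-7):
    """Iteratively deblur a single-channel image with a known blur kernel."""
    estimate = np.full(observed.shape, 0.5)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_flipped, mode='same')
    return np.clip(estimate, 0.0, 1.0)

# Example kernel: a small Gaussian-like blur standing in for the lens PSF.
psf = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]).astype(float)
psf /= psf.sum()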
_________________
Vintage lens aficionado
cbass
Joined: 27 Jul 2019 Posts: 450
Posted: Wed Feb 03, 2021 5:15 pm Post subject:
cbass wrote:
stevemark wrote:
I've never claimed that the sensor is creating these problems!
You may have misunderstood, or - more probably - my English is too poor to clearly express what I know.
I think this is the issue here. Look at the title of your post:
Vintage lenses: APS-C troublesome, FF not
You are saying APS-C is troublesome but FF is not. APS-C/FF are sensor sizes. This implies you are blaming the sensor for these problems.
The rest of what you wrote is well known. The smaller the pixel, the more pixels will be affected by uncorrected color aberrations. However, this is more an argument against high-resolution sensors with small pixels than an argument about APS-C vs FF. Take that 24MP FF sensor to 36MP and you will increase the number of pixels affected by the aberration.
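To put rough numbers on that, a back-of-envelope Python sketch; the 20-micron fringe width is an arbitrary example, not a measurement:

Code:
# How many pixels a fixed-width CA fringe covers at different pixel pitches.
sensors = {
    "24MP FF   ": (6000, 36.0),   # pixels across, sensor width in mm
    "36MP FF   ": (7360, 36.0),
    "24MP APS-C": (6000, 23.5),
}
fringe_um = 20.0                  # assumed fringe width in microns

for name, (px_across, width_mm) in sensors.items():
    pitch_um = width_mm * 1000.0 / px_across
    print(f"{name}: pitch {pitch_um:.1f} um, fringe spans "
          f"{fringe_um / pitch_um:.1f} px")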
The other issue I have is that you picked an exceptionally bad example. Although vintage lenses are generally less well corrected for color than modern lenses, this varies. I have a few lenses from the 1960s-1970s that are as well or even slightly better corrected for color aberrations than my equivalent modern Fuji prime. Amazing and not typical, but it does happen. The lens here is more to blame than the size of the sensor. Most vintage lenses do not perform as poorly as your example, so saying it is "vintage lenses" in general is misleading.
My next issue is framing. Compare 24MP FF vs 24MP APS-C with a full-frame lens. APS-C will have a smaller pixel size. However, the FOV will be different on APS-C than on FF with the same lens. So if you stand in the same place, with APS-C more pixels will be used to render the same objects in the frame, because the view is cropped 1.5x; more pixels will then be affected by color aberrations. Now step back with your APS-C sensor to make the framing the same. You will now have roughly the same number of pixels used to render the same object. Are the color aberrations still as bad, or are they less? In theory you should now have the same number of pixels per object.
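A quick worked example of that framing argument (Python; the 10% subject width is an assumed value):

Code:
# Pixels across a subject that spans 10% of the FF frame width, shot with
# the same lens: same distance on APS-C vs. stepping back to re-frame.
ff_px, apsc_px, crop = 6000, 6000, 1.5
subject_frac_ff = 0.10                                 # assumed framing on FF

px_ff             = ff_px * subject_frac_ff            # 600 px on FF
px_apsc_same_dist = apsc_px * subject_frac_ff * crop   # 900 px (1.5x crop)
px_apsc_reframed  = apsc_px * subject_frac_ff          # 600 px after stepping back
print(px_ff, px_apsc_same_dist, px_apsc_reframed)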
Now go back and read my post about the sensor stack and focal reducer. Knowing that vintage film lenses were designed without taking the sensor stack into account, that almost all digital full-frame cameras have a filter stack that negatively affects performance below f/2.8, and that a focal reducer's optics can mitigate filter-stack issues: which is now the better platform for vintage lenses, APS-C with a focal reducer or FF?
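For readers unfamiliar with focal reducers, the basic arithmetic behind that suggestion looks like this (Python; 0.71x is a common reducer ratio, and the 50mm f/1.4 is just an example lens):

Code:
# A focal reducer compresses the full-frame image circle onto the APS-C sensor.
reducer = 0.71                    # common "speed booster" ratio
crop = 1.5                        # APS-C crop factor
focal_mm, f_stop = 50.0, 1.4      # example vintage full-frame lens

eff_focal = focal_mm * reducer    # effective focal length on the APS-C body
eff_fstop = f_stop * reducer      # roughly one stop faster
ff_equiv  = eff_focal * crop      # full-frame-equivalent field of view
print(f"{focal_mm:.0f}mm f/{f_stop} -> {eff_focal:.1f}mm f/{eff_fstop:.2f} on APS-C,"
      f" ~{ff_equiv:.0f}mm FF-equivalent FOV")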
stevemark wrote:
You may - or may not - know that, by formation, I'm a scientist.
I have been working in a field between chemistry and physics for years before changing to other fields like traditional medicine and photography. There's a solid scientific background here.
I have been staying at a Holiday Inn for the last week and scored in the 98th percentile on my standardized science exam in the 8th grade. Sorry, I couldn't resist. That was in good fun, and hopefully you don't get offended.
alex ph
Joined: 16 Mar 2013 Posts: 1674
Posted: Wed Feb 03, 2021 10:22 pm Post subject:
alex ph wrote:
Thank you, Kymarto, for the extended addendum. It made me reconsider a statement I often see in photo discussions, along the lines of "These issues are easily/automatically corrected in Photoshop/Lightroom". That always made me think a high degree of automation had already been achieved. That is not the case, and human intervention is still required. Following your description, the Topaz software is similar to some elements included in the Nik Efex package.
I presume an AI algorithm will come out soon that is capable of correcting any type of CA based on human action patterns, similar to Google Translate and other AI translators trained on human input, which have lately improved to an impressive degree. That seems more probable to me than an algorithm based on pure reinterpretation/correction of the sensor data.
Cbass, thank you for your input to the discussion, which adds a great deal of good sense! Indeed, the distance to the object, which you add as a variable, and the resulting size of the image on the sensor seem to be an important factor. I had skipped the link in your first post, but I have now read the filter-stack experiment and amended the idea I had about the lens-sensor-processor workflow generating the digital image. Your document shows very well that the real workflow looks like lens-filter-sensor-processor. That is completely reasonable and worth noting. An important broadening of the perspective.
Blazer0ne
Joined: 12 Sep 2018 Posts: 836
Posted: Wed Feb 03, 2021 11:21 pm Post subject:
Blazer0ne wrote:
...