Recent research on digital camera performance evaluation introduced the Natural Scene Spatial Frequency Response (NS-SFR) framework, shown to provide a comparable measure to the ISO 12233 edge SFR (e-SFR) but derived outside laboratory conditions. The framework extracts step-edges captured from pictorial natural scenes to evaluate the camera SFR. It has two parts: the first utilizes the ISO 12233 slanted-edge algorithm to produce an ‘envelope’ of NS-SFRs; the second estimates the system e-SFR from this NS-SFR data. One drawback of the proposed methodology has been its computation time. The process was not optimized, as it first derived NS-SFRs from all suitable step-edges and then further validated and statistically treated the results to estimate the e-SFR. This paper presents changes to the framework processes, aiming to optimize the computation time so that it is practical for real-world implementation. The developments include an improved framework structure, a pixel-stretching filter alternative, and the capability to utilize Graphics Processing Unit (GPU) acceleration. In addition, the methodology was updated to utilize the latest e-SFR algorithm implementation. The resulting code has been incorporated into a self-executable user interface prototype, available on GitHub. Future goals include making it an open-access, cloud-based solution to be used by scientists, camera evaluation labs and the general public.
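The abstract does not detail the GPU-acceleration capability; as an illustration only, the sketch below shows one common pattern for giving NumPy-based SFR code an optional CuPy (CUDA) path, batching the FFT stage of many NS-SFR computations on the GPU when a device is available. The function name and batching scheme are hypothetical, not the framework's actual implementation.

```python
import numpy as np

try:
    import cupy as cp          # optional GPU backend (requires a CUDA device)
    xp = cp
except ImportError:
    xp = np                    # fall back to NumPy on the CPU

def batched_lsf_to_sfr(lsf_batch):
    """Illustrative only: compute normalized SFRs for a batch of line-spread
    functions, on the GPU when CuPy is available, otherwise on the CPU."""
    lsf = xp.asarray(lsf_batch, dtype=xp.float32)    # shape: (n_edges, n_samples)
    spectra = xp.abs(xp.fft.rfft(lsf, axis=-1))      # DFT magnitude per edge
    sfr = spectra / spectra[..., :1]                 # normalize each SFR to DC
    return xp.asnumpy(sfr) if xp is not np else sfr  # always return a NumPy array
```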
The edge-based Spatial Frequency Response (e-SFR) method was first developed for evaluating camera image resolution and image sharpness. The method was described in the first version of the ISO 12233 standard. Since then, the method has been applied in a wide range of applications, including medical, security, archiving, and document processing. However, with this broad application, several of the assumptions of the method are no longer closely followed. This has led to several improvements aimed at broadening its application, for example for lenses with spatial distortion. We can think of the evaluation of image quality parameters as an estimation problem, based on the gathered data, often from digital images. In this paper, we address the mitigation of measurement error that is introduced when the analysis is applied to low-exposure (and therefore, noisy) applications and those with small analysis regions. We consider the origins of both bias and variation in the resulting SFR measurement and present practical ways to reduce them. We describe the screening of outlier edge-location values as a method for improved edge detection. This, in turn, is related to a reduction in negative bias in the resulting SFR.
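The abstract names outlier screening of edge-location values but not the procedure itself; the sketch below illustrates one plausible form under the stated idea: per-row edge-location estimates are fitted to a line, rows with large residuals are discarded, and the line is refitted. Function and parameter names are illustrative, not the paper's reference code.

```python
import numpy as np

def screened_edge_fit(edge_locations, k=3.0):
    """Fit a line to per-row edge-location estimates, screening outlier rows.

    edge_locations : one sub-pixel edge position per image row
    k              : residual threshold in units of the robust spread (MAD)
    Returns slope, intercept of the refitted line and a boolean keep-mask.
    """
    y = np.asarray(edge_locations, dtype=float)
    x = np.arange(len(y))

    # Initial least-squares line fit through all edge locations
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)

    # Screen rows whose residual exceeds k times a MAD-based spread estimate
    mad = np.median(np.abs(residuals - np.median(residuals)))
    keep = np.abs(residuals) <= k * max(mad * 1.4826, np.finfo(float).eps)

    # Refit using only the retained rows
    slope, intercept = np.polyfit(x[keep], y[keep], 1)
    return slope, intercept, keep
```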
The edge-based Spatial Frequency Response (e-SFR) method is well established and has been included in the ISO 12233 standard since the first version in 2000. A new 4th edition of the standard is in preparation, with additions and changes intended to broaden its application and improve reliability. We report on results for advanced edge-fitting which, although reported before, was not previously included in the standard. The application of the e-SFR method to a range of edge-feature angles is enhanced by the inclusion of an angle-based correction and the use of a new test chart. We present examples of the testing completed for a wider range of edge test features than previously addressed by ISO 12233, for near-zero- and near-45-degree orientations. Various smoothing windows were compared, including the Hamming and Tukey forms. We also describe a correction for image non-uniformity, and the computation of an image sharpness measure (acutance) that will be included in the updated standard.
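As an illustration of the window comparison mentioned above, the sketch below applies either a Hamming or a Tukey window to a line-spread function (LSF) before the discrete Fourier transform, which is the stage of the slanted-edge computation where such smoothing windows act. It assumes the super-sampled LSF has already been derived; the function name is illustrative and this is not the standard's reference code.

```python
import numpy as np
from scipy.signal import windows

def lsf_to_sfr(lsf, window="tukey", alpha=1.0):
    """Apply a smoothing window to an LSF and return the normalized SFR.

    lsf    : line-spread function (derivative of the super-sampled edge profile)
    window : "hamming" or "tukey"
    alpha  : Tukey taper fraction (ignored for the Hamming window)
    """
    n = len(lsf)
    w = windows.hamming(n) if window == "hamming" else windows.tukey(n, alpha)

    # Window the LSF, take the DFT magnitude, and normalize to the DC term
    spectrum = np.abs(np.fft.rfft(np.asarray(lsf, dtype=float) * w))
    sfr = spectrum / spectrum[0]
    freqs = np.fft.rfftfreq(n)          # cycles per (super-sampled) sample
    return freqs, sfr
```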
The edge-based Spatial Frequency Response (e-SFR) is an established measure for camera system quality performance, traditionally measured under laboratory conditions. With the increasing use of Deep Neural Networks (DNNs) in autonomous vision systems, the input signal quality becomes crucial for optimal operation. This paper proposes a method to estimate the system e-SFR from SFRs derived from pictorial natural scenes (NS-SFRs), as previously presented, laying the foundation for adapting the traditional method to a real-time measure. In this study, the NS-SFR input parameter variations are first investigated to establish suitable ranges that give a stable estimate. Using the NS-SFR framework with the established parameter ranges, the system e-SFR, as per ISO 12233, is estimated. Initial validation of results is obtained by implementing the measuring framework with images from a linear and a non-linear camera system. For the linear system, results closely approximate the ISO 12233 e-SFR measurement. Non-linear system measurements exhibit scene-dependent characteristics expected from edge-based methods. The requirements to implement this method in real time for autonomous systems are then discussed.
The ISO 12233 standard for digital camera resolution includes two methods for the evaluation of camera performance in terms of a Spatial Frequency Response (SFR). In many cases, the measured SFR can be taken as a measurement of the camera-system Modulation Transfer Function (MTF), used in optical design. In this paper, we investigate how the ISO 12233 method for slanted-edge analysis can be applied to such optical designs. Recent improvements to the ISO method aid in computing both sagittal and tangential MTF, as commonly specified for optical systems. From computed optical simulations of actual designs, we apply the slanted-edge analysis over the image field. The simulations include the influence of optical aberrations, which can present challenges to the ISO methods. We find, however, that when the slanted-edge methods are applied with care, consistent results can be obtained.
As digital imaging becomes more widespread in a variety of industries, new standards for measuring resolution and sharpness are being developed. Some differ significantly from ISO 12233:2014 Modulation Transfer Function (MTF) measurements. We focus on the ISO 16505 standard for automotive Camera Monitor Systems, which uses high-contrast hyperbolic wedges instead of slanted-edges to measure system resolution, defined as MTF10 (the spatial frequency where MTF = 10% of its low-frequency value). Wedges were chosen based on the claim that slanted-edges are sensitive to signal processing. While this is indeed the case, we have found that wedges are also highly sensitive and present a number of measurement challenges: sub-pixel location variations cause unavoidable inconsistencies; wedge saturation makes results more stable at the expense of accuracy; and MTF10 can be boosted by sharpening, noise, and other artifacts, and may never be reached, so that poor-quality images can exhibit high MTF10. We show that the onset of aliasing is a more stable performance indicator, and we discuss methods of getting the most accurate results from wedges, as well as misunderstandings about low-contrast slanted-edges, which correlate better with system performance and are more representative of objects of interest in automotive and security imaging.
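For concreteness, MTF10 as defined above can be read off a measured MTF curve by locating the first crossing of 10% of the low-frequency value. The sketch below (a hypothetical NumPy-only helper, not the ISO 16505 procedure) interpolates that crossing and returns None when the level is never reached, one of the failure modes noted above.

```python
import numpy as np

def mtf10(freqs, mtf, low_freq_samples=3):
    """Return the spatial frequency where the MTF first falls to 10% of its
    low-frequency value, or None if that level is never reached."""
    freqs = np.asarray(freqs, dtype=float)
    mtf = np.asarray(mtf, dtype=float)
    threshold = 0.10 * mtf[:low_freq_samples].mean()

    below = np.nonzero(mtf <= threshold)[0]
    if len(below) == 0 or below[0] == 0:
        return None                      # MTF10 never reached (or ill-defined)

    i = below[0]
    # Linear interpolation between the two samples bracketing the crossing
    f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
    return f0 + (threshold - m0) * (f1 - f0) / (m1 - m0)
```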
This paper proposes several adjustments to the ISO 12233 slanted-edge algorithm for estimating camera MTF. First, the Ridler-Calvard binary image segmentation method is used to find the edge line. Second, total least squares, rather than ordinary least squares, is used to compute the line parameters. Finally, the pixel values are projected in the reverse direction, from the 1D array to the 2D image, rather than from the 2D image to the 1D array. Together, these changes yield an algorithm that exhibits significantly less variation than existing techniques when applied to real images. In particular, the proposed algorithm is largely invariant to the rotation angle of the edge as well as to the size of the image crop.
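The abstract names the techniques but not the code; the sketch below illustrates the first two adjustments under common interpretations: an iterative Ridler-Calvard (isodata) threshold to segment the edge region, followed by a total-least-squares line fit obtained from the SVD of the centred boundary points. It is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def ridler_calvard_threshold(img, tol=0.5, max_iter=100):
    """Iterative Ridler-Calvard (isodata) threshold on image grey levels."""
    img = np.asarray(img, dtype=float)
    t = img.mean()
    for _ in range(max_iter):
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())   # midpoint of the class means
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t_new

def tls_edge_line(img):
    """Segment the edge with Ridler-Calvard, then fit its boundary pixels
    by total least squares (first principal axis of the centred points)."""
    mask = np.asarray(img, dtype=float) > ridler_calvard_threshold(img)

    # Boundary pixels: foreground pixels with at least one background 4-neighbour
    has_bg_neighbour = np.zeros_like(mask)
    has_bg_neighbour[1:, :] |= ~mask[:-1, :]
    has_bg_neighbour[:-1, :] |= ~mask[1:, :]
    has_bg_neighbour[:, 1:] |= ~mask[:, :-1]
    has_bg_neighbour[:, :-1] |= ~mask[:, 1:]
    ys, xs = np.nonzero(mask & has_bg_neighbour)

    # Total least squares: minimise perpendicular distances via the SVD
    pts = np.column_stack([xs, ys]).astype(float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    direction, normal = vt[0], vt[1]     # unit vectors along and across the edge
    return centroid, direction, normal
```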