The Modulation Transfer Function (MTF) is an important image quality metric, widely used in the automotive domain. However, although optical quality affects the performance of computer vision in vehicle automation, this metric is unknown for many public datasets. Additionally, wide field-of-view (FOV) cameras have become increasingly popular, particularly for low-speed vehicle automation applications. To investigate image quality in datasets, this paper proposes an adaptation of the Natural Scenes Spatial Frequency Response (NS-SFR) algorithm to suit cameras with a wide field-of-view.
Recent research on digital camera performance evaluation introduced the Natural Scene Spatial Frequency Response (NS-SFR) framework, shown to provide a measure comparable to the ISO 12233 edge SFR (e-SFR) but derived outside laboratory conditions. The framework extracts step edges captured in pictorial natural scenes to evaluate the camera SFR. It comprises two parts. The first utilizes the ISO 12233 slanted-edge algorithm to produce an ‘envelope’ of NS-SFRs. The second estimates the system e-SFR from this NS-SFR data. One drawback of this methodology has been the computation time. The process was not optimized: it first derived NS-SFRs from all suitable step edges and then further validated and statistically treated the results to estimate the e-SFR. This paper presents changes to the framework processes, aiming to optimize the computation time so that it is practical for real-world implementation. The developments include an improved framework structure, a pixel-stretching filter alternative, and the capability to utilize Graphics Processing Unit (GPU) acceleration. In addition, the methodology was updated to utilize the latest e-SFR algorithm implementation. The resulting code has been incorporated into a self-executable user interface prototype, available on GitHub. Future goals include making it an open-access, cloud-based solution to be used by scientists, camera evaluation labs and the general public.
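To make the first stage concrete, the sketch below implements a minimal slanted-edge SFR computation of the kind applied to each extracted edge region. It is a simplified illustration of the ISO 12233 approach, not the framework's optimized code: the centroid-based edge fit, the oversampling factor of 4, and the Hamming window are common choices assumed here, and swapping NumPy for a GPU array library such as CuPy is one plausible route to the GPU acceleration mentioned above.

```python
import numpy as np

def slanted_edge_sfr(roi, oversample=4):
    """Minimal slanted-edge SFR, simplified from the ISO 12233 approach.

    `roi` is a 2-D grayscale array containing one near-vertical step edge.
    """
    roi = np.asarray(roi, dtype=float)
    rows, cols = roi.shape
    # Locate the edge in each row via the centroid of the horizontal gradient.
    grad = np.abs(np.diff(roi, axis=1))
    xs = np.arange(grad.shape[1]) + 0.5
    centers = (grad * xs).sum(axis=1) / grad.sum(axis=1)
    # Fit a straight line to the per-row edge positions (linear edge model).
    slope, intercept = np.polyfit(np.arange(rows), centers, 1)
    # Signed distance of every pixel from the fitted edge, along x.
    yy, xx = np.mgrid[0:rows, 0:cols]
    dist = xx - (slope * yy + intercept)
    # Bin pixel values into an oversampled edge-spread function (ESF).
    idx = np.floor(dist * oversample).astype(int)
    idx -= idx.min()
    sums = np.bincount(idx.ravel(), weights=roi.ravel())
    counts = np.bincount(idx.ravel())
    esf = sums / np.maximum(counts, 1)          # empty bins stay at 0
    # Differentiate to the line-spread function (LSF), then window it.
    lsf = np.gradient(esf) * np.hamming(esf.size)
    # SFR is the normalised magnitude of the LSF's Fourier transform.
    sfr = np.abs(np.fft.rfft(lsf))
    sfr /= sfr[0]
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)  # cycles/pixel
    return freqs, sfr
```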
The edge-based Spatial Frequency Response (e-SFR) method is well established and has been included in the ISO 12233 standard since its first edition in 2000. A new 4th edition of the standard is in preparation, with additions and changes intended to broaden its application and improve reliability. We report results for advanced edge-fitting which, although published previously, was not included in earlier editions of the standard. The application of the e-SFR method to a range of edge-feature angles is enhanced by the inclusion of an angle-based correction and the use of a new test chart. We present examples of the testing completed for a wider range of edge test features than previously addressed by ISO 12233, for near-zero- and -45-degree orientations. Various smoothing windows were compared, including the Hamming and Tukey forms. We also describe a correction for image non-uniformity and the computation of an image sharpness measure (acutance) that will be included in the updated standard.
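The smoothing-window comparison can be reproduced in miniature. The snippet below applies Hamming and Tukey windows to a synthetic line-spread function and summarises how each shapes the high-frequency tail of the resulting SFR; the Gaussian LSF, its width, and the Tukey taper of 0.5 are assumptions for illustration, not the standard's test conditions.

```python
import numpy as np
from scipy.signal.windows import tukey

n = 256
x = np.arange(n) - n / 2
lsf = np.exp(-0.5 * (x / 4.0) ** 2)   # synthetic Gaussian line-spread function

for name, win in (("Hamming", np.hamming(n)),
                  ("Tukey(0.5)", tukey(n, alpha=0.5))):
    sfr = np.abs(np.fft.rfft(lsf * win))
    sfr /= sfr[0]
    # The window mainly changes the high-frequency tail; report the
    # first frequency where the SFR drops below 1% as a crude summary.
    f = np.fft.rfftfreq(n)
    cutoff = f[np.argmax(sfr < 0.01)]
    print(f"{name}: SFR < 1% beyond {cutoff:.3f} cycles/sample")
```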
The edge-based Spatial Frequency Response (e-SFR) is an established measure for camera system quality performance, traditionally measured under laboratory conditions. With the increasing use of Deep Neural Networks (DNNs) in autonomous vision systems, the input signal quality becomes crucial for optimal operation. This paper proposes a method to estimate the system e-SFR from SFRs derived from pictorial natural scenes (NS-SFRs), as previously presented, laying the foundation for adapting the traditional method to a real-time measure. In this study, the NS-SFR input parameter variations are first investigated to establish suitable ranges that give a stable estimate. Using the NS-SFR framework with the established parameter ranges, the system e-SFR, as per ISO 12233, is estimated. Initial validation of results is obtained by implementing the measuring framework with images from a linear and a non-linear camera system. For the linear system, results closely approximate the ISO 12233 e-SFR measurement. Non-linear system measurements exhibit scene-dependent characteristics expected from edge-based methods. The requirements to implement this method in real time for autonomous systems are then discussed.
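The parameter-stability investigation described above amounts to sweeping each input parameter and checking how much the estimated e-SFR moves across the sweep. The sketch below shows that pattern with a hypothetical `estimate_esfr(images, param)` stand-in; the function name, the swept parameter, and the spread metric are all assumptions for illustration, not the study's actual interface.

```python
import numpy as np

def stability_sweep(estimate_esfr, images, param_values):
    """Sweep one input parameter and report the worst per-frequency
    spread of the resulting e-SFR estimates (small = stable range)."""
    curves = np.stack([estimate_esfr(images, p) for p in param_values])
    return curves.std(axis=0).max()

# Dummy estimator standing in for the real framework, for illustration:
freqs = np.linspace(0, 0.5, 64)
dummy = lambda imgs, p: np.exp(-((freqs * 8) ** 2) * (1 + 0.005 * p))
print(stability_sweep(dummy, None, [16, 24, 32, 48]))
```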
The Modulation Transfer Function (MTF) is a well-established measure of camera system performance, commonly employed to characterize optical and image capture systems. It is a measure based on Linear System Theory; thus, its use relies on the assumption that the system is linear and stationary. This is not the case with modern-day camera systems, which incorporate non-linear image signal processing (ISP) to improve the output image. Non-linearities result in variations in camera system performance that depend upon the specific input signals. This paper discusses the development of a novel framework designed to acquire MTFs directly from images of natural complex scenes, thus making the use of traditional test charts with set patterns redundant. The framework is based on the extraction, characterization and classification of edges found within images of natural scenes. Scene-derived performance measures aim to characterize non-linear image processes incorporated in modern cameras more faithfully. Further, they can produce ‘live’ performance measures, acquired directly from camera feeds.
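As a rough illustration of the extraction-and-classification stage, the sketch below scans an image for regions with strong and directionally consistent gradients, which are plausible candidates for isolated straight step edges. The ROI size, thresholds, and structure-tensor coherence test are illustrative assumptions; the actual framework applies a more careful edge characterization than this.

```python
import numpy as np
from scipy import ndimage

def step_edge_candidates(img, roi=32, mag_thresh=0.05, coherence_thresh=0.9):
    """Return top-left corners of ROIs likely to hold a single straight
    step edge: strong gradient magnitude plus one dominant orientation."""
    img = np.asarray(img, dtype=float) / max(img.max(), 1e-9)
    gy, gx = np.gradient(ndimage.gaussian_filter(img, sigma=1.0))
    found = []
    for r in range(0, img.shape[0] - roi, roi):
        for c in range(0, img.shape[1] - roi, roi):
            px, py = gx[r:r+roi, c:c+roi], gy[r:r+roi, c:c+roi]
            if np.hypot(px, py).mean() < mag_thresh:
                continue  # too flat: no usable edge contrast
            # Orientation coherence from the structure tensor:
            # (lam1 - lam2) / (lam1 + lam2), close to 1 for one direction.
            jxx, jyy, jxy = (px*px).sum(), (py*py).sum(), (px*py).sum()
            coherence = np.sqrt((jxx - jyy)**2 + 4*jxy**2) / (jxx + jyy + 1e-9)
            if coherence > coherence_thresh:
                found.append((r, c))
    return found
```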
Objective measurements of imaging system sharpness (Modulation Transfer Function; MTF) are typically derived from test chart images. It is generally assumed that if testing recommendations are followed, test chart sharpness (which we also call “chart quality”) will have little impact on overall measurements. Standards such as ISO 12233 [1] ignore test chart sharpness. Situations where this assumption is not valid are becoming increasingly frequent, in part because extremely high-resolution cameras (over 30 megapixels) are becoming more common and in part because manufacturing test stations, which have limited space, often use charts that are smaller than optimum. Inconsistent MTF measurements caused by limited chart sharpness can be problematic in manufacturing supply chains that require consistency in measurements taken at different locations. We describe how to measure test chart sharpness, fit the measurement to a model, quantify the effects of chart sharpness on camera system MTF measurements, then compensate for these effects using deconvolution: dividing the measured system MTF by a model of the chart MTF projected onto the image sensor. We use the results of measurements with and without MTF compensation to develop a set of empirical guidelines to determine when chart quality is
• good enough that no compensation is needed, and
• too low to be reliably compensated.
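In the frequency domain this compensation reduces to a pointwise division, MTF_system(f) = MTF_measured(f) / MTF_chart-on-sensor(f). The sketch below assumes a Gaussian chart-MTF model parameterised by its MTF50 and a fixed divisor floor; the paper fits its own model to chart measurements, so the model form and the floor value are assumptions here, not the published method.

```python
import numpy as np

def compensate_chart_mtf(freqs, mtf_measured, chart_mtf50, floor=0.2):
    """Divide the measured system MTF by a modelled chart MTF.

    `chart_mtf50` is the chart's half-contrast frequency projected on the
    sensor, in the same units as `freqs` (e.g. cycles/pixel).
    """
    # Gaussian MTF model: exp(-2 * pi^2 * sigma^2 * f^2), with sigma
    # chosen so the model passes through 0.5 at chart_mtf50.
    sigma = np.sqrt(np.log(2) / 2) / (np.pi * chart_mtf50)
    mtf_chart = np.exp(-2 * (np.pi * sigma * freqs) ** 2)
    # Clamp the divisor: where the chart response is very low, the
    # division amplifies noise and compensation is no longer reliable.
    return mtf_measured / np.maximum(mtf_chart, floor)
```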