Placing a color filter in front of a color camera can improve its color accuracy. Unfortunately, the filter also blocks some of the light entering the camera, which results in additional noise in the recorded data. This paper provides an initial investigation into optimal filter design in the presence of noise.
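To make the accuracy-versus-noise tradeoff concrete, the sketch below is a minimal illustration (not the paper's formulation; the response matrices and transmittance values are hypothetical): as a filter's transmittance drops, the least-squares color-correction matrix must grow to compensate, amplifying sensor noise along with the signal.

```python
import numpy as np

# Minimal sketch with hypothetical data: a filter with transmittance tau
# scales the camera signal, so the correction matrix M grows as 1/tau,
# and its Frobenius norm is a rough proxy for noise amplification.
rng = np.random.default_rng(0)
cam_rgb = rng.random((24, 3))   # hypothetical camera responses to 24 patches
true_xyz = rng.random((24, 3))  # hypothetical ground-truth XYZ for the patches

for tau in (1.0, 0.5, 0.25):    # decreasing filter transmittance
    M, *_ = np.linalg.lstsq(tau * cam_rgb, true_xyz, rcond=None)
    print(f"tau={tau:.2f}  noise gain ||M||_F = {np.linalg.norm(M):.2f}")
```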
The still imaging portion of FADGI [1] continues to be a living document that has evolved from the theoretical digital imaging principles of a decade ago into adaptations for the realities of day-to-day cultural heritage workflows. While the initial document was somewhat disjointed, the 2016 version is a major improvement and has proven very useful in gauging digital imaging goodness [2]. With coaching, encouragement, and focused attention to detail, many users, even the unschooled, have achieved 3-star compliance, sometimes with high-speed sheet-fed document scanners; 4-star levels are not far behind. This is a testimony to the improved digital imaging literacy for the cultural heritage sector that the authors articulated at the beginning of the last decade. This objective, science-based literacy has certainly evolved and continues to do so. It is fair to say that no other imaging sector has such comprehensive objective imaging guidelines as those of FADGI, especially in the context of high-volume imaging workflows. While initial efforts focused on single-instance device benchmarking, future work will concentrate on performance consistency over the long term; image digitization for cultural heritage will take on a decidedly industrial tone. With practice, we continue to learn and refine the practical application of the FADGI guidelines in the preservation of meaningful information. Like rocks in a farm field, new issues and errors with current practices surface every year that were previously hidden from view. Some are incidental; others need short-term resolution. The goal of this paper is to highlight these issues and propose easier, less costly, and less frustrating ways to improve imaging goodness through the FADGI guidelines.
Driven by government mandates for advanced safety features around the globe, as well as strong consumer demand for improved safety and convenience, the automotive industry is going through an intensifying arms race to equip vehicles with more sensors and greater computational capacity. Among the various sensors, camera and radar stand out as a popular combination offering complementary capabilities. As a result, camera radar fusion (CRF) has been regarded as one of the key technology trends for future advanced driver assistance systems (ADAS). This paper reports a camera radar fusion system developed at TI, powered by a broad set of TI silicon products, including CMOS radar, the TDA SoC processor, FPD-Link II/III SerDes, PMIC, and so forth. The system is developed to showcase not only the algorithmic benefits of fusion but also the competitiveness of TI solutions as a whole in terms of the coverage of capabilities, the balance between performance and energy efficiency, and the rich support of the associated hardware and software ecosystem.
High Dynamic Range (HDR) imaging has recently been applied to video systems, including the next-generation ultra-high definition television (UHDTV) format. This format requires a camera with a dynamic range of over 15 f-stops and an S/N ratio equal to that of HDTV systems. Current UHDTV cameras cannot satisfy these conditions, because the small pixel size of UHDTV image sensors reduces their full-well capacity in comparison with that of HDTV sensors. We propose a four-chip capturing method combining three-chip and single-chip systems. A prism divides the incident light into two rays with intensities in the ratio m:1. Most of the incident light is directed to the three-chip capturing block; the remainder is directed to a single-chip capturing block, avoiding saturation in high-exposure videos. High-quality HDR video can then be obtained by synthesizing the high-quality image from the three-chip system with the low-saturation image from the single-chip system. Herein, we detail this image synthesis method, discuss a method for smoothly matching the spectral characteristics of the two systems, and analyze the differences in modulation transfer function (MTF) response between the three- and single-chip capturing systems using human visual models.
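A minimal sketch of the core synthesis idea, assuming linear sensor data and a simple saturation-weighted blend (the paper's actual pipeline, including spectral matching and MTF handling, is more involved):

```python
import numpy as np

def synthesize_hdr(img_three, img_single, m, sat=0.95, knee=0.05):
    """Blend the three-chip image (m/(m+1) of the light, may saturate) with
    the single-chip image (1/(m+1) of the light, low saturation).
    Assumes linear sensor data normalized to [0, 1]; sat and knee are
    hypothetical blending parameters, not values from the paper."""
    # Scale the single-chip image onto the three-chip radiometric scale.
    img_single_scaled = img_single * m
    # Weight favors the high-quality three-chip pixels, transitioning to the
    # scaled single-chip pixels as the three-chip image nears saturation.
    w = np.clip((sat - img_three) / knee, 0.0, 1.0)
    return w * img_three + (1.0 - w) * img_single_scaled
```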
8K ultra-high-definition television (UHDTV) is a next-generation television system offering a highly realistic viewing experience. High dynamic range (HDR) is a new standard for television systems, defined in Recommendation ITU-R BT.2100. Hybrid log-gamma (HLG) is one of the HDR standards, jointly proposed by NHK (Japan Broadcasting Corporation) and the BBC (British Broadcasting Corporation), and is highly compatible with the conventional standard dynamic range system. Although a "full-featured" 8K camera with HLG has already been developed, most existing 8K cameras do not comply with the HLG standard. In this paper, we describe a method for adapting existing 8K cameras to HLG, thus enhancing their dynamic range. Based on subjective image quality evaluation results, we propose a guideline for choosing the dynamic range setting for each shooting scene, taking into account the noise performance of the particular 8K camera used.
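For reference, the HLG opto-electronic transfer function defined in ITU-R BT.2100 can be written as a short function; this is a direct transcription of the standard's formula, not the adaptation method proposed in the paper:

```python
import numpy as np

# HLG OETF constants as defined in ITU-R BT.2100
A = 0.17883277
B = 1.0 - 4.0 * A              # 0.28466892
C = 0.5 - A * np.log(4.0 * A)  # 0.55991073

def hlg_oetf(E):
    """Map scene-linear signal E in [0, 1] to the non-linear HLG signal E'."""
    E = np.asarray(E, dtype=float)
    log_arg = np.maximum(12.0 * E - B, 1e-12)  # guard the branch not selected
    return np.where(E <= 1.0 / 12.0, np.sqrt(3.0 * E), A * np.log(log_arg) + C)
```

As a sanity check, hlg_oetf(1.0) evaluates to 1.0, the nominal peak signal level.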
Subjective testing has long been used to quantify user preference in the field of imaging. The majority of subjective testing is done on still images, leaving the ever-growing field of video overlooked. With little work in this area, not much is known about the preferential behavior of dynamic auto-control functions such as automatic exposure (AE). In this study, we focus on subjective preferences for two aspects of video auto exposure convergence, convergence time and convergence curve type, with each tested individually. The experiment uses a novel framework for subjective testing in which a collection of videos is captured under simulated changes in light. This method allows much more precise control of the capture device and provides better repeatability than recording real lighting changes. A paired comparison model is employed to conduct the subjective analysis of the videos. In a web application, two videos are played side by side with a slight delay, and the user is asked to pick which video they prefer. Results from the experiments show that users prefer monotonic, gradual transitions in AE, with no sharp or abrupt changes. Users also preferred transition times of 266-500 milliseconds.
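A common choice of paired-comparison model is Bradley-Terry; the sketch below shows its standard iterative (MM) fit. Treating Bradley-Terry as the model used in the paper is an assumption here, since the abstract does not name one:

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry preference scores from a pairwise win-count matrix,
    where wins[i, j] counts how often condition i was preferred over j."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            total_wins = wins[i].sum()
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = total_wins / denom
        p /= p.sum()  # normalize; only ratios of the scores are meaningful
    return p
```

Applied to the win counts from the side-by-side web comparisons, the fitted scores rank the convergence times and curve types by overall preference strength.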
We have developed an 8K "full-resolution," 60-fps, portable camera system using a 133-megapixel complementary metal-oxide-semiconductor (CMOS) single-chip image sensor. The camera head weighs less than 7 kg, whereas a conventional 8K full-resolution three-chip camera weighs over 50 kg. The new camera can use both commercial 35-mm full-frame lenses and super 35-mm lenses via lens adapters. It employs a compact 100-Gbps optical transceiver that transmits the 8K full-resolution video signal to the camera control unit (CCU). The size of the CCU is 3U, comparable to the CCUs used for broadcasting high-definition television (HDTV) cameras. The camera supports the wide-color-gamut and high dynamic range (HDR) video formats standardized in ITU-R BT.2020 and BT.2100, respectively. Moreover, a streaking noise correction circuit is implemented in the CCU. The 8K signal output interface from the CCU is compliant with the ITU-R BT.2077 (U-SDI) standard. In a shooting experiment, we confirmed that the limiting resolution of this camera was more than 4000 TV lines and that the signal-to-noise (S/N) ratio was 57 dB at a sensitivity of F4.0/2000 lux.