This paper presents a new PDAF correction method that improves binning-mode image quality in the world's first 0.8um 108-megapixel CMOS image sensor with Samsung Nonacell and Super PD technology. In the conventional correction method, PDAF pixels were corrected by bad-pixel correction (BPC), which refers to the adjacent non-PDAF pixels. We demonstrate a new method, named Dilution mode, in which PDAF pixels output their own seed values within the 3x3 same-color-channel pixel group to the video image and deliver AF information through separate embedded data. As a result, PDAF artifacts such as false color, broken lines, and dot artifacts in high-frequency patterns are dramatically reduced, and overall image detail is improved in Dilution mode.
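To make the baseline concrete, the conventional BPC approach the abstract contrasts against can be sketched as replacing each flagged PDAF pixel with the mean of its same-color neighbors in the mosaic. This is an illustrative NumPy sketch only; the function name, the PDAF mask representation, and the stride-2 same-color neighbor pattern are assumptions for a plain Bayer-like layout, not Samsung's actual pipeline.

```python
import numpy as np

def bpc_correct(raw, pdaf_mask, step=2):
    """Illustrative conventional BPC: replace each flagged PDAF pixel
    with the mean of its valid same-color neighbors, assumed to sit
    `step` pixels away (step=2 for a Bayer-style mosaic).
    This is a sketch, not the sensor's actual correction logic."""
    out = raw.astype(float).copy()
    h, w = raw.shape
    for y, x in zip(*np.nonzero(pdaf_mask)):
        vals = []
        for dy, dx in [(-step, 0), (step, 0), (0, -step), (0, step)]:
            ny, nx = y + dy, x + dx
            # Only borrow from in-bounds neighbors that are not PDAF pixels.
            if 0 <= ny < h and 0 <= nx < w and not pdaf_mask[ny, nx]:
                vals.append(float(raw[ny, nx]))
        if vals:
            out[y, x] = sum(vals) / len(vals)
    return out

# Tiny demo: one hypothetical PDAF pixel in a 5x5 raw patch.
demo = np.arange(25, dtype=float).reshape(5, 5)
demo[2, 2] = 999.0                      # PDAF pixel value to be overwritten
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
corrected = bpc_correct(demo, mask)     # center becomes mean of 2, 22, 10, 14
```

Because BPC synthesizes the pixel from neighbors, it can break fine high-frequency detail, which is exactly the artifact class (false color, broken lines, dots) that Dilution mode avoids by keeping the pixel's own seed value in the video path.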
CMOS image sensors play a vital role in the exponentially growing field of Artificial Intelligence (AI). Applications such as image classification, object detection, and tracking are just some of the many problems now solved with the help of AI, and specifically deep learning. In this work, we target image classification to discern between six categories of fruits: fresh/rotten apples, fresh/rotten oranges, and fresh/rotten bananas. Using images captured from high-speed CMOS sensors along with lightweight CNN architectures, we show results on various edge platforms. Specifically, we show results using ON Semiconductor's global-shutter-based 12 MP, 90-frames-per-second image sensor (XGS-12) and ON Semiconductor's 13 MP AR1335 image sensor feeding into MobileNetV2, implemented on NVIDIA Jetson platforms. In addition to the data captured with these sensors, we utilize an open-source fruits dataset to increase the number of training images. For image classification, we train our model on approximately 30,000 RGB images from the six fruit categories. The model achieves an accuracy of 97% on edge platforms using ON Semiconductor's 13 MP camera with the AR1335 sensor. In addition to the image classification model, work is in progress to improve the accuracy of object detection using SSD and SSDLite with MobileNetV2 as the feature extractor. In this paper, we show preliminary results on the object detection model for the same six fruit categories.