Computer vision algorithms are often difficult to implement on embedded hardware due to integration time and system complexity. Many commercial systems prevent low-level image processing customization and hardware optimization because their algorithms and architectures are largely proprietary, hindering research development by the broader community. This work presents DevCAM, an open-source multi-camera environment targeted at hardware-software research for vision systems, specifically co-located multi-sensor processor systems. The objective is to facilitate the integration of multiple latest-generation sensors, abstract away the difficulties of interfacing with high-bandwidth sensors, enable user-defined hybrid processing architectures spanning FPGA, CPU, and GPU, and unite multi-module systems with networking and high-speed storage. The system architecture can accommodate up to six electronically synchronized 4-lane MIPI sensor modules, alongside support for an RTK-GPS receiver and a 9-axis IMU. We demonstrate a number of available configurations for stereo, quadnocular, 360, and light-field image acquisition tasks. The development framework includes mechanical, PCB, FPGA, and software components for rapid integration into any system. System capabilities are demonstrated with a focus on opening new research frontiers such as distributed edge processing, inter-system synchronization, sensor synchronization, and hybrid hardware acceleration of image processing tasks.
This paper proposes a novel method to correct saturated pixels in images. The method is based on the YCbCr color space and corrects the chrominance and the luminance of saturated pixels separately. The saturated image is processed scan line by scan line, which is well suited to hardware implementation while maintaining good correction quality. Joint simulation with MATLAB and ModelSim shows that the hardware algorithm achieves fast correction with modest resource usage. The algorithm is implemented in hardware on the Altera DE4 development platform. The results show that high-speed image and video processing on an FPGA is feasible and efficient, and that high-definition video can be processed frame by frame. The method has broad prospects for practical application.
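As an illustration of the scan-line approach, the sketch below (Python, our construction; the paper's exact correction rules, thresholds, and the separate luminance step are not reproduced) detects clipped pixels on one RGB scan line, converts to YCbCr, and repairs the chrominance by interpolating Cb/Cr from the nearest unsaturated pixels on the same line.

```python
import numpy as np

# Full-range BT.601 RGB -> YCbCr conversion matrix (no offsets, so it is
# invertible by a plain matrix inverse).
RGB2YCBCR = np.array([[ 0.299,  0.587,  0.114],
                      [-0.169, -0.331,  0.500],
                      [ 0.500, -0.419, -0.081]])

def correct_scanline(line, sat_thresh=250):
    """Correct saturated pixels in one RGB scan line (W x 3, values in [0, 255]).

    Hypothetical sketch: the chrominance of saturated pixels is replaced by
    linear interpolation from the nearest unsaturated neighbours on the same
    scan line; the paper's luminance correction is omitted here.
    """
    saturated = (line >= sat_thresh).any(axis=1)
    if saturated.all() or not saturated.any():
        return line                          # nothing usable / nothing to do
    ycc = line @ RGB2YCBCR.T                 # per-pixel Y, Cb, Cr
    cols = np.arange(len(line))
    good = cols[~saturated]
    for c in (1, 2):                         # Cb and Cr channels
        ycc[saturated, c] = np.interp(cols[saturated], good, ycc[good, c])
    rgb = ycc @ np.linalg.inv(RGB2YCBCR).T   # back to RGB
    return np.clip(rgb, 0, 255)
```

Processing one scan line at a time in this fashion keeps the working set small, which is what makes the method amenable to a streaming FPGA pipeline.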
A system-on-chip (SoC) platform having a dual-core microprocessor (μP) and a field-programmable gate array (FPGA), as well as interfaces for sensors and networking, is a promising architecture for edge computing applications in computer vision. In this paper, we consider a case study involving the low-cost Zynq-7000 SoC, which is used to implement a three-stage image signal processor (ISP) for a nonlinear CMOS image sensor (CIS) and to interface the imaging system to a network. While the high-definition imaging system operates efficiently in hard real time by exploiting an FPGA implementation, it sends information over the network only on demand by exploiting a Linux-based μP implementation. In the case study, the Zynq-7000 SoC is configured in a novel way. In particular, to guarantee hard real-time performance, the FPGA is always the master, communicating with the μP through interrupt service routines and direct memory access (DMA) channels. Results include a validation of the overall system, using a simulated CIS, and an analysis of the system complexity. On this low-cost SoC, resources remain available for significant additional complexity, so that a computer vision application can be integrated with the nonlinear CMOS imaging system in the future.
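The FPGA-as-master handoff can be pictured in software terms. The sketch below (Python, purely our analogy; the actual design uses hardware interrupts and DMA channels on the Zynq-7000, not threads and queues) models the FPGA as a producer that never blocks on the consumer, and the Linux-side μP as a consumer that forwards frames over the network only when a request is pending.

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=4)       # stands in for a DMA buffer ring

def fpga_side(frames):
    """Hard real-time producer analogue: never stalls on the consumer."""
    for frame in frames:
        try:
            frame_queue.put_nowait(frame)  # 'DMA write' followed by 'interrupt'
        except queue.Full:
            pass                           # drop a frame rather than block
    frame_queue.put(None)                  # shutdown sentinel

def mup_side(request_pending):
    """Linux-side consumer analogue: sends over the network on demand only."""
    while (frame := frame_queue.get()) is not None:
        if request_pending():
            print(f"sent frame of {len(frame)} bytes")  # stand-in for a socket send

# Usage: a request is always pending, three frames are produced.
t = threading.Thread(target=mup_side, args=(lambda: True,))
t.start()
fpga_side([bytes(64)] * 3)
t.join()
```

The essential design choice is the asymmetry: the producer side has a hard deadline and must never wait, while the consumer side is best-effort, which is why the FPGA, not the μP, is the master.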
Pixel saturation is very common in digital color imaging; from an optical standpoint, it occurs when the CCD or CMOS sensor wells reach their maximum charge. It is important to relate an image to the light of the scene from which it was captured. This paper presents an FPGA hardware implementation of an algorithm that estimates saturated pixels in RAW images based on the principle of Bayesian estimation. To improve the accuracy of the Bayesian estimation, morphological dilation and connected-component labeling are used to partition the saturated regions. Each region may exhibit one of three kinds of color saturation. The Bayesian algorithm based on Xu's work is used to handle single-channel saturation. We improve the two-channel saturation algorithm by using the unsaturated channel to predict the saturated ones, and we propose correcting three-channel saturation using surrounding pixels. Experiments show that the proposed hardware implementation is more effective at correcting two- and three-channel color saturation.
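The region-partitioning step can be sketched as follows (Python with SciPy, our construction; the threshold, the structuring element, and the assumption of a demosaiced three-channel frame are hypothetical, and the Bayesian correctors themselves are omitted). A dilated saturation mask is split into connected components, and each component is classified by how many color channels are clipped, so it can be routed to the one-, two-, or three-channel corrector.

```python
import numpy as np
from scipy import ndimage

def segment_saturated_regions(rgb, sat_level=4095):
    """Partition saturated pixels into regions and classify each by the
    number of clipped channels (1, 2, or 3).

    rgb: (H, W, 3) array, e.g. a 12-bit frame, hence the default level.
    Returns {label: (boolean_mask, n_clipped_channels)}.
    """
    clipped = rgb >= sat_level                   # per-channel mask (H, W, 3)
    mask = clipped.any(axis=2)                   # any-channel saturation
    # Dilation merges fragmented saturated pixels into coherent regions.
    mask = ndimage.binary_dilation(mask, structure=np.ones((3, 3), bool))
    labels, n = ndimage.label(mask)              # connected-component labeling
    regions = {}
    for k in range(1, n + 1):
        region = labels == k
        # Count how many color channels are clipped anywhere in this region.
        n_channels = int(clipped[region].any(axis=0).sum())
        regions[k] = (region, n_channels)        # route to 1/2/3-channel corrector
    return regions
```

Classifying regions up front is what lets the three cases be handled by specialized correctors: one clipped channel can be predicted from two good ones, two clipped channels from the single remaining one, and fully clipped regions only from their surroundings.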