Task requirements for image acquisition systems vary substantially between applications: requirements for consumer photography may be irrelevant to, or may even conflict with, requirements for automotive, medical, and other applications. The imaging industry's remarkable ability to create lens and sensor designs for specific applications has been demonstrated in the mobile computing market. We expect that the industry can innovate further once the requirements for other markets are specified. This paper describes an approach to developing imaging system designs that meet the task requirements of autonomous vehicle applications. Because it is impractical to build a large number of image acquisition systems and evaluate each of them with real driving data, we assembled a simulation environment that provides guidance at an early design stage. The open-source, freely available software (isetcam, iset3d, and isetauto) uses ray tracing to compute quantitatively how scene radiance propagates through a multi-element lens to form the sensor irradiance. The software then transforms the irradiance into sensor pixel responses, accounting for a large number of sensor parameters. Users can then apply different image processing pipelines to generate images for training and testing the convolutional networks used in autonomous driving. We use the simulation environment to assess performance for different cameras and networks.
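To make the stage ordering concrete, the following is a minimal sketch of a radiance-to-image pipeline of the kind described above. It is not the isetcam/iset3d API; the function names, the ideal thin-lens camera equation, and the simple noise and ISP models are illustrative assumptions only.

```python
# Hypothetical sketch of the simulation stages (scene radiance -> lens ->
# sensor irradiance -> pixel responses -> processed image). NOT the
# isetcam/iset3d API; all names and the simplified physics are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def scene_radiance(h=128, w=192):
    """Stand-in for a ray-traced driving scene: a photon radiance map."""
    y, x = np.mgrid[0:h, 0:w]
    return 1e18 * (0.3 + 0.7 * np.exp(-((x - w/2)**2 + (y - h/2)**2) / (2 * 30.0**2)))

def lens_to_irradiance(radiance, f_number=2.0, transmittance=0.9):
    """Ideal thin-lens camera equation: scene radiance -> sensor-plane irradiance."""
    return radiance * transmittance * np.pi / (4 * f_number**2)

def sensor_response(irradiance, pixel_area=(1.4e-6)**2, exposure=0.01,
                    qe=0.6, read_noise_e=2.0, full_well=10000, bits=10):
    """Photon-to-electron conversion with shot noise, read noise, and quantization."""
    electrons = rng.poisson(irradiance * pixel_area * exposure * qe).astype(float)
    electrons = np.clip(electrons + rng.normal(0, read_noise_e, electrons.shape), 0, full_well)
    return np.round(electrons / full_well * (2**bits - 1)).astype(np.uint16)

def simple_isp(raw, bits=10, gamma=2.2):
    """Linear scaling plus gamma: a placeholder for a real image processing pipeline."""
    return (raw / (2**bits - 1)) ** (1 / gamma)

image = simple_isp(sensor_response(lens_to_irradiance(scene_radiance())))
print(image.shape, image.min(), image.max())  # the image would then feed a driving network
```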
In this paper, we propose a new method for accelerating stereo matching in autonomous vehicles using an upright pinhole camera model. The method is motivated by the observation that stereo video is more constrained when the camera is fixed to a vehicle driving on the road. Assuming that the imaging plane is perpendicular to the road and that the road is approximately flat, we can derive the current disparity from the previous disparity and the optical flow. The prediction is very efficient, requiring only two multiplications per pixel. In practice this model may not hold strictly, but it can still be used to initialize the disparity. Results on real datasets demonstrate that our method reduces the disparity search range from 128 to 61 with only a slight decrease in accuracy.
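One plausible reading of the upright, flat-road geometry is that the disparity of a road point is proportional to its image row measured from the horizon, d = (B/h)(y - y_h). Under that assumption, warping the previous disparity along the flow and rescaling by the row ratio yields a two-multiplication-per-pixel prediction once the per-row reciprocal is precomputed. The sketch below is an assumption-laden illustration, not the paper's exact derivation; the function name, baseline-to-height ratio, and horizon row are hypothetical.

```python
# Hypothetical disparity prediction from the previous disparity and flow,
# assuming an upright camera over a flat road, so d = (B/h) * (y - y_h).
# Then d_t(x+u, y+v) = d_{t-1}(x, y) * (y + v - y_h) / (y - y_h): two
# multiplications per pixel when the per-row reciprocal 1/(y - y_h) is precomputed.
import numpy as np

def predict_disparity(d_prev, flow, y_horizon):
    """Warp the previous disparity along the flow and rescale by the row ratio."""
    h, w = d_prev.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    recip = 1.0 / np.maximum(ys - y_horizon, 1e-3)            # precomputed per-row reciprocal
    x_new = np.clip(np.round(xs + flow[..., 0]), 0, w - 1).astype(int)
    y_new = np.clip(np.round(ys + flow[..., 1]), 0, h - 1).astype(int)
    d_pred = np.zeros_like(d_prev)
    # two multiplications per pixel: d * (y_new - y_h), then (...) * recip
    d_pred[y_new, x_new] = d_prev * (ys + flow[..., 1] - y_horizon) * recip
    return d_pred

# Toy usage: flat-road disparities consistent with d = (B/h) * (y - y_h)
h, w, y_h, B_over_h = 96, 128, 20.0, 0.5
ys = np.mgrid[0:h, 0:w][0].astype(np.float32)
d_prev = B_over_h * np.maximum(ys - y_h, 0.0)
flow = np.zeros((h, w, 2), dtype=np.float32)
flow[..., 1] = 3.0                                            # every pixel moves down 3 rows
d_pred = predict_disparity(d_prev, flow, y_h)                 # initialization for the matcher
```

In this toy case the prediction reproduces the flat-road disparity at the warped rows exactly; on real data it would only initialize the search, which is what allows the narrower disparity range reported in the abstract.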