Task requirements for image acquisition systems vary substantially between applications: requirements for consumer photography may be irrelevant to, or may even conflict with, those for automotive, medical, and other applications. The imaging industry's remarkable ability to create lens and sensor designs for specific applications has been demonstrated in the mobile computing market, and we might expect further innovation once the requirements for other markets are specified. This paper describes an approach to developing imaging system designs that meet the task requirements of autonomous vehicle applications. Because it is impractical to build a large number of image acquisition systems and evaluate each with real driving data, we assembled a simulation environment to provide guidance at an early design stage. The open-source, freely available software (isetcam, iset3d, and isetauto) uses ray tracing to compute quantitatively how scene radiance propagates through a multi-element lens to form the sensor irradiance. The software then transforms the irradiance into sensor pixel responses, accounting for a large number of sensor parameters. This enables the user to apply different image processing pipelines to generate images for training and testing the convolutional networks used in autonomous driving. We use the simulation environment to assess performance for different cameras and networks.
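The irradiance-to-pixel-response step described above can be sketched conceptually. The following Python is a minimal illustration of the underlying sensor physics (shot noise, quantum efficiency, read noise, full-well clipping, and ADC quantization), not the isetcam API; the function name and parameter values are hypothetical, and irradiance is assumed to be given in photons per pixel per second.

```python
import numpy as np

def simulate_sensor(irradiance_photons, exposure_s=0.01, qe=0.6,
                    read_noise_e=2.0, full_well_e=10_000, bits=10,
                    rng=None):
    """Convert per-pixel photon irradiance (photons/pixel/s) to digital values.

    Models photon (shot) noise, quantum efficiency, read noise,
    full-well clipping, and ADC quantization. Hypothetical sketch,
    not the isetcam implementation.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    mean_electrons = irradiance_photons * exposure_s * qe      # expected e-
    electrons = rng.poisson(mean_electrons).astype(float)      # shot noise
    electrons += rng.normal(0.0, read_noise_e, electrons.shape)  # read noise
    electrons = np.clip(electrons, 0, full_well_e)             # full-well clip
    gain = (2**bits - 1) / full_well_e                         # DN per electron
    return np.round(electrons * gain).astype(np.uint16)

# Example: dark and bright uniform patches
dark = simulate_sensor(np.full((64, 64), 1e4))
bright = simulate_sensor(np.full((64, 64), 1e6))
```

Even this simplified chain makes the key trade-offs visible: raising exposure or quantum efficiency lifts the signal relative to read noise, while the full well and bit depth bound the usable dynamic range.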
Camera arrays are used to acquire the 360° surround video data presented on 3D immersive displays. The design of these arrays involves a large number of decisions, ranging from the placement and orientation of the cameras to the choice of lenses and sensors. We implemented an open-source software environment (iset360) to support engineers designing and evaluating camera arrays for virtual and augmented reality applications. The software uses physically based ray tracing to simulate a 3D virtual spectral scene, tracing rays through multi-element spherical lenses to calculate the irradiance at the imaging sensor. The software then simulates the imaging sensors to predict the captured images. The sensor data can be processed into the stereo and monoscopic 360° panoramas commonly used in virtual reality applications. By simulating the entire capture pipeline, we can visualize how changes in the system components influence overall system performance. We demonstrate the software by simulating a variety of camera rigs, including the Facebook Surround360, the GoPro Odyssey, the GoPro Omni, and the Samsung Gear 360.
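A monoscopic 360° panorama assigns each output pixel a viewing direction on the sphere, and stitching amounts to sampling each camera's image along those directions. The standard equirectangular mapping can be sketched as follows; this is textbook spherical geometry, not the iset360 API, and the function name and axis convention (x right, y up, z forward) are assumptions for illustration.

```python
import numpy as np

def equirect_directions(width, height):
    """Unit view directions for each pixel of an equirectangular panorama.

    Columns span longitude [-pi, pi); rows span latitude [pi/2, -pi/2]
    from top to bottom. Returns a (height, width, 3) array of unit
    vectors with x right, y up, z forward.
    """
    u = (np.arange(width) + 0.5) / width        # horizontal coord in (0, 1)
    v = (np.arange(height) + 0.5) / height      # vertical coord in (0, 1)
    lon = u * 2 * np.pi - np.pi                 # longitude per column
    lat = np.pi / 2 - v * np.pi                 # latitude per row
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)
```

Given these per-pixel directions, each rig camera contributes the pixels whose directions fall inside its field of view, which is where the placement and orientation decisions mentioned above directly determine coverage and overlap.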