A multicamera (also called an array camera, cluster camera, or "supercamera") combines two or more component cameras into a single system that functions as one camera with superior performance or special capabilities. Although many organizations have built many camera arrays, creating an effective multicamera has not become significantly easier. This paper attempts to provide some useful insights toward simplifying the design, construction, and use of multicameras. Nine multicameras our group built for diverse purposes between 1999 and 2017 are described in some detail, including four built during Summer 2017 using some of the proposed simplifications.
The modern world is filled with photo and video cameras adapted to a huge range of tasks. Using multiple capture devices can extend the surveyed region and support building a three-dimensional model of the observed scene. Further increasing the number of cameras makes it possible to sharpen images or to isolate individual objects in them. Modern automated systems built by combining photographs and video streams require an integrated approach to processing and analyzing the data. Of particular importance for the analysis of visual information is the mosaic image, which allows a continuous scene to be observed as a whole rather than in parts. The problem of obtaining a unified image is relevant in many fields: in security systems, for analyzing an entire monitored zone; in medicine, for X-ray imaging; in cartography, for constructing maps from satellite imagery; in photogrammetry; in construction, for preparing 3D images; in microbiology, for creating images of small biological objects captured with a microscope; in biometric security, for combining data obtained from a fingerprint reader; in genetics, for creating a single snapshot of nucleic acids; and in industrial processes, for example in the production of film and glass, to detect inclusions and irregularities in cast or stretched products. This paper presents a mathematical model of the image stitching process based on linear algebra and the foundations of optics. The model takes into account the importance of objects in the image. The proposed algorithm is built on the following steps: objects are found in the images using a salience map; base points are selected in the frames from these data; and correspondences between base points are found by analyzing the distances of the mutual arrangement of the points together with correlation analysis.
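The base-point pipeline just described could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: gradient magnitude stands in for the (unspecified) salience map, and the patch size, suppression radius, and correlation threshold are illustrative assumptions.

```python
import numpy as np

def salience_map(img):
    # Gradient-magnitude proxy for a salience map (assumption: the
    # paper's actual saliency method is not detailed in this section).
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def base_points(img, n=50, radius=8):
    # Pick up to n locally-maximal salient pixels, suppressing a
    # (2*radius+1)-sized neighborhood around each selection.
    sal = salience_map(img)
    pts = []
    for _ in range(n):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        if sal[y, x] <= 0:
            break
        pts.append((y, x))
        sal[max(0, y - radius):y + radius + 1,
            max(0, x - radius):x + radius + 1] = 0
    return pts

def ncc(a, b):
    # Normalized cross-correlation between two equal-sized patches.
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d else 0.0

def match_points(img1, img2, pts1, pts2, half=7, thresh=0.8):
    # Correspondence search by correlation analysis of patches
    # around candidate base points.
    def patch(img, y, x):
        p = img[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        return p if p.shape == (2 * half + 1, 2 * half + 1) else None
    pairs = []
    for y1, x1 in pts1:
        p1 = patch(img1, y1, x1)
        if p1 is None:
            continue
        best, best_pt = thresh, None
        for y2, x2 in pts2:
            p2 = patch(img2, y2, x2)
            if p2 is None:
                continue
            c = ncc(p1, p2)
            if c > best:
                best, best_pt = c, (y2, x2)
        if best_pt is not None:
            pairs.append(((y1, x1), best_pt))
    return pairs
```

A full system would additionally check the mutual-arrangement distances between matched points, as the text describes, to reject inconsistent pairs.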
The images are then aligned using projective transformations, with the divergence at the boundaries serving as the criterion. To combine the boundaries optimally, deep neural networks are applied; this type of network minimizes the difference across the merge region and reduces or eliminates its visual distinctness. A unified color palette is obtained by analyzing the previously identified important areas and computing generalized correction factors. To eliminate double contours, two approaches are used: the first combines the background gradient fields, and the second analyzes common objects with closed contours in accordance with their weight in the image. The effectiveness of the proposed algorithm is demonstrated on a set of test images, including pairs of medical images, satellite images, and images from other cameras.
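The projective-alignment and compositing step above can be sketched in a few lines of NumPy. This is a simplified stand-in, not the paper's method: it uses nearest-neighbor inverse warping, and plain averaging of the overlap in place of the deep-network seam blending the text describes.

```python
import numpy as np

def warp_homography(img, H, out_shape):
    # Inverse warping: map every output pixel back through H^-1 and
    # sample the source image with nearest-neighbor interpolation.
    h, w = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)]).astype(float)
    src = Hinv @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(h * w)
    out[valid] = img[sy[valid], sx[valid]]
    return out.reshape(out_shape), valid.reshape(out_shape)

def stitch(img1, img2, H, out_shape):
    # Composite: img1 stays at the identity, img2 is warped by H.
    # Averaging the overlap is only a placeholder for the paper's
    # neural-network blending of the merge region.
    w1, m1 = warp_homography(img1, np.eye(3), out_shape)
    w2, m2 = warp_homography(img2, H, out_shape)
    weight = m1.astype(float) + m2.astype(float)
    total = w1 * m1 + w2 * m2
    return np.divide(total, weight,
                     out=np.zeros_like(total), where=weight > 0)
```

The homography H would in practice be estimated from the matched base points; here it is assumed given.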