High-quality 360 capture for cinematic VR is a relatively new and rapidly evolving technology. The field demands very high-quality, distortion-free 360 capture, which is not possible with cameras that depend on fisheye lenses to cover a 360-degree field of view. The Facebook Surround 360 camera, one of the few players in this space, is a design Facebook has released under an open-source license for anyone who chooses to build it from off-the-shelf components and generate 8K stereo output using open-source rendering software. However, the components are expensive, and the system itself is extremely demanding in terms of computer hardware and software. Because of this, there have been very few implementations of this design and virtually no real deployment in the field. We have implemented the system based on Facebook's design and have been testing and deploying it in various situations, including the generation of short video clips. Our recent experience has shown that high-quality 360 capture comes with its own set of new challenges. For example, even the most fundamental tools of photography, such as exposure, become difficult because one is always faced with ultra-high-dynamic-range scenes: one camera may point directly at the sun while others point into dark shadow. The conventional imaging pipeline is further complicated by the fact that the stitching software interacts differently with the various stages of calibration and pipeline optimization. Most of our focus to date has been on optimizing the imaging pipeline and improving the quality of the output for viewing in an Oculus Rift headset. We designed a controlled experiment to study five key parameters in the rendering pipeline: black level, neutral balance, color correction matrix (CCM), geometric calibration, and vignetting. By varying all of these parameters combinatorially, we assessed their relative impact on the perceived image quality of the output. Our results thus far indicate that output image quality is strongly influenced by the black level of the individual cameras (the Facebook camera comprises 17 cameras whose outputs must be stitched to obtain a 360 view), and is least sensitive to neutral balance. The most puzzling results came from accurately calculating and applying a per-camera CCM: we obtained better results by applying the average of the matrices across all cameras, as illustrated in the sketch below. Future work includes evaluating the effects of geometric calibration and vignetting on quality.
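The per-camera correction stage under study can be summarized in a minimal sketch, given here in Python with NumPy. All names (correct_frame, black_level, wb_gains, ccm) are illustrative assumptions, not identifiers from the Surround 360 code base, and the sketch assumes linear raw RGB frames normalized to [0, 1].

```python
# Minimal sketch of the per-camera correction stage varied in the experiment.
# Assumes linear raw RGB in [0, 1]; names are illustrative, not taken from
# the Facebook Surround 360 rendering software.
import numpy as np

def correct_frame(raw, black_level, wb_gains, ccm):
    """Apply black-level subtraction, neutral balance, and a 3x3 CCM."""
    img = np.clip(raw.astype(np.float64) - black_level, 0.0, None)
    img *= wb_gains                    # per-channel neutral-balance gains
    img = img @ ccm.T                  # 3x3 color correction matrix
    return np.clip(img, 0.0, 1.0)

def average_ccm(ccms):
    """Shared matrix: the mean of the per-camera CCMs, as used above."""
    return np.mean(np.stack(ccms), axis=0)
```

Order matters here: the black level must be subtracted before the gains and matrix are applied, since both assume a zero-referenced linear signal.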
Conventional camera calibration methods regard the camera as an ideal pinhole model and require well-focused images, a requirement that cannot be met in long-range photogrammetry or with low depth-of-field lenses. In this paper, we propose a novel active calibration method for out-of-focus cameras using an LCD monitor. First, we estimate the defocus map from temporally coded binary shift patterns, which makes our method more accurate. Second, based on the defocus map, we encode LCD pixel coordinates into phase-shift patterns with optimal frequency and step properties, and then deblur the captured patterns. Finally, the deblurred patterns are decoded into dense phase maps from which accurate feature point coordinates are extracted (a sketch of the decoding step follows). Our method significantly improves the robustness of camera calibration to lens defocus, noise, and glass refraction compared with state-of-the-art methods. Experimental results demonstrate that our method is superior to conventional methods whether the camera is in or out of focus.
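The decoding step follows the standard N-step phase-shift formulation. The sketch below is a generic Python/NumPy illustration of that formulation, not the authors' implementation, and it assumes the deblurred patterns follow I_n = A + B*cos(phi + 2*pi*n/N).

```python
# Generic N-step phase-shift decoding sketch, assuming the captured (and
# deblurred) fringe images follow I_n = A + B*cos(phi + 2*pi*n/N).
import numpy as np

def decode_phase(patterns):
    """patterns: list of N images captured under phase-shifted fringes.
    Returns the wrapped phase phi in (-pi, pi] at every pixel."""
    N = len(patterns)
    num = sum(I * np.sin(2 * np.pi * n / N) for n, I in enumerate(patterns))
    den = sum(I * np.cos(2 * np.pi * n / N) for n, I in enumerate(patterns))
    return -np.arctan2(num, den)
```

The wrapped phase must still be unwrapped (for example, with an additional lower-frequency pattern set) before it can be mapped back to absolute LCD pixel coordinates.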
Accurate characterization (profiling) of a capture system is essential if the system is to reproduce the colors in a scene accurately. ISO 17321 [1][2] describes two methods to achieve this calibration: one based on standard reflective targets (the chart-based method), and the other based on accurate measurements of the camera's responsivity functions and the spectral power distribution of the deployed illuminant (spectral characterization). The more prominent of the two is the chart-based method, because it involves a simple capture of an inexpensive, standard color pattern (e.g., the Macbeth/Xrite Color Checker). However, the results obtained from this method are illuminant specific and are very sensitive to the technique used in the capture process: lighting non-uniformity on the chart, incorrect framing, and flare can all distort the results. ISO also recommends the more robust technique of measuring the camera's responsivity and the spectral power distribution of the capture illuminant, but these measurements can require expensive and sophisticated instruments such as monochromators and spectroradiometers. Both methods involve tradeoffs in cost, ease of use, and, most importantly, the accuracy of the final capture-system characterization, and the results are very sensitive to the capture technique and to the precision of the measured parameters. The end user is often left asking: What accuracy is needed in the individual measurements? What are the tradeoffs (particularly in color accuracy) between the chart-based method and spectral characterization? How sensitive is the system to the various parameters? In this study, both ISO-recommended techniques were used to calibrate a broad range of professional cameras under a variety of illuminants. The characterization was conducted by approximately ten different users so as to capture the variability introduced by capture technique. The collected data were used to calculate and quantify characterization accuracy, using the color inconstancy index over a set of evaluation colors as the metric. Sensitivity analysis techniques were then used to answer the question "How much of an advantage, if any, does spectral characterization of the camera offer over the chart-based approach?" In answering it, the parameters (and their sensitivities) that most influence the results were identified.
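For concreteness, the computation at the heart of both methods can be sketched as a least-squares fit of a 3x3 matrix from camera RGB to reference XYZ values. The Python/NumPy code below is a generic illustration under that assumption, with hypothetical names (fit_ccm, camera_rgb, reference_xyz); it is not the exact procedure specified by ISO 17321.

```python
# Generic sketch of fitting a 3x3 characterization matrix by least squares.
# camera_rgb and reference_xyz are (num_patches, 3) arrays of linear values;
# names are illustrative, and this is not the exact ISO 17321 procedure.
import numpy as np

def fit_ccm(camera_rgb, reference_xyz):
    """Least-squares fit of M such that reference_xyz ~= camera_rgb @ M."""
    M, _, _, _ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
    return M.T  # returned matrix C applies as xyz = C @ rgb per pixel
```

The two methods differ in where camera_rgb comes from: in the chart-based method it is measured from a capture of the chart, while in spectral characterization it is predicted by integrating the measured responsivity curves against the patch reflectances and the illuminant's spectral power distribution.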