Driven by the mandated adoption of advanced safety features enforced by governments around the globe, as well as strong consumer demand for improved safety and convenience, the automotive industry is going through an intensifying arms race to equip vehicles with more sensors and greater computational capacity. Among the various sensors, camera and radar stand out as a popular combination offering complementary capabilities. As a result, camera-radar fusion (CRF) has been regarded as one of the key technology trends for future advanced driver assistance systems (ADAS). This paper reports a camera-radar fusion system developed at TI, which is powered by a broad set of TI silicon products, including CMOS radar, the TDA SoC processor, FPD-Link II/III SerDes, PMIC, and so forth. The system is developed to showcase not only the algorithmic benefits of fusion but also the competitiveness of TI solutions as a whole, in terms of breadth of capabilities, balance between performance and energy efficiency, and rich support from the associated hardware and software ecosystem.
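The abstract does not describe the fusion algorithm itself; purely as an illustration of a common early step in camera-radar fusion, the minimal sketch below projects a radar detection (range, azimuth) onto the camera image plane. The intrinsics, mounting geometry, and axis conventions are assumptions for the example and are not taken from the TI system.

```python
import numpy as np

# Hypothetical pinhole intrinsics; real values would come from camera calibration.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def radar_to_image(range_m, azimuth_rad, cam_height_m=1.2):
    """Project one radar detection (range, azimuth) onto the image plane,
    assuming a flat ground plane and an assumed sensor mounting geometry."""
    # Radar point in radar frame: x forward, y left, z up (assumed convention).
    p_radar = np.array([range_m * np.cos(azimuth_rad),
                        range_m * np.sin(azimuth_rad),
                        0.0])
    # Re-express in camera frame: x right, y down, z forward (assumed aligned axes).
    p_cam = np.array([-p_radar[1],    # right   = -left
                      cam_height_m,   # down    = assumed camera height above the radar plane
                      p_radar[0]])    # forward
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]           # pixel coordinates (u, v)

# Example: a target 20 m ahead, 5 degrees to the left of boresight.
print(radar_to_image(20.0, np.deg2rad(5.0)))
```

In a full fusion pipeline, such projected detections would then be associated with image-based object hypotheses; that association logic is beyond this sketch.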
RGBD cameras capturing color and depth information are highly promising for various industrial, consumer, and creative applications. These include segmentation, gesture control, and deep compositing, among others. Depth maps captured with Time-of-Flight sensors, a potential alternative to vision-based approaches, still suffer from low depth resolution. Various algorithms are available for RGB-guided depth upscaling, but they introduce filtering artifacts such as depth bleeding or texture copying. We propose a novel superpixel-based upscaling algorithm, which employs an iterative superpixel clustering strategy to achieve improved boundary reproduction at depth discontinuities without the aforementioned artifacts. Finally, an extensive ground-truth-based evaluation validates that our upscaling method is superior to competing state-of-the-art algorithms with respect to depth jump reproduction. Reference material is collected from a real RGBD camera as well as the Middlebury 2005 and 2014 data sets. The effectiveness of our method is also confirmed by its use in a depth-jump-critical computational imaging use case.
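The abstract does not spell out the clustering details, so the following is only a simplified sketch of the general idea of superpixel-guided depth upscaling, not the authors' iterative algorithm: it segments the high-resolution RGB image with SLIC superpixels and assigns each superpixel the median of the low-resolution depth samples it covers. The function name, the choice of SLIC, and the median assignment are all assumptions for illustration.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize

def superpixel_depth_upscale(rgb, depth_lr, n_segments=2000):
    """Upscale a low-resolution depth map guided by a high-resolution RGB image.

    Simplified illustration of superpixel-guided upscaling; not the paper's
    iterative clustering scheme.
    """
    h, w = rgb.shape[:2]
    # Nearest-neighbour upscaling gives one (blocky) depth sample per RGB pixel.
    depth_nn = resize(depth_lr, (h, w), order=0, preserve_range=True)
    # Segment the RGB image into superpixels that adhere to object boundaries.
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    # Assign each superpixel the median depth of the samples it covers,
    # which keeps depth discontinuities aligned with RGB edges.
    depth_hr = np.zeros((h, w), dtype=np.float64)
    for lab in np.unique(labels):
        mask = labels == lab
        depth_hr[mask] = np.median(depth_nn[mask])
    return depth_hr
```

Per-superpixel aggregation like this avoids the cross-boundary averaging that causes depth bleeding in filter-based upscalers, at the cost of piecewise-constant depth within each superpixel.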