Fusing data from multiple sensors is a difficult problem. Most recent work centers on techniques for aligning image data from multiple similar sources to improve apparent image quality or field of view. In contrast, the present work addresses the modeling and representation of uncertainty in the real-time fusion of data from fundamentally dissimilar sensors. When multiple sensors of differing type, resolution, field of view, and sample rate provide scene data, the proposed scheme directly models the uncertainty in each source and offers an intuitive mechanism for visually representing, within the live fused image stream, the time-varying level of confidence in the correctness of the fused data.
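To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of one way such a scheme could combine readings from two dissimilar sensors: each reading carries a noise variance, a per-sensor confidence decays with the age of the last sample (reflecting differing sample rates), and the fused estimate is an inverse-variance average scaled by those confidences. All names, decay models, and parameter values here are illustrative assumptions.

```python
# Hypothetical sketch: confidence-weighted fusion of two dissimilar sensors.
# The exponential decay model and all constants are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class SensorReading:
    value: float      # scene measurement (e.g., range in meters)
    variance: float   # sensor noise variance at capture time
    timestamp: float  # capture time in seconds

def confidence(reading: SensorReading, now: float, half_life: float) -> float:
    """Confidence in [0, 1] that decays as the reading ages.

    half_life reflects the sensor's sample rate: a slowly sampled sensor's
    last reading remains trustworthy longer than a fast sensor's stale one.
    """
    age = max(0.0, now - reading.timestamp)
    return 0.5 ** (age / half_life)

def fuse(readings, half_lives, now):
    """Inverse-variance fusion with each weight scaled by time-decayed
    confidence. Returns (fused_value, overall_confidence) so a display can
    render the estimate and its current trustworthiness side by side."""
    weights = [
        confidence(r, now, h) / r.variance
        for r, h in zip(readings, half_lives)
    ]
    total = sum(weights)
    fused = sum(w * r.value for w, r in zip(weights, readings)) / total
    overall = max(confidence(r, now, h) for r, h in zip(readings, half_lives))
    return fused, overall

# Example: a fast but noisy camera-derived estimate and a slower, more
# precise lidar-style estimate captured at different times.
cam = SensorReading(value=10.4, variance=0.50, timestamp=0.95)
lidar = SensorReading(value=10.0, variance=0.05, timestamp=0.20)
value, conf = fuse([cam, lidar], half_lives=[0.1, 1.0], now=1.0)
print(f"fused={value:.2f}  confidence={conf:.2f}")
```

In a live image stream, a per-pixel confidence computed this way could drive the visual representation directly, for example by modulating brightness or an overlay color so that regions backed by fresh, low-noise data read as trustworthy and stale regions visibly fade.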