Time domain continuous imaging (TDCI) models scene appearance as a set of continuous waveforms, each recording how the value of an individual pixel changes over time. When a set of timestamped still images is converted into a TDCI stream, a pixel value change record is created whenever a pixel's value differs from its previously recorded value by more than the value error model attributes to noise. Virtual exposures may then be rendered from the TDCI stream for arbitrary time intervals by integrating the area under the pixel value waveforms. With conventional cameras, both multispectral and high dynamic range imaging involve combining multiple exposures; the required variations in exposure and/or spectral filtering generally skew the time periods represented by the component exposures or otherwise compromise capture quality. This paper describes a simple approach in which the image data are converted to a TDCI representation to support generation of a higher-quality fusion of the separate captures.
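The two TDCI operations described above, creating change records from timestamped samples and integrating the resulting waveform to render a virtual exposure, can be sketched for a single pixel as follows. This is a minimal illustration, not the paper's implementation: it assumes a fixed scalar noise threshold in place of the paper's value error model, and a piecewise-constant waveform in which each pixel value holds until the next change record.

```python
from bisect import bisect_right

def to_change_records(samples, noise_threshold):
    """Convert timestamped samples of one pixel into change records.

    samples: list of (timestamp, value) pairs sorted by timestamp.
    A new record is emitted only when the value differs from the last
    recorded value by more than noise_threshold (a simplification of
    the per-value error model described in the text).
    """
    records = []
    for t, v in samples:
        if not records or abs(v - records[-1][1]) > noise_threshold:
            records.append((t, v))
    return records

def virtual_exposure(records, t0, t1):
    """Render a virtual exposure over [t0, t1] for one pixel.

    Integrates the area under the piecewise-constant waveform defined
    by records and returns the time-averaged pixel value. Assumes the
    first record's timestamp is at or before t0.
    """
    times = [t for t, _ in records]
    i = bisect_right(times, t0) - 1   # record in effect at t0
    total, t = 0.0, t0
    # accumulate each constant segment that ends before t1
    while i + 1 < len(records) and records[i + 1][0] < t1:
        nxt = records[i + 1][0]
        total += records[i][1] * (nxt - t)
        t, i = nxt, i + 1
    total += records[i][1] * (t1 - t)  # final partial segment
    return total / (t1 - t0)
```

For example, samples that wobble around 10 and then jump to about 20 yield only two change records under a threshold of 1.0, and a virtual exposure spanning both segments equally averages to 15.0.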