The primary goal in most uses of a camera is not to capture properties of light, but to use light to construct a model of the appearance of the scene being photographed. That model should change over time as the scene changes, but how does it change over different timescales? At
low framerates, there are often large changes between temporally adjacent images, many of which are attributable to motion. However, as the scene appearance is sampled at ever finer time intervals, the changes between samples become simpler and eventually insignificant compared to noise in the sampling
process (e.g., photon shot noise). Thus, each increase in the temporal resolution of the scene model can be expected to contribute a diminishing amount of additional data. This property can be leveraged to allow virtual still exposures, or video at other framerates, to be computationally extracted after
capture of a high-temporal-resolution scene model, providing a variety of benefits. The current work attempts to quantify how scene appearance models change over time by examining properties of high-framerate video, with the goal of characterizing the relationship between temporal resolution
and the effectiveness of data compression.
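The intuition that finer temporal sampling yields smaller inter-frame changes, eventually dominated by shot noise, can be illustrated with a toy simulation. The sketch below is purely hypothetical (it is not from this work): a one-dimensional "scene" consisting of a Gaussian bump drifting at constant speed, sampled with Poisson noise to model photon shot noise. Coarser framerates are emulated by taking larger temporal strides through one high-rate capture.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_frames(n_frames, width=256, speed=0.25):
    """Synthesize a 1-D scene: a bright Gaussian bump drifting at a
    constant speed, sampled with Poisson (photon shot) noise."""
    x = np.arange(width)
    t = np.arange(n_frames)
    centers = width / 4 + speed * t  # bump position in each frame
    signal = 100 + 900 * np.exp(
        -0.5 * ((x[None, :] - centers[:, None]) / 8) ** 2
    )
    return rng.poisson(signal).astype(float)  # noise std ~ sqrt(signal)

# One high-temporal-resolution capture; coarser framerates are strides into it.
frames = synth_frames(n_frames=1024)

# Mean absolute frame-to-frame difference shrinks as the stride shrinks,
# approaching the shot-noise floor at the finest temporal resolution.
for stride in (64, 16, 4, 1):
    sub = frames[::stride]
    diff = np.mean(np.abs(np.diff(sub, axis=0)))
    print(f"stride {stride:3d}: mean abs frame difference = {diff:.1f}")
```

At large strides the bump moves many pixels between samples, so adjacent frames differ substantially; at stride 1 the motion per frame is a fraction of a pixel and the residual difference is mostly noise, which is the regime in which additional temporal resolution adds little new data.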