The Magdalena Ridge Observatory Interferometer (MROI) employs Shack-Hartmann wavefront sensing (SH-WFS) in a unique design to stabilize the back end of its beam relay system. The SH-WFS is, however, sensitive to scintillation caused by atmospheric turbulence, which can drastically degrade the precision with which it measures the position of the beam profile it sees. A large number of images must be averaged to counteract this effect. Here we use deep learning as an alternative to long averaging cycles. A convolutional neural network (CNN) was trained to map a small number of initial frames from a series of star images to the average image of the entire series, for different positions of the beam profile. Under the typical seeing conditions expected at MROI, the network maps 10 input frames to the average of 100 within the permissible error margin of 0.1 pixels and generalizes well to beam-position movements not seen during training. The network also outperforms straightforward frame averaging when both techniques operate on small numbers of input frames, such as 10 or 20.
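
To make the frame-to-average mapping concrete, the following is a minimal sketch, not the authors' implementation: it assumes a PyTorch model that takes a stack of 10 star frames as input channels and predicts the 100-frame average image, whose centroid can then be compared against the 0.1-pixel target. The class name FrameAverager, the layer sizes, and the 32x32 subimage dimension are all illustrative assumptions.

# Hedged sketch: CNN mapping a short frame stack to a predicted long-exposure average.
import torch
import torch.nn as nn

N_IN, FRAME = 10, 32          # assumed: 10 input frames, 32x32-pixel subimages

class FrameAverager(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(N_IN, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),   # predicted average image
        )

    def forward(self, x):                 # x: (batch, N_IN, FRAME, FRAME)
        return self.net(x)

def centroid(img):
    """Intensity-weighted centroid (x, y) in pixels of a (H, W) image."""
    ys, xs = torch.meshgrid(torch.arange(img.shape[0], dtype=img.dtype),
                            torch.arange(img.shape[1], dtype=img.dtype),
                            indexing="ij")
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Usage: compare the centroid of the predicted average with that of the true
# 100-frame average; the accuracy target quoted above is 0.1 pixel.
model = FrameAverager()
frames = torch.rand(1, N_IN, FRAME, FRAME)   # placeholder star frames
pred_avg = model(frames)[0, 0]
cx, cy = centroid(pred_avg)

In this sketch the network is trained (not shown) with a pixel-wise loss between the predicted and true series averages; the centroid comparison is only the evaluation step implied by the 0.1-pixel error margin.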