Computed Tomography (CT) is a non-invasive imaging technique that reconstructs cross-sectional images of a scene from a series of projections acquired at different angles. In applications such as airport security luggage screening, the presence of dense metal clutter causes beam hardening, which produces streaking and shading artifacts in conventionally reconstructed images. These artifacts can lead to object splitting and intensity shading that make subsequent labeling and identification inaccurate. Conventional approaches to metal artifact reduction (MAR) have either post-processed the artifact-corrupted images or interpolated across the metal-affected regions of the sinogram projection data. In this work, we examine the use of a deep-learning-based method, a fully convolutional network (FCN), to directly correct the observed sinogram projection data prior to reconstruction. In contrast to existing learning-based CT artifact reduction work, we operate entirely in the sinogram domain and train the network over the entire sinogram rather than over local image patches. Since the information a sinogram contains about any single object is spread non-locally across the projections, patch-based methods are poorly matched to the nature of CT data. The use of an FCN also provides better computational scaling than earlier perceptron-based approaches. Using a poly-energetic CT simulation, we demonstrate the potential of this new approach for mitigating metal artifacts in CT.
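To make the sinogram-domain idea concrete, the following is a minimal sketch, assuming PyTorch, placeholder tensor shapes (single-channel sinograms of size angles by detector bins), and a hypothetical residual formulation; it is illustrative only and not the exact architecture or training procedure of the paper.

```python
# Assumptions: PyTorch; sinograms stored as [batch, 1, n_angles, n_detectors];
# the network predicts a residual correction to the observed sinogram.
import torch
import torch.nn as nn


class SinogramFCN(nn.Module):
    """Fully convolutional network applied to whole sinograms, so its
    receptive field can cover the non-local traces that objects leave
    across projection angles."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=5, padding=2),
        )

    def forward(self, sinogram: torch.Tensor) -> torch.Tensor:
        # Residual correction: the network only has to learn the
        # metal-induced perturbation, not the full sinogram.
        return sinogram + self.net(sinogram)


# Training-step sketch: pairs of (metal-corrupted, metal-free) sinograms
# would come from a poly-energetic CT simulation, as described above.
model = SinogramFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

corrupted = torch.randn(4, 1, 180, 256)  # placeholder: 180 angles x 256 bins
clean = torch.randn(4, 1, 180, 256)      # placeholder metal-free targets

optimizer.zero_grad()
loss = loss_fn(model(corrupted), clean)
loss.backward()
optimizer.step()
```

The corrected sinogram produced by such a network would then be passed to a standard reconstruction step (e.g., filtered back-projection) to form the final image.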