In this paper, we present HRPC, a novel photometric compensation method for high-resolution projectors. The method uses a convolutional neural network to compensate the projector input image before projection. The network incorporates multi-scale image feature pyramids and Laplacian pyramids to capture features at multiple levels, enabling scale-invariant learning of the complex mappings among the projection surface image, the uncompensated projected image, and the ground-truth image. In addition, a non-local attention module and a multi-layer perceptron module are introduced into the bottleneck to strengthen long-range dependency modeling and non-linear transformation capability. Experiments on high-resolution projection datasets show that, compared with state-of-the-art methods, HRPC compensates images more effectively, with fewer color inconsistencies, illumination variations, and detail losses.
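The Laplacian-pyramid decomposition mentioned above can be illustrated with a minimal NumPy sketch; the function names and the average-pooling stand-in for Gaussian blurring are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling (a simple stand-in for Gaussian blur + subsample)
    h, w = img.shape[:2]
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # Nearest-neighbor upsampling back to the given spatial shape
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    # Each Laplacian level stores the detail lost by downsampling;
    # the final entry is the coarse low-resolution residual.
    pyr = []
    cur = img
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))
        cur = down
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    # Invert the pyramid: upsample the coarse residual and add back
    # the stored detail at each finer level.
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = upsample(cur, lap.shape) + lap
    return cur
```

Because each level keeps exactly the detail discarded by the corresponding downsampling step, summing the upsampled levels recovers the original image; a compensation network can process the levels at their native scales before recombining them.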