In Spectral Reconstruction (SR), we recover hyperspectral images from their RGB counterparts. Most recent approaches are based on Deep Neural Networks (DNNs), in which millions of parameters are trained mainly to extract and exploit contextual features from large image patches as part of the SR process. In contrast, the leading Sparse Coding method 'A+', which is among the strongest point-based baselines against the DNNs, divides the RGB space into neighborhoods in which a simple local linear regression (comprising roughly 10² parameters) suffices for SR. In this paper, we explore how the performance of Sparse Coding can be advanced further. We point out that in the original A+, the sparse dictionary used for neighborhood separation is optimized on the spectral data but applied in the projected RGB space. In turn, we demonstrate that if the local linear mapping is trained for each spectral neighborhood instead of each RGB neighborhood (that is, if we could, in theory, recover each spectrum based on where it is located in the spectral space), the Sparse Coding algorithm can in fact substantially outperform the leading DNN method. In effect, our result defines one potential (and very appealing) upper bound on the performance of point-based SR.
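To make the point-based, neighborhood-plus-local-regression pipeline concrete, the following is a minimal Python/NumPy sketch of anchored local linear regression for SR. It is not the paper's A+ implementation: K-means anchors stand in for the learned sparse dictionary, a ridge-regularized least-squares fit replaces the original training procedure, and the function names, parameters (n_anchors, ridge), and the toy camera response are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_local_linear_maps(rgb, spectra, n_anchors=64, ridge=1e-3):
    """Fit one ridge-regularized affine map (RGB -> spectrum) per anchor
    neighborhood. K-means anchors stand in for the learned sparse dictionary."""
    anchors = KMeans(n_clusters=n_anchors, n_init=10, random_state=0).fit(rgb)
    labels = anchors.labels_
    n_bands = spectra.shape[1]
    maps = np.zeros((n_anchors, n_bands, rgb.shape[1] + 1))  # affine maps
    for k in range(n_anchors):
        X, Y = rgb[labels == k], spectra[labels == k]
        if len(X) == 0:
            continue
        Xa = np.hstack([X, np.ones((len(X), 1))])            # append bias term
        A = np.linalg.solve(Xa.T @ Xa + ridge * np.eye(Xa.shape[1]), Xa.T @ Y)
        maps[k] = A.T
    return anchors, maps

def reconstruct(rgb, anchors, maps):
    """Assign each RGB pixel to its nearest anchor and apply the local map."""
    labels = anchors.predict(rgb)
    Xa = np.hstack([rgb, np.ones((len(rgb), 1))])
    return np.einsum('nkd,nd->nk', maps[labels], Xa)

# Toy usage with random data (31-band spectra, as in common SR benchmarks).
rng = np.random.default_rng(0)
spectra_train = rng.random((5000, 31))
rgb_train = spectra_train @ rng.random((31, 3))              # toy camera response
anchors, maps = fit_local_linear_maps(rgb_train, spectra_train)
recon = reconstruct(rgb_train[:10], anchors, maps)
print(recon.shape)  # (10, 31)
```

The same skeleton applies whether neighborhoods are formed in the RGB space (as in A+) or in the spectral space (as this paper investigates); the difference lies only in which representation drives the neighborhood assignment during training.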