The purpose of this study is to prepare a source of realistic-looking images in which optimal steganalysis is possible by enforcing a known statistical model on the image pixels, so that the efficiency of detectors implemented using machine learning can be assessed. Our goal is to answer the questions that researchers keep asking: “Are our empirical detectors close to what can possibly be detected? How much room is there for improvement?” or simply “Are we there yet?” We achieve this goal by denoising natural images to remove the complex statistical dependencies introduced by processing and then adding noise with simpler, known statistical properties that allows the likelihood ratio test to be derived in closed form. This theoretical upper bound tells us how much further improvement is possible. Three content-adaptive stego algorithms in the spatial domain and non-adaptive LSB matching are used to contrast the upper bound with the performance of two modern detection paradigms: a convolutional neural network and a classifier with the maxSRMd2 rich model. The short answer to the posed question is “We are much closer now, but there is still non-negligible room for improvement.”
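As a minimal illustration of the pipeline described above (denoise, add noise with a known distribution, derive the likelihood ratio test), the following Python sketch models the added noise as i.i.d. Gaussian and writes the closed-form log-likelihood ratio for non-adaptive ±1 (LSB-matching-style) embedding with change rate β. The box-filter denoiser, the noise variance, and all function names are illustrative assumptions for this sketch, not the construction used in the study itself.

```python
import numpy as np

def box_blur(img):
    """Crude 3x3 mean-filter "denoiser" (an illustrative stand-in only)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def make_model_cover(img, sigma, rng):
    """Denoise, then add i.i.d. Gaussian noise of known variance sigma^2,
    so the residual model of the resulting cover is known exactly."""
    denoised = box_blur(img.astype(float))
    return denoised, denoised + rng.normal(0.0, sigma, img.shape)

def gauss_pdf(x, sigma):
    return np.exp(-x * x / (2 * sigma * sigma)) / (np.sqrt(2 * np.pi) * sigma)

def log_lr_lsbm(observed, denoised, sigma, beta):
    """Closed-form log-likelihood ratio for +-1 embedding with change rate
    beta, given the exactly known residual model N(0, sigma^2)."""
    r = observed - denoised                      # residual, N(0, sigma^2) under cover
    f0 = gauss_pdf(r, sigma)                     # cover pixel density
    f_mix = ((1 - beta) * f0                     # stego density: Gaussian mixture
             + 0.5 * beta * (gauss_pdf(r - 1, sigma) + gauss_pdf(r + 1, sigma)))
    return float(np.sum(np.log(f_mix / f0)))
```

Under the cover hypothesis this statistic concentrates below zero; under embedding it concentrates above zero, and thresholding it gives the Neyman–Pearson optimal detector for this simplified model.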