Blur is one of the most frequently encountered visual distortions in images. It can be either deliberately introduced to highlight certain objects or caused by the acquisition and processing chain. Both cases usually produce spatially-varying or out-of-focus blur. Despite its wide occurrence, only a few dedicated image quality metrics can be found in the literature, and most of them rely on the assumption of uniformly blurred images. In this paper, we therefore propose a quality assessment framework that handles both types of blur and predicts the level of annoyance they induce. To this end, a local perceptual blurriness map providing the level of blur at each location in the image is first generated. Depth ordering is then obtained from the image in order to characterize the placement of objects in the scene. Next, visual saliency is computed to account for the visual importance of each object. Finally, the local perceptual blurriness map is weighted using both the depth-ordering and saliency maps to produce the final blur score. Experimental results show that the proposed metric achieves good prediction performance compared with state-of-the-art metrics.
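As a rough illustration of the final pooling step described above, the sketch below combines a local blurriness map with saliency and depth-ordering maps into a single score. The function name, the `alpha` balancing parameter, and the linear weighting are assumptions made for illustration only; the abstract does not specify the actual weighting used in the paper.

```python
import numpy as np

def pooled_blur_score(blur_map, saliency_map, depth_order_map, alpha=0.5):
    """Illustrative weighted pooling of a local blurriness map.

    All inputs are H x W arrays. `alpha` balances the saliency and
    depth-ordering weights; this linear combination is a placeholder
    for the paper's actual weighting scheme.
    """
    # Normalize the weighting maps to [0, 1] so they are comparable.
    sal = (saliency_map - saliency_map.min()) / (np.ptp(saliency_map) + 1e-12)
    dep = (depth_order_map - depth_order_map.min()) / (np.ptp(depth_order_map) + 1e-12)

    # Per-pixel weight: salient, foreground regions contribute more.
    weights = alpha * sal + (1.0 - alpha) * dep

    # Weighted average of local blurriness yields the global blur score.
    return float((weights * blur_map).sum() / (weights.sum() + 1e-12))
```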