Mobile phones are used ubiquitously to capture all kinds of images: food, travel, friends, family, receipts, documents, grocery products, and more. Often, when looking back through photos to relive memories, we want to see images that represent genuine experiences rather than quick convenience shots taken for note-keeping and never deleted. We therefore need a solution that presents only the relevant pictures, filtering out images of receipts, grocery products, and the like, which we collectively term utility images. This work is set in the context of a photobook, which compiles and displays relevant images from the photo album of a mobile device. Since all the media resides on the device, the system must also run there, introducing the requirement that it work on low-power hardware. In this paper, we present a method that distinguishes between utility and non-utility images. We also present a dataset of utility and non-utility images, with examples for each of the categories mentioned. Furthermore, we compare the accuracies of popular pre-trained neural networks and show the trade-off between model size and accuracy.