The 21st century has witnessed a boom in online fashion retailing, including Peer-to-Peer (P2P) marketplaces. However, many listings on P2P marketplaces lack complete and accurate information. In this research, we target the online fashion marketplace and retrieve garment color information to give sellers and buyers a more comprehensive understanding of fashion items. We focus on fashion product portraits and propose a system that autonomously locates the garment region in an image and retrieves the color information and patterns of the item. The system contains three modules. First, image segmentation partitions the image into perceptually meaningful regions, and features are designed to differentiate garment regions from non-garment regions. Second, a classifier module based on unsupervised clustering selects the garment region from the feature vectors. Third, we study existing color naming systems and the color naming schemes used on fashion websites, and propose a computational model that matches color coordinates to the pre-defined color labels in the marketplace. Thanks to the unsupervised learning methods we use, our approach, unlike competing methods, does not require a large amount of training data labeled by human subjects.
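The third module's label matching can be illustrated with a minimal sketch: map a measured color coordinate to its nearest pre-defined label by distance in a perceptually motivated color space. The palette below is a hypothetical stand-in (the paper's actual marketplace labels and coordinates are not reproduced here), and nearest-neighbor matching under Euclidean distance in CIELAB (the CIE76 color difference) is one simple choice of computational model, not necessarily the one used in the paper.

```python
import math

# Hypothetical palette: marketplace color labels with approximate
# CIELAB (L*, a*, b*) coordinates. Illustrative values only.
PALETTE = {
    "black": (0.0, 0.0, 0.0),
    "white": (100.0, 0.0, 0.0),
    "red":   (53.2, 80.1, 67.2),
    "green": (46.2, -51.7, 49.9),
    "blue":  (32.3, 79.2, -107.9),
}

def nearest_label(lab, palette=PALETTE):
    """Return the palette label whose CIELAB coordinates are closest
    to the query color under Euclidean distance (CIE76 Delta E)."""
    return min(palette, key=lambda name: math.dist(lab, palette[name]))
```

For example, a measured garment color of (50.0, 75.0, 60.0) would be assigned the label "red" under this palette. More refined schemes would swap in a perceptually uniform difference formula such as CIEDE2000.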
Zhi Li, Gautam Golwala, Sathya Sundaram, and Jan Allebach, "Use of Color Information in the Analysis of Fashion Photographs," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Imaging and Multimedia Analytics in a Web and Mobile World, 2018, pp. 446-1–446-13, https://doi.org/10.2352/ISSN.2470-1173.2018.10.IMAWM-446