In recent years, the issue of fashion image retrieval has attracted increasing attention, and many research works have been reported on the tasks of clothing recognition, clothing classification, and clothing retrieval due to their huge potential value to all walks of life. With the rise of e-commerce platforms, online shopping has become a trend, yet for huge commodity databases it remains a long-standing unsolved problem for users to find the products of interest quickly; a fast and effective fashion image retrieval method is therefore urgently needed. When consumers search for fashion images in online stores, the current mainstream retrieval methods are still limited to using text or example images as input. Because online shopping platforms provide only limited keywords, it is difficult for consumers to retrieve a fashion image of interest from the massive number of commodities using text-based retrieval methods, and research on exemplar-based retrieval, where users provide an example image as the query, has recently received much interest in the community. However, the example images uploaded by users often suffer from problems during the actual retrieval process, such as poor lighting, posture changes, and different shooting angles, and it is impractical to require users to provide ideal example images as query input, which makes fashion image retrieval even more challenging.

Different from the traditional text-based and exemplar-based image retrieval techniques, sketch-based image retrieval (SBIR) provides a more intuitive and natural way for users to specify their search need. However, due to the large cross-domain discrepancy between free-hand sketches and fashion images, retrieving fashion images by sketches is a significantly challenging task. In this work, we propose a new algorithm for sketch-based fashion image retrieval based on cross-domain transformation. In our approach, the sketch and the photo are first transformed into the same domain; then the sketch-domain similarity and the photo-domain similarity are calculated, respectively, and fused to improve the retrieval accuracy of fashion images. Moreover, since the existing fashion image datasets mostly contain photos only and rarely contain sketch-photo pairs, we contribute a fine-grained sketch-based fashion image retrieval dataset, which includes 36,074 sketch-photo pairs. Extensive experiments conducted on our dataset and two fine-grained instance-level datasets, i.e., QMUL-shoes and QMUL-chairs, show that our model achieves better performance than other existing methods. Specifically, when retrieving on our Fashion Image dataset, our model ranks the correct match at top-1 with an accuracy of 96.6%, 92.1%, 91.0%, and 90.5% for clothes, pants, skirts, and shoes, respectively.
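The retrieval step described above (compute a similarity in each domain, then fuse the two scores to rank the gallery) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding functions, the `alpha` fusion weight, and the use of cosine similarity with a simple weighted sum are all assumptions made for the example.

```python
import numpy as np

def cosine_sim(query, gallery):
    """Cosine similarity between one query vector (d,) and a gallery (n, d)."""
    q = query / (np.linalg.norm(query) + 1e-8)
    g = gallery / (np.linalg.norm(gallery, axis=1, keepdims=True) + 1e-8)
    return g @ q  # (n,) similarity scores

def fused_retrieval(query_sketch_emb, query_photo_emb,
                    gallery_sketch_embs, gallery_photo_embs, alpha=0.5):
    """Rank gallery items by fusing sketch-domain and photo-domain similarity.

    query_sketch_emb / query_photo_emb: the query represented in each domain
    (e.g., the raw sketch embedding and the embedding of its photo-domain
    translation). Returns gallery indices sorted best-first.
    """
    s_sketch = cosine_sim(query_sketch_emb, gallery_sketch_embs)
    s_photo = cosine_sim(query_photo_emb, gallery_photo_embs)
    fused = alpha * s_sketch + (1.0 - alpha) * s_photo  # weighted fusion
    return np.argsort(-fused)  # descending score -> ranking
```

A top-1 retrieval is then simply the first index of the returned ranking; the fusion weight `alpha` would in practice be tuned on a validation split.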