Joint multi-modal representations for e-commerce catalog search driven by visual attributes
published: Oct. 12, 2016, recorded: August 2016, views: 1300
In many visual domains (such as fashion and furniture), product search on online platforms is driven largely by visual attributes. Conventional search requires that every item in the catalog be manually tagged with all possible attribute values, which does not scale. In this paper we propose a novel paradigm for multi-modal catalog search via joint representations. The user provides a search query in natural language (e.g., "pink floral top") and the returned results are of a different modality (i.e., the set of images of pink floral tops). Specifically, we use a correlational autoencoder-based model to learn a joint representation for each image and its corresponding description, such that the two representations are embedded in the same space and lie as close to each other as possible. These representations are learnt over a large curated fashion dataset of over 700 thousand images crawled from multiple fashion e-commerce portals. Our experimental results show that these representations are a viable alternative for searching large fashion catalogs without manual tagging. The same representations can also be used for visual search, image tagging, and query expansion.
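The core idea above, encoding each modality into a shared space where paired image/text embeddings are pulled together, and then answering text queries by nearest-neighbor lookup among image embeddings, can be sketched as follows. This is a minimal illustration, not the paper's architecture: the linear encoders/decoders, toy dimensions, random feature vectors, and the `embed`/`search` helpers are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

d_img, d_txt, d_z = 8, 6, 4          # toy feature/latent sizes (assumed)
n = 32                               # toy batch of paired (image, text) examples

X_img = rng.normal(size=(n, d_img))  # stand-in image features
X_txt = rng.normal(size=(n, d_txt))  # stand-in text descriptions as features

# Linear encoders/decoders per modality; a real correlational autoencoder
# would use deeper, trained networks.
W_ei = rng.normal(scale=0.1, size=(d_img, d_z))
W_et = rng.normal(scale=0.1, size=(d_txt, d_z))
W_di = rng.normal(scale=0.1, size=(d_z, d_img))
W_dt = rng.normal(scale=0.1, size=(d_z, d_txt))

def embed(X, W):
    """Project one modality into the shared latent space."""
    return X @ W

Z_img = embed(X_img, W_ei)
Z_txt = embed(X_txt, W_et)

# Training objective (shown, not optimized here): reconstruct each modality
# from its own embedding, while pulling paired embeddings together.
recon = ((Z_img @ W_di - X_img) ** 2).mean() + ((Z_txt @ W_dt - X_txt) ** 2).mean()
align = ((Z_img - Z_txt) ** 2).mean()
objective = recon + align

def search(query_txt, image_bank, k=3):
    """Cross-modal search: embed a text query, return the k nearest images."""
    q = embed(query_txt, W_et)               # (1, d_z) query embedding
    d = ((image_bank - q) ** 2).sum(axis=1)  # squared distance to each image
    return np.argsort(d)[:k]

top = search(X_txt[:1], Z_img)               # indices of best-matching images
```

Once the encoders are trained to minimize an objective of this shape, the catalog's image embeddings can be indexed offline, so a natural-language query only requires one encoder pass and a nearest-neighbor lookup, with no manual attribute tags on the images.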