Clothing retrieval is a task that has become increasingly popular with the rise of online shopping and social media. We propose to solve the clothing retrieval problem using landmarks chosen according to the clothing type, together with features extracted around those landmarks, to capture a more fine-grained view of the design. We compare this method with other models, most of which use the whole image as input, and show the superiority of a model that gives importance to the crucial parts of the image. On the blouses subset of the DeepFashion dataset [1], combining the keypoint-based features with whole images yields a 16% increase in top-3 accuracy, 14% in top-5, and 11% in top-10 retrieval results compared to using whole images alone as inputs. We also observe that the retrieved clothes are more similar in terms of design as well as high-level properties such as sleeve size and folded vs. non-folded sleeves.
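The sketch below illustrates the general idea of combining a whole-image embedding with embeddings of patches cropped around clothing landmarks and retrieving by cosine similarity. It is a minimal illustration, not the authors' implementation: the toy encoder, patch size, landmark format, and helper names (SmallEncoder, crop_patches, embed) are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's implementation): fuse a whole-image
# embedding with pooled embeddings of patches cropped around landmarks, then
# retrieve gallery items by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallEncoder(nn.Module):
    """Toy CNN standing in for the feature extractor applied to images/patches."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-norm embeddings


def crop_patches(img, landmarks, size=32):
    """Crop fixed-size patches centred on each (x, y) landmark, clamped to the image."""
    _, h, w = img.shape
    patches = []
    for x, y in landmarks:
        x0 = max(0, min(int(x) - size // 2, w - size))
        y0 = max(0, min(int(y) - size // 2, h - size))
        patches.append(img[:, y0:y0 + size, x0:x0 + size])
    return torch.stack(patches)


def embed(img, landmarks, enc_global, enc_local):
    """Concatenate the whole-image embedding with the mean landmark-patch embedding."""
    g = enc_global(img.unsqueeze(0))                                    # (1, D)
    p = enc_local(crop_patches(img, landmarks)).mean(0, keepdim=True)   # (1, D)
    return F.normalize(torch.cat([g, p], dim=1), dim=1)                 # (1, 2D)


if __name__ == "__main__":
    torch.manual_seed(0)
    enc_g, enc_l = SmallEncoder(), SmallEncoder()
    # Fake gallery of 5 blouse images, each with 4 landmarks (e.g. collar/sleeve points).
    gallery = [torch.rand(3, 128, 128) for _ in range(5)]
    lms = [torch.randint(16, 112, (4, 2)).float() for _ in range(5)]
    gallery_emb = torch.cat([embed(im, lm, enc_g, enc_l) for im, lm in zip(gallery, lms)])

    query_emb = embed(torch.rand(3, 128, 128),
                      torch.randint(16, 112, (4, 2)).float(), enc_g, enc_l)
    scores = query_emb @ gallery_emb.T   # cosine similarity, since embeddings are unit-norm
    print("top-3 retrieved indices:", scores.topk(3).indices.tolist())
```

In this toy setup the landmark patches contribute half of the final descriptor, which is one simple way to give the crucial parts of the garment explicit weight relative to the global view; the paper's actual fusion and training details should be taken from the full text rather than this example.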