TY - GEN
T1 - LEARNING OUTFIT COMPATIBILITY WITH GRAPH ATTENTION NETWORK AND VISUAL-SEMANTIC EMBEDDING
AU - Wang, Jianfeng
AU - Cheng, Xiaochun
AU - Wang, Ruomei
AU - Liu, Shaohui
N1 - Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Fashion recommendation is an essential component of online shopping, as it can select and present appealing items to customers. Humans are known to exhibit inconsistent preferences for fashion items owing to their visual aesthetic features and fine-grained differences. Previous research on fashion recommendation has focused mainly on sequential models, most of which consider only complex similarity relationships in fashion compatibility while neglecting the real-world compatibility information often desired in practical applications. To learn fashion compatibility and generate outfits, we propose an approach that jointly learns latent fashion concepts in a visual-semantic space to measure compatibility between items. These fashion concepts are shaped by design elements such as color, material, and silhouette. Accordingly, we model a unified representation that learns different notions of similarity by mapping text descriptions and images into a latent space to obtain high-level representations. Experimental results show that our method effectively achieves the intended results on the fill-in-the-blank and outfit compatibility tasks.
AB - Fashion recommendation is an essential component of online shopping, as it can select and present appealing items to customers. Humans are known to exhibit inconsistent preferences for fashion items owing to their visual aesthetic features and fine-grained differences. Previous research on fashion recommendation has focused mainly on sequential models, most of which consider only complex similarity relationships in fashion compatibility while neglecting the real-world compatibility information often desired in practical applications. To learn fashion compatibility and generate outfits, we propose an approach that jointly learns latent fashion concepts in a visual-semantic space to measure compatibility between items. These fashion concepts are shaped by design elements such as color, material, and silhouette. Accordingly, we model a unified representation that learns different notions of similarity by mapping text descriptions and images into a latent space to obtain high-level representations. Experimental results show that our method effectively achieves the intended results on the fill-in-the-blank and outfit compatibility tasks.
KW - Fashion recommendation
KW - Outfits style
KW - Visual compatibility
KW - Visual-semantic space
UR - https://www.scopus.com/pages/publications/85118500301
U2 - 10.1109/ICME51207.2021.9428401
DO - 10.1109/ICME51207.2021.9428401
M3 - Conference contribution
AN - SCOPUS:85118500301
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
BT - 2021 IEEE International Conference on Multimedia and Expo, ICME 2021
PB - IEEE Computer Society
T2 - 2021 IEEE International Conference on Multimedia and Expo, ICME 2021
Y2 - 5 July 2021 through 9 July 2021
ER -