TY - GEN
T1 - Toward Intelligent Interactive Design
T2 - 31st ACM International Conference on Multimedia, MM 2023
AU - Shi, Jianyang
AU - Zhang, Haijun
AU - Zhou, Dongliang
AU - Zhang, Zhao
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/10/27
Y1 - 2023/10/27
N2 - Traditional fashion design typically requires the expertise of designers, which limits the involvement of ordinary users in the design process. While it would be desirable for users to participate in the preliminary design phase, their lack of basic design knowledge may render them too inexperienced to produce satisfactory designs. To improve design efficiency for common users, we present a novel interactive fashion design framework based on a generative adversarial network (GAN). This framework can assist users in designing fashion items by drawing only rough scribbles and providing simple fashion styles. Specifically, we propose a new cross-domain feature fusion encoder network that maps design image features from different domains into a series of style vectors, which are then fed into a generator. We demonstrate that the learned style vectors can decouple the representations of cross-domain design elements and control the design results through scribbles and style images. Furthermore, we propose a method for rewriting our model with scribbles and style images to allow designers to train our model more easily. To examine the effectiveness of our proposed model, we constructed a large-scale dataset containing 90,000 pairs of fashion item images. Experimental results show that our proposed method outperforms state-of-the-art methods and can effectively control cross-domain image features, suggesting the potential of our model for providing users with an intelligence-driven interactive design tool.
AB - Traditional fashion design typically requires the expertise of designers, which limits the involvement of ordinary users in the design process. While it would be desirable for users to participate in the preliminary design phase, their lack of basic design knowledge may render them too inexperienced to produce satisfactory designs. To improve design efficiency for common users, we present a novel interactive fashion design framework based on a generative adversarial network (GAN). This framework can assist users in designing fashion items by drawing only rough scribbles and providing simple fashion styles. Specifically, we propose a new cross-domain feature fusion encoder network that maps design image features from different domains into a series of style vectors, which are then fed into a generator. We demonstrate that the learned style vectors can decouple the representations of cross-domain design elements and control the design results through scribbles and style images. Furthermore, we propose a method for rewriting our model with scribbles and style images to allow designers to train our model more easily. To examine the effectiveness of our proposed model, we constructed a large-scale dataset containing 90,000 pairs of fashion item images. Experimental results show that our proposed method outperforms state-of-the-art methods and can effectively control cross-domain image features, suggesting the potential of our model for providing users with an intelligence-driven interactive design tool.
KW - disentanglement
KW - fashion data
KW - generative adversarial network
KW - image translation
KW - interactive fashion design
UR - https://www.scopus.com/pages/publications/85179551199
U2 - 10.1145/3581783.3612376
DO - 10.1145/3581783.3612376
M3 - Conference contribution
AN - SCOPUS:85179551199
T3 - MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia
SP - 7152
EP - 7163
BT - MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 29 October 2023 through 3 November 2023
ER -