Parsing-Conditioned Anime Translation: A New Dataset and Method

  • Zhansheng Li
  • Yangyang Xu
  • Nanxuan Zhao
  • Yang Zhou
  • Yongtuo Liu
  • Dahua Lin
  • Shengfeng He*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Anime is an abstract art form that differs substantially from human portraits, posing a challenging misaligned image translation problem beyond the capability of existing methods. The task boils down to a highly ambiguous, unconstrained translation between two domains. To this end, we design a new anime translation framework that derives prior knowledge from a pre-trained StyleGAN model. We introduce disentangled encoders that separately embed structure and appearance information into the same latent code, governed by four tailored losses. Moreover, we develop a FaceBank aggregation method that leverages the generated data of the StyleGAN, anchoring the prediction to produce in-domain anime results. To empower our model and promote research on anime translation, we propose the first anime portrait parsing dataset, Danbooru-Parsing, containing 4,921 densely labeled images across 17 classes. This dataset connects face semantics with appearances, enabling our new constrained translation setting. We further show the editability of our results and extend our method to manga images by generating the first manga parsing pseudo data. Extensive experiments demonstrate the value of our new dataset and method, yielding the first feasible solution to anime translation.
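The two mechanisms named in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: the encoders, latent dimension, blending weight, and nearest-neighbour aggregation below are all simplified stand-ins, showing only the idea of (a) separate structure/appearance encoders writing into one shared latent code and (b) anchoring a predicted code toward a bank of in-domain codes sampled from the generator.

```python
# Hedged sketch, NOT the authors' method: toy disentangled encoding plus a
# nearest-neighbour "FaceBank"-style anchoring step in latent space.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8  # hypothetical; StyleGAN latent codes are typically 512-d

def encode_structure(img):
    # stand-in for a learned structure encoder (e.g. driven by a parsing map)
    return img.mean(axis=0)[:LATENT_DIM]

def encode_appearance(img):
    # stand-in for a learned appearance encoder
    return img.mean(axis=1)[:LATENT_DIM]

def combine(z_struct, z_app):
    # both encoders embed into the SAME latent code (here: a simple sum)
    return z_struct + z_app

def facebank_aggregate(z, bank, k=3):
    # pull z toward its k nearest in-domain codes so the decoded result
    # stays on the target (anime) manifold; 0.5 blend weight is arbitrary
    d = np.linalg.norm(bank - z, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-8)           # inverse-distance weights
    anchor = (bank[idx] * w[:, None]).sum(axis=0) / w.sum()
    return 0.5 * z + 0.5 * anchor

img = rng.normal(size=(16, 16))                # dummy input "image"
bank = rng.normal(size=(100, LATENT_DIM))      # codes sampled from a generator

z = combine(encode_structure(img), encode_appearance(img))
z_anchored = facebank_aggregate(z, bank)
print(z_anchored.shape)  # (8,)
```

In the actual framework the combined code would be decoded by the pre-trained StyleGAN, and the four tailored losses would train the encoders; here the anchoring simply averages nearby bank codes to convey why predictions stay in-domain.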

Original language: English
Article number: 30
Journal: ACM Transactions on Graphics
Volume: 42
Issue number: 3
State: Published - 10 Apr 2023
Externally published: Yes

Keywords

  • Generative adversarial networks
  • image editing
  • image-to-image translation
