
Knowledge Interpolated Conditional Variational Auto-Encoder for Knowledge Grounded Dialogues

  • Xingwei Liang
  • Jiachen Du
  • Taiyu Niu
  • Lanjun Zhou*
  • Ruifeng Xu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In Knowledge Grounded Dialogue (KGD) generation, explicitly modeling the instance-level variety of knowledge specificity and fusing that knowledge seamlessly with the dialogue context remain challenging. This paper presents an innovative approach, the Knowledge Interpolated conditional Variational auto-encoder (KIV), to address these issues. In particular, KIV introduces a novel interpolation mechanism that fuses two latent variables, which independently encode the dialogue context and the grounded knowledge. This fusion of context and knowledge in the semantic space enables the interpolated latent variable to guide the decoder toward generating more contextually rich and engaging responses. We further explore deterministic and probabilistic methods for determining the interpolation weight, which captures the level of knowledge specificity. Comprehensive empirical analysis on the Wizard-of-Wikipedia and Holl-E datasets verifies that the responses generated by our model outperform strong baselines, with notable improvements observed in both automatic metrics and manual evaluation.
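The interpolation mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two Gaussian encoders are replaced by fixed random parameters, and the gate vector `w` is a hypothetical stand-in for a learned scorer of knowledge specificity (the paper's deterministic variant; its probabilistic variant would instead sample the weight from a distribution).

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical encoder outputs: context and grounded knowledge each
# mapped to the parameters of a diagonal Gaussian over a d-dim latent.
d = 8
mu_c, logvar_c = rng.standard_normal(d), rng.standard_normal(d)
mu_k, logvar_k = rng.standard_normal(d), rng.standard_normal(d)

# Independently sample the context latent z_c and knowledge latent z_k.
z_c = reparameterize(mu_c, logvar_c, rng)
z_k = reparameterize(mu_k, logvar_k, rng)

# Deterministic interpolation weight: a sigmoid gate over both latents,
# intended to reflect how knowledge-specific the response should be.
w = rng.standard_normal(2 * d)
lam = 1.0 / (1.0 + np.exp(-w @ np.concatenate([z_c, z_k])))

# Interpolated latent variable passed to the decoder: a convex
# combination of the knowledge and context latents.
z = lam * z_k + (1.0 - lam) * z_c
```

Because `lam` lies in (0, 1), the fused latent `z` stays on the line segment between `z_c` and `z_k` in the semantic space: a weight near 1 lets the grounded knowledge dominate generation, while a weight near 0 keeps the response close to the dialogue context.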

Original language: English
Article number: 8707
Journal: Applied Sciences (Switzerland)
Volume: 13
Issue number: 15
State: Published - Aug 2023
Externally published: Yes

Keywords

  • Conditional Variational auto-encoder (CVAE)
  • Knowledge Grounded Dialogue (KGD)
  • Knowledge Interpolated conditional Variational auto-encoder (KIV)
  • interpolation of latent variables
