
LETSmix: a spatially informed and learning-based domain adaptation method for cell-type deconvolution in spatial transcriptomics

  • Yangen Zhan
  • Yongbing Zhang*
  • Zheqi Hu
  • Yifeng Wang
  • Zirui Zhu
  • Sijing Du
  • Xiangming Yan
  • Xiu Li*
  • *Corresponding author for this work
  • Tsinghua University
  • Harbin Institute of Technology

Research output: Contribution to journal › Article › peer-review

Abstract

Spatial transcriptomics (ST) enables the study of gene expression in its spatial context, but many ST technologies face challenges due to limited resolution, leading to mixtures of cell types at each spot. We present LETSmix, which deconvolves cell types by integrating spatial correlations through a tailored LETS filter that leverages layer annotations, expression similarities, image texture features, and spatial coordinates to refine ST data. Additionally, LETSmix employs a mixup-augmented domain adaptation strategy to address discrepancies between ST data and the reference single-cell RNA sequencing data. Comprehensive evaluations across diverse ST platforms and tissue types demonstrate its high accuracy in estimating cell-type proportions and spatial patterns, surpassing existing methods. Code is available at https://github.com/ZhanYangen/LETSmix.
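The mixup-augmented training described in the abstract can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the LETSmix implementation: it builds hypothetical pseudo-spots by pooling random reference cells (so each spot's label is a cell-type proportion vector) and then applies standard mixup, i.e. convex combinations of sample pairs. All sizes, the `make_pseudo_spots` and `mixup` helpers, and the synthetic data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scRNA-seq reference: 500 cells x 200 genes, 4 cell types.
n_cells, n_genes, n_types = 500, 200, 4
expr = rng.poisson(2.0, size=(n_cells, n_genes)).astype(float)
cell_type = rng.integers(0, n_types, size=n_cells)
labels = np.eye(n_types)[cell_type]          # one-hot per cell

def make_pseudo_spots(expr, labels, n_spots=100, cells_per_spot=8, rng=rng):
    """Sum random cells into pseudo-spots; the training label is the
    cell-type proportion vector of the pooled cells."""
    idx = rng.integers(0, expr.shape[0], size=(n_spots, cells_per_spot))
    spots = expr[idx].sum(axis=1)            # (n_spots, n_genes)
    props = labels[idx].mean(axis=1)         # (n_spots, n_types), rows sum to 1
    return spots, props

def mixup(x, y, alpha=0.2, rng=rng):
    """Mixup augmentation: convex combinations of pairs of samples,
    with the same mixing weight applied to inputs and labels."""
    lam = rng.beta(alpha, alpha, size=(x.shape[0], 1))
    perm = rng.permutation(x.shape[0])
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

spots, props = make_pseudo_spots(expr, labels)
mx, my = mixup(spots, props)
# Mixed labels remain valid proportion vectors (non-negative, sum to 1),
# so they can still supervise a deconvolution model.
```

Because both inputs and proportion labels are mixed with the same weight, every augmented sample stays a plausible "spot": its label is still a probability vector over cell types, which is what makes mixup a natural fit for deconvolution training.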

Original language: English
Article number: 16
Journal: Genome Medicine
Volume: 17
Issue number: 1
DOIs
State: Published - Dec 2025
Externally published: Yes

Keywords

  • Cell-type deconvolution
  • Deep learning
  • Domain adaptation
  • Histological image
  • Mixup
  • Single-cell RNA-seq
  • Spatial correlation
  • Spatial transcriptomics
