Group sparse additive machine with average top-k loss

  • Peipei Yuan
  • Xinge You*
  • Hong Chen
  • Qinmu Peng
  • Yue Zhao
  • Zhou Xu
  • Xiao Yuan Jing
  • Zhenyu He

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Sparse additive models have shown competitive performance in high-dimensional variable selection and prediction owing to their representational flexibility and interpretability. Although their theoretical properties have been studied extensively, few works have addressed the robustness of sparse additive models. In this paper, we employ the robust average top-k (ATk) loss as the classification error measure and propose a new sparse algorithm, named the ATk group sparse additive machine (ATk-GSAM). Beyond robustness, ATk-GSAM exhibits good adaptivity by integrating a data-dependent hypothesis space with a group sparse regularizer. A generalization error bound is established by concentration estimates with empirical covering numbers. In particular, our error analysis shows that ATk-GSAM can achieve the learning rate O(n^{−1/2}) under appropriate conditions. We further analyze the robustness of ATk-GSAM via a sample-weighted procedure interpretation, and provide theoretical guarantees on grouped variable selection. Experimental evaluations on both simulated and benchmark datasets validate the effectiveness and robustness of the new algorithm.
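The average top-k (ATk) loss referenced in the abstract aggregates the k largest per-sample losses rather than the mean over all samples, which limits the influence of easy samples while remaining less outlier-sensitive than the maximum loss. A minimal sketch of this aggregation (using hinge losses purely for illustration; the function name and toy data are our own, not from the paper):

```python
import numpy as np

def average_top_k_loss(individual_losses, k):
    """ATk loss: the mean of the k largest per-sample losses."""
    losses = np.asarray(individual_losses, dtype=float)
    top_k = np.sort(losses)[::-1][:k]  # sort descending, keep k largest
    return top_k.mean()

# Toy binary classification example with hinge losses
y = np.array([1, -1, 1, 1, -1])          # labels in {+1, -1}
scores = np.array([0.8, -0.6, -0.2, 1.5, 0.4])
hinge = np.maximum(0.0, 1.0 - y * scores)  # [0.2, 0.4, 1.2, 0.0, 1.4]

print(average_top_k_loss(hinge, k=2))  # mean of the two largest losses: 1.3
```

Note that k = 1 recovers the maximum loss and k = n recovers the usual average loss, so ATk interpolates between the two extremes.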

Original language: English
Pages (from-to): 1-14
Number of pages: 14
Journal: Neurocomputing
Volume: 395
DOIs
State: Published - 28 Jun 2020
Externally published: Yes

Keywords

  • Additive models
  • Average top-k loss
  • Data dependent hypothesis space
  • Generalization error
  • Robustness
