Abstract
Sparse additive models have shown competitive performance in high-dimensional variable selection and prediction thanks to their representational flexibility and interpretability. Although their theoretical properties have been studied extensively, few works have addressed the robustness of sparse additive models. In this paper, we employ the robust average top-k (ATk) loss as the classification error measure and propose a new sparse algorithm, named the ATk group sparse additive machine (ATk-GSAM). Beyond robustness, ATk-GSAM enjoys good adaptivity by integrating a data-dependent hypothesis space with a group sparse regularizer. A generalization error bound is established via concentration estimates with empirical covering numbers. In particular, our error analysis shows that ATk-GSAM can achieve the learning rate O(n^{-1/2}) under appropriate conditions. We further analyze the robustness of ATk-GSAM through a sample-weighted procedure interpretation, and provide theoretical guarantees on grouped variable selection. Experimental evaluations on both simulated and benchmark datasets validate the effectiveness and robustness of the new algorithm.
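The average top-k (ATk) aggregate loss mentioned in the abstract averages only the k largest per-sample losses, interpolating between the maximum loss (k = 1) and the mean loss (k = n). A minimal NumPy sketch of this aggregate loss (an illustration of the general ATk idea, not the paper's ATk-GSAM implementation; the function name and interface are hypothetical):

```python
import numpy as np

def average_top_k_loss(individual_losses, k):
    """Return the average of the k largest per-sample losses (ATk aggregate loss).

    individual_losses: array-like of nonnegative per-sample losses l_i
    k: number of largest losses to average, 1 <= k <= n
    """
    losses = np.asarray(individual_losses, dtype=float)
    top_k = np.sort(losses)[-k:]  # the k largest losses
    return float(top_k.mean())

# k = 1 recovers the maximum loss; k = n recovers the usual average loss.
losses = [0.5, 2.0, 1.0, 3.0]
print(average_top_k_loss(losses, 2))  # (3.0 + 2.0) / 2 = 2.5
```

Because samples with small losses are excluded from the aggregate only when k < n, tuning k trades off sensitivity to hard examples against robustness to outliers, which is the motivation the abstract gives for using ATk as the error measure.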
| Original language | English |
|---|---|
| Pages (from-to) | 1-14 |
| Number of pages | 14 |
| Journal | Neurocomputing |
| Volume | 395 |
| State | Published - 28 Jun 2020 |
| Externally published | Yes |
Keywords
- Additive models
- Average top-k loss
- Data dependent hypothesis space
- Generalization error
- Robustness