Abstract
Kernel methods provide powerful and flexible tools for nonlinear learning in high-dimensional data analysis, but feature selection remains a challenge in kernel learning. The proposed DOSK method provides a new unified framework that implements kernel methods while simultaneously selecting important variables and identifying a parsimonious subset of knots. A double penalty is employed to encourage sparsity in both the feature weights and the representer coefficients. The authors present a computational algorithm as well as theoretical properties of the DOSK method. In this discussion, we first highlight DOSK's major contributions to the machine learning toolbox. We then discuss its connections to other nonparametric methods in the literature and point out some possible future research directions.
AMS 2000 subject classifications: Primary 62H20, 62F07; secondary 62J05.
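For concreteness, the double-penalized formulation described above can be sketched as follows. This is an illustrative form only, assuming a squared-error loss, an anisotropic Gaussian kernel with nonnegative feature weights $\theta$, and $\ell_1$ penalties on both the feature weights and the representer coefficients $\alpha$; the precise loss, kernel, and penalty choices in the DOSK paper may differ.

$$
\min_{\alpha,\;\theta \ge 0} \;\; \frac{1}{n}\sum_{i=1}^{n}\Big(y_i - \sum_{j=1}^{n}\alpha_j\,K_\theta(x_i, x_j)\Big)^{2} \;+\; \lambda_1\,\|\alpha\|_1 \;+\; \lambda_2\,\|\theta\|_1,
\qquad
K_\theta(x, x') = \exp\Big(-\sum_{l=1}^{p}\theta_l\,(x_l - x'_l)^{2}\Big).
$$

Under this sketch, the two penalties play distinct roles: zero entries of $\theta$ remove the corresponding features from the kernel (variable selection), while zero entries of $\alpha$ drop observations from the representer expansion, leaving a sparse set of knots.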
Original language | English (US)
---|---
Pages (from-to) | 425-428
Number of pages | 4
Journal | Statistics and Its Interface
Volume | 11
Issue number | 3
DOIs |
State | Published - 2018
Keywords
- High dimensional data analysis
- Kernel methods
- Penalty
- Reproducing kernel Hilbert space (RKHS)
- Variable selection
ASJC Scopus subject areas
- Statistics and Probability
- Applied Mathematics