Nonconvex regularizations for feature selection in ranking with sparse SVM. Laporte, L., Flamary, R., Canu, S., et al. (2014). IEEE Transactions on Neural Networks and Learning Systems, 25(6), pp. 1118–1130.
Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, only a few works have focused on integrating feature selection into the learning process. In this paper, we propose a general framework for feature selection in learning to rank using support vector machines with a sparse regularization term. We investigate both classical convex regularizations, such as ℓ1 or weighted ℓ1, and nonconvex regularization terms, such as the log penalty, the minimax concave penalty (MCP), or the ℓp pseudo-norm with p < 1. Two algorithms are proposed: first, an accelerated proximal approach for solving the convex problems; second, a reweighted ℓ1 scheme to address the nonconvex regularizations. We conduct extensive experiments on nine datasets from the Letor 3.0 and Letor 4.0 corpora. Numerical results show that the proposed nonconvex regularizations lead to sparser models while preserving prediction performance: the number of selected features is reduced by up to a factor of six compared with ℓ1 regularization. The software is publicly available on the web.
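The reweighted ℓ1 scheme named in the abstract can be illustrated with a minimal sketch: each outer iteration majorizes a concave penalty (here the log penalty) by a weighted ℓ1 norm at the current iterate, and the resulting convex subproblem is solved with an accelerated proximal (FISTA-style) method. The least-squares loss, step size, and penalty parameters below are illustrative assumptions standing in for the paper's pairwise ranking loss, not the authors' exact implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the weighted l1 norm (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def weighted_l1_fista(grad, w0, weights, step, n_iter=300):
    """Accelerated proximal gradient (FISTA) for min_w loss(w) + sum_j weights[j]*|w[j]|."""
    w, z, t = w0.copy(), w0.copy(), 1.0
    for _ in range(n_iter):
        w_next = soft_threshold(z - step * grad(z), step * weights)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_next + ((t - 1.0) / t_next) * (w_next - w)
        w, t = w_next, t_next
    return w

def reweighted_l1(grad, dim, lam, step, eps=1e-2, n_outer=10):
    """Reweighted l1 for the log penalty lam * sum_j log(eps + |w[j]|).

    Each outer pass solves a weighted-l1 problem whose weights
    lam / (eps + |w[j]|) majorize the concave penalty at the current
    iterate; the first pass is a plain l1 problem.
    """
    w = np.zeros(dim)
    weights = lam * np.ones(dim)
    for _ in range(n_outer):
        w = weighted_l1_fista(grad, w, weights, step)
        weights = lam / (eps + np.abs(w))
    return w

# Toy usage with a least-squares loss standing in for the ranking loss.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))           # e.g., pairwise difference vectors
w_true = np.zeros(20); w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true + 0.1 * rng.standard_normal(100)
grad = lambda w: X.T @ (X @ w - y) / len(y)  # gradient of the smooth loss
step = len(y) / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz constant of grad
w_hat = reweighted_l1(grad, dim=20, lam=0.05, step=step)
print("selected features:", np.flatnonzero(np.abs(w_hat) > 1e-8))
```

The reweighting step is where the extra sparsity over plain ℓ1 comes from: coefficients driven to zero receive a large weight and stay suppressed, while large coefficients are penalized less and so are shrunk less than under a uniform ℓ1 penalty.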