|Type:||Conference paper|
|Title:||Gradient Hyper-parameter Optimization For Manifold Regularization|
|Abstract:||Semi-supervised learning can be defined as the ability to improve the predictive performance of an algorithm by providing it with data that has not been previously labeled. Manifold Regularization is a semi-supervised learning approach that extends the regularization framework to include additional regularization penalties based on the graph Laplacian as the empirical estimator of the underlying manifold. The incorporation of such terms relies on additional hyper-parameters, which, together with the original kernel and regularization parameters, are known to influence algorithm behavior. This paper proposes a gradient approach to the optimization of such hyper-parameters based on the closed form of the generalized cross-validation estimate, valid when the learning optimality conditions can be represented as a linear system, as is the case for Laplacian Regularized Least Squares. For the subset of hyper-parameters that are integer quantities, as is the case for the Laplacian matrix hyper-parameters, we propose optimizing the weight components of a sum of base terms. Results of computational experiments are presented to illustrate the proposed technique. © 2013 IEEE.|
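To make the abstract concrete, the following is a minimal numpy sketch of a Laplacian Regularized Least Squares (LapRLS) solve together with the generalized cross-validation (GCV) estimate that the paper proposes to optimize by gradient methods. The RBF kernel, the kNN graph construction, and all parameter names (`gamma`, `lam_A`, `lam_I`, `k`) are illustrative assumptions, not the authors' exact formulation, and the gradient step over the hyper-parameters is omitted.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel from pairwise squared distances.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def knn_laplacian(X, k=3):
    # Unnormalized graph Laplacian L = D - W of a symmetrized kNN graph.
    n = len(X)
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # nearest neighbors, skipping self
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)  # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W

def laprls_gcv(X, y, labeled, gamma, lam_A, lam_I, k=3):
    """Solve the LapRLS linear system and return (GCV estimate, coefficients).

    labeled is a boolean mask; lam_A and lam_I weigh the ambient (RKHS) and
    intrinsic (graph Laplacian) regularization terms, respectively.
    """
    n = len(X)
    l = int(labeled.sum())
    K = rbf_kernel(X, gamma)
    L = knn_laplacian(X, k)
    J = np.diag(labeled.astype(float))  # selects the labeled points
    # Optimality conditions as a linear system A @ alpha = J @ y
    # (form follows the standard LapRLS closed-form solution).
    A = J @ K + lam_A * l * np.eye(n) + lam_I * (l / n**2) * (L @ K)
    alpha = np.linalg.solve(A, labeled * y)
    # Linear smoother on the labeled outputs: yhat = S @ y with S = K A^{-1} J.
    S = K @ np.linalg.inv(A) @ J
    yhat = K @ alpha
    resid = y[labeled] - yhat[labeled]
    trS = np.trace(S[np.ix_(labeled, labeled)])
    # GCV for a linear smoother: mean squared residual / (1 - tr(S)/l)^2.
    gcv = np.mean(resid**2) / (1.0 - trS / l)**2
    return gcv, alpha
```

Because the prediction is linear in `y`, the GCV numerator and denominator are differentiable in the continuous hyper-parameters (`gamma`, `lam_A`, `lam_I`), which is what makes the gradient approach described in the abstract applicable; the integer graph hyper-parameters (such as `k`) are instead handled in the paper by weighting a sum of base Laplacian terms.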
|Publisher:||IEEE Computer Society|
|Appears in Collections:||Unicamp - Artigos e Outros Documentos|