Please use this identifier to cite or link to this item:
Type: Article
Title: Empirical Evaluation of Resampling Procedures for Optimising SVM Hyperparameters
Author: Wainer, Jacques; Cawley
Abstract: Tuning the regularisation and kernel hyperparameters is a vital step in optimising the generalisation performance of kernel methods, such as the support vector machine (SVM). This is most often performed by minimising a resampling/cross-validation based model selection criterion; however, there seems to be little practical guidance on the most suitable form of resampling. This paper presents the results of an extensive empirical evaluation of resampling procedures for SVM hyperparameter selection, designed to address this gap in the machine learning literature. We tested 15 different resampling procedures on 121 binary classification data sets in order to select the best SVM hyperparameters. We used three very different statistical procedures to analyse the results: the standard multi-classifier/multi-data-set procedure proposed by Demsar, the confidence intervals on the excess loss of each procedure in relation to 5-fold cross-validation, and the Bayes factor analysis proposed by Barber. We conclude that a 2-fold procedure is appropriate to select the hyperparameters of an SVM for data sets of 1000 or more datapoints, while a 3-fold procedure is appropriate for smaller data sets.
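As an illustration of the kind of procedure the abstract describes, the following sketch selects the regularisation parameter C and RBF kernel width gamma of an SVM by minimising a k-fold cross-validation criterion over a grid, using 2-fold resampling as the paper recommends for data sets of 1000 or more points (3-fold for smaller sets). The scikit-learn API, the synthetic data, and the particular grid values are this example's assumptions, not part of the paper:

```python
# Hypothetical sketch of SVM hyperparameter selection via k-fold
# cross-validation; scikit-learn, the synthetic data set, and the
# grid values are illustrative choices, not taken from the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# A synthetic binary classification data set with 1000 points,
# the regime where the paper finds 2-fold CV sufficient.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Grid over the regularisation parameter C and RBF kernel width gamma.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}

# cv=2 gives the 2-fold resampling procedure; use cv=3 for smaller sets.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=2)
search.fit(X, y)

print(search.best_params_)
```

The selected pair is the grid point with the best mean accuracy across the two folds; the final model would then be refit on all the data with those hyperparameters.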
Subject: Hyperparameters
Editor: Microtome Publ
Citation: Journal of Machine Learning Research. Microtome Publ, v. 18, p. , 2017.
Rights: open
Date Issued: 2017
Appears in Collections:Unicamp - Artigos e Outros Documentos

Files in This Item:
File: 000399838300001.pdf  Size: 394.01 kB  Format: Adobe PDF  View/Open

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.