|Type:||Journal article|
|Title:||A data-driven detection optimization framework|
|Author:||de Melo, VHC|
|Abstract:||Due to the large amount of data to be processed by visual applications aiming at extracting a high-level understanding of the scene, low-level methods such as object detection are required to have not only high accuracy but also low computational cost, in order to provide fast and reliable information. Training sets containing samples representing multiple scenes are used to learn object detectors that can be reliably used in different scenarios. In general, information extracted from multiple feature channels is combined to capture the large variability present in these different environments. Although this approach provides accurate detection results, it usually leads to a high computational cost. On the other hand, if characteristics of the scene are known beforehand, a set of simple and fast-to-compute features might be sufficient to provide high accuracy at a low computational cost. Therefore, it is valuable to seek a balance between these two extremes, such that the detection method not only works well in different scenarios but is also able to extract enough information from a scene. We integrate a set of data-driven regression models with a multi-stage human detection method trained to be used in different environments. The regressions are used to estimate the detector response at each stage and the location of the objects. The regression models allow the method to reject a large number of detection windows quickly. Experimental results on human detection show that adding the regression models reduces the computational cost by as much as ten times, with little or no degradation in detection accuracy. (C) 2012 Elsevier B.V. All rights reserved.|
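The early-rejection idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-stage scores, the linear regression form, and all coefficient values below are hypothetical, chosen only to show how a regression on the partial score can predict the final detector response and discard a window before all stages run.

```python
def stage_score(window, stage):
    """Hypothetical per-stage score: here each stage reads one feature."""
    return window[stage]

def predict_final_response(partial_score, stage, coeffs):
    """Linear regression fitted offline per stage (coefficients are made up):
    predicted final response = a * partial_score + b."""
    a, b = coeffs[stage]
    return a * partial_score + b

def detect(window, coeffs, reject_threshold=0.0, n_stages=4):
    """Return (final_score, stages_evaluated).

    After each stage, a regression model predicts what the full-cascade
    response would be; if the prediction falls below the threshold, the
    window is rejected early and the remaining stages are skipped.
    """
    partial = 0.0
    for stage in range(n_stages):
        partial += stage_score(window, stage)
        predicted = predict_final_response(partial, stage, coeffs)
        if predicted < reject_threshold:
            return None, stage + 1  # rejected early
    return partial, n_stages

# Toy regression coefficients, one (a, b) pair per stage (illustrative only).
coeffs = [(1.0, 0.5), (1.0, 0.3), (1.0, 0.1), (1.0, 0.0)]

# A window with a strongly negative first-stage score is rejected after
# one stage; a consistently positive window runs through all stages.
rejected = detect([-2.0, 0.0, 0.0, 0.0], coeffs)
accepted = detect([1.0, 1.0, 1.0, 1.0], coeffs)
```

The computational saving comes from the rejected case: most background windows never pay for the later, more expensive stages.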
|Keywords:||Partial least squares|
|Editor:||Elsevier Science Bv|
|Appears in Collections:||Unicamp - Artigos e Outros Documentos|
Files in This Item:
There are no files associated with this item.