|Type:||Journal article|
|Title:||QuMinS: Fast and scalable querying, mining and summarizing multi-modal databases|
|Abstract:||Given a large image set in which very few images have labels, how to guess labels for the remaining majority? How to spot images that need brand-new labels, different from the predefined ones? How to summarize these data to route the user's attention to what really matters? Here we answer all these questions. Specifically, we propose QuMinS, a fast, scalable solution to two problems: (i) Low-labor labeling (LLL) - given an image set in which very few images have labels, find the most appropriate labels for the rest; and (ii) Mining and attention routing - in the same setting, find clusters, the top-N_O outlier images, and the N_R images that best represent the data. Experiments on satellite images spanning up to 2.25 GB show that, in contrast to state-of-the-art labeling techniques, QuMinS scales linearly with the data size, running up to 40 times faster than top competitors (GCap) while achieving equal or better accuracy; it also spots images that potentially require unpredicted labels, and it works even with tiny initial label sets, i.e., nearly five examples. We also report a case study of our method's practical usage, showing that QuMinS is a viable tool for automatic coffee-crop detection from remote sensing images. (C) 2013 Elsevier Inc. All rights reserved.|
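The low-labor labeling setting described above can be illustrated with a minimal sketch: semi-supervised label propagation by random walk with restart (RWR) on a k-nearest-neighbor similarity graph, the general family of techniques that graph-based methods such as GCap build on. This is a hedged, illustrative example only, not the QuMinS algorithm; the function name `rwr_label`, the feature representation, and all parameter choices are assumptions for the sketch.

```python
import numpy as np

def rwr_label(features, seed_labels, k=3, restart=0.15, iters=100):
    """Illustrative semi-supervised labeling via random walk with
    restart on a k-NN graph; NOT the QuMinS implementation.

    features:    (n, d) array of image feature vectors
    seed_labels: dict {image index: label} for the few labeled images
    Returns a dict {image index: label} covering all n images.
    """
    n = len(features)
    # Pairwise Euclidean distances between feature vectors.
    dist = np.linalg.norm(features[:, None] - features[None, :], axis=2)
    np.fill_diagonal(dist, np.inf)  # no self-edges
    # Symmetric k-NN adjacency matrix.
    A = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dist[i])[:k]:
            A[i, j] = A[j, i] = 1.0
    # Column-stochastic transition matrix P[i, j] = Pr(j -> i).
    P = A / A.sum(axis=0, keepdims=True)
    labels = sorted(set(seed_labels.values()))
    scores = np.zeros((n, len(labels)))
    for c, lab in enumerate(labels):
        # Restart vector concentrated on the seeds of this label.
        r = np.zeros(n)
        seeds = [i for i, l in seed_labels.items() if l == lab]
        r[seeds] = 1.0 / len(seeds)
        p = r.copy()
        # Power iteration: p = (1 - c) * P p + c * r.
        for _ in range(iters):
            p = (1 - restart) * P @ p + restart * r
        scores[:, c] = p
    # Keep the seeds; assign every other image its highest-scoring label.
    out = dict(seed_labels)
    for i in range(n):
        if i not in out:
            out[i] = labels[int(scores[i].argmax())]
    return out
```

With two well-separated point clusters and one seed per cluster, the remaining points inherit the label of the cluster they fall into, mirroring the paper's claim that tiny initial label sets (a handful of examples) can suffice when similar images are densely connected in the graph.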
|Subject:||Query by example|
|Publisher:||Elsevier Science Inc|
|Appears in Collections:||Unicamp - Artigos e Outros Documentos|
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.