Please use this identifier to cite or link to this item: http://repositorio.unicamp.br/jspui/handle/REPOSIP/69475
Type: Journal article
Title: Open set source camera attribution and device linking
Author: Costa, FD
Silva, E
Eckmann, M
Scheirer, WJ
Rocha, A
Abstract: Camera attribution approaches in digital image forensics have most often been evaluated in a closed set context, whereby all devices are known during training and testing time. However, in a real investigation, we must assume that innocuous images from unknown devices will be recovered, which we would like to remove from the pool of evidence. In pattern recognition, this corresponds to what is known as the open set recognition problem. This article introduces new algorithms for open set modes of image source attribution (identifying whether or not an image was captured by a specific digital camera) and device linking (identifying whether or not a pair of images was acquired from the same digital camera without the need for physical access to the device). Both algorithms rely on a new multi-region feature generation strategy, which serves as a projection space for the class of interest and emphasizes its properties, and on decision boundary carving, a novel method that models the decision space of a trained SVM classifier by taking advantage of a few known cameras to adjust the decision boundaries to decrease false matches from unknown classes. Experiments including thousands of unconstrained images collected from the web show a significant advantage for our approaches over the most competitive prior work. (C) 2013 Elsevier B.V. All rights reserved.
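The decision boundary carving step described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the synthetic features, the target false-match rate, and the quantile-based threshold search are all stand-ins for the paper's actual PRNU-style features and carving procedure.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for image features: the camera of interest
# (positive class) vs. a few known "other" cameras (negative class).
X_pos = rng.normal(loc=1.0, scale=1.0, size=(100, 8))
X_neg = rng.normal(loc=-1.0, scale=1.0, size=(100, 8))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [-1] * 100)

clf = SVC(kernel="linear").fit(X, y)

# Carving (sketch): move the SVM decision threshold toward the
# positive class until false matches on the known negatives drop
# below a target rate, trading some true matches for robustness
# against cameras never seen in training.
scores_neg = clf.decision_function(X_neg)
target_fmr = 0.01  # assumed acceptable false-match rate
threshold = np.quantile(scores_neg, 1.0 - target_fmr)

def attribute(features, clf=clf, threshold=threshold):
    """Return True if the image features are attributed to the camera of interest."""
    return clf.decision_function(np.atleast_2d(features))[0] > threshold
```

In this sketch the shifted threshold, rather than the SVM's default zero bias, decides attribution, which is the open-set intuition: an image from an unknown camera is more likely to fall below the carved boundary and be rejected.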
Subject: Open set recognition
Camera attribution
Device linking
Decision boundary carving
Country: Netherlands
Editor: Elsevier Science B.V.
Rights: closed access
Identifier DOI: 10.1016/j.patrec.2013.09.006
Issue Date: 2014
Appears in Collections: Artigos e Materiais de Revistas Científicas - Unicamp

Files in This Item:
There are no files associated with this item.
