Please use this identifier to cite or link to this item: http://repositorio.unicamp.br/jspui/handle/REPOSIP/320184
Type: Journal Article
Title: Rate-energy-accuracy Optimization Of Convolutional Architectures For Face Recognition
Author: Bondi
L; Baroffio
L; Cesana
M; Tagliasacchi
M; Chiachia
G; Rocha
A
Abstract: Face recognition systems based on Convolutional Neural Networks (CNNs) or convolutional architectures currently represent the state of the art, achieving an accuracy comparable to that of humans. Nonetheless, two issues might hinder their adoption on distributed battery-operated devices (e.g., visual sensor nodes, smartphones, and wearable devices). First, convolutional architectures are usually computationally demanding, especially when the depth of the network is increased to maximize accuracy. Second, transmitting the output features produced by a CNN might require a bitrate higher than the one needed for coding the input image. Therefore, in this paper we address the problem of optimizing the energy-rate-accuracy characteristics of a convolutional architecture for face recognition. We carefully profile a CNN implementation on a Raspberry Pi device and optimize the structure of the neural network, achieving a 17-fold speedup without significantly affecting recognition accuracy. Moreover, we propose a coding architecture custom-tailored to the features extracted by such a model. © 2015 Elsevier Inc. All rights reserved.
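The rate-fidelity trade-off the abstract describes can be illustrated with a minimal sketch. This is not the paper's custom coding architecture: it simply applies uniform scalar quantization to a stand-in CNN feature vector, uses the empirical entropy of the quantized symbols as a rough bitrate bound, and uses cosine similarity as a fidelity proxy. The feature dimensionality (4096) and the quantization step sizes are illustrative assumptions, not values from the paper.

    # Illustrative sketch only: rate vs. fidelity for a quantized feature vector.
    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for a CNN feature vector; the real features would come from the network.
    features = rng.standard_normal(4096).astype(np.float32)

    def empirical_entropy_bits(symbols):
        """Shannon entropy of the quantized symbols, in bits per symbol."""
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    for step in (0.1, 0.5, 1.0, 2.0):
        q = np.round(features / step).astype(np.int32)    # quantize
        rec = q * step                                     # dequantize
        rate_bits = empirical_entropy_bits(q) * features.size  # rough bitrate bound
        cos = float(features @ rec /
                    (np.linalg.norm(features) * np.linalg.norm(rec)))
        print(f"step={step:4.1f}  rate ~{rate_bits/8/1024:6.2f} KiB  "
              f"cosine fidelity={cos:.4f}")

Larger quantization steps shrink the bitrate needed to transmit the features but degrade their fidelity; the paper's point is that, without such coding, the feature bitrate can exceed that of the compressed input image itself.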
Subject: Convolutional Architectures
Convolutional Neural Networks (CNNs)
Optimization
Coding
Face Recognition
Analyze-Then-Compress (ATC)
Deep Learning
Deep Neural Networks
Editor: Academic Press Inc. Elsevier Science
Rights: closed access
Identifier DOI: 10.1016/j.jvcir.2015.12.015
Address: http://www-sciencedirect-com.ez88.periodicos.capes.gov.br/science/article/pii/S1047320315002552
Issue Date: 2016
Appears in Collections: Unicamp - Articles and Other Documents

Files in This Item:
File: 000371280200012.pdf
Size: 1.4 MB
Format: Adobe PDF