000002196 001__ 2196
000002196 005__ 20181220113733.0
000002196 020__ $$a978-3-319-68344-7
000002196 0247_ $$2DOI$$a10.1007/978-3-319-68345-4_16
000002196 037__ $$aCHAPTER
000002196 041__ $$aeng
000002196 245__ $$aSemi-automatic training of an object recognition system in scene camera data using gaze tracking and accelerometers
000002196 260__ $$c2017$$bSpringer$$aCham
000002196 269__ $$a2017-07
000002196 300__ $$app. 175-184
000002196 506__ $$avisible
000002196 520__ $$aObject detection and recognition algorithms usually require large, annotated training sets, and the creation of such datasets requires expensive manual annotation. Eye tracking can help in the annotation procedure: humans use vision constantly to explore the environment and to plan motor actions, such as grasping an object. In this paper we investigate the possibility of semi-automatically training an object recognition system using eye tracking and accelerometer data together with scene camera data, learning from the natural hand-eye coordination of humans. Our approach involves three steps. First, sensor data are recorded using eye tracking glasses in combination with accelerometers and surface electromyography sensors, which are usually applied to control prosthetic hands. Second, a set of patches is extracted automatically from the scene camera data while grasping an object. Third, a convolutional neural network is trained and tested using the extracted patches. Results show that the parameters of eye-hand coordination can be used to train an object recognition system semi-automatically. These can be exploited with proper sensors to fine-tune a convolutional neural network for object detection and recognition. This approach opens interesting options for training computer vision and multi-modal data integration systems and lays the foundations for future applications in robotics. In particular, this work targets the improvement of prosthetic hands by recognizing the objects that a person may wish to use. However, the approach can easily be generalized.$$9eng
000002196 546__ $$aEnglish
000002196 540__ $$acorrect
000002196 592__ $$aHEG-VS
000002196 592__ $$bInstitut Informatique de gestion
000002196 592__ $$cEconomie et Services
000002196 65017 $$aEconomie/gestion
000002196 6531_ $$9eng$$asemi-automatic training
000002196 6531_ $$9eng$$aobject recognition
000002196 6531_ $$9eng$$aeye tracking
000002196 700__ $$uUniversity of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis) ; Rehabilitation Engineering Laboratory, ETH Zurich$$aCognolato, Matteo
000002196 700__ $$uUniversity of Rome “La Sapienza”, Italy$$aGraziani, Mara
000002196 700__ $$uUniversity of Rome “La Sapienza”, Italy$$aGiordaniello, Francesca
000002196 700__ $$uDepartment of Neurology, University Hospital of Zurich, Switzerland$$aSaetta, Gianluca
000002196 700__ $$uClinic of Plastic Surgery, Padova University Hospital, Italy$$aBassetto, Franco
000002196 700__ $$uDepartment of Neurology, University Hospital of Zurich, Switzerland$$aBrugger, Peter
000002196 700__ $$uUniversity of Rome “La Sapienza”, Italy$$aCaputo, Barbara
000002196 700__ $$uUniversity of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis)$$aMüller, Henning
000002196 700__ $$aAtzori, Manfredo$$uUniversity of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis)
000002196 773__ $$tComputer Vision Systems : 11th International Conference, ICVS 2017, Shenzhen, China, July 10-13, 2017
000002196 8564_ $$uhttps://hesso.tind.io/record/2196/files/Cognolato_2017_semi-automatic_training.pdf$$s1328942
000002196 8564_ $$xpdfa$$uhttps://hesso.tind.io/record/2196/files/Cognolato_2017_semi-automatic_training.pdf?subformat=pdfa$$s2135266
000002196 909CO $$pGLOBAL_SET$$ooai:hesso.tind.io:2196
000002196 906__ $$aGREEN
000002196 950__ $$aI2
000002196 980__ $$achapitre