Vision-based human activity recognition can provide rich contextual information but has traditionally been computationally prohibitive. We present a characterisation of five convolutional neural networks (DenseNet169, MobileNet, ResNet50, VGG16, VGG19) implemented with TensorFlow Lite and running on three state-of-the-art Android mobile phones. The networks were trained to recognise eight modes of transportation from camera images using the SHL Locomotion and Transportation dataset. We analyse the effect of thread count and of the inference back-end (CPU, GPU, Android Neural Networks API) when classifying the images provided by the rear camera of the phones, and report processing time and classification accuracy.
Details
Title
Benchmarking deep classifiers on mobile devices for vision-based transportation recognition
Author(s)
Richoz, Sebastien (University of Sussex, Brighton, United Kingdom); Perez-Uribe, Andres (School of Engineering and Management Vaud, HES-SO, University of Applied Sciences and Arts Western Switzerland); Birch, Philip (University of Sussex, Brighton, United Kingdom); Roggen, Daniel (University of Sussex, Brighton, United Kingdom)
Date
2019-09
Published in
UbiComp/ISWC '19 Adjunct: Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, 9-13 September 2019, London, United Kingdom
Pages
pp. 803-807
Publisher
London, United Kingdom, 9-13 September 2019
Extent
5 pages
Presented at
2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and 2019 ACM International Symposium on Wearable Computers (UbiComp/ISWC '19), London, United Kingdom, 2019-09-09 to 2019-09-13