Automated design space exploration for optimised deployment of DNNs on Arm Cortex-A CPUs

de Prado, Miguel ; Mundy, Andrew ; Saeed, Rabia ; Denna, Maurizio ; Pazos Escudero, Nuria ; Benini, Luca (School of Engineering – HE-Arc Ingénierie, HES-SO // University of Applied Sciences Western Switzerland ; ETH Zürich, Zürich, Switzerland)

The spread of deep learning on embedded devices has prompted the development of numerous methods to optimise the deployment of deep neural networks (DNNs). Work has mainly focused on: i) efficient DNN architectures, ii) network optimisation techniques such as pruning and quantisation, iii) optimised algorithms to speed up the execution of the most computationally intensive layers, and iv) dedicated hardware to accelerate the data flow and computation. However, there is a lack of research on cross-level optimisation, as the space of approaches becomes too large to test exhaustively and obtain a globally optimised solution, leading to suboptimal deployment in terms of latency, accuracy, and memory. In this work, we first detail and analyse the methods to improve the deployment of DNNs across the different levels of software optimisation. Building on this knowledge, we present an automated exploration framework to ease the deployment of DNNs. The framework relies on a reinforcement learning search that, combined with a deep learning inference framework, automatically explores the design space and learns an optimised solution that improves performance and reduces memory usage on embedded CPU platforms. We present results for state-of-the-art DNNs on a range of Arm Cortex-A CPU platforms, achieving up to 4× improvement in performance and over 2× reduction in memory, with negligible loss in accuracy with respect to the BLAS floating-point implementation.
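The abstract's reinforcement-learning search over per-layer deployment options can be illustrated with a minimal sketch. The option set, the epsilon-greedy policy, and the cost model below are all illustrative assumptions, not the paper's actual framework; a real system would benchmark each candidate configuration on the target Cortex-A CPU rather than use an analytic stand-in.

```python
# Hypothetical sketch of RL-based design-space exploration: each layer of a
# DNN picks a (convolution algorithm, data type) pair, and an epsilon-greedy
# search learns which per-layer choices minimise total latency.
# All names and numbers here are illustrative assumptions.
import random

# Illustrative per-layer deployment choices.
OPTIONS = [("gemm", "fp32"), ("gemm", "int8"),
           ("winograd", "fp32"), ("direct", "int8")]
NUM_LAYERS = 4


def measure_latency(config):
    """Stand-in cost model; a real framework would time inference on-device."""
    algo_cost = {"gemm": 1.0, "winograd": 0.7, "direct": 0.9}
    dtype_cost = {"fp32": 1.0, "int8": 0.5}
    return sum(algo_cost[a] * dtype_cost[d] for a, d in config)


def search(episodes=200, eps=0.2, lr=0.1, seed=0):
    rng = random.Random(seed)
    # q[layer][option] estimates the reward of picking that option for a layer.
    q = [[0.0] * len(OPTIONS) for _ in range(NUM_LAYERS)]
    best_cfg, best_lat = None, float("inf")
    for _ in range(episodes):
        # Epsilon-greedy choice for every layer: explore with prob. eps,
        # otherwise exploit the current best estimate.
        idx = [rng.randrange(len(OPTIONS)) if rng.random() < eps
               else max(range(len(OPTIONS)), key=lambda i: q[layer][i])
               for layer in range(NUM_LAYERS)]
        cfg = [OPTIONS[i] for i in idx]
        lat = measure_latency(cfg)
        reward = -lat  # faster configurations earn higher reward
        for layer, i in enumerate(idx):
            q[layer][i] += lr * (reward - q[layer][i])
        if lat < best_lat:
            best_cfg, best_lat = cfg, lat
    return best_cfg, best_lat
```

Under this toy cost model the search quickly moves away from the all-`gemm`/`fp32` baseline (latency 4.0) toward cheaper per-layer choices; the same loop structure applies when the reward is a measured latency (or a latency/accuracy/memory trade-off) on real hardware.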


Keywords:
Article Type:
scientific
Faculty:
Ingénierie et Architecture
School:
HE-Arc Ingénierie
Institute:
No institute
Date:
2020-12
Pagination:
14 p.
Published in:
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Numeration (vol. no.):
early access
DOI:
ISSN:
0278-0070
Appears in Collection:

Note: access to this file is restricted.


 Record created 2021-04-21, last modified 2021-04-29

Fulltext: PDF
