Delay-aware DNN inference throughput maximization in edge computing via jointly exploring partitioning and parallelism
Authors
Li, Jing
Liang, Weifa
Li, Yuchen
Xu, Zichuan
Jia, Xiaohua
Publisher
IEEE
Abstract
Mobile Edge Computing (MEC) has emerged as a promising paradigm for handling the explosive growth of mobile applications by offloading compute-intensive tasks to an MEC network for processing. The rise of deep learning has brought new vitality to the intelligent Internet of Things (IoT), and edge intelligence has emerged to provision real-time deep neural network (DNN) inference services to users. To accelerate the DNN inference of a request in an MEC network, the DNN inference model can usually be partitioned into two connected parts: one part is processed on the local IoT device of the request, and the other part is processed on a cloudlet (server) in the MEC network. The DNN inference can be further accelerated by allocating multiple threads on the cloudlet to which the request is assigned. In this paper, we study a novel delay-aware DNN inference throughput maximization problem that aims to maximize the number of delay-aware DNN service requests admitted, by accelerating each DNN inference through jointly exploring DNN model partitioning and multi-thread parallelism of DNN inference. To this end, we first show that the problem is NP-hard. We then devise a constant approximation algorithm for it. We finally evaluate the performance of the proposed algorithm through experimental simulations, and the results demonstrate that the proposed algorithm is promising.
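To make the two acceleration ideas in the abstract concrete, below is a minimal sketch (not the paper's algorithm) of partitioning a DNN at a split point so that the first layers run on the local IoT device and the remaining layers run on a cloudlet, with multiple CPU threads allocated to the cloudlet-side inference. The toy model, the split point `split`, and the thread count are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# A toy sequential DNN standing in for an arbitrary layered inference model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

split = 4                       # hypothetical split point for the partition
device_part = model[:split]     # part processed on the local IoT device
cloudlet_part = model[split:]   # part processed on the assigned cloudlet

x = torch.randn(1, 3, 64, 64)   # one DNN inference request
with torch.no_grad():
    intermediate = device_part(x)   # local computation; in a real system this
                                    # tensor is sent over the MEC network

torch.set_num_threads(4)        # multi-thread parallelism on the cloudlet
with torch.no_grad():
    output = cloudlet_part(intermediate)
print(output.shape)             # torch.Size([1, 10])
```

The split point controls the trade-off the paper's problem formulation navigates: an earlier split offloads more computation to the cloudlet at the cost of transmitting a larger intermediate tensor, while more cloudlet threads shorten the server-side processing delay for the admitted request.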
Source
2021 IEEE 46th Conference on Local Computer Networks (LCN)
Restricted until
2099-12-31