[2006.14090] Neural Architecture Design for GPU-Efficient Networks

Many mission-critical systems rely on GPUs for inference, which requires not only high recognition accuracy but also low response latency. Although many studies are devoted to optimizing the structure of deep models for efficient inference, most of them do not leverage the architecture of modern GPUs for fast inference, leading to suboptimal performance. To address this issue, we propose a general principle for designing GPU-efficient networks based on extensive empirical studies. This design principle enables us to search for GPU-efficient network structures effectively with a simple and lightweight method, as opposed to most Neural Architecture Search (NAS) methods, which are complicated and computationally expensive. Based on the proposed framework, we design a family of GPU-Efficient Networks, or GENets for short. We conducted extensive evaluations on multiple GPU platforms and inference engines. While achieving $\geq 81.3\%$ top-1 accuracy on ImageNet, GENet is up to $6.4$ times faster than EfficientNet on GPUs.
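To make the idea of a lightweight, latency-constrained structure search concrete, here is a minimal sketch (not the paper's actual algorithm): candidate block types are scored against per-block latencies measured on the target GPU, and the search keeps the best-scoring configuration that fits a latency budget. All block names, latency numbers, and accuracy proxies below are hypothetical placeholders; in practice the latencies would come from benchmarking each block on the target GPU and inference engine.

```python
from itertools import product

# Hypothetical per-block latencies (ms), as if benchmarked on the target GPU.
BLOCK_LATENCY_MS = {"conv3x3": 0.30, "bottleneck": 0.18, "depthwise": 0.10}
# Hypothetical accuracy proxy per block (e.g. from a small predictor).
BLOCK_SCORE = {"conv3x3": 3.0, "bottleneck": 2.2, "depthwise": 1.5}

def search(num_stages: int, budget_ms: float):
    """Pick one block type per stage, maximizing the accuracy proxy
    subject to a total-latency budget (exhaustive for this tiny space)."""
    best, best_score = None, float("-inf")
    for cfg in product(BLOCK_LATENCY_MS, repeat=num_stages):
        latency = sum(BLOCK_LATENCY_MS[b] for b in cfg)
        score = sum(BLOCK_SCORE[b] for b in cfg)
        if latency <= budget_ms and score > best_score:
            best, best_score = cfg, score
    return best, best_score

cfg, score = search(num_stages=3, budget_ms=0.71)
print(cfg, score)
```

The exhaustive loop is only viable because the toy space is tiny; the point is that once latency is measured empirically per block, the search itself can be simple and cheap rather than requiring a full NAS pipeline.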

Keywords: gpu
Date: 2020/06/28 11:21

Related Entries

Read more [2002.02385] Product Kanerva Machines: Factorized Bayesian Memory
0 users, 1 mentions 2020/02/23 06:51
Read more [2002.09402] Accessing Higher-level Representations in Sequential Transformers with Feedback Memory
0 users, 2 mentions 2020/02/24 23:21
Read more [2004.02105] Unsupervised Domain Clusters in Pretrained Language Models
0 users, 2 mentions 2020/04/08 23:21
Read more [2004.07437] Non-Autoregressive Machine Translation with Latent Alignments
0 users, 1 mentions 2020/04/18 02:21
Read more [2006.11834] AdvAug: Robust Adversarial Augmentation for Neural Machine Translation
0 users, 1 mentions 2020/06/23 23:21