[1902.10222] ROMANet: Fine-Grained Reuse-Driven Off-Chip Memory Access Management and Data Organization for Deep Neural Network Accelerators

Enabling high energy efficiency is crucial for embedded implementations of deep learning. Several studies have shown that DRAM-based off-chip memory accesses are among the most energy-consuming operations in deep neural network (DNN) accelerators, and thereby prevent designs from reaching their full efficiency potential. DRAM access energy depends on both the number of accesses required and the energy consumed per access; minimizing total DRAM access energy is therefore an important optimization problem. Towards this, we propose the ROMANet methodology, which aims to reduce the number of memory accesses by searching for an appropriate data partitioning and scheduling for each layer of a network through design space exploration, based on knowledge of the available on-chip memory and the data reuse factors. Moreover, ROMANet also targets decreasing the number of DRAM row-buffer conflicts and misses by exploiting the DRAM mul…
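The per-layer search described above can be illustrated with a minimal sketch: a brute-force design space exploration over conv-layer tile sizes that keeps each candidate's on-chip footprint within the buffer budget and picks the tiling with the lowest estimated DRAM traffic. The cost model here (weights refetched per spatial tile, inputs refetched per output-channel tile) and all function names are simplified assumptions for illustration, not ROMANet's actual formulation.

```python
# Hypothetical sketch of reuse-driven tiling DSE; not ROMANet's actual model.
from itertools import product
import math

def dram_traffic(K, C, H, W, R, Tk, Tc, Th, Tw):
    """Estimate DRAM accesses (in elements) for one conv layer.
    K: output channels, C: input channels, HxW: output map, R: kernel size,
    Tk/Tc/Th/Tw: tile sizes. Assumed reuse model: weights are refetched once
    per spatial tile, inputs once per output-channel tile, outputs written once."""
    n_k = math.ceil(K / Tk)
    n_h, n_w = math.ceil(H / Th), math.ceil(W / Tw)
    weights = K * C * R * R * n_h * n_w               # refetched per spatial tile
    inputs = C * (H + R - 1) * (W + R - 1) * n_k      # refetched per Tk tile
    outputs = K * H * W                               # written once
    return weights + inputs + outputs

def buffer_footprint(R, Tk, Tc, Th, Tw):
    """On-chip elements needed to hold one tile of each operand."""
    return (Tk * Tc * R * R                           # weight tile
            + Tc * (Th + R - 1) * (Tw + R - 1)        # input tile incl. halo
            + Tk * Th * Tw)                           # output tile

def explore(K, C, H, W, R, buf_elems):
    """Return (min_traffic, (Tk, Tc, Th, Tw)) over a power-of-two tile grid."""
    best = None
    for Tk, Tc, Th, Tw in product([1, 2, 4, 8, 16, 32, 64], repeat=4):
        if Tk > K or Tc > C or Th > H or Tw > W:
            continue                                  # tile larger than layer
        if buffer_footprint(R, Tk, Tc, Th, Tw) > buf_elems:
            continue                                  # exceeds on-chip memory
        cost = dram_traffic(K, C, H, W, R, Tk, Tc, Th, Tw)
        if best is None or cost < best[0]:
            best = (cost, (Tk, Tc, Th, Tw))
    return best

# Example: 64x64-channel 3x3 conv on a 32x32 map, 16K-element on-chip buffer.
print(explore(K=64, C=64, H=32, W=32, R=3, buf_elems=16384))
```

Larger tiles amortize refetches (fewer accesses) but consume more on-chip memory, which is exactly the trade-off such an exploration navigates per layer.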

1 mentions: @ElectronNest
Date: 2020/08/04 02:21
