[2102.08619] Rethinking Co-design of Neural Architectures and Hardware Accelerators

Neural architectures and hardware accelerators have been two driving forces behind progress in deep learning. Previous works typically optimize hardware for a fixed model architecture, or a model architecture for fixed hardware, and the dominant hardware platform explored in this prior work is the FPGA. In our work, we target the joint optimization of hardware and software configurations on an industry-standard edge accelerator. We systematically study the importance of, and strategies for, co-designing neural architectures and hardware accelerators. We make three observations: 1) the software search space has to be customized to fully leverage the targeted hardware architecture, 2) the search over the model architecture and the hardware architecture should be done jointly to achieve the best of both worlds, and 3) different use cases lead to very different search outcomes. Our experiments show that the joint search method consistently outperforms previous platform-aware neural architecture search.
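The core idea — searching the model space and the accelerator space jointly rather than fixing one and tuning the other — can be sketched with a toy example. This is not the paper's code: the search spaces, the accuracy proxy, and the energy proxy below are all invented stand-ins for real measurements, kept only to show why a sequential search can miss the jointly optimal pair.

```python
# Illustrative sketch (hypothetical spaces and objectives, not the paper's
# method): joint exhaustive search over a combined model + accelerator
# design space, compared with a sequential model-then-hardware search.
import itertools

# Toy search spaces: (depth, width) for the model, (PE-array size, on-chip
# memory units) for the accelerator.
MODEL_SPACE = [(d, w) for d in (2, 4, 8) for w in (32, 64, 128)]
HW_SPACE = [(pes, mem) for pes in (16, 64, 256) for mem in (1, 4)]

def accuracy(depth, width):
    # Toy proxy: bigger models are more accurate, with diminishing returns.
    return 1.0 - 1.0 / (depth * width) ** 0.5

def energy(depth, width, pes, mem):
    # Toy proxy: more PEs speed up compute, but large PE arrays are wasted
    # (under-utilized) on small models, and memory costs scale with PEs.
    util = min(1.0, (depth * width) / (pes * 8))
    return (depth * width) / (pes * util) + 0.2 * mem * pes

def score(model, hw):
    (d, w), (pes, mem) = model, hw
    return accuracy(d, w) - 0.001 * energy(d, w, pes, mem)

# Joint search: evaluate every (model, hardware) pair.
joint_best = max(itertools.product(MODEL_SPACE, HW_SPACE),
                 key=lambda mh: score(*mh))

# Sequential baseline: pick the model under a default accelerator config,
# then tune the accelerator for that fixed model.
default_hw = HW_SPACE[0]
best_model = max(MODEL_SPACE, key=lambda m: score(m, default_hw))
best_hw = max(HW_SPACE, key=lambda h: score(best_model, h))

print("joint     :", joint_best, round(score(*joint_best), 4))
print("sequential:", (best_model, best_hw),
      round(score(best_model, best_hw), 4))
```

Because the joint search covers the full product space, its best pair is never worse than the sequential result; the gap between the two is a (toy) analogue of the accuracy/energy gains the paper reports from co-design.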

2 mentions: @arankomatsuzaki, @arxivabs
Date: 2021/02/21 15:52

Referring Tweets

@arankomatsuzaki Rethinking Co-design of Neural Architectures and Hardware Accelerators Reduces energy consumption of Edge TPU by up to 2x over EfficientNet w/ the same accuracy on Imagenet by jointly searching over hardware & neural architecture design spaces. t.co/jh7hIgA9Fh t.co/lycQk5ihvV
@arxivabs @Underfox3 Check out the abstract! t.co/sgx9lIN5Rm

Related Entries

Representation and Bias in Multilingual NLP: Insights from Controlled Experiments on Conditional Lan...
0 users, 1 mentions 2020/10/03 02:22
Transformer protein language models are unsupervised structure learners | OpenReview
0 users, 1 mentions 2020/10/03 03:52
Context-Aware Temperature for Language Modeling | OpenReview
0 users, 1 mentions 2020/10/03 23:22
[2012.15127] Improving Zero-Shot Translation by Disentangling Positional Information
0 users, 1 mentions 2021/01/01 03:51
[2102.11107] Towards Causal Representation Learning
0 users, 13 mentions 2021/02/23 03:51