Learn how to make BERT smaller and faster

Let's look at compression methods for neural networks, such as quantization and pruning, and then apply one of them to BERT using TensorFlow Lite.
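
As a rough illustration of the kind of compression the post covers, here is a minimal sketch of post-training dynamic-range quantization with the TensorFlow 2.x TFLite converter. The SavedModel path and output filename are placeholders, and the linked post (from 2019) may use a different conversion path, so treat this as an assumption-laden example rather than the article's exact recipe.

import tensorflow as tf

# Load an already fine-tuned BERT classifier exported as a SavedModel.
# The path is a placeholder, not the model from the linked post.
converter = tf.lite.TFLiteConverter.from_saved_model("bert_classifier_savedmodel")

# Post-training dynamic-range quantization: weights are stored as 8-bit
# integers, which typically shrinks the model to roughly a quarter of its
# float32 size and speeds up CPU inference.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

# Write the compressed model so it can be served with the TFLite interpreter.
with open("bert_classifier_quantized.tflite", "wb") as f:
    f.write(tflite_model)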

8 mentions: @alanmnichol, @Rasa_HQ, @rlebron_bioinfo, @data4gud, @chetanhere
Keywords: bert
Date: 2019/08/08 14:56

Referring Tweets

@alanmnichol Great work by our (undergraduate!) ML intern Sam on compressing big language models like BERT t.co/uIKhkSx4y1 If you'd like to intern with us too, just ping me (DMs open) or apply for a research role and mention internships.
@Rasa_HQ Check out our recent blogpost about how to apply compression methods to #BERT using #TensorFlow Lite. #conversationalAI #NLP t.co/f1EbKQmvMV
@chetanhere The article below is a good starting point for building a model that actually works in production. #machinelearning #computervision #python t.co/rILctRQZgD

Related Entries

Read more GitHub - soskek/bert-chainer: Chainer implementation of "BERT: Pre-training of Deep Bidirectional Tr...
7 users, 0 mentions 2018/12/02 18:01
Read more [DL Hacks]BERT: Pre-training of Deep Bidirectional Transformers for L…
4 users, 5 mentions 2018/12/07 04:31
Read more [DL Reading Group] BERT: Pre-training of Deep Bidirectional Transformers for Lang…
0 users, 0 mentions 2018/10/20 12:15
Read more GitHub - huggingface/pytorch-pretrained-BERT: The Big-&-Extending-Repository-of-Transformers: PyTorc...
1 user, 7 mentions 2019/03/04 21:47
Read more Fine-tuning BERT as a document classification model with huggingface's transformers on tensorflow2 - Memo Pad
0 users, 1 mentions 2019/10/22 12:50