Applying Linearly Scalable Transformers to Model Longer Protein Sequences | Synced

Researchers proposed a new Transformer architecture called "Performer", based on a mechanism they call fast attention via orthogonal random features (FAVOR). FAVOR approximates standard softmax attention with time and space costs that grow linearly, rather than quadratically, with sequence length, which allows the model to handle much longer inputs such as protein sequences.
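To illustrate how this kind of random-feature attention achieves linear scaling, the sketch below approximates the softmax kernel with positive random features and rearranges the matrix products so the cost is O(n·m·d) instead of O(n²·d) in sequence length n. It is a minimal, non-causal NumPy illustration under assumed hyperparameters (`num_features`, plain Gaussian rather than orthogonal projections), not the authors' reference implementation.

```python
# Minimal sketch of FAVOR-style random-feature attention (assumptions noted above).
import numpy as np

def random_feature_map(x, omega):
    """Map inputs x (n, d) to positive random features (n, m) whose inner
    products approximate the softmax kernel exp(q . k) in expectation."""
    m = omega.shape[1]
    projection = x @ omega                                  # (n, m)
    norm_sq = np.sum(x ** 2, axis=-1, keepdims=True) / 2.0  # ||x||^2 / 2
    return np.exp(projection - norm_sq) / np.sqrt(m)

def favor_style_attention(Q, K, V, num_features=256, seed=0):
    """Approximate softmax attention without forming the n x n attention matrix."""
    n, d = Q.shape
    rng = np.random.default_rng(seed)
    # Random projection; the paper draws orthogonal Gaussian blocks,
    # a plain Gaussian matrix is used here for brevity.
    omega = rng.standard_normal((d, num_features))
    scale = d ** -0.25  # splits the usual 1/sqrt(d) scaling between Q and K
    q_prime = random_feature_map(Q * scale, omega)          # (n, m)
    k_prime = random_feature_map(K * scale, omega)          # (n, m)
    # Linear-attention identity: (q' k'^T) V = q' (k'^T V), computed right-to-left.
    kv = k_prime.T @ V                                      # (m, d_v)
    normalizer = q_prime @ k_prime.sum(axis=0)              # (n,)
    return (q_prime @ kv) / normalizer[:, None]

if __name__ == "__main__":
    # Tiny usage example on random data.
    n, d = 1024, 64
    rng = np.random.default_rng(1)
    Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
    out = favor_style_attention(Q, K, V)
    print(out.shape)  # (1024, 64)
```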

Keywords: transformer
Date: 2020/07/31 18:04
