The Stanford Natural Language Processing Group

This talk is part of the NLP Seminar Series.

Date: 10:00am - 11:00am PT, Jan 14, 2021
Venue: Zoom (link hidden)

As models like BERT, T5, and GPT-* have grown larger, more powerful, and more widespread, we've also grown from seeing them as black boxes to having some understanding of what they learn and how they behave. Viewing these models as contextual encoders, I'll present a few of our recent findings about what kind of knowledge they capture, how this knowledge is organized, a…

