Weeks 1-3: Background Lectures
Neural network basics/history
Neural network architectures
Week 4: Behavioral analyses
Required
Linzen et al. (2016). Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
Gulordava et al. (2018). Colorless Green Recurrent Networks Dream Hierarchically
Wilcox et al. (2018). What do RNN Language Models Learn about Filler–Gap Dependencies?
Chaves (2020). What Don't RNN Language Models Learn About Filler-Gap Dependencies?
Schuster et al. (2020). Harnessing the linguistic signal to predict scalar inferences
Week 5: Diagnostic classifiers
Required
Giulianelli et al. (2018). Under the Hood: Using Diagnostic Classifiers to Investigate and Improve How Language Models Track Agreement Information
Qian et al. (2016). Analyzing Linguistic Knowledge in Sequential Model of Sentence
Adi et al. (2016). Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
Week 6: Adaptation-as-priming
Required
Prasad et al. (2019). Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
van Schijndel and Linzen (2018). A Neural Model of Adaptation in Reading
Lepori et al. (2020). Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs.
Weeks 7-8: Probe validation
Required (Week 7)
McCoy et al. (2019). Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
Required (Week 8)
Voita and Titov (2020). Information-Theoretic Probing with Minimum Description Length
Hewitt and Liang (2019). Designing and Interpreting Probes with Control Tasks
Pimentel et al. (2020). Information-Theoretic Probing for Linguistic Structure
Weeks 9-10: Group projects
Tuesdays: Project outlines/discussion
Thursdays: Group-suggested paper discussion
Group 1: NMT attention probing
Reading: Wiegreffe and Pinter (2019). Attention is Not Not Explanation
Optional: Jain and Wallace (2019). Attention is Not Explanation
Reading: Response post by Byron Wallace
Group 2: Morphology in word-level LMs
Reading: Xu et al. (2018). Incorporating Latent Meanings of Morphological Compositions to Enhance Word Embeddings
Paper overview: Gulordava et al. (2018). Colorless Green Recurrent Networks Dream Hierarchically
Week 11: Misc probing
Tuesday Reading (RSA): Chrupała and Alishahi (2019). Correlating neural and symbolic representations of language
Thursday Reading (ablation; optional): Lakretz et al. (2019). The emergence of number and syntax units in LSTM language models
Weeks 12-13: Semi-finals (No class)
There are two CL conferences happening (virtually) during this period.
Registering for one gets the other free. Register here.
Cost: $100 ($50 for EMNLP/CoNLL registration and $50 for ACL membership)
Register by Oct 30.
The 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
The SIGNLL Conference on Computational Natural Language Learning (CoNLL)
Week 14: Poverty of the Stimulus
Bender and Koller (2020). Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data
Davis & van Schijndel (2020). Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment
Bisk et al. (unpublished). Experience grounds language
Week 15: Misc Probing
Tuesday Reading (Iterated Learning - TA): Ren et al. (2020). Compositional Languages Emerge in a Neural Iterated Learning Model
Thursday: Mini-discussions of other topics of interest