Syllabus
Tentative Schedule
Weeks 1-3: Background Lectures
Neural network basics/history
PyTorch overview
Neural network architectures
Week 4: Behavioral analyses
Required
Linzen et al. (2016). Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
Optional
Gulordava et al. (2018). Colorless Green Recurrent Networks Dream Hierarchically
Wilcox et al. (2018). What do RNN Language Models Learn about Filler–Gap Dependencies?
Chaves (2020). What Don't RNN Language Models Learn About Filler-Gap Dependencies?
Schuster et al. (2020). Harnessing the linguistic signal to predict scalar inferences
Week 5: Diagnostic classifiers
Required
Giulianelli et al. (2018). Under the Hood: Using Diagnostic Classifiers to Investigate and Improve How Language Models Track Agreement Information
Optional
Qian et al. (2016). Analyzing Linguistic Knowledge in Sequential Model of Sentence
Adi et al. (2016). Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
Jumelet et al. (2019). Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment.
Week 6: Adaptation-as-priming
Required
Prasad et al. (2019). Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
Optional
van Schijndel and Linzen (2018). A Neural Model of Adaptation in Reading
Lepori et al. (2020). Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs.
Bhattacharya and van Schijndel (2020). Filler-gaps that neural networks fail to generalize.
Weeks 7-8: Probe validation
Required (Week 7)
McCoy et al. (2019). Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
Required (Week 8)
Voita and Titov (2020). Information-Theoretic Probing with Minimum Description Length
Optional
Hewitt and Liang (2019). Designing and Interpreting Probes with Control Tasks
Pimentel et al. (2020). Information-Theoretic Probing for Linguistic Structure
Weeks 9-10: Group projects
Tuesdays: Project outlines/discussion
Thursdays: Group-suggested paper discussion