Computational Linguistics 2 (Fall 2022)

While LING 4424: Computational Linguistics 1 focuses on symbolic computational linguistics methods (n-gram smoothing, hidden Markov modeling, probabilistic context-free grammars, etc.), CL2 provides an introduction to neural networks and to techniques for inferring the linguistic knowledge they encode. This course is a work in progress, so feedback and suggestions are appreciated.

Syllabus

pdf

Tentative Schedule

Weeks 1-3: Background Lectures

Neural network basics/history
PyTorch overview
Neural network architectures
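As a taste of the background lectures, here is a minimal sketch of a one-hidden-layer network's forward pass, written in plain NumPy rather than PyTorch so it stands alone; all shapes and names here are illustrative, not course material.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer network: linear -> ReLU -> linear."""
    h = np.maximum(0.0, W1 @ x + b1)  # hidden layer with ReLU activation
    return W2 @ h + b2                # output logits

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # toy 4-dimensional input
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # 4 -> 8 hidden units
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)   # 8 -> 2 output logits
logits = forward(x, W1, b1, W2, b2)
print(logits.shape)
```

The PyTorch version covered in lecture replaces the explicit matrices with `nn.Linear` modules and adds automatic differentiation, but the computation is the same.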

Week 4: Behavioral analyses

Required
Linzen et al. (2016). Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
Optional
Gulordava et al. (2018). Colorless Green Recurrent Networks Dream Hierarchically
Wilcox et al. (2018). What do RNN Language Models Learn about Filler–Gap Dependencies?
Chaves (2020). What Don't RNN Language Models Learn About Filler-Gap Dependencies?
Schuster et al. (2020). Harnessing the linguistic signal to predict scalar inferences
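The behavioral paradigm in these papers reduces to: construct minimal pairs that differ only in grammaticality, then check whether the model assigns the higher score to the grammatical member. A sketch of that evaluation harness, with a deliberately fake `toy_score` standing in for a real language model's log-probability:

```python
# Minimal-pair evaluation harness. The scorer below is a fake stand-in
# (a hand-written agreement rule); in practice it would be an LM log-probability.
pairs = [
    # (grammatical, ungrammatical)
    ("the keys to the cabinet are here", "the keys to the cabinet is here"),
    ("the author of the books is here", "the author of the books are here"),
]

def toy_score(sentence: str) -> float:
    """Fake scorer: rewards verbs that agree with the head noun (illustrative only)."""
    words = sentence.split()
    subject = words[1]                                   # head noun in these toy items
    verb = next(w for w in words if w in ("is", "are"))
    plural = subject.endswith("s")
    return 1.0 if (plural and verb == "are") or (not plural and verb == "is") else 0.0

accuracy = sum(toy_score(g) > toy_score(b) for g, b in pairs) / len(pairs)
print(f"agreement accuracy: {accuracy:.2f}")
```

Linzen et al.'s agreement results come from exactly this comparison, run at scale over naturally occurring sentences with intervening attractor nouns.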

Week 5: Diagnostic classifiers

Required
Giulianelli et al. (2018). Under the Hood
Optional
Qian et al. (2016). Analyzing Linguistic Knowledge in Sequential Model of Sentence
Adi et al. (2016). Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
Jumelet et al. (2019). Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment
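The core recipe shared by these papers: freeze the network, collect its hidden states, and train a small "diagnostic" classifier to predict a linguistic property from them; high probe accuracy suggests the property is linearly decodable. A toy illustration in which the hidden states and the encoded property are synthetic, fabricated purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a frozen network's hidden states: 200 vectors, dim 10.
# Pretend dimension 3 linearly encodes a binary property (e.g., subject number).
H = rng.normal(size=(200, 10))
y = (H[:, 3] > 0).astype(float)

# Diagnostic classifier: a linear probe fit by least squares, then thresholded.
w, *_ = np.linalg.lstsq(H, y - 0.5, rcond=None)
preds = (H @ w > 0).astype(float)
acc = np.mean(preds == y)
print(f"probe accuracy: {acc:.2f}")  # high accuracy -> property is linearly decodable
```

In the papers themselves the probe is typically logistic regression or a small MLP over real LSTM states, but the pipeline is the same.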

Week 6: Adaptation-as-priming

Required
Prasad et al. (2019). Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
Optional
van Schijndel and Linzen (2018). A Neural Model of Adaptation in Reading
Lepori et al. (2020). Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs
Bhattacharya and van Schijndel (2020). Filler-gaps that neural networks fail to generalize

Weeks 7-8: Probe validation

Required (Week 7)
McCoy et al. (2019). Right for the Wrong Reasons
Required (Week 8)
Voita and Titov (2020). Information-Theoretic Probing with Minimum Description Length
Optional
Hewitt and Liang (2019). Designing and Interpreting Probes with Control Tasks
Pimentel et al. (2020). Information-Theoretic Probing for Linguistic Structure
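Hewitt and Liang's control-task idea can be sketched in a few lines: fit the same probe once on the real labels and once on randomly reassigned labels; a *selective* probe does much better on the former, whereas a probe that merely memorizes does well on both. A toy version with synthetic hidden states (all data fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_accuracy(H, y, steps=400, lr=1.0):
    """Fit a logistic-regression probe by gradient descent; return train accuracy."""
    w, b = np.zeros(H.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))
        w -= lr * H.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))
    return np.mean((p > 0.5).astype(float) == y)

H = rng.normal(size=(300, 10))           # synthetic "hidden states"
y_real = (H[:, 2] > 0).astype(float)     # property linearly encoded in dim 2
y_control = rng.permutation(y_real)      # control task: labels randomly reassigned

selectivity = probe_accuracy(H, y_real) - probe_accuracy(H, y_control)
print(f"selectivity: {selectivity:.2f}")  # large gap -> probe tracks real structure
```

Voita and Titov's MDL probing reaches a similar goal by a different route, replacing the accuracy gap with the description length of the labels given the representations.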

Weeks 9-10: Group projects

Tuesdays: Project outlines/discussion
Thursdays: Group-suggested paper discussion