Computational Seminar 2022 - Chaos Theory in Computational Linguistics

Neural networks compress corpus statistics through their intermediate layers. This compression has been argued to strip away surface features and, in language processing, to distill the input statistics into abstract linguistic groupings that exist underlyingly in the data. On this view, linguistic abstractions can emerge in a network's representation space simply because they are the most efficient dimensions along which to compress or cluster the observed statistics. In this class, we will read and discuss research that has examined these emergent properties and theorized about the ways emergent meaning can be structured. A toy illustration of the core idea follows below.
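The following is a minimal sketch of that idea, not a model from any of the assigned readings: two hypothetical word classes (e.g., noun-like vs. verb-like) differ only in their co-occurrence statistics, and compressing the raw counts to a low-dimensional representation (standing in here for a network's intermediate layer, approximated by truncated SVD) recovers the abstract class distinction without it ever being labeled. All names and parameters in the sketch are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)

    n_words, n_contexts = 40, 200
    labels = np.array([0] * 20 + [1] * 20)  # hidden class of each word (never shown to the model)

    # Each class prefers a different half of the contexts: its "distributional signature".
    class_profiles = np.vstack([
        np.r_[np.full(100, 5.0), np.full(100, 0.5)],  # class 0 expected counts
        np.r_[np.full(100, 0.5), np.full(100, 5.0)],  # class 1 expected counts
    ])
    counts = rng.poisson(class_profiles[labels])  # noisy word-by-context count matrix

    # Compress the surface statistics to two dimensions; the SVD here is a
    # stand-in for the compression performed by a network's intermediate layer.
    embeddings = TruncatedSVD(n_components=2, random_state=0).fit_transform(counts.astype(float))

    # Cluster the compressed representations and compare against the hidden classes.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
    print("agreement with hidden classes (ARI):", adjusted_rand_score(labels, clusters))

With a strong distributional signal, the adjusted Rand index comes out near 1.0: the abstract grouping falls out of compression alone, which is the emergence claim the seminar readings interrogate in far richer dynamical settings.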

Schedule

Week 1: Chaos Theory in CL

Syllabus
Course Introduction

Week 2: Linking Up and Digging In

Guest and Martin (2021). On logical inference over brains, behaviour, and artificial neural networks
Tabor and Hutchins (2004). Evidence for Self-Organized Sentence Processing: Digging-In Effects

Week 3: Dynamical Modelling with Discrete States

Smith and Vasishth (2021). A software toolkit for modeling human sentence parsing: An approach using continuous-time, discrete-state stochastic dynamical systems

Week 4: Cue-Based Retrieval

Lewis, Vasishth, and Van Dyke (2006). Computational principles of working memory in sentence comprehension
Vasishth, Nicenboim, Engelmann, and Burchert (2019). Computational Models of Retrieval Processes in Sentence Processing

Week 5: Gradient Symbolic Computation 1

Cho, Goldrick, and Smolensky (2017). Incremental parsing in a continuous dynamical system: sentence processing in Gradient Symbolic Computation
McCoy, Linzen, Dunbar, and Smolensky (2019). RNNs Implicitly Implement Tensor Product Representations

(Week 6: No class)

Week 7: Gradient Symbolic Computation 2

Cho, Goldrick, and Smolensky (2022). Parallel parsing in a Gradient Symbolic Computation parser

Week 8: Models of Memory

Kahana (2020). Computational Models of Memory Search

(Weeks 9-10: Group project planning/discussion)

(Week 11: Spring Break)

Week 12: Processing Errors in Case-Marked Languages

Apurva and Husain (2021). Parsing errors in Hindi: Investigating limits to verbal prediction in an SOV language

(Weeks 13-16: Group project analysis and discussion)