I am an assistant professor of linguistics at Cornell University. I am very interested in the incremental representations that humans use to process language, and in the differences between how language is used and how it is processed. To explore these topics, I study the relationships between computational language models and psycholinguistic data (e.g., reading times), and I analyze neural network representations of language to understand which aspects of language can be learned directly from language statistics, without experience in the real world (i.e., through ungrounded learning).

Outside of work, I enjoy travel, dinner parties, and gardening.

I manage the Computational Psycholinguistics Discussions research group (C.Psyd) and am part of the Cornell Computational Linguistics Lab (CLab) and the Cornell Natural Language Processing Group (Cornell NLP).

Recent News

Sept 18: Two papers accepted at CoNLL:
1) Bhattacharya and van Schijndel (2020): Neural networks encode abstract filler-gap existence but do not learn more abstract clusterings over kinds of filler-gaps.
2) Davis and van Schijndel (2020): Transformers encode implicit causality (IC) verb biases but fail to use that knowledge to make correct predictions. This validates Hartshorne’s theory that IC is learnable from language sequences, but it suggests that the language modeling objective prevents models from using this knowledge.

Aug 27: Submitted a paper showing that garden path effects cannot be predicted by surprisal alone: "Single-stage prediction models do not explain the magnitude of syntactic disambiguation difficulty." Feedback appreciated!

May 13: Gave an invited talk to the CPL Lab at MIT: "Language is not Language Processing."

April 30: Forrest Davis’ paper was accepted to CogSci! It explores the ability to learn situation knowledge and discourse representations from plain text.

April 4: Forrest Davis’ paper was accepted to ACL! It explores aspects of language comprehension that cannot be learned by current language models.

March 19: Forrest Davis’ CUNY presentation went well. The work explores the ability to learn situation knowledge and discourse representations from plain text.