Mission: Impossible Language Models
ACL 2024
Abstract
Chomsky and others have very directly claimed that large language models
(LLMs) are equally capable of learning languages that are possible and
impossible for humans to learn. However, there is very little published
experimental evidence to support such a claim. Here, we develop a set of
synthetic impossible languages of differing complexity, each designed by
systematically altering English data with unnatural word orders and grammar
rules. These languages lie on an impossibility continuum: at one end are
languages that are inherently impossible, such as random and irreversible
shuffles of English words, and at the other, languages that may not be
intuitively impossible but are often considered so in linguistics, particularly
those with rules based on counting word positions. We report on a wide range of
evaluations to assess the capacity of GPT-2 small models to learn these
uncontroversially impossible languages, and crucially, we perform these
assessments at various stages throughout training to compare the learning
process for each language. Our core finding is that GPT-2 struggles to learn
impossible languages when compared to English as a control, challenging the
core claim. More importantly, we hope our approach opens up a productive line
of inquiry in which different LLM architectures are tested on a variety of
impossible languages in an effort to learn more about how LLMs can be used as
tools for these cognitive and typological investigations.
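To make the two ends of the impossibility continuum concrete, here is a minimal sketch of the kinds of perturbations described above: a random, irreversible shuffle of a sentence's words, and a rule defined purely by counting word positions. The function names and the specific count-based rule are illustrative assumptions, not the authors' actual implementation.

```python
import random

def nondeterministic_shuffle(tokens, rng=random):
    # One inherently impossible language: randomly (and hence
    # irreversibly) permute the words of each sentence, destroying
    # word order entirely.
    shuffled = tokens[:]
    rng.shuffle(shuffled)
    return shuffled

def count_based_rule(tokens):
    # A rule based on counting word positions, of the kind linguists
    # consider impossible for natural language: reverse the words at
    # even (0-based) positions while leaving odd positions in place.
    # (Illustrative only; the paper's actual rules differ.)
    evens = [t for i, t in enumerate(tokens) if i % 2 == 0][::-1]
    out = []
    for i, t in enumerate(tokens):
        out.append(evens.pop(0) if i % 2 == 0 else t)
    return out

print(count_based_rule("a cat sat on my mat".split()))
# -> ['my', 'cat', 'sat', 'on', 'a', 'mat']
```

Both transformations preserve the vocabulary of the English input while making the mapping back to English either impossible (the shuffle) or dependent on positional counting rather than hierarchical structure (the count-based rule).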