The induction problems facing language learners have played a central role in debates about the types of learning biases that exist in the human brain. Many linguists have argued that the learning biases necessary to solve these induction problems must be both innate and language-specific (i.e., the Universal Grammar (UG) hypothesis). Though there have been several recent high-profile investigations of the types of learning biases required, the UG hypothesis remains the dominant assumption for a large segment of linguists due to the lack of studies addressing central phenomena in generative linguistics. To address this, we focus on how to learn constraints on long-distance dependencies, sometimes called syntactic islands. We use formal acceptability judgment data to identify the target state of learning for syntactic island constraints, and conduct a corpus analysis of child-directed data to confirm that there does indeed appear to be an induction problem when learning these constraints. We then create a computational model that successfully learns the pattern of acceptability judgments observed in formal experiments, based on realistic input data. Crucially, while this modeled learner does require several types of learning biases to work in concert, it does not require any (clearly) innate, domain-specific biases. This suggests that syntactic island constraints can in principle be learned without relying on UG. We discuss the consequences of this learner for the learning bias debates, as well as questions raised by the nature of the linguistic knowledge that the learner requires.