-----

Phonotactics refers to the restrictions a language places on how sounds may combine into words. Phonotactic knowledge is relevant to many aspects of speech production and perception, and speakers of a language develop strong intuitions about what constitutes a possible word. Infants also display sensitivity to the phonotactics of their language(s) as early as 5 months, suggesting that phonotactic learning occurs in parallel with word learning. This talk describes two recent studies that combine experimental work with computational model comparison to better understand the mechanisms underlying phonotactic learning. The first study compares models of phonotactic acquisition in infancy on their ability to predict results from a series of infant looking-time experiments. The second study compares two commonly used models of phonotactic probability with differing theoretical assumptions on their ability to predict results from eight adult experiments across four languages. In addition to offering greater insight into the mechanisms by which phonotactic learning progresses, these studies demonstrate how computational modeling can serve both as a bridge between theoretical and empirical work and as a vital tool for theory comparison.

-----