Transformational grammars are generally thought not to sit particularly well with models of linguistic performance. Early versions of transformational grammar were formalized and, in principle, amenable to being integrated into models of performance, but these potential connections did not last long: interpreting the transformational competence component via the most obvious linking hypotheses (e.g. taking transformations to be real-time operations) led to incorrect predictions, and the computational complexity of the formalism (perhaps among other factors) made it difficult to formulate more subtle alternative linking hypotheses. In this talk I will review a growing body of work that allows us to see transformational grammars in a somewhat new light, bringing with it new prospects for integrating them into models of tasks such as sentence comprehension and language acquisition. The new perspective grows in part out of certain empirically driven developments in mainstream syntactic theory since the mid-1990s, and draws on ideas from the theory of tree automata. I'll show a couple of concrete "toy examples" from my own work where the adjusted perspective provides the basis for formulating new linking hypotheses that can broaden the empirical footprint of transformational grammars: one connecting to information-theoretic complexity metrics for sentence comprehension, and the other generalizing stack-based parsing algorithms for context-free grammars.
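For readers unfamiliar with the last point, here is a minimal sketch, not from the talk itself, of the kind of stack-based recognizer for a context-free grammar that the abstract alludes to generalizing. The toy grammar and the helper name `recognize` are illustrative assumptions; a nonterminal on top of the prediction stack is expanded by a rule, and a terminal is matched against the input.

```python
# Toy context-free grammar: each nonterminal maps to its list of
# right-hand sides (illustrative assumption, not from the talk).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["chased"], ["slept"]],
}

def recognize(tokens):
    """Return True iff `tokens` is derivable from S.

    Explores configurations (prediction stack, input position)
    depth-first. Nonterminals on top of the stack are expanded by
    each of their rules; terminals are matched against the input.
    """
    agenda = [(["S"], 0)]              # initial configuration
    while agenda:
        stack, i = agenda.pop()
        if not stack:
            if i == len(tokens):       # stack and input both exhausted
                return True
            continue
        top, rest = stack[0], stack[1:]
        if top in GRAMMAR:             # nonterminal: predict each rule
            for rhs in GRAMMAR[top]:
                agenda.append((list(rhs) + rest, i))
        elif i < len(tokens) and tokens[i] == top:
            agenda.append((rest, i + 1))   # terminal: match and advance
    return False
```

The stack here holds predicted grammar symbols, which is exactly the data structure whose generalization (to the richer structures of transformational derivations) the abstract gestures at.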

© UC Irvine School of Social Sciences - 3151 Social Sciences Plaza, Irvine, CA 92697-5100 - 949.824.2766