In recent years, rapid progress has been made in the learning of context-free and context-sensitive languages using distributional learning (Clark & Eyraud, 2007; Clark, 2010); the recent extension of these results to Multiple Context-Free Grammars (Yoshinaka, 2011), and therefore to Minimalist Grammars, means that the classes of languages learnable with these techniques now plausibly include the class of natural languages. That said, all of these results concern merely weak learnability, that is, learnability of the set of strings; as a result it has been claimed (Berwick et al., 2011) that they are irrelevant to linguistics, which should instead be concerned with strong learnability: learnability of appropriate sets of structural descriptions.
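
To give a rough sense of the distributional idea behind these results, the sketch below (a minimal illustration, not the actual algorithms of Clark & Eyraud or Yoshinaka; the corpus and function names are invented) collects the contexts in which each substring occurs and groups substrings that share the same contexts. Such congruence classes are the seed of a weakly learned grammar.

```python
from collections import defaultdict

def contexts_of_substrings(corpus):
    """Map each substring to the set of contexts (left, right) in which it
    occurs. This is the basic bookkeeping behind distributional learning;
    actual learners build grammar rules on top of tables like this."""
    table = defaultdict(set)
    for sentence in corpus:
        words = sentence.split()
        n = len(words)
        for i in range(n):
            for j in range(i + 1, n + 1):
                sub = tuple(words[i:j])
                context = (tuple(words[:i]), tuple(words[j:]))
                table[sub].add(context)
    return table

def congruence_classes(table):
    """Group substrings occurring in exactly the same set of contexts;
    these classes play the role of nonterminals in the learned grammar."""
    classes = defaultdict(list)
    for sub, ctxs in table.items():
        classes[frozenset(ctxs)].append(sub)
    return [subs for subs in classes.values() if len(subs) > 1]

if __name__ == "__main__":
    corpus = ["the dog sleeps", "the cat sleeps",
              "the dog runs", "the cat runs"]
    for group in congruence_classes(contexts_of_substrings(corpus)):
        print(group)
```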

In this talk, I will argue for three points. First, the methodological point that strong learnability is either technically incoherent or irrelevant for linguistics, and that what should be studied instead is the weak learnability of sound/meaning pairings. Secondly, that weak learnability is in fact not that far from strong learning once we consider that the examples from which the child learns consist in reality not just of syntactically well-formed utterances, but of utterances that are also semantically well formed. Weak learning of this sublanguage requires implicitly learning the appropriate set of semantic dependencies; indeed, it is approximately equivalent to a naive version of strong learning. Finally, I will discuss some recent research on deriving appropriate sets of structural descriptions from these learned representations, making the implicit dependencies explicit. These structural descriptions turn out to be similar in some respects to those hypothesised by syntacticians: tree-structured representations with bundles of features as labels.
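
To make the final point concrete, here is a minimal sketch of what such a structural description might look like as a data structure: a tree whose node labels are bundles of features rather than atomic categories. The feature names and the toy example are purely illustrative assumptions, not taken from the talk.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    """A node in a structural description: a bundle of features as its
    label, an ordered list of children, and (for leaves) the word."""
    features: dict
    children: list = field(default_factory=list)
    word: Optional[str] = None

    def pretty(self, indent=0):
        label = ",".join(f"{k}={v}" for k, v in self.features.items())
        head = " " * indent + f"[{label}]" + (f" {self.word}" if self.word else "")
        return "\n".join([head] + [c.pretty(indent + 2) for c in self.children])

# A toy description of "the dog sleeps"; features are invented for illustration.
tree = Node(
    {"cat": "S"},
    [
        Node({"cat": "D", "num": "sg"},
             [Node({"cat": "Det"}, word="the"),
              Node({"cat": "N", "num": "sg"}, word="dog")]),
        Node({"cat": "V", "num": "sg"}, word="sleeps"),
    ],
)
print(tree.pretty())
```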
