We will be reading the following articles or using them as references:

Andre, E., Rehm, M., Minker, W., & Buehler, D. 2004. Endowing Spoken Language Dialogue Systems with Emotional Intelligence. Proceedings of Affective Dialogue Systems.

Allen, J. & Perrault, C. R. 1980. Analyzing Intention in Utterances. Artificial Intelligence, 15, 143-178.

Bever, T. & Poeppel, D. 2010. Analysis by Synthesis: A (Re-)Emerging Program of Research for Language and Vision. Biolinguistics, 4(2-3), 174-200.

Carlson, A., Betteridge, J., Kisiel, B., Settles, B., Hruschka Jr., E. R., & Mitchell, T. M. 2010. Toward an Architecture for Never-Ending Language Learning. Proceedings of the Conference on Artificial Intelligence (AAAI).

Chan, E. & Lignos, C. 2011. Investigating the Relationship Between Linguistic Representation and Computation through an Unsupervised Model of Human Morphology Learning. Research on Language and Computation, 8(2-3), 209-238.

Elsner, M., Goldwater, S., & Eisenstein, J. 2012. Bootstrapping a Unified Model of Lexical and Phonetic Acquisition. Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics.

Fazly, A., Alishahi, A., & Stevenson, S. 2010. A probabilistic computational model of cross-situational word learning. Cognitive Science, 34, 1017-1063.

Frank, S., Goldwater, S., & Keller, F. 2010. Using Sentence Type Information for Syntactic Category Acquisition. Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL.

Hidaka, S. & Smith, L. 2010. A Single Word in a Population of Words. Language Learning and Development, 6, 206-222.

Hinton, G. 2007. Learning multiple layers of representation. Trends in Cognitive Sciences, 11(10), 428-434.

Hinton, G. & Salakhutdinov, R. 2011. Discovering Binary Codes for Documents by Learning Deep Generative Models. Topics in Cognitive Science, 3, 74-91.

Klein, D. & Manning, C. 2002. A Generative Constituent-Context Model for Improved Grammar Induction. Proceedings of ACL.

Klein, D. & Manning, C. 2004. Corpus-Based Induction of Syntactic Structure: Models of Dependency and Constituency. Proceedings of ACL.

Krishnamurthy, J. & Mitchell, T. 2011. Which Noun Phrases Denote Which Concepts? Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics.

Kwiatkowski, T., Goldwater, S., Zettlemoyer, L., & Steedman, M. 2012. A Probabilistic Model of Syntactic and Semantic Acquisition from Child-Directed Utterances and their Meanings. Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics.

Lignos, C. 2011. Modeling Infant Word Segmentation. Proceedings of the Fifteenth Conference on Computational Natural Language Learning, Portland, Oregon, 29-38.

Lignos, C. 2012. Infant Word Segmentation: An Incremental, Integrated Model. In N. Arnett & R. Bennett (eds.), Proceedings of the 30th West Coast Conference on Formal Linguistics, 237-247. Somerville, MA: Cascadilla Proceedings Project.

Martin, A. 2011. Grammars leak: Modeling how phonotactic generalizations interact within the grammar. Language, 87(4), 751-770.

Martin, A., Peperkamp, S., & Dupoux, E. 2012. Learning Phonemes with a Proto-Lexicon. Cognitive Science, 37, 103-124.

McInnes, F. & Goldwater, S. 2011. Unsupervised Extraction of Recurring Words from Infant-Directed Speech. Proceedings of the 33rd Annual Conference of the Cognitive Science Society.

Mnih, A. & Hinton, G. 2007. Three new graphical models for statistical language modelling. Proceedings of the 24th International Conference on Machine Learning, 641-648.

Mohamed, A., Hinton, G., & Penn, G. 2012. Understanding How Deep Belief Networks Perform Acoustic Modeling. Proceedings of ICASSP 2012.

Parisien, C. & Stevenson, S. 2011. Generalizing between form and meaning using learned verb classes. Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Boston, MA.

Pearl, L. & Steyvers, M. 2013 (in press). "C'mon - You Should Read This": Automatic Identification of Tone from Language Text. International Journal of Computational Linguistics.

Perfors, A. 2011. Memory limitations alone do not lead to over-regularization: An experimental and computational investigation. In L. Carlson, C. Hoelscher & T. F. Shipley (eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society, 3274-3279.

Perfors, A. 2012. Probability matching vs over-regularization in language: Participant behavior depends on their interpretation of the task. In N. Miyake, D. Peebles, & R. P. Cooper (eds.), Proceedings of the 34th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society, 845-850.

Perfors, A. 2013. When do memory limitations lead to regularization? An experimental and computational investigation. Manuscript, University of Adelaide.

Ponvert, E., Baldridge, J., & Erk, K. 2011. Simple Unsupervised Grammar Induction from Raw Text with Cascaded Finite State Models. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, Oregon, 1077-1086.

Rafferty, A. & Griffiths, T. 2010. Optimal Language Learning: The Importance of Starting Representative. Proceedings of the 32nd Annual Conference of the Cognitive Science Society.

Sankaran, B. 2010. A Survey of Unsupervised Grammar Induction. Manuscript, Simon Fraser University.

Stevens, J., Trueswell, J., Yang, C., & Gleitman, L. 2013. The Pursuit of Word Meanings. Manuscript, University of Pennsylvania.

Stone, M. 2003a. Communicative Intentions and Conversational Processes in Human-Human and Human-Computer Dialogue. In Trueswell, J. & Tanenhaus, M. (Eds.), World Situated Language Use: Psycholinguistic, Linguistic, and Computational Perspectives on Bridging the Product and Action Traditions.

Stone, M. 2003b. Linguistic Representation and Gricean Inference. Proceedings of the International Workshop on Computational Semantics.

Talukdar, P., Wijaya, D., & Mitchell, T. 2012. Acquiring Temporal Constraints between Relations. Proceedings of the Conference on Information and Knowledge Management (CIKM).