Readings from previous years can be found below:
[2022-2023]
[2021-2022]
[2020-2021]
[2019-2020]
[2018-2019]
[2017-2018]
[2016-2017]
[2015-2016]
[2014-2015]
[2013-2014]
[2012-2013]
[2011-2012]
[2010-2011]
Discussed Winter 2024
Huang, K., Arehalli, S., Kugemoto, M., Muxica, C., Prasad, G., Dillon, B., & Linzen, T. 2023. Surprisal does not explain syntactic disambiguation difficulty: Evidence from a large-scale benchmark. https://doi.org/10.31234/osf.io/z38u6.
Discussed Fall 2023
Frank, M. C. 2023. Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology, 2(8), 451-452.
Frank, M. C. 2023. Bridging the data gap between children and large language models. Trends in Cognitive Sciences, 27(11), 990-992.
McCoy, R. T., & Griffiths, T. L. 2023. Modeling rapid language learning by distilling Bayesian priors into artificial neural networks. arXiv preprint arXiv:2305.14701.