Echoes of the Human Mind? AI Systems as Cognitive Models of Language
-----
How can artificial minds help us understand our own minds? Blank will discuss two
aspects of this question in the domain of language, using both simple and complex
artificial intelligence (AI) systems. First, AI systems trained on highly constrained
input can help disentangle representational formats that are difficult to differentiate
in human minds. As an example, Blank will ask what kinds of world knowledge (e.g.,
about dogs) are embedded in our knowledge of language (e.g., how the word “dog” is
used). Blank will demonstrate that a simple AI system (a word embedding), which is
only exposed to patterns of word co-occurrences, acquires complex common-sense knowledge,
which is represented in an intuitive and structured way. Because humans, too, track
word co-occurrences, such world knowledge is plausibly represented in our mental lexicon.
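To make the idea concrete, here is a minimal sketch (not from the talk itself) of a distributional "embedding" built purely from word co-occurrence counts. The toy corpus, the sentence-level context window, and the word choices are illustrative assumptions; real word embeddings are trained on vastly larger corpora, but the principle — similar words share contexts — is the same one the abstract describes.

```python
# Toy distributional semantics: represent each word by its co-occurrence
# counts with other words, then compare words by cosine similarity.
# Assumption: a tiny hand-made corpus, with the whole sentence as the window.
import math
from collections import defaultdict

corpus = [
    "the dog barks", "the cat meows", "the dog runs",
    "the cat runs", "the car drives", "the car honks",
]

# Count co-occurrences within each sentence (a crude context window).
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for c in words:
            if w != c:
                cooc[w][c] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    """A word's row of co-occurrence counts over the vocabulary."""
    return [cooc[word][c] for c in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

sim_dog_cat = cosine(vector("dog"), vector("cat"))
sim_dog_car = cosine(vector("dog"), vector("car"))
print(f"sim(dog, cat) = {sim_dog_cat:.2f}")  # higher: dog and cat share contexts
print(f"sim(dog, car) = {sim_dog_car:.2f}")
```

Even in this cartoon setting, "dog" ends up closer to "cat" than to "car" simply because they appear in similar contexts — the kind of structure that, at scale, yields the common-sense knowledge the talk discusses.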
Second, Blank will emphasize that such inferential leaps hinge on the extent to which
an AI system represents language "like us". This resemblance is easier to establish for
word embeddings, but harder for complex Large Language Models (LLMs, e.g., GPT models).
Two studies comparing LLMs to humans will illustrate this point. The first asks whether
semantic information can “penetrate” and influence syntactic processing in LLMs—like
it does in humans—or whether some syntactic processing stages in LLMs are “encapsulated”
from meaning. The second study asks whether LLMs represent a fundamental aspect of
linguistic meaning: distinguishing between agents and patients in sentences. These
studies reveal both similarities and differences between LLMs and humans.
-----