Uri Hasson runs a lab at Princeton, where he investigates the neural basis of natural language acquisition and processing as it unfolds in the real world. As Uri was visiting Tübingen (where I am doing my master's), we were able to meet in person. Originally, I planned to talk about his idea of temporal receptive windows and how different brain regions (e.g. the default mode network) operate at different timescales. However, we ended up talking more about Wittgenstein, evolution, and ChatGPT. An underlying thread throughout the conversation was that, for both biological and artificial agents, language is not clever symbol and rule manipulation but a brute-force fit to statistics across (Wittgensteinian) 'contexts'. This view is best articulated in Uri's Direct Fit paper. We also connect it to transformers and discuss what's missing in AI: multimodal integration, episodic memory, and interactive sociality. At the end, I ask Uri about his 1000 days project, talking to crows, and "understanding" in neuroscience/AI.
Timestamps:
(00:00:00) - Intro
(00:04:52) - Studying language in the real world
(00:07:57) - Wittgenstein
(00:11:10) - Evolution and the default mode network
(00:20:54) - Overparameterized deep learning works
(00:25:02) - Direct Fit paper and generalization
(00:39:37) - Episodic memory and sociality in language models
(00:47:15) - 1000 days project and talking to crows
(00:52:14) - "Understanding" in neuroscience
Uri's Website
Twitter: @HassonLab
Uri's publications & talks:
Hasson et al., 2015 - Hierarchical process memory: memory as an integral component of information processing (the temporal receptive windows paper)
Hasson et al., 2020 - Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks
Yeshurun et al., 2021 - The default mode network: where the idiosyncratic self meets the shared social world
Goldstein et al., 2022 - The Temporal Structure of Language Processing in the Human Brain Corresponds to The Layered Hierarchy of Deep Language Models (preprint)
Nguyen et al., 2022 - Teacher-student neural coupling during teaching and learning
Goldstein et al., 2022 - Shared computational principles for language processing in humans and deep language models
Music: Space News, License: Z62T4V3QWL