I study how the brain integrates noisy sensory information to make decisions, using Bayesian modeling, information theory, and neural data analysis.
Perceptual Decision-Making
The resource-rational dynamics of evidence accumulation
Sequential integration of noisy sensory evidence is dynamically controlled by optimal resource allocation.
Evidence accumulation is a fundamental aspect of human decision-making. However, how the precise temporal structure of evidence shapes the accumulation process has not been systematically studied. As a result, current understanding of evidence accumulation remains largely limited to its time-averaged behavior. We tested human subjects in a visual estimation task in which they inferred the angular position of an unknown source from a noisy stimulus sequence. Introducing systematic temporal perturbations, i.e., breaks of different durations and at different positions in the otherwise regular evidence sequence, revealed that subjects actively compensated for the memory loss endured during the break by dynamically enhancing evidence integration and memory maintenance immediately after the break. We derived a new time-continuous Bayesian updating model that is dynamically constrained by optimal performance-effort trade-offs. With two free parameters determining the overall resource-efficiencies of encoding and memory maintenance, the model accurately predicts the rich dependencies of subjects' accumulation behavior on the evidence schedule, including subjects' individual tendencies to emphasize either early (primacy) or late (recency) samples in the evidence sequence. Our results suggest that evidence accumulation is a non-stationary, dynamically controlled process that optimally balances the information gained from incoming evidence against the cognitive effort required to acquire and maintain it. The proposed model is general and should apply broadly across many task domains.
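The core of the task above is sequential Bayesian updating of a belief about a hidden source from noisy samples. The following is a minimal sketch of that idea under a Gaussian-conjugate approximation; the specific parameter values (source position, noise level, sample count) are illustrative and not taken from the paper, and the sketch omits the paper's resource-rational constraints on encoding and memory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task parameters (illustrative only): a hidden source
# angle and noisy samples drawn around it.
source = 0.3       # true angular position (radians)
sigma = 0.5        # standard deviation of sample noise
n_samples = 8

# Gaussian-conjugate sequential update: after each sample, the
# posterior mean is a precision-weighted average of the current
# belief and the new evidence.
mu, tau = 0.0, 1e-6   # prior mean and precision (nearly flat prior)
for _ in range(n_samples):
    x = rng.normal(source, sigma)
    tau_x = 1.0 / sigma**2             # precision of a single sample
    mu = (tau * mu + tau_x * x) / (tau + tau_x)
    tau += tau_x

print(round(mu, 3))   # posterior estimate of the source angle
```

In the resource-rational model described above, the effective precision of encoding and maintenance would additionally vary over time; here every sample is weighted purely by its statistical reliability.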
Categorical representations in evidence accumulation
How categorical structure shapes sequential evidence accumulation.
Perceptual decision-making often involves sequential evidence accumulation. Previous work has shown that category-level stimulus representations can play an important role in perceptual inference, even when not explicitly required. Here, we used a visual discrimination task to investigate how categorical representations affect sequential evidence accumulation. Subjects discriminated the angular position (CW/CCW) of an unknown source relative to a reference based on 8 stimulus samples drawn from a Gaussian with fixed variance centered at the source position. Stimuli were presented in rapid sequence (150 ms interstimulus interval). Subjects reported their categorical choice by pressing the corresponding button on a gamepad. After each trial, visual feedback displayed both the correct category and the source position. The reference was adjusted using a staircase procedure. All subjects performed the task under two conditions. In the first condition, they were asked to make a preliminary decision based on partial evidence within a 1.75 s time-window, before then making their final choice after seeing all samples. The preliminary decision occurred either before the 1st sample (upfront guess) or after the 2nd, 4th, or 6th sample in the sequence. The four choice positions were randomly interleaved in each block. In the second condition (control), subjects were tested with the exact same sample sequences and reference positions as in the first condition, but simply maintained center fixation instead of making a preliminary decision. In contrast to the first condition, the reference was only shown for the final decision. Both conditions were tested in alternating blocks. Our results show that being engaged in a preliminary decision against a reference significantly improves subjects' final decision performance compared to the control condition.
This suggests that the formation of categorical stimulus representations may be crucial for accurate and robust sequential evidence accumulation.
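The abstract mentions that the reference was adjusted with a staircase procedure. As a hedged illustration of how such a procedure works in general (the paper's actual rule and step sizes are not specified here), the sketch below runs a standard 2-down/1-up staircase against a simulated observer; the observer model and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-down/1-up staircase: the reference offset shrinks
# after two consecutive correct responses and grows after each error,
# converging near the ~70.7%-correct point of the observer.
offset = 10.0          # reference offset from the source (deg)
step = 1.0             # step size (deg)
correct_streak = 0

def simulate_response(offset, slope=5.0):
    """Simulated observer: larger offsets are easier to judge."""
    p_correct = 1.0 - 0.5 * np.exp(-offset / slope)
    return rng.random() < p_correct

for _ in range(200):
    if simulate_response(offset):
        correct_streak += 1
        if correct_streak == 2:        # two correct in a row -> harder
            offset = max(offset - step, 0.5)
            correct_streak = 0
    else:                               # error -> easier
        offset += step
        correct_streak = 0

print(round(offset, 1))   # offset near the simulated observer's threshold
```

The 2-down/1-up rule is only one common choice; other rules target different accuracy levels.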
Multivariate Brain Connectivity
Angular gyrus responses show joint statistical dependence with brain regions selective for different categories
Using fMRI movie data and a state-of-the-art measure of multivariate statistical dependence based on artificial neural networks, we identified the angular gyrus as a region whose responses show joint statistical dependence with face-, body-, artifact-, and scene-selective regions.
Category selectivity is a fundamental principle of organization of perceptual brain regions. Human occipitotemporal cortex is subdivided into areas that respond preferentially to faces, bodies, artifacts, and scenes. However, observers need to combine information about objects from different categories to form a coherent understanding of the world. How is this multicategory information encoded in the brain? Studying the multivariate interactions between brain regions of male and female human subjects with fMRI and artificial neural networks, we found that the angular gyrus shows joint statistical dependence with multiple category-selective regions. Adjacent regions show effects for the combination of scenes and each other category, suggesting that scenes provide a context to combine information about the world. Additional analyses revealed a cortical map of areas that encode information across different subsets of categories, indicating that multicategory information is not encoded in a single centralized location, but in multiple distinct brain regions.
Distinct Portions of Superior Temporal Sulcus Combine Auditory Representations with Different Visual Streams
Both ventral and dorsal visual information is combined with auditory information, and distinct portions of posterior STS combine auditory information with visual information encoded in the two streams.
In humans, the superior temporal sulcus (STS) combines auditory and visual information. However, the extent to which it relies on visual information from the ventral or dorsal stream remains uncertain. To address this, we analyzed open-source functional magnetic resonance imaging data collected from 15 participants (6 females and 9 males) as they watched a movie. We used artificial neural networks to investigate the relationship between multivariate response patterns in auditory cortex, the two visual streams, and the rest of the brain, finding that distinct portions of the STS combine information from the two visual streams with auditory information.
PyMVPD: A toolbox for multivariate pattern dependence
A Python-based toolbox to model the multivariate interactions between brain regions using fMRI data.
Cognitive tasks engage multiple brain regions. Studying how these regions interact is key to understanding the neural bases of cognition. Standard approaches to model the interactions between brain regions rely on univariate statistical dependence. However, newly developed methods can capture multivariate dependence. Multivariate pattern dependence (MVPD) is a powerful and flexible approach that trains and tests multivariate models of the interactions between brain regions using independent data. In this article, we introduce PyMVPD: an open source toolbox for multivariate pattern dependence. The toolbox includes linear regression models and artificial neural network models of the interactions between regions. It is designed to be easily customizable. We demonstrate example applications of PyMVPD using well-studied seed regions such as the fusiform face area (FFA) and the parahippocampal place area (PPA). Next, we compare the performance of different model architectures. Overall, artificial neural networks outperform linear regression. Importantly, the best performing architecture is region-dependent: MVPD subdivides cortex into distinct, contiguous regions whose interaction with FFA and PPA is best captured by different models.
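To make the MVPD idea concrete, here is a minimal NumPy sketch of its simplest variant: fit a linear multivariate model mapping a seed region's response patterns to a target region's patterns on training data, then evaluate its predictions on held-out data. This is not PyMVPD's actual API; all data shapes, the synthetic data, and the per-voxel variance-explained metric are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data shapes (timepoints x voxels) for a seed region
# (e.g., FFA) and a target region; synthetic data with a known
# linear coupling plus noise stands in for real fMRI responses.
n_train, n_test, n_seed, n_target = 300, 100, 20, 30
W_true = rng.normal(size=(n_seed, n_target))

seed_train = rng.normal(size=(n_train, n_seed))
targ_train = seed_train @ W_true + rng.normal(scale=2.0, size=(n_train, n_target))
seed_test = rng.normal(size=(n_test, n_seed))
targ_test = seed_test @ W_true + rng.normal(scale=2.0, size=(n_test, n_target))

# MVPD core idea, step 1: fit a multivariate model on training data.
W, *_ = np.linalg.lstsq(seed_train, targ_train, rcond=None)

# Step 2: test its predictions on independent held-out data,
# scoring variance explained per target voxel.
pred = seed_test @ W
ss_res = ((targ_test - pred) ** 2).sum(axis=0)
ss_tot = ((targ_test - targ_test.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1.0 - ss_res / ss_tot
print(round(float(r2.mean()), 2))   # mean variance explained across voxels
```

The toolbox's artificial neural network models replace the linear map with a learned nonlinear one, but the train-on-independent-data, test-on-held-out-data logic is the same.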