Using vision as a model system, the lab studies how recurrent processing supports our ability to visually recognize objects, a computationally challenging task at which humans excel. Bidirectional connections between the hierarchically arranged stages of visual cortex could serve at least two fundamental computational purposes: they may dynamically modify neural activity between cortical stages in a process of online visual inference (Bayesian inference), and they may provide top-down signals that drive learning (error backpropagation) across the cortical hierarchy. To explore these hypotheses, our experiments measure the content and downstream impact of messages passed across the network during object recognition and learning.
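The second hypothesis, top-down signals driving learning, can be illustrated with a toy sketch (not the lab's model): a two-stage feedforward hierarchy in which an error computed at the top is carried back through the connection weights to assign credit to the earlier stage, the core idea of error backpropagation. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

# Toy illustration only: a two-stage hierarchy where a top-down error
# signal (analogous to a feedback connection) updates the earlier stage.
rng = np.random.default_rng(0)

W1 = rng.normal(0, 0.5, (8, 4))  # stage 1: input -> intermediate representation
W2 = rng.normal(0, 0.5, (4, 2))  # stage 2: intermediate -> output

X = rng.normal(size=(32, 8))              # synthetic "stimuli"
Y = np.tanh(X @ rng.normal(size=(8, 2)))  # synthetic targets

losses = []
lr = 0.05
for _ in range(200):
    # Bottom-up (feedforward) pass through the two stages
    h = np.tanh(X @ W1)
    y = h @ W2

    err = y - Y  # error available only at the top of the hierarchy
    losses.append(float((err ** 2).mean()))

    # Top-down pass: the error is propagated back through W2 (chain rule)
    # so that stage-1 units receive a credit-assignment signal.
    grad_W2 = h.T @ err / len(X)
    delta_h = (err @ W2.T) * (1 - h ** 2)  # feedback-carried error
    grad_W1 = X.T @ delta_h / len(X)

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

After training, the loss at the final step is lower than at the first, showing that the top-down error signal alone suffices to improve the early stage's representation.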
Our experimental platform, centered on the common marmoset, will use advanced tools such as cellular imaging and targeted optogenetics to examine the neural subpopulations that transmit information between high-level visual cortical areas. Positioned at the intersection of biological vision, neuroengineering, and machine learning, the lab's environment fosters an interplay between experiments, novel techniques, and neural network models in an effort to reveal the computations implemented in cortical networks.