Computational models to understand sensory information processing

Our research group builds computational models to understand how brain activity underlies our ability to make sense of the external world. Every second, our brain is awash in computations. We can look at a scene and infer the objects it contains, recognize its location, extract social information such as people’s emotions, and make complex judgements about their interactions. We can appreciate music and understand language. How do we achieve these feats? The question essentially boils down to: what information is processed in the brain, and where, when, how, and why?

  • Descriptive Models: Neuroscience is undergoing an explosion in large-scale brain activity data, so the major challenge no longer lies in data collection but in deriving understanding from this abundant stream of complex, high-dimensional, noisy data with methods that fully leverage its potential. Our group develops advanced methods for inferring and characterizing the latent structure of neural representations from large-scale brain recordings (see the first sketch after this list).
    Relevant papers: Khosla et al. (2022, Current Biology)

  • Predictive Models: We build stimulus-computable predictive models of brain activity that help explain how information is processed in different brain regions (see the second sketch after this list).
    Relevant papers: Khosla et al. (2020, NeurIPS), Khosla et al. (2021, Science Advances), Khosla et al. (2022, NeurIPS), Khosla et al. (2022, bioRxiv)

  • Normative Models: Why do specific patterns of brain activity occur reliably across trials and animals in response to sensory stimuli in the first place? We explain neural phenomena in terms of the functions to be performed, the constraints on the system, and the context in which the functions are performed, by embedding the “why” hypothesis (function, constraint, context) in computational models such as artificial neural networks and studying whether the phenomena in question emerge (see the third sketch after this list).
    Relevant papers: Kanwisher et al. (2023, Trends in Neurosciences), Khosla et al. (2023, in prep)
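
To make the descriptive-modelling idea concrete, here is a minimal sketch in Python: simulated data, arbitrary sizes and noise levels, and plain PCA standing in for the far richer methods in the papers above. It recovers planted low-dimensional latent structure from a synthetic population recording.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Simulate 200 trials x 500 neurons whose activity is driven by
    # 3 shared latent factors plus private noise.
    n_trials, n_neurons, n_latents = 200, 500, 3
    latents = rng.normal(size=(n_trials, n_latents))        # hidden causes
    loadings = rng.normal(size=(n_latents, n_neurons))      # mixing weights
    responses = latents @ loadings + 0.5 * rng.normal(size=(n_trials, n_neurons))

    # Fit PCA and inspect how much variance a few components explain;
    # the sharp drop after the third component reflects the 3 planted latents.
    pca = PCA(n_components=10).fit(responses)
    print(np.round(pca.explained_variance_ratio_, 3))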
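
For the predictive-modelling direction, the following sketch fits a standard encoding model: ridge regression from stimulus features to responses, scored by held-out prediction accuracy. Both the features and the responses are simulated stand-ins here; in practice the features would be computed from the stimuli themselves, e.g. by a vision model.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_stimuli, n_features, n_voxels = 1000, 128, 50

    # Simulated stand-ins for stimulus features and measured responses.
    features = rng.normal(size=(n_stimuli, n_features))
    true_weights = rng.normal(size=(n_features, n_voxels))
    responses = features @ true_weights + 10 * rng.normal(size=(n_stimuli, n_voxels))

    X_train, X_test, y_train, y_test = train_test_split(
        features, responses, test_size=0.2, random_state=0)

    model = Ridge(alpha=1.0).fit(X_train, y_train)
    pred = model.predict(X_test)

    # Score each voxel by the correlation between predicted and held-out responses.
    corrs = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
    print(f"median held-out prediction r = {np.median(corrs):.2f}")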
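
And as a schematic of the normative-modelling workflow, the sketch below embeds a hypothetical “why” hypothesis (reconstruct the input under a sparsity constraint) in a small network and trains it; the trained units could then be probed for the emergence of a phenomenon of interest. The objective, architecture, and data are all illustrative placeholders.

    import torch

    torch.manual_seed(0)
    n_inputs, n_units, n_samples = 64, 32, 2048
    X = torch.randn(n_samples, n_inputs)       # stand-in for natural stimuli

    W = torch.randn(n_inputs, n_units, requires_grad=True)
    opt = torch.optim.Adam([W], lr=1e-2)

    for step in range(500):
        code = torch.relu(X @ W)               # population response
        recon = code @ W.T                     # linear reconstruction of the input
        # Function: reconstruct the input. Constraint: keep the code sparse.
        loss = ((recon - X) ** 2).mean() + 0.1 * code.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # One would then probe the trained units (tuning, selectivity, ...) and ask
    # whether the biological phenomenon of interest emerges under the hypothesis.
    with torch.no_grad():
        frac_active = (torch.relu(X @ W) > 0).float().mean().item()
    print(f"fraction of active unit responses after training: {frac_active:.2f}")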


Representational alignment in biological and artificial neural networks

Comparing representations in artificial neural networks against representations in biological systems is a promising framework for testing computational theories of the mind. In this rapidly evolving ecosystem of model-brain comparisons, developing new metrics for evaluating the similarity between biological and artificial neural models will be critical. What is the right way to evaluate model-brain similarity? What invariances should our measures have? Can we develop novel measures that better distinguish between competing models? Our group tackles these challenges in studying representational alignment between biological and artificial neural networks; a minimal example of one such metric appears below.
Relevant papers: Khosla et al. (2023, NeurIPS UniReps Proceedings), Khosla et al. (2023, in prep)
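
As a concrete example of the design choices at stake, here is a minimal sketch of one widely used alignment measure, linear centered kernel alignment (CKA), applied to simulated data. It is shown as one candidate metric among many, not as the measure our work singles out.

    import numpy as np

    def linear_cka(X, Y):
        """Linear CKA between two (stimuli x features) representation matrices."""
        X = X - X.mean(axis=0)                 # center each feature
        Y = Y - Y.mean(axis=0)
        hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
        return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

    rng = np.random.default_rng(2)
    brain = rng.normal(size=(100, 300))        # e.g., 100 stimuli x 300 voxels
    model = brain @ rng.normal(size=(300, 50)) # a linear readout of the same signal
    noise = rng.normal(size=(100, 50))         # an unrelated representation

    # Linear CKA is invariant to orthogonal transforms and isotropic scaling,
    # exactly the kind of commitment a choice of metric forces one to defend.
    print(f"CKA(brain, model) = {linear_cka(brain, model):.2f}")
    print(f"CKA(brain, noise) = {linear_cka(brain, noise):.2f}")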


Applying neuroscience-inspired techniques to improve and understand AI models

We are also interested in the inverse question at the interface of neuroscience and AI: how can neuroscientific insights inform and inspire the next generation of AI models?

  • Inspiration from brain data: We develop techniques for using neural data efficiently to constrain neural networks and impart ‘brain-like’ inductive biases to models; for instance, see Khosla et al. (2022, NeurIPS) and Khosla et al. (2022, bioRxiv). A toy sketch of this idea appears after this list.

  • Inspiration from principles: We aim to discover the principles shaping neural responses and to emulate those principles in AI models.

  • Inspiration from toolkits and formalisms: Neuroscience offers inspiration not only in the form of theories about how the brain functions, but also through the sophisticated toolkits and formalisms its researchers have developed to understand an extraordinarily complex biological network, the brain. We develop and apply such methods, motivated by neuroscientific applications, to recover the interpretable structure of DNN representations and to interpret those representations along neuroscientifically motivated formalisms (e.g., specialization, single-unit tuning); see the second sketch after this list.
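
As a toy sketch of the brain-data direction, the example below adds an auxiliary loss that pulls a model layer’s representational geometry (its representational dissimilarity matrix, RDM) toward a target geometry standing in for recorded neural data. The task loss, weighting, and choice of RDM are illustrative assumptions, not the method of the cited papers.

    import torch

    def rdm(acts):
        """Representational dissimilarity: 1 - correlation between stimuli."""
        z = (acts - acts.mean(1, keepdim=True)) / (acts.std(1, keepdim=True) + 1e-8)
        return 1 - (z @ z.T) / (acts.shape[1] - 1)

    torch.manual_seed(0)
    stimuli = torch.randn(64, 100)             # 64 stimuli, 100-dim inputs
    neural_rdm = rdm(torch.randn(64, 500))     # stand-in for recorded responses

    layer = torch.nn.Linear(100, 50)
    opt = torch.optim.Adam(layer.parameters(), lr=1e-3)

    for step in range(200):
        acts = layer(stimuli)
        task_loss = acts.pow(2).mean()         # placeholder for the real task objective
        # Auxiliary term: match the layer's representational geometry to the data.
        brain_loss = (rdm(acts) - neural_rdm).pow(2).mean()
        loss = task_loss + 1.0 * brain_loss
        opt.zero_grad()
        loss.backward()
        opt.step()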
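
And as one example of a neuroscience-style formalism applied to DNNs, the sketch below computes a classic single-unit selectivity index over simulated, non-negative unit activations; the categories, planted units, and threshold are all illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    n_images, n_units = 1000, 256
    labels = rng.integers(0, 10, size=n_images)     # 10 stimulus categories

    # Relu-like, non-negative activations with 8 planted category-0-selective units.
    raw = rng.normal(1.0, 1.0, size=(n_images, n_units))
    raw[labels == 0, :8] += 3.0
    acts = np.maximum(raw, 0)

    def selectivity(acts, mask):
        """(pref - nonpref) / (pref + nonpref); assumes non-negative responses."""
        pref = acts[mask].mean(axis=0)
        nonpref = acts[~mask].mean(axis=0)
        return (pref - nonpref) / (pref + nonpref + 1e-8)

    idx = selectivity(acts, labels == 0)
    print("units with selectivity > 0.5:", int(np.sum(idx > 0.5)))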