Computational models to understand sensory information processing

Our research group builds computational models to understand how brain activity gives rise to our ability to process the external world. Every second, our brain is awash in computations. We can look at a scene and infer the objects it contains, recognize where the scene is located, extract social information such as people’s emotions, and make complex judgements about their interactions. We can appreciate music and understand language. How do we achieve these feats? The question essentially boils down to: what information is processed in the brain, and where, when, how, and why?

  • Descriptive Models: Neuroscience is undergoing an explosion in large-scale brain activity data, so the major challenge no longer lies in collecting data but in deriving understanding from this abundant stream of complex, high-dimensional, noisy recordings with methods that fully leverage their potential. Our group develops advanced methods for inferring and characterizing the latent structure of neural representations from large-scale brain recordings.
    Relevant papers: Khosla et al. (2022, Current Biology)

  • Predictive Models: We build stimulus-computable predictive models of brain activity that help explain how information is processed in different brain regions; a minimal sketch of such an encoding model appears after this list.
    Relevant papers: Saha et al. (2025, CCN Proceedings), Feather et al. (2025), Khosla et al. (2020, NeurIPS), Khosla et al. (2021, Science Advances), Khosla et al. (2022, NeurIPS), Khosla et al. (2022, bioRxiv)

  • Normative Models: Why do specific response motifs occur reliably across trials and animals in response to stimuli in the first place? We explain neural phenomena in terms of the functions to be performed, the constraints on the system, and the context in which those functions are performed, by embedding the “why” hypothesis (function, constraint, context) in computational models such as artificial neural networks and studying whether the phenomena in question emerge.
    Relevant papers: Kanwisher et al. (2023, Trends in Neurosciences), Khosla et al. (2023)
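
A minimal sketch of a stimulus-computable encoding model in this spirit: features extracted from the stimuli by a computational model are mapped to measured brain responses and scored on held-out stimuli. The random arrays, sizes, and the choice of cross-validated ridge regression are illustrative assumptions, not a description of any particular paper above.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Hypothetical data: features from a stimulus-computable model (e.g., a
# pretrained vision network) and measured brain responses (e.g., fMRI
# voxels) to the same stimuli. Real data would replace these random arrays.
rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 500, 256, 100
X = rng.standard_normal((n_stimuli, n_features))  # model features per stimulus
Y = rng.standard_normal((n_stimuli, n_voxels))    # brain responses per stimulus

# Fit a cross-validated ridge regression from features to all voxels at once.
train, test = np.arange(400), np.arange(400, 500)
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[train], Y[train])

# Score the fit as the per-voxel correlation between predicted and held-out
# measured responses, a common accuracy measure for encoding models.
Y_hat = encoder.predict(X[test])
r = [np.corrcoef(Y_hat[:, v], Y[test, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out voxel correlation: {np.median(r):.3f}")
```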


Representational alignment in biological and artificial neural networks

Comparing representations in artificial neural networks with representations in biological systems is a promising framework for testing computational theories of the mind. In this rapidly evolving ecosystem of model-brain comparisons, developing new metrics for evaluating the similarity between biological and artificial networks will be critical. What is the right way to evaluate similarity across networks (biological or artificial)? What invariances should our measures have? Can we develop novel measures that better capture representational convergence or distinguish between competing models of the brain? Our group tackles these challenges in studying representational alignment between biological and artificial neural networks.
Relevant papers: Marvi et al. (2025, ICLR), Khosla et al. (2023, NeurIPS UniReps Proceedings), Khosla et al. (2023)
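
To make the invariance question tangible, here is a minimal sketch of one standard similarity measure, linear centered kernel alignment (CKA), which is invariant to orthogonal transformations and isotropic scaling of either representation but not to arbitrary invertible linear maps. It is shown purely for illustration; it is not necessarily the measure developed in the papers above, and all variable names are hypothetical.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representations of the same n stimuli.

    X: (n_stimuli, d1) activations; Y: (n_stimuli, d2) activations.
    Invariant to orthogonal transforms and isotropic scaling of either
    representation, but sensitive to arbitrary invertible linear maps.
    """
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 normalized by ||X^T X||_F and ||Y^T Y||_F
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# Toy check: a rotated copy of a representation has CKA ~ 1 with the original,
# while an unrelated random representation scores near 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))  # random orthogonal map
print(linear_cka(X, X @ Q))                         # ~1.0
print(linear_cka(X, rng.standard_normal((200, 50))))  # near 0
```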


Universal representations in large-scale AI models

Why do independently trained networks across language, vision, and auditory domains so often learn strikingly similar internal codes? Our current line of work tackles this “convergence puzzle” by mapping where, how, and why different model families land on common representational structures, blending modern optimal-transport and shape-analysis techniques to reveal deep cross-model correspondences. This research program moves us beyond single-network probing toward a comparative science of modern AI, illuminating both the extent of representational convergence and the reasons behind it; these insights can guide robust interpretability and future-proof safety interventions.
Relevant papers: Bo et al. (2025, CCN Proceedings), Kapoor et al. (2024, arXiv), Marvi et al. (2025, ICLR), Khosla et al. (2023, NeurIPS UniReps Proceedings), He et al. (2025, CCN)
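
On the shape-analysis side, a minimal sketch of the simplest such tool: an orthogonal Procrustes distance between two networks’ responses to the same stimuli. The papers above develop richer optimal-transport and shape metrics; this example and its arrays are illustrative only.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def procrustes_distance(X: np.ndarray, Y: np.ndarray) -> float:
    """Shape-style distance between two representations of the same stimuli.

    Finds the orthogonal map Q minimizing ||X @ Q - Y||_F after centering
    and scale normalization, and returns the residual misfit.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    X /= np.linalg.norm(X, "fro")
    Y /= np.linalg.norm(Y, "fro")
    Q, _ = orthogonal_procrustes(X, Y)  # best rotation/reflection
    return np.linalg.norm(X @ Q - Y, "fro")

# Toy check: a rotated, rescaled copy has distance ~0; an unrelated
# representation is farther away.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 64))                  # network 1 responses
R, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # random orthogonal map
print(procrustes_distance(A, 3.0 * A @ R))          # ~0: same shape
print(procrustes_distance(A, rng.standard_normal((300, 64))))  # larger
```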

Applying neuroscience-inspired techniques to improve and understand AI models

We are also interested in the inverse question at the neuroscience and AI interface: how can neuroscientific insights help inform and inspire the next generation of models in AI?

  • Inspiration from brain data: We develop techniques for using neural data efficiently to constrain neural networks and impart ‘brain-like’ inductive biases to models; see, for instance, Khosla et al. (2022, NeurIPS) and Khosla et al. (2022, Nature Communications). A minimal sketch of one such neural-data constraint appears after this list.

  • Inspiration from principles: We aim to discover the principles shaping neural responses and to emulate those principles in AI models.

  • Inspiration from toolkits and formalisms: Neuroscience offers inspiration not only in the form of theories about how the brain functions, but also through the sophisticated toolkits and formalisms its researchers have developed to understand a complex biological network: the brain. We develop and apply such methods, motivated by neuroscientific applications, to recover the interpretable structure of DNN representations and to interpret them through neuroscientifically motivated formalisms (e.g., specialization, single-unit tuning).
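
To make the brain-data bullet concrete, here is a minimal sketch (in PyTorch) of one way neural data can act as a soft constraint during training: a representational-similarity penalty that pulls a model layer’s stimulus-by-stimulus correlation structure toward that measured in the brain. The loss form, weighting, and variable names are illustrative assumptions, not the specific methods of the papers cited above.

```python
import torch

def rsa_penalty(feats: torch.Tensor, brain: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between model and brain similarity structure.

    feats: (n_stimuli, d) activations of a chosen model layer.
    brain: (n_stimuli, v) recorded responses to the same stimuli (fixed).
    Compares the two stimulus-by-stimulus correlation matrices.
    """
    def corr_matrix(Z):
        Z = Z - Z.mean(dim=1, keepdim=True)   # center each stimulus vector
        Z = Z / Z.norm(dim=1, keepdim=True)   # unit-normalize rows
        return Z @ Z.T                        # (n_stimuli, n_stimuli) correlations

    return ((corr_matrix(feats) - corr_matrix(brain)) ** 2).mean()

# Toy check that the penalty is differentiable, so it can be added to a
# task loss, e.g.:
#   loss = task_loss(model(x), y) + lam * rsa_penalty(layer_acts, brain_data)
torch.manual_seed(0)
feats = torch.randn(50, 128, requires_grad=True)  # stand-in layer activations
brain = torch.randn(50, 200)                      # stand-in neural recordings
p = rsa_penalty(feats, brain)
p.backward()  # gradients flow back to the model activations
print(float(p))
```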