Jialin’s paper, Integrated Representational Signatures Strengthen Specificity in Brains and Models, and Shreya’s paper, Form-Agnostic, Enriched Representations of Sentences Reveal Semantic Abstractness in Language Cortex, have been accepted at Cosyne 2026.
Meenakshi gave a talk at the ELLIS UniReps Speaker Series titled Comparative Analysis of Neural Representations: Tools, Limits, and Emerging Principles.
Check it out for a sneak peek into our lab’s current research directions.
Our paper on Privileged Representational Axes in Biological and Artificial Neural Networks has been accepted at Nature Human Behaviour.
Chaitanya’s paper on Bridging Critical Gaps in Convergent Learning: How Representational Alignment Evolves Across Layers, Training, and Distribution Shifts has been accepted at NeurIPS 2025.
Two papers from the lab were accepted at the UniReps Workshop: The Performance Cost of Representational Misalignment, by Sudhanshu Srivastava, and Measuring the Measures: Discriminative Capacity of Representational Similarity Metrics Across Model Families, led by Jialin Wu.
André’s paper, Superposition Disentanglement of Neural Representations Reveals Hidden Alignment, received an Honorable Mention at the UniReps Workshop at NeurIPS 2025. Congrats, André!
Zoe’s paper, Seeing Through Words, Speaking Through Pixels: Deep Representational Alignment Between Vision and Language Models, has been accepted as an oral presentation at EMNLP 2025!