
SEMINAR: Assistant Professor Cengiz Pehlevan ‘Mathematical Principles of Learning in Machines and Brains’ December 29, 2025 @ 11:00am (FENS G032)

Dear Faculty Members, Researchers, and Students,

 

We would like to invite you all to our seminar ‘Mathematical Principles of Learning in Machines and Brains’ by Assistant Professor Cengiz Pehlevan (Harvard University) on December 29, 2025 @ 11:00am.

 

Time: December 29, 2025 @ 11:00am

Place: FENS G032 

 

Please find the abstract of the talk and a short bio of the speaker below:

 

Abstract: Learning in neural networks is fundamental to the function of brain circuits and to the successes of modern AI. Yet a rigorous mathematical understanding of how these networks learn remains elusive, even as the need for it grows more urgent. In AI, scaling up neural network models has enabled them to achieve unprecedented capabilities through learning, but the first-principles understanding needed to ensure their safety, reliability, and efficiency is missing. In neuroscience, increasingly large datasets require theory to link measurements to principles of how the brain learns and functions. My research program addresses this dual challenge by developing mathematical frameworks that explain learning dynamics in biological and artificial neural networks.


In this talk, I will highlight our progress on two fronts. First, addressing the dominant paradigm of scaling in AI, I will show how our work, using tools from statistical mechanics and random matrix theory, has led to the mathematical classification and characterization of distinct learning regimes. These findings account for the main features of empirical neural scaling laws; enable transfer of optimal hyperparameters across model sizes, yielding significant computational benefits; and provide a framework for understanding emergent behaviors such as in-context learning.


Second, I will present mathematical theories that link measured representational changes in the brain to learning principles. I will first discuss representational drift, a widely observed phenomenon in which population codes change despite stable behavior, and show how ongoing Hebbian and anti-Hebbian plasticity produces representational changes that match drift statistics in the hippocampus and prefrontal cortex. I will then present a reinforcement learning framework explaining how hippocampal place fields reorganize as animals learn to navigate novel environments, with key predictions recently confirmed in rodent experiments. Taken together, these studies outline a path toward a rigorous mathematical foundation for learning that improves the predictability and efficiency of AI and connects neural data to testable principles of brain function.

 

Bio: Cengiz Pehlevan is an Assistant Professor of Applied Mathematics at Harvard University and an Associate Faculty Member at the Kempner Institute. His research develops mathematical theory for learning in biological and artificial neural networks. He is a recipient of a Sloan Research Fellowship in Neuroscience, an NSF CAREER Award, and a Google Faculty Research Award. Previously, he held research positions at the Flatiron Institute’s Center for Computational Biology and Janelia Research Campus, and was a Swartz Fellow at Harvard University. He holds a Ph.D. in physics from Brown University.

 
FENS Dean's Office

Orta Mahalle, 34956 Tuzla, İstanbul, Türkiye

+90 216 483 96 01
