More than a century ago, Pavlov trained his dog to associate the sound of a bell with food. Ever since, scientists have assumed the dog learned this through repetition: The more times the dog heard the bell and then got fed, the better it learned that the sound meant food would soon follow.
Now, scientists at UC San Francisco are upending this 100-year-old assumption about associative learning. Their new theory asserts that learning depends less on how many times something happens and more on how much time passes between rewards.
"It turns out that the time between these cue-reward pairings helps the brain determine how much to learn from that experience," said Vijay Mohan K. Namboobidiri, PhD, an associate professor of Neurology and senior author of the study, published Feb. 12 in Nature Neuroscience.
When the experiences happen closer together, the brain learns less from each instance, Namboodiri said, adding that this could explain why students who cram for exams don't do as well as those who study throughout the semester.
Learning the cues
Scientists have traditionally thought of associative learning as a process of trial and error. Once the brain has detected that certain cues might lead to rewards, it begins to predict them. Scientists have postulated that at first the brain only releases dopamine when a reward like tasty food arrives.
But if the reward arrives often enough, the brain begins to anticipate it with a release of dopamine as soon as it gets the cue. The dopamine hit refines the brain's prediction, the theory goes, strengthening the link with the cue if the reward arrives - or weakening it if the reward fails to appear.
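In formal terms, this classic account is often written as a trial-by-trial prediction-error update. Below is a minimal sketch in that spirit, a Rescorla-Wagner-style rule; the variable names, learning rate, and example trial sequence are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of the classic trial-by-trial account of associative learning:
# a prediction-error ("dopamine-like") update in the spirit of Rescorla-Wagner.
# All parameter values and the trial sequence are illustrative assumptions.

def update_association(value, reward, learning_rate=0.1):
    """Strengthen or weaken the cue-reward association after one trial.

    value  : current prediction of reward following the cue (0 = none expected)
    reward : 1.0 if the reward arrived after the cue, 0.0 if it did not
    """
    prediction_error = reward - value  # surprise: got more or less than expected
    return value + learning_rate * prediction_error  # strengthen if rewarded, weaken if not

# Example: the cue is rewarded five times, then the reward stops appearing.
value = 0.0
for reward in [1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]:
    value = update_association(value, reward)
    print(f"reward={reward:.0f}  predicted value={value:.3f}")
```

In this view, every pairing nudges the prediction by the same fixed fraction of the surprise, which is exactly the repetition-counting picture the new study challenges.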
Namboodiri and postdoctoral scholar Dennis Burke, PhD, trained mice to associate a brief sound with getting sugar-sweetened water, varying the time between trials. They spaced the trials 30 to 60 seconds apart for some of the mice, and five to 10 minutes apart, or more, for others. The result was that the mice whose trials were closer together received many more rewards in the same amount of time than those whose trials were spaced farther apart.
If associative learning depended only on repetition, the mice with more trials should have learned faster. Instead, the mice that got very few rewards learned just as much as those that got 20 times more trials in the same period.
"What this tells us is that associative learning is less 'practice makes perfect' and more 'timing is everything.'"
Dennis Burke, PhD, first author of the study
Namboodiri and Burke then looked at what dopamine was doing in the mouse brain.
When the rewards were spaced further apart, the mice needed fewer repetitions before their brains began to respond to the sound with dopamine.
Then, the researchers tried a different variation. They repeatedly played the sound, spacing the cues 60 seconds apart, but gave the mice sugar water only 10% of the time. These mice needed far fewer rewards before they began releasing dopamine after the cue, regardless of whether it was followed by a reward.
More rapid learning
The findings could shift the way we look at learning and addiction. Smoking, for example, is intermittent and can involve cues - like the sight or smell of cigarettes - that increase the urge to smoke. Because a nicotine patch delivers nicotine constantly, it may disrupt the brain's association between nicotine and the resulting dopamine reward, blunting the urge to smoke and making it easier to quit.
Next, Namboodiri plans to investigate how his new theory could speed up artificial intelligence. Current AI systems learn quite slowly because they are based on the prevailing model of associative learning, making small refinements after each of billions of interactions with data.
"A model that borrows from what we've discovered could potentially learn more quickly from fewer experiences," Namboodiri said. "For the moment, though, our brains can learn a lot faster than our machines and this study helps explain why."
Journal reference:
Burke, D. A., et al. (2026). Duration between rewards controls the rate of behavioral and dopaminergic learning. Nature Neuroscience. DOI: 10.1038/s41593-026-02206-2. https://www.nature.com/articles/s41593-026-02206-2