Rewiring Memory: A New Model That Learns Like a Human Brain

Summary: A new memory model called Input-Driven Plasticity (IDP) offers a more human-like explanation for how external stimuli help us retrieve memories, building on the foundations of the classic Hopfield network. Unlike traditional models, which assume memory recall happens from a fixed starting point, the IDP framework describes how stimuli reshape the brain’s “energy landscape” in real time to guide memory retrieval.

This dynamic approach better reflects how we remember things in real life, like recognizing a cat from just its tail. The model is also robust to noise, filtering out weak memories in favor of stable, meaningful ones, offering insights for future AI systems.

Key Facts:

  • Dynamic Memory Retrieval: The IDP model suggests that external stimuli reshape the neural landscape as memories are retrieved.
  • Noise-Resilient Design: It uses environmental “noise” to filter out unstable memories, improving robustness.
  • AI Potential: This model could inspire more memory-capable AI systems that go beyond static inputs and mimic human associative recall.

Source: UC Santa Barbara

Listen to the first notes of an old, beloved song. Can you name that tune?

If you can, congratulations — it’s a triumph of your associative memory, in which one piece of information (the first few notes) triggers the memory of the entire pattern (the song), without you actually having to hear the rest of the song again.

We use this handy neural mechanism to learn, remember, solve problems and generally navigate our reality.


“It’s a network effect,” said UC Santa Barbara mechanical engineering professor Francesco Bullo, explaining that associative memories aren’t stored in single brain cells.

“Memory storage and memory retrieval are dynamic processes that occur over entire networks of neurons.”

In 1982, physicist John Hopfield translated this theoretical neuroscience concept into the artificial intelligence realm with the formulation of the Hopfield network.

In doing so, he not only provided a mathematical framework for understanding memory storage and retrieval in the human brain, he also developed one of the first recurrent artificial neural networks, known for its ability to retrieve complete patterns from noisy or incomplete inputs.

Hopfield shared the 2024 Nobel Prize in Physics for this work.

However, according to Bullo and collaborators Simone Betteti, Giacomo Baggio and Sandro Zampieri at the University of Padua in Italy, the traditional Hopfield network model is powerful, but it doesn’t tell the full story of how new information guides memory retrieval.

“Notably,” they say in a paper published in the journal Science Advances, “the role of external inputs has largely been unexplored, from their effects on neural dynamics to how they facilitate effective memory retrieval.”

The researchers suggest a model of memory retrieval they say is more descriptive of how we experience memory.

“The modern version of machine learning systems, these large language models — they don’t really model memories,” Bullo explained.

“You put in a prompt and you get an output. But it’s not the same way in which we understand and handle memories in the animal world.”

While LLMs can return responses that sound convincingly intelligent, drawing on the patterns of the language they are fed, they still lack the underlying reasoning and experience of the physical world that animals have.

“The way in which we experience the world is something that is more continuous and less start-and-reset,” said Betteti, lead author of the paper.

Most treatments of the Hopfield model have tended to regard the brain as if it were a computer, he added, taking a very mechanistic perspective.

“Instead, since we are working on a memory model, we want to start with a human perspective.”

The main question inspiring the theorists was: As we experience the world that surrounds us, how do the signals we receive enable us to retrieve memories?

As Hopfield envisioned, it helps to conceptualize memory retrieval in terms of an energy landscape, in which the valleys are energy minima that represent memories.

Memory retrieval is like exploring this landscape; recognition is when you fall into one of the valleys. Your starting position in the landscape is your initial condition.
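For readers who want to see this landscape in symbols, here is a minimal sketch of a classic binary Hopfield network in Python. It is an illustration built on textbook assumptions, and the function names are ours: patterns stored with the Hebbian rule become the valleys, that is, local minima of the energy E(s) = -1/2 sᵀWs.

```python
# A tiny classic Hopfield network, to make the "energy landscape"
# picture concrete. All names here are ours, for illustration only.
import numpy as np

def hebbian_weights(patterns: np.ndarray) -> np.ndarray:
    """Store +/-1 patterns (one per row) with the Hebbian rule.

    Each stored pattern becomes a valley (local minimum) of the
    energy function below."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def energy(W: np.ndarray, s: np.ndarray) -> float:
    """Hopfield energy E(s) = -1/2 s^T W s; updates only move downhill."""
    return -0.5 * float(s @ W @ s)
```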

“Imagine you see a cat’s tail,” Bullo said. “Not the entire cat, but just the tail. An associative memory system should be able to recover the memory of the entire cat.”

According to the traditional Hopfield model, the cat’s tail (stimulus) is enough to put you closest to the valley labeled “cat,” he explained, treating the stimulus as an initial condition. But how did you get to that spot in the first place? 

“The classic Hopfield model does not carefully explain how seeing the tail of the cat puts you in the right place to fall down the hill and reach the energy minimum,” Bullo said.

“How do you move around in the space of neural activity where you are storing these memories? It’s a little bit unclear.”
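In code, the two-step picture being critiqued looks like this (continuing the toy network above; a sketch, not the paper's implementation). The partial stimulus is simply dropped in as the initial state, and the downhill dynamics take over from there:

```python
def recall(W: np.ndarray, cue: np.ndarray, steps: int = 500,
           seed: int = 0) -> np.ndarray:
    """Classic two-step retrieval: the cue *is* the initial condition,
    and asynchronous sign updates roll the state downhill in energy."""
    rng = np.random.default_rng(seed)
    s = cue.copy()
    for _ in range(steps):
        i = rng.integers(len(s))            # pick one neuron at random
        s[i] = 1 if W[i] @ s >= 0 else -1   # align it with its local field
    return s

# "Seeing the tail": a mostly obscured copy of a stored memory is the start.
rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 100))   # three stored "memories"
W = hebbian_weights(patterns)
tail = patterns[0].copy()
tail[:60] = rng.choice([-1, 1], size=60)        # hide most of the "cat"
print(np.mean(recall(W, tail) == patterns[0]))  # fraction of bits recovered
```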

The researchers’ Input-Driven Plasticity (IDP) model aims to address this lack of clarity with a mechanism that gradually integrates past and new information, guiding the memory retrieval process to the correct memory.

Instead of applying the two-step algorithmic memory retrieval on the rather static energy landscape of the original Hopfield network model, the researchers describe a dynamic, input-driven mechanism.

“We advocate for the idea that as the stimulus from the external world is received (e.g., the image of the cat tail), it changes the energy landscape at the same time,” Bullo said.

“The stimulus simplifies the energy landscape so that no matter what your initial position, you will roll down to the correct memory of the cat.”
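The paper works this out with specific plasticity dynamics; the toy below is our own construction and conveys only the flavor, not the authors' equations. Continuing the sketch above, the incoming stimulus adds a Hebbian-like term to the synapses themselves, so the landscape is reshaped while the state evolves instead of serving only as a starting point:

```python
def idp_recall(W: np.ndarray, u: np.ndarray, steps: int = 1000,
               gain: float = 1.0, seed: int = 2) -> np.ndarray:
    """Toy input-driven flavor: as the stimulus u streams in, it adds
    a Hebbian-like rank-one term to the synapses, deepening the valley
    nearest u, so retrieval depends far less on the starting position."""
    n = len(u)
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)            # arbitrary starting position
    for t in range(steps):
        lam = gain * t / steps                 # stimulus gradually integrated
        W_eff = W + lam * np.outer(u, u) / n   # input reshapes the landscape
        i = rng.integers(n)
        s[i] = 1 if W_eff[i] @ s >= 0 else -1
    return s

# The noisy "tail" now steers the landscape, not just the start point.
print(np.mean(idp_recall(W, tail) == patterns[0]))  # overlap with the memory
```

In this toy, the gain parameter stands in for how strongly the stimulus is integrated over time; the paper's mechanism combines past and new information in a more principled way.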

Additionally, the researchers say, the IDP model is robust to noise — situations where the input is vague, ambiguous, or partially obscured — and in fact uses the noise as a means to filter out less stable memories (the shallower valleys of this energy landscape) in favor of the more stable ones.
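A standard way to sketch that filtering effect (again with textbook tools, here Glauber dynamics, rather than the paper's specific noise model) is to make each neuron's update stochastic. Shallow valleys cannot hold the state against the noise, while deep, stable memories can:

```python
def noisy_recall(W: np.ndarray, s0: np.ndarray, steps: int = 1000,
                 beta: float = 3.0, seed: int = 3) -> np.ndarray:
    """Stochastic (Glauber) updates: neuron i adopts +1 with probability
    sigmoid(2 * beta * field). Moderate noise (finite beta) shakes the
    state out of shallow valleys -- weak, spurious memories -- while
    deep, stable memories hold on. A textbook mechanism used here only
    to sketch the filtering idea, not the paper's noise model."""
    rng = np.random.default_rng(seed)
    s = s0.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        field = W[i] @ s
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        s[i] = 1 if rng.random() < p_plus else -1
    return s
```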

“We start with the fact that when you’re gazing at a scene your gaze shifts in between the different components of the scene,” Betteti said.

“So at every instant in time you choose what you want to focus on but you have a lot of noise around.” Once you lock into the input to focus on, the network adjusts itself to prioritize it, he explained.

Choosing what stimulus to focus on, a.k.a. attention, is also the main mechanism behind another neural network architecture, the transformer, which has become the heart of large language models like ChatGPT.

While the IDP model the researchers propose “starts from a very different initial point with a different aim,” Bullo said, there’s a lot of potential for the model to be helpful in designing future machine learning systems.

“We see a connection between the two, and the paper describes it,” Bullo said. “It is not the main focus of the paper, but there is this wonderful hope that these associative memory systems and large language models may be reconciled.”

About this AI and memory research news

Author: Sonia Fernandez
Source: UC Santa Barbara
Contact: Sonia Fernandez – UC Santa Barbara
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Input-Driven Dynamics for Robust Memory Retrieval in Hopfield Networks” by Simone Betteti et al., Science Advances.


Abstract

Input-Driven Dynamics for Robust Memory Retrieval in Hopfield Networks

The Hopfield model provides a mathematical framework for understanding the mechanisms of memory storage and retrieval in the human brain.

This model has inspired decades of research on learning and retrieval dynamics, capacity estimates, and sequential transitions among memories.

Notably, the role of external inputs has been largely underexplored, from their effects on neural dynamics to how they facilitate effective memory retrieval.

To bridge this gap, we propose a dynamical system framework in which the external input directly influences the neural synapses and shapes the energy landscape of the Hopfield model.

This plasticity-based mechanism provides a clear energetic interpretation of the memory retrieval process and proves effective at correctly classifying mixed inputs.

Furthermore, we integrate this model within the framework of modern Hopfield architectures to elucidate how current and past information are combined during the retrieval process.

Last, we embed both the classic and the proposed model in an environment disrupted by noise and compare their robustness during memory retrieval.


