In this paper, we propose a novel approach to associative memory by introducing a Trainable Learnable Associative Memory (TLAM). Conventional associative memory systems suffer from low capacity and from their reliance on hand-crafted, non-neural update algorithms. TLAM leverages a neural network-based architecture, enhancing both capacity and robustness. At its core is an energy function that satisfies Lyapunov's theorem, guiding the system's state toward a stable state. The model can accurately memorize and retrieve the original ground-truth features even when presented with arbitrarily distorted inputs, and its decoder effectively reconstructs the original signals, ensuring robust associative recall. We demonstrate that TLAM can learn and store complex patterns, significantly improving the efficiency and reliability of associative memory models. Our results highlight the potential of TLAM to advance the field of neural associative memory by providing a scalable and trainable solution.
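As a brief illustration of the stability property invoked above (not TLAM's specific formulation, which is defined later in the paper), a Lyapunov-style energy function for a discrete-time memory update $x_{t+1} = f(x_t)$ is one that is bounded below and non-increasing along the trajectory; the symbols $E$ and $f$ here are generic placeholders:
\[
E\bigl(f(x_t)\bigr) \;\le\; E(x_t) \quad \text{for all } x_t,
\qquad
E \;\text{bounded below},
\]
with equality only at fixed points $f(x^{*}) = x^{*}$. Under these conditions, repeated application of the update drives the state into a local energy minimum, which corresponds to the stable state retrieved by the memory.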