Abstract
The Sparse Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two models is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of that dimension. In both models the stored information is proportional to the number of connections, and it is shown that this proportionality constant is the same for the SDM, the Hopfield model, and higher-order models. The models are also compared in their ability to store and recall temporal sequences of patterns. The SDM is extended to include time delays so that contextual information can be used to recover sequences. A generalization of the SDM allows storage of correlated patterns.