Abstract
Cognitive function arguably poses the greatest challenge for computational neuroscience. As we argue, past efforts to build neural models of cognition (the target article included) have focused too narrowly on implementing rule-based language processing. The problem with these models is that they sacrifice the advantages of connectionism rather than building on them. Recent, more promising approaches to modeling cognition build on the mathematical properties of distributed neural representations. These approaches truly exploit the key advantages of connectionism, namely the high representational power of distributed neural codes and similarity-based pattern recognition. The architectures for cognitive computing that emerge from these approaches are neural associative memories endowed with additional mapping operations to handle invariances and to form reduced representations of combinatorial structures.
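To make the closing sentence concrete, the following is a minimal Python sketch of such an architecture in the spirit of holographic reduced representations (Plate-style circular-convolution binding): random distributed codes, a binding operation that forms a reduced representation of a role-filler structure, and a similarity-based cleanup memory that plays the role of the neural associative memory. The vector dimensionality, the choice of circular convolution as the mapping operation, and the cosine-based cleanup step are all illustrative assumptions, not a specific model endorsed here.

    import numpy as np

    rng = np.random.default_rng(0)
    D = 1024  # dimensionality of the distributed code (illustrative choice)

    def randvec():
        # Random distributed pattern, normalized so dot products
        # approximate cosine similarity
        v = rng.normal(0.0, 1.0, D)
        return v / np.linalg.norm(v)

    def bind(a, b):
        # Circular convolution: maps two patterns to a single pattern of the
        # same dimension, i.e., a reduced representation of the pair (a, b)
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def unbind(c, a):
        # Approximate inverse of binding: convolve c with the involution of a
        a_inv = np.concatenate(([a[0]], a[:0:-1]))
        return bind(c, a_inv)

    def cleanup(query, memory):
        # Similarity-based retrieval: the associative-memory step that maps a
        # noisy pattern back to the closest stored item
        names, vecs = zip(*memory.items())
        sims = np.array([v @ query for v in vecs])
        return names[int(np.argmax(sims))]

    # Item memory of atomic distributed codes
    items = {name: randvec() for name in ["agent", "patient", "mary", "john"]}

    # Reduced representation of the structure "Mary (agent), John (patient)"
    # as a single vector of the same dimension as its constituents
    s = bind(items["agent"], items["mary"]) + bind(items["patient"], items["john"])

    # Query the combinatorial structure: who fills the agent role?
    noisy = unbind(s, items["agent"])
    print(cleanup(noisy, items))  # -> "mary"

Unbinding returns only an approximation of the stored filler; the cleanup memory is what restores the exact pattern, which is why the architecture is naturally described as an associative memory endowed with additional mapping operations.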