Abstract
Humans learn new concepts extremely fast. One or two examples of a new concept are often sufficient for us to grasp its meaning. Traditional theories of concept formation, such as symbolic or connectionist representations, have difficulty explaining the rapid learning exhibited by humans. In contrast to these representations, I advocate a third form of representing categories, one that employs geometric structures. I argue that this form is appropriate for modeling concept learning. Using the geometric structures of what I call “conceptual spaces,” I define properties and concepts. I then present a learning model that shows how properties and concepts can be learned in a simple but naturalistic way. This model accounts well for the role of similarity judgments in concept learning. Finally, as an application, the concept representations are used to give an analysis of nonmonotonic reasoning.