Abstract
Computational models of legal precedent-based reasoning developed in AI and Law are typically based on the simplifying assumption that the background set of precedent cases is consistent. Besides being unrealistic in the legal domain, this assumption is problematic for recent promising applications of these models to the development of explainable AI methods. In this paper I explore a model of legal precedent-based reasoning that, unlike existing models, does not rely on the assumption that the background set of precedent cases is consistent. The model is a generalization of the reason model of precedential constraint. I first show that the model supports an interesting deontic logic, in which consistent obligations can be derived from inconsistent case bases. I then explain this surprising result by proposing a reformulation of the model in terms of cases that support a new potential decision and cases that conflict with it. Finally, I show that this reformulation makes it possible to establish that verifying whether a decision is permissible is not substantially more complex for inconsistent case bases than for consistent ones, and to introduce intuitive criteria for comparing different permissible decisions.