Accounting for outcome and process measures in dynamic decision-making tasks through model calibration

Journal of Dynamic Decision Making 1 (1) (2015)

Abstract

Computational models of learning, and the theories they represent, are often validated by calibrating them to human data on decision outcomes. However, only a few models explain the process by which these decision outcomes are reached. We argue that models of learning should reflect the process through which decision outcomes are reached, and that validating a model on the process is likely to help explain both the process and the decision outcome. To demonstrate the proposed validation, we use a large dataset from the Technion Prediction Tournament and an existing Instance-Based Learning model. We present two ways of calibrating the model's parameters to human data: on an outcome measure and on a process measure. In agreement with our expectations, we find that calibrating the model on the process measure explains both the process and the outcome measures better than calibrating it on the outcome measure. These results hold when the model is generalized to a different dataset. We discuss implications for explaining both the process and the decision outcomes in computational models of learning.
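To make the calibration idea concrete, the sketch below illustrates what such a procedure can look like: a minimal Instance-Based Learning (IBL) agent for a hypothetical repeated binary-choice task, one outcome measure (overall risky-choice rate), one process measure (trial-to-trial alternation rate), and a grid search over the model's decay and noise parameters against human target values. The task payoffs, parameter grids, and human targets are illustrative assumptions, not the paper's actual data, model, or fitting method.

import math
import random

# Hypothetical two-option task (illustrative, not from the paper):
# option 1 ("risky") pays 4 with probability 0.8, else 0; option 0 ("safe") pays 3.

def run_ibl(decay, noise, trials=100, seed=0):
    """Simulate one IBL agent; return per-trial choices (0 = safe, 1 = risky)."""
    rng = random.Random(seed)
    # memory[option] -> list of (trial_observed, outcome) instances,
    # pre-populated with one optimistic instance per option (an assumption).
    memory = {0: [(0, 3.0)], 1: [(0, 3.2)]}
    choices = []
    for t in range(1, trials + 1):
        blended = []
        for opt in (0, 1):
            acts = []
            for (t_i, outcome) in memory[opt]:
                # Base-level activation with decay; instances are only retrieved
                # on later trials, so the time gap is always >= 1.
                base = -decay * math.log(t - t_i)  if t > t_i else 0.0
                u = rng.uniform(1e-9, 1.0 - 1e-9)
                eps = noise * math.log((1.0 - u) / u)   # logistic activation noise
                acts.append((base + eps, outcome))
            # Blended value: retrieval-probability-weighted mean outcome,
            # a common IBL formulation with temperature tau = sqrt(2) * noise.
            tau = noise * math.sqrt(2)
            denom = sum(math.exp(a / tau) for a, _ in acts)
            blended.append(sum(math.exp(a / tau) / denom * o for a, o in acts))
        choice = 1 if blended[1] > blended[0] else 0
        payoff = (4.0 if rng.random() < 0.8 else 0.0) if choice == 1 else 3.0
        memory[choice].append((t, payoff))   # store the experienced instance
        choices.append(choice)
    return choices

def outcome_measure(choices):
    """Outcome measure: overall proportion of risky choices."""
    return sum(choices) / len(choices)

def process_measure(choices):
    """Process measure: alternation rate between consecutive trials."""
    switches = sum(c1 != c0 for c0, c1 in zip(choices, choices[1:]))
    return switches / (len(choices) - 1)

def calibrate(human_target, measure, agents=30):
    """Grid-search decay/noise to minimize squared deviation from human data."""
    best = None
    for decay in (0.1, 0.5, 1.0, 1.5):
        for noise in (0.1, 0.25, 0.5):
            vals = [measure(run_ibl(decay, noise, seed=s)) for s in range(agents)]
            err = (sum(vals) / agents - human_target) ** 2
            if best is None or err < best[0]:
                best = (err, decay, noise)
    return best

# Hypothetical human targets: a 0.45 risky-choice rate and a 0.30 alternation rate.
print("fit to outcome measure:", calibrate(0.45, outcome_measure))
print("fit to process measure:", calibrate(0.30, process_measure))

The paper's point maps onto this setup directly: the two calls to calibrate can return different best-fitting parameter pairs, and the claim under test is that parameters chosen to match the process measure also reproduce the outcome measure better than the reverse.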
