How to conceptually engineer ‘entropy’ and ‘information’
Abstract
In this paper, we discuss how to conceptually engineer ‘entropy’ and ‘information’ as they are used in information theory and statistical mechanics. First, we evaluate the extent to which the pervasive, entangled use of entropy and information notions in these domains can be defective, for instance by being meaningless or by generating confusion. We then assess the main ameliorative strategies for improving this defective conceptual practice. The first strategy is to replace the terms ‘entropy’ and ‘information’ with non-loaded terms, as first argued by Bar-Hillel (1955). The second strategy is to prescribe how these terms should be used in order to be meaningful, as pioneered by Carnap (1952 [1977]) in Two Essays on Entropy. Historically, however, the implementation of these two ameliorative strategies has been unsuccessful, owing to the low standing that philosophers have among scientists as conceptual prescribers. Finally, to overcome these obstacles, we propose a third strategy: leveraging evidence of the contribution of ‘Philosophy in Science’ (à la Pradeau et al. 2024) to integrate conceptual prescriptions and analyses of entropy and information into the scientific practices in which these notions are used.