Abstract
We study the phenomenon of merging of opinions for computationally limited Bayesian agents from the perspective of algorithmic randomness. When they agree on which data streams are algorithmically random, two Bayesian agents who begin the learning process with different priors may be seen as having compatible beliefs about the global uniformity of nature. This is because the algorithmically random data streams are of necessity globally regular: they are precisely the sequences that satisfy certain important statistical laws. By virtue of agreeing on which data streams are algorithmically random, two Bayesian agents can thus be taken to concur on what global regularities they expect to see in the data. We show that this type of compatibility between priors suffices to ensure that two computable Bayesian agents will reach intersubjective agreement with increasing information. In other words, it guarantees that their respective probability assignments will almost surely become arbitrarily close to each other as the number of observations increases. Thus, when shared by computable Bayesian learners with different subjective priors, the beliefs about uniformity captured by algorithmic randomness provably lead to merging of opinions.
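The merging claim in the abstract can be sketched formally. The notation below is an assumption for illustration, not taken from the abstract: $P$ and $Q$ are the two agents' priors, $\omega$ is the observed data stream, $\omega \upharpoonright n$ denotes its first $n$ observations, and $A$ ranges over a suitable class of events. In its strong (total-variation) form, merging of opinions reads:

```latex
% Hedged sketch of merging of opinions (Blackwell--Dubins style).
% Notation assumed for illustration: P, Q are the agents' priors,
% \omega \upharpoonright n is the first n observations, and the
% supremum is over a suitable class of events A.
\lim_{n \to \infty} \,
  \sup_{A}
  \bigl| P(A \mid \omega \upharpoonright n)
       - Q(A \mid \omega \upharpoonright n) \bigr|
  = 0
  \quad \text{almost surely.}
```

Weaker notions of merging instead fix the event $A$ (or restrict to near-future events) before taking the limit; which notion the result establishes is spelled out in the body of the paper, not the abstract.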