In an Aeon article from last year, David Deutsch takes a dim view of Bayesianism as a core component of an AGI:
The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished.
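To make the "doctrine" concrete: the updating Deutsch describes is just Bayes' theorem. Given a prior credence $P(H)$ in some hypothesis $H$ and new evidence $E$, the agent revises its credence to

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

and then chooses whichever action looks best under the updated credences.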
Or rather, since the Bayesian AGI simply does whatever rewards it the most, it will align its 'values' accordingly. I think the problem lies in the hidden meaning of the word 'values'. What he is referring to here is probably something like a compressed version of the AGI's internal decision-making model. In theory, that is also what the personal or ethical values of a human being amount to. Of course, in practice a human's actions frequently depart from his advertised values, probably by design.
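Here is a minimal sketch of the behaviouristic picture Deutsch is objecting to. Everything in it is invented for illustration: the two hypotheses, the coin-flip reward signal, and the 'value weights' are stand-ins, not any real AGI design. The point is just that the agent's 'values' are nothing but reward-shaped numbers sitting on top of Bayesian credences.

```python
import random

# Prior credences over two hypotheses about which action pays off.
# (Hypothesis names are purely illustrative.)
beliefs = {"help_humans": 0.5, "ignore_humans": 0.5}

# 'Values' as reward-shaped weights: whatever is rewarded is reinforced.
values = {"help_humans": 1.0, "ignore_humans": 1.0}

def bayes_update(beliefs, likelihoods):
    """Revise credences given P(evidence | hypothesis) for each hypothesis."""
    posterior = {h: beliefs[h] * likelihoods[h] for h in beliefs}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

def choose_action(beliefs, values):
    """Pick the action with the highest credence-weighted value."""
    return max(beliefs, key=lambda h: beliefs[h] * values[h])

for step in range(10):
    action = choose_action(beliefs, values)
    reward = random.choice([0.0, 1.0])   # stand-in for 'experience'
    values[action] += reward             # rewarded 'values' are reinforced
    # Evidence nudges credence toward whichever action was rewarded.
    likelihoods = {h: (0.8 if (h == action) == (reward > 0) else 0.2)
                   for h in beliefs}
    beliefs = bayes_update(beliefs, likelihoods)

print(beliefs, values)
```

Notice that nothing in the sketch evaluates whether "help_humans" is actually a good value; the agent merely tracks whatever its reward channel reinforces, which is exactly the worry below.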
The core problem here is not Bayesianism but rather that a badly designed AGI could end up with a fitness function (and a corresponding set of values, ethics, etc.) that is not necessarily friendly to humans. Or perhaps he is suggesting this is a problem with any AGI that has a goal system. If so, what is the alternative?