Foolproof convergence in multichain policy iteration
An example from undiscounted multichain Markov Renewal Programming shows that there may exist policies to which the Policy Iteration Algorithm (PIA) converges for some, but not all, choices of the additive constants in the relative values; as a consequence, the PIA may cycle if the relative values are improperly determined.
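To see where the additive constants enter, recall that the average-reward evaluation equations determine the relative values only up to a free constant (one per recurrent class in the multichain case); the normalization that fixes this constant is exactly the choice the abstract says can be made "improperly". The sketch below, which is illustrative and not the paper's counterexample, uses a hypothetical two-state unichain Markov decision process under a fixed policy and shows that different normalizations yield the same gain but shifted relative values:

```python
import numpy as np

# Hypothetical 2-state chain under a fixed policy (illustrative data only,
# not the counterexample from the paper): transition matrix P, rewards r.
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
r = np.array([1.0, 2.0])

def evaluate(P, r, anchor):
    """Solve the average-reward evaluation equations
        g + h[i] = r[i] + sum_j P[i, j] * h[j]
    with the normalization h[anchor] = 0.  The choice of `anchor` fixes
    the otherwise-free additive constant in the relative values h."""
    n = len(r)
    # Unknowns: the gain g and the relative values h[0..n-1].
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[:n, 0] = 1.0               # coefficient of g in each equation
    A[:n, 1:] = np.eye(n) - P    # (I - P) h
    b[:n] = r
    A[n, 1 + anchor] = 1.0       # normalization row: h[anchor] = 0
    x = np.linalg.solve(A, b)
    return x[0], x[1:]

g0, h0 = evaluate(P, r, anchor=0)
g1, h1 = evaluate(P, r, anchor=1)

# The gain g is unique, but the relative values differ by a constant shift:
assert np.isclose(g0, g1)
assert np.allclose(h0 - h0[0], h1 - h1[0])
```

In the unichain case this shift is harmless, since the improvement step compares action values whose difference is normalization-independent; the point of the paper's multichain example is that with several recurrent classes, and hence several independent constants, a bad choice can steer the improvement step and make the PIA cycle.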