Extrapolations are considered for use in conjunction with iterative methods for finding the infinite-horizon values of discounted finite Markov reward chains. The iterative methods considered include the Jacobi, Gauss-Seidel, and successive over-relaxation (SOR) methods. Optimal such extrapolations, along given directions, are readily found when the L2 norm is used as the criterion. Certain optimal lower-bound extrapolations, which apply in the Jacobi and Gauss-Seidel cases, are also easily found. We present an easily administered algorithm for reordering the equations that can reduce the L∞ norm of the Gauss-Seidel matrix. Such a reduction not only provides certain theoretical benefits but can also accelerate convergence. For a typical Markov reward chain example, we find that reordering yields dramatic improvement, that SOR provides the best performance when the correct relaxation factor is used, that sensitivity to the choice of relaxation factor can be substantially reduced by using one of the optimal L2 extrapolations presented, and that nearly optimal performance can be achieved by reordering and applying Gauss-Seidel with a simple lower-bound extrapolation.
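For concreteness, the underlying computation is the value equation v = r + βPv for a finite chain with transition matrix P, reward vector r, and discount factor β ∈ (0, 1), which the Jacobi, Gauss-Seidel, and SOR methods solve by sweeps. The Python sketch below is illustrative only: the function names, tolerance, and two-state data are assumptions, and the least-squares step is one plausible reading of an L2-optimal extrapolation along a given direction, not necessarily the paper's exact formulation.

```python
import numpy as np

def sor_sweep(P, r, beta, v, omega=1.0):
    """One in-place SOR sweep for the system v = r + beta * P @ v.

    omega = 1.0 reduces to a Gauss-Seidel sweep; a Jacobi sweep would
    update every component from the previous iterate instead.
    """
    for i in range(len(r)):
        # Gauss-Seidel target for component i, using the freshest values of v
        target = (r[i] + beta * (P[i] @ v - P[i, i] * v[i])) / (1.0 - beta * P[i, i])
        v[i] = (1.0 - omega) * v[i] + omega * target  # relax toward the target
    return v

def l2_extrapolation(P, r, beta, v, d):
    """Shift v along direction d by the step alpha that minimizes the L2 norm
    of the residual r - (I - beta * P) @ (v + alpha * d): a least-squares step.
    """
    rho = r - (v - beta * (P @ v))  # current residual
    q = d - beta * (P @ d)          # change in residual per unit step
    return v + ((rho @ q) / (q @ q)) * d

# Tiny two-state example (hypothetical data)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
r = np.array([1.0, 2.0])
beta = 0.95
v = np.zeros(2)
for _ in range(100):
    v_prev = v.copy()
    v = sor_sweep(P, r, beta, v, omega=1.0)  # plain Gauss-Seidel
    d = v - v_prev
    if np.max(np.abs(d)) < 1e-12:            # converged; avoid a zero direction
        break
    v = l2_extrapolation(P, r, beta, v, d)   # extrapolate along the sweep step
# v now approximates np.linalg.solve(np.eye(2) - beta * P, r)
```

As in the abstract's findings, the extrapolation step is what reduces sensitivity to the choice of relaxation factor: even with ω fixed at 1 (Gauss-Seidel), the least-squares shift along the latest sweep direction recovers much of the acceleration that a well-tuned SOR factor would otherwise provide.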