New improved bounds on the optimal return function for finite-state, finite-action, infinite-horizon, discounted stationary Markov decision chains are developed. They require solving a single-constraint, bounded-variable linear program, which can be done by marginal analysis. The bounds can be used in algorithmic termination criteria and in improved tests for suboptimal decisions based on MacQueen's criterion. We show how to implement these tests so that little additional computational effort is required. We consider transformations that convert a process into an equivalent one that may be easier to solve. We present six general transformations (some of which are new) and discuss their important characteristics; for example, we examine whether they reduce the spectral radius and/or the norm (maximum row sum) of the process. The new bounds require equal row sums (as in Blackwell). We show that this is not a limitation. Given a process with equal row sums, we specify special cases of the general transformations that reduce the norm while preserving equality of the row sums. One of these, called the diagonal reduction, reduces every diagonal element by the smallest such element and increases the off-diagonal elements. When the same general transformation is used to reduce all diagonal elements to zero, we obtain Jacobi iteration (and possibly unequal row sums). Another special case, called the scalar fixed matrix reduction, may degrade the sparsity of the process. A third, called the Ross reduction, not only retains equality of the row sums and preserves sparsity, but can reduce the norm dramatically when it applies: for each column, it reduces every element in that column by the smallest element of that column.
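The Ross reduction is specified completely in the abstract (subtract each column's minimum from every element of that column), so it can be sketched directly. The snippet below is an illustration on a hypothetical discounted transition matrix with equal row sums, not an example from the paper; it checks that equal row sums are preserved (each row sum drops by the sum of the column minima) and that the norm, i.e. the maximum row sum, is reduced.

```python
def ross_reduction(P):
    """Subtract each column's minimum from every element of that column."""
    n = len(P)
    col_min = [min(P[i][j] for i in range(n)) for j in range(n)]
    return [[P[i][j] - col_min[j] for j in range(n)] for i in range(n)]

def max_row_sum(P):
    """The norm used in the abstract: the maximum row sum."""
    return max(sum(row) for row in P)

# Hypothetical discounted transition matrix; every row sums to 0.90.
P = [[0.45, 0.30, 0.15],
     [0.30, 0.40, 0.20],
     [0.35, 0.25, 0.30]]

Q = ross_reduction(P)

# Column minima are 0.30, 0.25, 0.15 (summing to 0.70), so every row sum
# of Q equals 0.90 - 0.70 = 0.20: the row sums stay equal and the norm
# drops from 0.90 to 0.20.
print(max_row_sum(P), max_row_sum(Q))
```

Because the same total (the sum of the column minima) is removed from every row, equality of the row sums is automatic; the reduction in norm is large exactly when every column has a large smallest element.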