The analysis of Markov decision problems with expected utility criteria, initiated in Kreps (1975b), is continued. Analogues of the standard dynamic-programming concepts of memoryless strategies and stationarity are provided. The key notion is that of summarized utility: sufficient information about past rewards is encoded in a summary, so that for well-behaved problems there exist optimal strategies in which the choice of action at any time depends only on the current state and summary. This leads naturally to considerations of stationarity and to the existence of optimal stationary strategies. Stationarity also yields a significant improvement in the strategy iteration procedure given in Kreps (1975b).
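As background for the strategy iteration procedure mentioned above, the following is a minimal sketch of standard policy iteration for a finite MDP under the ordinary expected-discounted-reward criterion. It illustrates the baseline case in which optimal stationary strategies depend only on the current state; Kreps's expected-utility setting generalizes this so that strategies may also depend on a summary of past rewards. The toy two-state MDP and all names below are illustrative assumptions, not taken from the paper.

```python
# Policy iteration on a toy 2-state, 2-action MDP (illustrative example;
# not the paper's construction).  Under an expected-utility criterion the
# state would be augmented with a "summary" of past rewards.
P = {  # P[a][s][t]: probability of moving from state s to t under action a
    0: [[0.9, 0.1], [0.4, 0.6]],
    1: [[0.2, 0.8], [0.5, 0.5]],
}
R = {  # R[a][s]: expected one-step reward in state s under action a
    0: [1.0, 0.0],
    1: [0.0, 2.0],
}
GAMMA = 0.9
N_STATES, ACTIONS = 2, [0, 1]

def evaluate(policy, iters=500):
    """Iterative policy evaluation: V(s) = R(s) + gamma * sum_t P(s,t) V(t)."""
    V = [0.0] * N_STATES
    for _ in range(iters):
        V = [R[policy[s]][s]
             + GAMMA * sum(P[policy[s]][s][t] * V[t] for t in range(N_STATES))
             for s in range(N_STATES)]
    return V

def policy_iteration():
    """Alternate evaluation and greedy improvement until the policy is stable."""
    policy = [0] * N_STATES
    while True:
        V = evaluate(policy)
        improved = [
            max(ACTIONS, key=lambda a: R[a][s]
                + GAMMA * sum(P[a][s][t] * V[t] for t in range(N_STATES)))
            for s in range(N_STATES)
        ]
        if improved == policy:  # no state changes its action: optimal
            return policy, V
        policy = improved

pi, V = policy_iteration()
```

The improvement step's monotonicity is what stationarity buys: each greedy update weakly increases the value in every state, so the procedure terminates at an optimal stationary policy after finitely many iterations.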