The analysis of structured countable stage decision processes, initiated in Porteus, is continued. The standard models of positive and negative dynamic programming are given in this context, thus extending these results to criteria other than the usual expected sum of rewards, such as expected utility criteria, certain stochastic games, risk-sensitive Markov decision processes, and maximin criteria. For positive problems, so-called unimprovable strategies are optimal, and the optimal value sequence is the least solution of the optimality equations that exceeds an obvious lower bound. For negative problems, conserving strategies are optimal, and if one strategy is a one-step improvement on another, then it nets a greater value; this rules out cycling in the strategy iteration procedure. Transfinite methods are also used to prove that the optimal value sequence is the greatest solution of the optimality equations below an obvious upper bound. We indicate how all these results extend to analogues of the essentially positive, essentially negative, and convergent cases.
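The improvement property stated for negative problems can be illustrated with a minimal sketch. The tiny deterministic decision process below, with its states, actions, and rewards, is a hypothetical example (not the paper's formalism): all rewards are nonpositive, as in the negative case, and a one-step (greedy) improvement of a strategy yields a value at least as great in every state, which is what precludes cycling in strategy iteration.

```python
# Hypothetical three-state process: states 0 and 1 are transient,
# state 2 is terminal (value 0).  All rewards are <= 0 (negative case).
REWARD = {(0, 'a'): -2, (1, 'a'): -4, (1, 'b'): -1}
NEXT = {(0, 'a'): 1, (1, 'a'): 2, (1, 'b'): 2}
ACTIONS = {0: ['a'], 1: ['a', 'b']}
TERMINAL = 2

def evaluate(policy):
    """Value of a stationary strategy: total (nonpositive) reward earned
    before reaching the terminal state."""
    def value(s):
        if s == TERMINAL:
            return 0.0
        a = policy[s]
        return REWARD[s, a] + value(NEXT[s, a])
    return {s: value(s) for s in ACTIONS}

def one_step_improvement(policy):
    """Greedy improvement: in each state, pick the action maximizing the
    one-step lookahead evaluated with the old strategy's value."""
    v = evaluate(policy)
    def lookahead(s, a):
        s2 = NEXT[s, a]
        return REWARD[s, a] + (0.0 if s2 == TERMINAL else v[s2])
    return {s: max(ACTIONS[s], key=lambda a: lookahead(s, a)) for s in ACTIONS}

pi0 = {0: 'a', 1: 'a'}          # initial strategy
pi1 = one_step_improvement(pi0)  # one-step improvement of pi0
v_old, v_new = evaluate(pi0), evaluate(pi1)
# v_new dominates v_old in every state, so strategy iteration cannot cycle.
```

Here the improvement step replaces the action in state 1 with the cheaper exit, and the new strategy's value strictly dominates the old one's in both transient states.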