Multi-stage decision processes are considered, in notation that is an outgrowth of that introduced by Denardo [10]. Certain Markov decision processes, stochastic games, and risk-sensitive Markov decision processes are formulated in this notation. We identify conditions sufficient to prove that, in infinite horizon nonstationary processes, the optimal infinite horizon (present) value exists, is uniquely defined, is what is called “structured,” and can be found by solving Bellman’s optimality equations; that ε-optimal strategies exist; that an optimal strategy can be found by applying Bellman’s Principle of Optimality; and that a specially identified kind of policy, called a “structured” policy, is optimal in each stage. A link is thus drawn between (i) studies such as those of Blackwell [5,6,7] and Strauch, where general policies for general processes are considered, and (ii) other studies, such as those of Scarf and Derman, where structured policies for special processes are considered. The infinite stage results are built on finite stage results. Results for the stationary infinite horizon case are also included. Finally, three applications are given. The first shows how a known result, regarding the optimality of Borel measurable policies in an infinite stage nonstationary Markov decision process, can be derived using our approach. The second yields a new result: conditions sufficient to prove that a generalized (s,S) policy is optimal in each stage of an infinite horizon stochastic inventory process. The third is also new: conditions sufficient to prove that an optimal stationary strategy exists in a discounted stationary risk-sensitive Markov decision process with constant risk aversion.
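For orientation, Bellman’s optimality equations mentioned above take, in a standard discounted Markov decision process, the following familiar form. This is an illustrative sketch only; the symbols $V$, $r$, $p$, $A(s)$, and the discount factor $\alpha$ are assumed here and are not the abstract’s own notation:

```latex
% Bellman optimality equation for a discounted MDP (illustrative sketch;
% V = optimal value, r = one-stage reward, p = transition law,
% A(s) = feasible actions in state s, \alpha \in (0,1) = discount factor).
V(s) \;=\; \max_{a \in A(s)} \Big[\, r(s,a) \;+\; \alpha \sum_{s'} p(s' \mid s,a)\, V(s') \,\Big]
```

Similarly, an (s,S) inventory policy is the standard example of a structured policy: whenever the stock level falls below $s$, order up to $S$; otherwise, do not order. The second application concerns a generalized policy of this kind.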