Finite-horizon sequential decision problems with a “temporal von Neumann-Morgenstern utility” criterion are analyzed. This criterion, as developed in , is a generalization of von Neumann-Morgenstern (expected) utility of the vector of rewards, in which an individual’s preferences concerning the timing of the resolution of uncertainty are taken into account. The preference theory underlying this criterion is reviewed and then extended in a natural fashion to yield preferences over strategies in sequential decision problems. The main result is that value functions for sequential decision problems can be defined by a dynamic programming recursion using the functions that represent the original preferences, and that these value functions represent the preferences defined on strategies. This permits the citation of standard results from the dynamic programming literature concerning the existence of (memoryless) strategies that are optimal with respect to the given preference relation.
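To make the recursion concrete, the following is a minimal sketch — not taken from the paper — of backward induction in which a per-period aggregator function combines the current reward with the expected continuation value. All names (`reward`, `transition`, `aggregator`) and the toy problem data are hypothetical illustrations; the nonlinearity one would put into the aggregator is what allows preferences over the timing of the resolution of uncertainty, and the resulting optimal policy is memoryless (a function of the period and current state only), echoing the result described above.

```python
# Hypothetical finite-horizon toy problem: T periods, two states, two actions.
T = 3
states = [0, 1]
actions = [0, 1]

def reward(t, s, a):
    # Hypothetical per-period reward.
    return (s + 1) * a

def transition(t, s, a):
    # Hypothetical transition law: list of (next_state, probability).
    if a == 0:
        return [(s, 1.0)]          # action 0: stay put
    return [(0, 0.5), (1, 0.5)]    # action 1: uniform over states

def aggregator(t, r, m):
    # Hypothetical temporal-utility aggregator u_t(reward, expected
    # continuation value).  Here it is linear (discounting by 0.9);
    # a nonlinear dependence on m would encode nontrivial preferences
    # over the timing of the resolution of uncertainty.
    return r + 0.9 * m

# Backward induction: V_T = 0, then
#   V_t(s) = max_a u_t( reward(t, s, a), E[ V_{t+1}(s') ] ).
V = {s: 0.0 for s in states}       # terminal value function
policy = {}                        # memoryless strategy: (t, s) -> action
for t in reversed(range(T)):
    newV = {}
    for s in states:
        best = None
        for a in actions:
            m = sum(p * V[s2] for s2, p in transition(t, s, a))
            val = aggregator(t, reward(t, s, a), m)
            if best is None or val > best:
                best, policy[(t, s)] = val, a
        newV[s] = best
    V = newV

print(V)  # value function at t = 0
```

The optimal strategy extracted here depends only on `(t, s)`, not on the history of past states or actions — the memoryless property that the standard dynamic programming results guarantee.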