The principal-agent paradigm, in which a principal has a primary stake in the performance of some system but delegates operational control of that system to an agent, has many natural applications in operations management (OM). However, existing principal-agent models are of limited use to OM researchers because they cannot represent the rich dynamic structure required of OM models. This paper formulates a novel dynamic model that overcomes these limitations by combining the principal-agent framework with the physical structure of a Markov decision process. In this model, a system moves from state to state as time passes, with transition probabilities depending on actions chosen by an agent, and a principal pays the agent based on the state transitions observed. The principal seeks an optimal payment scheme, striving to induce the actions that will maximize her finite-horizon expected discounted profits. Although dynamic principal-agent models similar to the one proposed here are considered intractable, a set of assumptions is introduced that enables a systematic analysis. These assumptions involve the “economic structure” of the model but not its “physical structure”. Under these assumptions, the paper establishes that one can use a dynamic programming recursion to derive an optimal payment scheme. This scheme is memoryless and satisfies a generalization of Bellman’s principle of optimality. Important managerial insights are highlighted in the context of a two-state example called “the maintenance problem”.
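To make the physical structure concrete, the following is a minimal sketch of a finite-horizon dynamic programming recursion on a two-state Markov decision process in the spirit of the maintenance problem (state 0 = machine working, state 1 = machine broken). All transition probabilities, rewards, and costs here are hypothetical illustrations; the sketch shows only the Bellman-style backward induction over the MDP, not the paper’s derivation of an optimal payment scheme under the principal-agent assumptions.

```python
GAMMA = 0.9      # discount factor
T = 10           # length of the finite horizon
ACTIONS = ["maintain", "defer"]

# P[action][state] = (prob. of moving to state 0, prob. of moving to state 1)
# (hypothetical numbers for illustration only)
P = {
    "maintain": {0: (0.95, 0.05), 1: (0.80, 0.20)},
    "defer":    {0: (0.70, 0.30), 1: (0.10, 0.90)},
}

def reward(state, action):
    # One-period payoff: revenue while the machine works,
    # minus a maintenance cost when that action is taken (hypothetical values).
    revenue = 10.0 if state == 0 else 0.0
    cost = 3.0 if action == "maintain" else 0.0
    return revenue - cost

def backward_induction():
    # V[s] = optimal expected discounted reward-to-go from state s.
    V = {0: 0.0, 1: 0.0}
    policy = {}
    for _ in range(T):  # backward induction over the finite horizon
        newV = {}
        for s in (0, 1):
            best = None
            for a in ACTIONS:
                q = reward(s, a) + GAMMA * sum(
                    p * V[s2] for s2, p in enumerate(P[a][s])
                )
                if best is None or q > best[0]:
                    best = (q, a)
            newV[s] = best[0]
            policy[s] = best[1]
        V = newV
    return V, policy

V, policy = backward_induction()
print(policy)  # action the recursion selects in each state at the start
print(V)       # expected discounted value from each state
```

With these illustrative numbers the recursion defers maintenance while the machine works and maintains once it breaks; in the paper’s setting the analogous recursion instead computes payments that induce the agent to take such actions.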