Models with adaptive agents have become increasingly popular in computational sociology (e.g., Macy 1991; Macy and Flache 2002). In this paper we show that at least two important kinds of such models lack empirical content. In the first type, players adjust via reinforcement learning: they adjust their propensities to undertake actions based on the feedback they receive. In the second type, players satisfice (i.e., they retain the same action if its payoff is satisfactory) and search when payoffs are unsatisfactory. In both types of models, feedback is coded as satisfactory if it exceeds some aspiration level, where aspirations may themselves adjust to reflect prior payoffs. We show that outcomes in either type of model are highly sensitive to initial parameters: any outcome of the stage game can be supported as a stable outcome. Intuitively, this occurs because players may be endowed with initial aspirations that make any outcome satisfactory, so the actions producing that outcome are reinforced by all players. These results hold even when players' aspirations are endogenous. We also present two solutions to this problem. First, we show that stochastic versions of the model ensure ergodicity: the players' action propensities and aspirations converge to a unique limiting distribution that is independent of their initial values. Second, we show that if players engage in social comparisons (specifically, an agent's aspiration depends on the payoffs of his peers, in addition to his own), then far fewer outcomes can be sustained in equilibrium.
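The sensitivity to initial parameters can be illustrated with a minimal sketch (not the paper's exact model): a Bush-Mosteller-style reinforcement learner with a fixed aspiration level in a repeated Prisoner's Dilemma. The payoff matrix, learning rate, and aspiration values below are illustrative assumptions. With aspirations set below the mutual-cooperation payoff, the dominated outcome (C, C) is satisfactory for both players and is therefore self-reinforcing, while mutual defection is unsatisfactory and is inhibited.

```python
import random

# Payoffs to (row player, column player) in a standard Prisoner's Dilemma.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(p1, p2, asp1, asp2, rate=0.5, rounds=2000, seed=0):
    """p_i: initial propensity to cooperate; asp_i: fixed aspiration level.

    Returns the final cooperation propensities after repeated play.
    """
    rng = random.Random(seed)

    def update(p, action, satisfied):
        # A satisfactory payoff reinforces the action just taken;
        # an unsatisfactory one inhibits it (pushes toward the other action).
        target = 1.0 if action == "C" else 0.0
        if not satisfied:
            target = 1.0 - target
        return p + rate * (target - p)

    for _ in range(rounds):
        a1 = "C" if rng.random() < p1 else "D"
        a2 = "C" if rng.random() < p2 else "D"
        u1, u2 = PAYOFFS[(a1, a2)]
        p1 = update(p1, a1, u1 >= asp1)
        p2 = update(p2, a2, u2 >= asp2)
    return p1, p2

# Aspirations below the mutual-cooperation payoff (3) make (C, C) satisfactory
# for both players, so cooperation is locked in despite being strictly dominated.
print(play(1.0, 1.0, asp1=2.0, asp2=2.0))  # → (1.0, 1.0)

# Mutual defection, by contrast, pays 1 < 2 and is inhibited: after one round
# each pure defector's cooperation propensity rises from 0.0 to 0.5.
print(play(0.0, 0.0, asp1=2.0, asp2=2.0, rounds=1))  # → (0.5, 0.5)
```

The same mechanics with different initial aspirations would stabilize a different outcome, which is the indeterminacy result in miniature: the modeler's choice of initial parameters, not the game's payoff structure, selects the long-run outcome.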