In models of aspiration-based reinforcement learning, agents adapt by comparing the payoffs achieved from actions chosen in the past with an aspiration level. Though such models are well established in behavioral psychology, only recently have they begun to receive attention in game theory and its applications to economics and politics. This paper provides an informal overview of a range of such theories applied to repeated interaction games. We describe three models of aspiration formation: (1) aspirations are fixed but required to be consistent with long-run average payoffs; (2) aspirations evolve on the basis of a player's own past experience or that of previous generations of players; and (3) aspirations are based on the experience of peers. Convergence to non-Nash outcomes can occur under any of these formulations. Indeed, cooperative behavior can emerge and survive in the long run, even though it may be a strictly dominated strategy in the stage game, and despite the myopic adaptation of stage game strategies. Differences between reinforcement learning and evolutionary game theory are also discussed.
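To make the basic adaptation rule concrete, the following is a minimal Python sketch of aspiration-based play in a repeated Prisoner's Dilemma, written in the spirit of the models surveyed here rather than as a reproduction of any one of them. The payoff values, the aspiration-persistence parameter `lam`, the fixed switching probability `switch_prob`, and the action-level perturbation `tremble` are all illustrative assumptions: each player keeps an action and an aspiration, switches actions probabilistically when the realized payoff falls short of the aspiration, and updates the aspiration as a weighted average of its old value and the realized payoff.

```python
import random

# Prisoner's Dilemma payoffs, PAYOFF[my_action][other_action].
# Actions: 0 = cooperate, 1 = defect. Defection strictly dominates.
PAYOFF = [[3, 0],
          [4, 1]]

def simulate(rounds=200_000, lam=0.99, switch_prob=0.5,
             tremble=0.001, seed=0):
    """Aspiration-based adaptation in a repeated Prisoner's Dilemma.

    Each player keeps a current action and an aspiration level.
    A player whose realized payoff falls short of the aspiration
    switches actions with probability `switch_prob`; aspirations
    track realized payoffs as a weighted average with persistence
    `lam`. Rare `tremble` perturbations keep the process moving.
    Returns the fraction of rounds spent at mutual cooperation.
    """
    rng = random.Random(seed)
    actions = [rng.randrange(2), rng.randrange(2)]
    aspirations = [2.0, 2.0]  # arbitrary initial aspirations
    coop_rounds = 0

    for _ in range(rounds):
        payoffs = [PAYOFF[actions[0]][actions[1]],
                   PAYOFF[actions[1]][actions[0]]]
        if actions == [0, 0]:
            coop_rounds += 1
        for i in range(2):
            # Dissatisfied players switch with probability switch_prob.
            if payoffs[i] < aspirations[i] and rng.random() < switch_prob:
                actions[i] = 1 - actions[i]
            # Occasional tremble in the action choice.
            if rng.random() < tremble:
                actions[i] = 1 - actions[i]
            # Aspirations adjust slowly toward realized payoffs.
            aspirations[i] = lam * aspirations[i] + (1 - lam) * payoffs[i]

    return coop_rounds / rounds

if __name__ == "__main__":
    print(f"Share of rounds at mutual cooperation: {simulate():.2f}")
```

Run as a script, the sketch reports the share of rounds spent at mutual cooperation. The point is qualitative rather than quantitative: dissatisfaction-driven switching combined with slowly adjusting aspirations (`lam` near 1) and small perturbations can hold play at the cooperative, non-Nash outcome for long stretches, even though defection strictly dominates in the stage game.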