Beyond the Breaking Point? Survey Satisficing in Conjoint Experiments

By Kirk Bansak, Jens Hainmueller, Daniel J. Hopkins, Teppei Yamamoto
April 26, 2017
Working Paper No. 3522

Recent years have seen a renaissance of conjoint survey designs within social science. To date, however, researchers have lacked guidance on how many attributes they can include within conjoint profiles before survey satisficing leads to unacceptable declines in response quality. This paper addresses that question using pre-registered, two-stage experiments examining choices among hypothetical candidates for U.S. Senate or hotel rooms. In each experiment, we use the first stage to identify filler attributes that respondents perceive to be uncorrelated with the core attributes of interest, and whose effects therefore cannot be masked by those core attributes. In the second stage, we randomly assign respondents to conjoint designs with varying numbers of those filler attributes. We report the results of these experiments implemented via Amazon's Mechanical Turk or Survey Sampling International. The results demonstrate that our core quantities of interest are generally stable, with relatively modest increases in survey satisficing when respondents face large numbers of attributes.
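To make the second-stage randomization concrete, the sketch below illustrates one way such a design could be implemented in Python. The attribute names, levels, and filler counts are hypothetical placeholders, not the attributes or arms used in the paper's actual experiments.

```python
# Minimal sketch of a second-stage randomization of the kind described above,
# using hypothetical attribute names and levels: each respondent is assigned a
# design with a randomly chosen number of filler attributes, and each conjoint
# profile then draws one level per attribute at random.
import random

# Core attributes of interest (illustrative placeholders).
CORE_ATTRIBUTES = {
    "Party": ["Democrat", "Republican"],
    "Experience": ["None", "8 years in state legislature"],
}

# Filler attributes assumed to be perceived as uncorrelated with the core
# attributes (illustrative placeholders).
FILLER_ATTRIBUTES = {
    "Favorite sport": ["Baseball", "Soccer", "Golf"],
    "Pet": ["Dog", "Cat", "None"],
    "Hometown size": ["Small town", "Suburb", "Large city"],
    "Musical taste": ["Country", "Classical", "Rock"],
}

# Experimental arms: how many filler attributes a respondent sees (illustrative).
FILLER_COUNTS = [0, 2, 4]


def assign_design(rng: random.Random) -> dict:
    """Randomly choose how many (and which) filler attributes this respondent sees."""
    n_fillers = rng.choice(FILLER_COUNTS)
    fillers = rng.sample(list(FILLER_ATTRIBUTES), k=n_fillers)
    return {**CORE_ATTRIBUTES, **{a: FILLER_ATTRIBUTES[a] for a in fillers}}


def draw_profile(design: dict, rng: random.Random) -> dict:
    """Draw one conjoint profile by sampling one level for each attribute in the design."""
    return {attr: rng.choice(levels) for attr, levels in design.items()}


if __name__ == "__main__":
    rng = random.Random(0)
    design = assign_design(rng)
    # A choice task pairs two profiles drawn from the same respondent-level design.
    for i in (1, 2):
        print(f"Candidate {i}: {draw_profile(design, rng)}")
```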

Keywords
conjoint experiments, survey experiments, survey satisficing, response bias