Does Prelec's Bayesian Truth Serum Prevent Social Desirability Bias in Choice Experiments?
Social scientists are well aware of the difficulty of eliciting respondents' private information, such as individual preferences. In the context of choice experiments (CE), for example, respondents tend to misrepresent their true preferences in an attempt to please the researcher and to provide socially desirable answers. This tendency, known as social desirability bias (SDB), is pervasive and has been found to occur in many types of self-reporting measures across the social science literature (see, e.g., Fisher, 1993; King and Bruner, 2000; Leggett et al., 2003; Nederhof, 1985; Fisher and Katz, 2000). If not properly addressed, SDB can have important negative economic and policy implications. Given its pervasiveness and relevance, it is not surprising that a sizable body of literature investigates alternative approaches aimed at minimizing or circumventing SDB. Among the solutions proposed to attenuate SDB in CE, the use of actual payments (to increase the salience of the task), SDB-specific cheap talk scripts, and the use of social desirability scales to correct for SDB have been shown to be unsatisfactory (e.g., Norwood and Lusk, 2011). The current state of the art in dealing with SDB in CE is indirect questioning (also called inferred questioning or inferred valuation) (IQ), where respondents are asked to predict the behavior of others instead of reporting their own. The idea behind IQ is that, because people have no direct access to others' private information, their answers are based on their own preferences and valuations, adjusted for perceived differences between themselves and others. Responses to IQ are therefore correlated with respondents' own preferences and are believed to be closer to actual preferences than responses to direct questions (Fisher, 1993). IQ has recently been implemented by Lusk and Norwood (2009), Lusk and Norwood (2010), and Carlsson et al. (2010).
A new mechanism to prevent SDB, which incorporates IQ and is as yet untested in the context of CE, is the Bayesian Truth Serum (BTS) developed by Prelec (2004, Science) and recently discussed as a credible and persuasive incentive to tell the truth (Weaver and Prelec, 2013). This mechanism is an information-scoring system that allows the researcher to reward respondents for "truthful" answers and can be employed to elicit subjective information (e.g., opinions, tastes, past behavior) in situations where the objective truth is intrinsically or practically unknowable. The key idea behind BTS is to assign a high score to an answer whose actual frequency is greater than its predicted frequency, with predictions drawn from the same population that supplies the answers. This complex scoring mechanism is not revealed to respondents, whose attention is instead drawn to the fact that their answers (to both direct and indirect questions) will be evaluated as "truthful" and "accurate" according to this new scientific method. Each respondent is asked to provide a personal answer (the equivalent of a direct question) and an estimate of how other respondents will answer the same question (the equivalent of an IQ). Both (direct and indirect) answers are scored to produce an aggregate score. As Prelec (2004) states, the scoring system transforms a survey into a competitive, zero-sum contest in which truth-telling is a strict Bayesian Nash equilibrium.
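The scoring rule described above can be sketched in a few lines of code. The sketch below follows the standard formulation in Prelec (2004) — an "information score" that rewards surprisingly common answers plus a prediction score that penalizes inaccurate forecasts of others' answers — but the function and variable names are illustrative, not part of the study's implementation.

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Illustrative Bayesian Truth Serum scores (after Prelec, 2004).

    answers: list of chosen option indices, one per respondent.
    predictions: predictions[r][k] is respondent r's predicted
        frequency of option k (each row sums to 1, all entries > 0).
    alpha: weight on the prediction score.
    """
    n = len(answers)
    n_opts = len(predictions[0])

    # x_bar[k]: actual (empirical) frequency of each answer.
    x_bar = [sum(1 for a in answers if a == k) / n for k in range(n_opts)]

    # y_bar[k]: geometric mean of the predicted frequencies of option k.
    y_bar = [math.exp(sum(math.log(p[k]) for p in predictions) / n)
             for k in range(n_opts)]

    scores = []
    for r in range(n):
        # Information score: high when the chosen answer is
        # "surprisingly common" (actual frequency > predicted frequency).
        info = math.log(x_bar[answers[r]] / y_bar[answers[r]])
        # Prediction score: a (negative) KL-divergence-style penalty
        # for predicting others' answers inaccurately.
        pred = alpha * sum(x_bar[k] * math.log(predictions[r][k] / x_bar[k])
                           for k in range(n_opts) if x_bar[k] > 0)
        scores.append(info + pred)
    return scores
```

In a survey application only the total score per respondent is reported (and used to rank respondents for rewards); the decomposition into information and prediction components stays hidden, consistent with not revealing the mechanism to participants.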
Our study represents the first application of BTS in the context of CE to elicit preferences for ethical and moral attributes. We ran a CE with a random sample of 800 customers of an organic store chain in three large Italian cities (Rome, Milan, Palermo). The product used in the CE is organic pasta, a staple of Italian cuisine. The pasta attributes considered in the CE include several normative dimensions (e.g., a fair price to producers, environmentally friendly production, a socially responsible workforce). Data were collected at the grocery stores using touch-screen, computer-assisted face-to-face interviews.
All respondents faced a set of direct and a set of indirect questions. The direct questions (6 choice cards) asked consumers to choose among pasta types (a traditional CE question). The indirect questions asked respondents to predict the answers/choices of a sample of the store's customers participating in the study (an IQ-type question). We considered three treatments with varying incentives to truthfully reveal preferences (BTS) and/or to provide accurate predictions. Specifically, the three treatments provided the following incentives: (i) no incentive (control treatment), (ii) incentives to provide accurate estimates of others' purchase intentions, and (iii) BTS-based incentives to reveal truthful preferences and provide accurate predictions of others' answers. As an economic incentive we used store coupons worth 30 euros, assigned at the end of the study to the customers who performed best in their tasks.
We find significant differences between WTP estimates derived from direct and indirect questioning. Consistent with previous literature, direct questioning leads to higher WTP than indirect questioning, suggesting that direct questioning may be subject to SDB. With regard to the ability of BTS to reveal truthful preferences, we do not find a significant difference in mean estimates, but we do find a significant effect on the accuracy of the responses, suggesting that the incentives provided could nevertheless improve estimates.
The subject, methods, and findings of this study should appeal to a broad cross-section of attendees of the 2015 International Choice Modelling Conference, in particular researchers interested in methods to elicit consumer or health service preferences and in experimental and behavioral economics. In addition to generating discussion on elicitation methods and incentives in choice experiments, it is hoped that this paper will highlight and motivate new research on improving state-of-the-art elicitation techniques in choice experiments.
Book / Congress title:
International Choice Modelling Conference 2015