Neville told me about this neat article from ’04. It presents a way to offer rewards to people taking a poll that motivates them to be honest, with no prior information about what the distribution of correct answers is. Apparently, previous such techniques are based on the idea of rewarding people for agreeing with other people’s answers. The new thing about this technique for calculating the reward is that it gives people an incentive to report their true opinion even if they know they hold a minority viewpoint.

Drazen Prelec. A Bayesian Truth Serum for Subjective Data. Science, 15 October 2004: Vol. 306, no. 5695, pp. 462–466. DOI: 10.1126/science.1102081

Here’s an example that demonstrates the crux of the method. The question is, “Is Picasso your favorite visual artist?” — assume that Picasso lovers are a minority. We want to develop an incentive system that gives Picasso lovers an incentive to answer truthfully that Picasso is their favorite, even though they are in the minority:

People who, for example, rate Picasso as their favorite should — and usually do … — give higher estimates of the percentage of the population who shares that opinion, because their own feelings are an informative `sample of one’ …. It follows, then, that Picasso lovers — who have reason to believe that their best estimate of Picasso popularity is high compared to others’ estimates — should conclude that the true popularity of Picasso is underestimated by the population. Hence, one’s true opinion is also the opinion that has the best chance of being surprisingly common.

Based on this idea, the method rewards people for giving “surprisingly common” answers. Each person is asked not only for their own answer, but also to predict the frequency of each answer in the population. The following equation is used to calculate each person’s reward (equation 2, “score for respondent r” in the paper, page 5 of the PDF linked above):

log ((the actual frequency of the respondent’s answer in the poll) / (the geometric mean of all respondents’ predicted frequencies of that answer))

+

alpha * sum over all answers k of [(the actual frequency of answer k in the poll) * log((the respondent’s predicted frequency of answer k) / (the actual frequency of answer k))]

where alpha is a parameter between 0 and 1.

The first term rewards people for giving “surprisingly common” answers — answers whose actual frequency exceeds the (geometric) average predicted frequency. The second term rewards people for giving accurate predictions of the frequency of each answer.
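To make the formula concrete, here is a minimal NumPy sketch of that scoring rule. The function name `bts_scores` and the input layout are my own choices, not from the paper; the code assumes every answer appears at least once in the poll and that every predicted frequency is strictly positive (zero frequencies would need special handling, which the paper also discusses):

```python
import numpy as np

def bts_scores(answers, predictions, alpha=0.5):
    """Sketch of the Bayesian Truth Serum score for each respondent.

    answers:     length-n sequence of answer indices (0..m-1), one per respondent.
    predictions: n x m array; predictions[r][k] is respondent r's predicted
                 frequency of answer k (each row should sum to 1, all entries > 0).
    alpha:       weight on the prediction-accuracy term, 0 < alpha <= 1.
    """
    answers = np.asarray(answers)
    predictions = np.asarray(predictions, dtype=float)
    n, m = predictions.shape

    # Actual frequency of each answer in the poll.
    actual = np.bincount(answers, minlength=m) / n

    # Log of the geometric mean of predicted frequencies, per answer.
    log_geo_mean = np.log(predictions).mean(axis=0)

    scores = np.empty(n)
    for r in range(n):
        k = answers[r]
        # Information score: positive when r's answer is "surprisingly common",
        # i.e. more frequent than the geometric mean of the predictions.
        info = np.log(actual[k]) - log_geo_mean[k]
        # Prediction score: log-scoring penalty for inaccurate predictions
        # (zero when r's predictions match the actual frequencies exactly).
        pred = alpha * np.sum(actual * np.log(predictions[r] / actual))
        scores[r] = info + pred
    return scores
```

For example, with three respondents, two answers, and predictions that understate how common answer 0 actually is, the answer-0 respondents earn a positive information term.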

The paper goes on to show that, given this reward function, truth-telling is a Nash equilibrium, and furthermore that for sufficiently small alpha, the truth-telling equilibrium Pareto-dominates the other equilibria in expected score. It also discusses things that can go wrong, and what to do about them.

I cross-posted this here and elsewhere; I’m just noting that so it’s clear that neither site stole from the other.


If we assume that a common answer derives from the preference of a popular individual, which method would be valid to prove that a response honestly reflects a person’s real preference? Also assuming that the subject has the capacity to choose, aware that the most common answer tends to be the one preferred by the most popular individual: does this study accept that many are unable to form preferences of their own, or does it present subjects who acknowledge their own power to choose based on personal preference?


I’m glad to see at least a few other e-souls recognize BTS. The Internet has reduced distribution costs to zero, so the new constraint is filtering for quality of opinion and for experts. I think that BTS can beat consensus, Google, and even Last.fm-type algorithms. Let me know what you guys think about my ideas on Truthocracy:

http://emergentfool.com/2009/11/18/truthocracy-part-ii-discovering-truth-and-experts/
