Paul Resnick (presnick) wrote,

Collusion-resistant, Incentive-compatible

At EC'07, Radu Jurca presented a paper extending work on eliciting honest ratings to consider situations where a set of players may collude to increase their payments for ratings. The setting is the same as that of my paper with Nolan Miller and Richard Zeckhauser, "The Peer Prediction Method". That is, a set of raters are scored based on comparing their reports to the reports of other raters; there is no ultimate ground truth of whether the item is "good" that can be used to evaluate the raters.

Our paper showed that it is possible to construct payments that make honest reporting a Nash Equilibrium (i.e., the best thing to do if others are doing it) while creating an expected reward large enough to encourage the effort required for raters to acquire a quality signal about the item. The technique is based on proper scoring rules, applied to the posterior distribution for a reference rater's report, computed from the prior distribution and the rater's own report. Jurca and Faltings consider whether it's possible to make such incentive payments resistant to collusion (e.g., all raters agreeing to report that the item is good).
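To make that concrete, here is a rough sketch in Python of a peer-prediction-style payment for binary signals, using the logarithmic scoring rule as one example of a proper scoring rule. The prior and signal model below are illustrative assumptions, not numbers from either paper.

```python
import math

# Illustrative assumptions (not from either paper):
# prior that the item is truly good, and the probability a rater
# perceives "good" conditional on the item's true type.
p_good_item = 0.5
p_signal_good = {"good": 0.8, "bad": 0.2}

def posterior_item_good(my_report):
    """P(item is good | my reported signal), by Bayes' rule."""
    like_good = p_signal_good["good"] if my_report == "good" else 1 - p_signal_good["good"]
    like_bad = p_signal_good["bad"] if my_report == "good" else 1 - p_signal_good["bad"]
    num = like_good * p_good_item
    return num / (num + like_bad * (1 - p_good_item))

def prob_reference_reports_good(my_report):
    """P(reference rater reports "good" | my reported signal),
    assuming the reference rater reports honestly."""
    q = posterior_item_good(my_report)
    return q * p_signal_good["good"] + (1 - q) * p_signal_good["bad"]

def payment(my_report, reference_report):
    """Log scoring rule applied to the implied distribution over the
    reference rater's report, evaluated at what that rater actually said."""
    p = prob_reference_reports_good(my_report)
    return math.log(p) if reference_report == "good" else math.log(1 - p)
```

Because the scoring rule is proper, the expected payment is maximized by reporting the signal you actually perceived; in practice the payments would be shifted and scaled so that the expected reward covers the cost of acquiring the signal.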

Interestingly, the authors find that it is useful to base the incentive payments on the ratings of more than one reference rater. Instead of simply summing payments computed independently from each reference rater's report, which I had assumed would be the most effective approach, the payments are tied to a count of how many reference raters report that the item is good. Suppose, for example, that the implied probability of each reference rater reporting "good" is 0.6; then the number who report "good" follows a binomial distribution. By carefully choosing the points a rater gets for reporting "good" or "bad" when n other people report "good", it is possible to rule out some forms of collusion. For example, with 10 reference raters who each report "good" with prior probability 0.5, we can make the payoff 0 when either none or all report "good", yet make the payoff for reporting "good" when there are 6 "good" reports in total high enough that you will want to report "good" whenever you perceive it, provided you think others will report honestly. Nolan Miller, Richard Zeckhauser and I had the basic intuition that we could punish all the raters if there was "more than the expected amount of agreement"; this paper fleshes out that intuition with a concrete way of setting the incentive payments.
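Here is a toy numerical check of the count-based idea, with an illustrative payment schedule that is not the paper's actual one. The posterior beliefs (0.6 if you saw "good", 0.4 if you saw "bad") are assumptions for the sake of the example; the point is just that a schedule which pays zero on unanimous counts can still make honest reporting the better response in expectation.

```python
from math import comb

N = 10  # number of reference raters
# Assumed posterior belief that each reference rater reports "good",
# conditional on the signal I actually observed.
p_given = {"good": 0.6, "bad": 0.4}

def binom_pmf(n, N, p):
    return comb(N, n) * p**n * (1 - p)**(N - n)

# Illustrative schedule: pay nothing when agreement among the reference
# raters is total (0 or N "good" reports); otherwise reward reports that
# side with the count.
tau = {
    "good": [0 if n in (0, N) else n for n in range(N + 1)],
    "bad":  [0 if n in (0, N) else N - n for n in range(N + 1)],
}

def expected_pay(report, observed):
    p = p_given[observed]
    return sum(binom_pmf(n, N, p) * tau[report][n] for n in range(N + 1))

for observed in ("good", "bad"):
    honest = expected_pay(observed, observed)
    lie = expected_pay("bad" if observed == "good" else "good", observed)
    print(observed, round(honest, 3), round(lie, 3))  # honest beats lying in both cases
```

Under these assumed beliefs, honest reporting yields roughly 5.94 versus 4.00 for misreporting after a "good" signal, and symmetrically after a "bad" signal, even though unanimous "good" (the obvious collusive target) pays nothing.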

The most interesting result in this paper comes in section 7, which considers "sybil attacks". One person controls several raters, which I'll refer to as sybils (split identities of the person). They each acquire a real signal, and the person is trying to maximize the sum of the expected payoffs across those raters. The authors find that, depending on the particular prior distribution, if one or just a few reference raters are assumed to act honestly, the incentive payments can be constructed so that even if the rest of the raters are sybils controlled by a single entity, they cannot do better than to report the same number of "good" ratings as they actually perceived. The technique is a brute-force approach (automated mechanism design): it writes down each of the incentive compatibility constraints (for each possible number of "good" ratings perceived, the expected payoff, given the distribution of ratings from the honest raters, is higher for honest reporting than for any false report) and then solves a linear program to find the smallest expected payment subject to those constraints. It would be nice to get some stronger intuitions about what kind of payments will be selected by the brute-force approach. That is, how is it leveraging the small number of honest raters to drive the colluding raters toward honest reporting? Still, I laud the authors for fine work in demonstrating that it generally is possible to resist such collusion, so long as a few honest raters can be expected to be around.
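To illustrate the flavor of the automated mechanism design step, here is a drastically simplified sketch: write the incentive-compatibility constraints as linear inequalities in the payments and let an LP solver find the cheapest schedule. This is a single-rater version that omits the paper's sybil and collusion constraints, and all the numbers (beliefs, priors, margin) are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog
from math import comb

N = 10                                  # reference raters
p_given = {"good": 0.6, "bad": 0.4}     # assumed posterior belief about each reference report
prior_signal = {"good": 0.5, "bad": 0.5}
margin = 1.0                            # how much honest reporting must beat lying by

def pmf(p):
    """Binomial distribution over how many reference raters report "good"."""
    return np.array([comb(N, n) * p**n * (1 - p)**(N - n) for n in range(N + 1)])

dist = {s: pmf(p_given[s]) for s in ("good", "bad")}

# Decision variables: x = [tau_good[0..N], tau_bad[0..N]],
# the payment for each own-report given the count of "good" reference reports.

# Objective: minimize the expected payment to an honest rater.
c = np.concatenate([prior_signal["good"] * dist["good"],
                    prior_signal["bad"] * dist["bad"]])

# Incentive-compatibility constraints, as A_ub @ x <= b_ub:
#   E[pay | lie, observed s] - E[pay | honest, observed s] <= -margin
A_ub = np.array([
    np.concatenate([-dist["good"], dist["good"]]),   # observed "good"
    np.concatenate([dist["bad"], -dist["bad"]]),     # observed "bad"
])
b_ub = np.array([-margin, -margin])

# Payments are nonnegative and, echoing the unanimity-punishment idea,
# forced to zero when all or none of the reference raters say "good".
bounds = [(0, 0) if n in (0, N) else (0, None) for n in range(N + 1)] * 2

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
tau_good, tau_bad = res.x[:N + 1], res.x[N + 1:]
print(np.round(tau_good, 2), np.round(tau_bad, 2))
```

The paper's formulation adds constraints covering every joint misreporting strategy a sybil-controlling entity could use, with the honest reference raters' ratings entering the expectations; the LP machinery is the same, just over a much larger constraint set.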

Radu Jurca and Boi Faltings, "Collusion-resistant, Incentive-compatible Feedback Payments", Proceedings of ACM EC'07, pp. 200-209.