Removing bias from moderation

A few weeks ago a friend of mine who had been thinking about reader-edited forums (like K5) posed an interesting question. He was concerned about how people's biases would influence their voting decisions, and wondered whether there might be a way to identify and filter out the effects of such bias. Of course, in some situations bias is expected, such as in political elections; in others, such as when a jury must vote on someone's guilt or innocence, or when a Slashdot moderator must rate a comment, bias is undesirable. After some thought, I came up with a proposal for such a system.

First, what do we mean by "bias"? It is a difficult question to answer exactly; examples would include left- or right-wing political bias, nationalist bias, anti-Microsoft bias, and racial bias. The dictionary definition is "A preference or an inclination, especially one that inhibits impartial judgment." Implicit in the mechanism I am about to describe is a more precise definition of bias; it is the aptness of this definition that will determine the effectiveness of this approach.

Visitors to websites such as Amazon and users of tools like StumbleUpon will be familiar with a mechanism known as "Automatic Collaborative Filtering", or ACF. Amazon's recommendations are based on what other people with similar tastes also liked; this is collaborative filtering in action. There are many collaborative filtering algorithms, varying widely in sophistication and processing requirements, but all are designed to do more or less the same thing: anticipate how much you will like something based on how much similar people liked it. One way to look at it is that collaborative filtering tries to learn your biases and anticipate how they will influence how much you like something.
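To make the ACF idea concrete, here is a minimal sketch of user-based collaborative filtering. The data layout (a dictionary mapping each user to their item ratings) and the cosine-similarity weighting are illustrative assumptions on my part, not the particular algorithms Amazon or StumbleUpon actually use.

```python
from math import sqrt

def similarity(a, b, ratings):
    """Cosine similarity between two users over the items both have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in shared)
    norm_a = sqrt(sum(ratings[a][i] ** 2 for i in shared))
    norm_b = sqrt(sum(ratings[b][i] ** 2 for i in shared))
    return dot / (norm_a * norm_b)

def predict_rating(user, item, ratings):
    """Predict a user's rating for an item as a similarity-weighted average
    of the ratings given by other users who have rated that item."""
    weighted_sum, total_weight = 0.0, 0.0
    for other in ratings:
        if other == user or item not in ratings[other]:
            continue
        w = similarity(user, other, ratings)
        weighted_sum += w * ratings[other][item]
        total_weight += abs(w)
    return weighted_sum / total_weight if total_weight else None
```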

My idea was to use ACF to estimate someone's bias towards or against a particular article, and then remove the effect of that bias from their vote. The bias is assumed to be the difference between their anticipated vote according to ACF and the global average vote for that article. Having determined this, we simply subtract it from their actual vote.
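The adjustment itself is easy to express. Here is a sketch, assuming we already have an ACF prediction of the voter's score (for example, from a function like predict_rating above) and the article's global average vote:

```python
def adjusted_vote(actual_vote, acf_predicted_vote, global_average_vote):
    """Remove the voter's anticipated bias from their actual vote.

    Bias is defined as the gap between the ACF-predicted vote and the
    global average vote for the article."""
    bias = acf_predicted_vote - global_average_vote
    return actual_vote - bias
```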

Let's look at how this might work in practice. Joe is a right-wing Bill O'Reilly fan who isn't very good at setting aside his personal views when rating stories. Joe has just found an article discussing human rights abuses against illegal Mexican immigrants. Joe, not particularly sympathetic to illegal Mexican immigrants, gives the article a score of 2 out of 5. On receiving Joe's rating, our mechanism uses ACF to determine what it would have expected Joe's score to be. It notices that many of the people who tend to vote similarly to Joe (presumably also O'Reilly fans) also gave this article a low score, and concludes that Joe's expected vote was 1.5. Next we look at the average (pre-adjustment) vote for the story and see that it is 3, so Joe's anticipated bias for this story is 1.5 minus 3, or -1.5. Subtracting this bias from Joe's vote of 2 gives an adjusted vote of 3.5, which means that Joe's vote for this story is actually above average once his personal bias has been disregarded!
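Plugging Joe's numbers into the sketch above reproduces the example:

```python
# Joe voted 2, ACF expected him to vote 1.5, and the article's
# pre-adjustment average vote is 3.
print(adjusted_vote(2, 1.5, 3))  # bias = 1.5 - 3 = -1.5, so 2 - (-1.5) = 3.5
```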

So, how well will this system work in practice – and what is it really doing? What are the implications of this mechanism for determining someone’s bias? Is it fair?

I don't pretend to have the answers to these questions, but it might be useful to think of it in terms of punishment: when your vote is adjusted by a large amount, you are being punished by the system, since your vote will have an effect different from the one you intended.

The way to minimize this punishment is to ensure that the votes the ACF algorithm predicts for you are as close as possible to the likely average vote. The worst thing you can do is align yourself with a group of people who consistently vote in opposition to the majority.
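One way to see this is that the size of the adjustment, and hence the punishment, does not depend on how you actually vote this time; it depends only on how far the ACF prediction for you sits from the global average:

```python
def punishment(acf_predicted_vote, global_average_vote):
    """Magnitude of the adjustment applied to a vote, i.e. how far the
    system will move your vote regardless of what you actually voted."""
    return abs(acf_predicted_vote - global_average_vote)
```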

I have been trying to think of scenarios where encouraging the former might be harmful, or where penalizing the latter might be unfair, but so far I haven't come up with anything. What kind of collective editor would such a system be? What kind of negative side effects might it have? I am curious to hear your opinions.
