Dumb Mobs, 2003

I’ve been shuffling some old papers around recently and came upon the following. It was written in March 2003 as preliminary research for a panel I wanted to moderate at SXSW 2004. I got interesting responses from Bruce Sterling and Clay Shirky, which I might include if there’s interest.

Dumb Mobs, or Keep Your Epinions to Yourself

It was only a matter of time. As more and more of us got online and started to join communities, we began to share our opinions. We became a marketer’s dream, allowing them to gather our most detailed demographic data every time we made a purchase or joined a Yahoo! group. Companies like Amazon began to let us write “reviews” of our purchases and recommend things to others. With user bases of several million individuals, these databases have begun to act as our critical voice whenever we consider an online (or offline) purchase. But how good is the information we receive this way? Will this sort of “mob ranking” replace the advice of trusted sources, and if not, how will those trusted sources establish themselves online? Will it become more difficult to find good information in the flood of online ratings? What kinds of forces are at work here? These are the questions I propose to explore.

I was prompted to ask some of these questions during a panel on book publishing at this year’s South by Southwest Interactive conference. The moderator had been talking about how the marketing and promotion of books had moved online, mostly due to the web’s reach and the reduced costs involved. I began to think about the way the critic’s role had also moved online, though not in the way I’d hoped. Sure, people still brought up the New York Times online, and some of them even read the book reviews there, but more and more sites were adding their own ratings engines and just letting everybody have at it. Something about this made me uncomfortable, and I wanted to find out why.

I have participated in this kind of critical activity myself. At the Internet Movie Database (www.imdb.com), users can rate a film out of 10 and write their own reviews, which are then added to the site. A bit of a film geek, I’ve endeavoured to rate every film I see, whether it’s a masterpiece, a flop, or just an entertaining bit of fluff. Upon reflection, I think that might be the only way these sites can work. Just as a professional critic must write reviews that fall across a wide spectrum of opinion, each voter on IMDb or Amazon or Epinions must establish the boundaries of their taste. In the case of product reviews, where taste is not an issue, the critic must still establish their standards. Without telling anyone what we don’t like, sharing what we do like is meaningless.

However, my experience with these sites suggests a different situation. Some users vote only for things they like, so their average ratings are quite high. Others only point out things they hate, so their average ratings are quite low. As individual voices, we might be wise to ignore them, but as part of an anonymous mob, they are invisible; we don’t even know how many of them there are. The larger question is how we can trust the ratings presented by a site that doesn’t limit its membership in any way. Sure, it’s democratic, but when it comes to informed opinions, the mob surely doesn’t rule.
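To make that concrete: if a site kept each user’s personal average, it could re-weigh every vote against its author’s own baseline. Here’s a back-of-the-envelope sketch of that idea; the data, the names, and the mean-centring scheme are all my own invention, not anything these sites are known to do:

```python
# Back-of-the-envelope sketch (all data and names invented): mean-centre
# each user's ratings so habitual boosters and habitual haters stop
# skewing the aggregate.
from collections import defaultdict

ratings = [
    # (user, film, score out of 10)
    ("booster", "Schindler's List", 10),
    ("booster", "Gigli", 9),
    ("hater", "Schindler's List", 4),
    ("hater", "Gigli", 1),
    ("film_geek", "Schindler's List", 9),
    ("film_geek", "Gigli", 2),
]

# Each user's average rating is their personal baseline.
by_user = defaultdict(list)
for user, film, score in ratings:
    by_user[user].append(score)
baseline = {user: sum(s) / len(s) for user, s in by_user.items()}

# Re-express every vote as a deviation from its author's baseline: a 9
# from a booster now counts for less than a 9 from a tough grader.
by_film = defaultdict(list)
for user, film, score in ratings:
    by_film[film].append(score - baseline[user])

for film, devs in sorted(by_film.items()):
    print(f"{film}: {sum(devs) / len(devs):+.2f}")
```

In other words, the signal is in how far a vote departs from its author’s habits, which is exactly the distinction a raw average throws away.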

Since the machinery behind these databases is hidden from us, I wanted to ask a few experts how it works. Is one approach better than another? What kind of research is being carried out to make them more useful? Will I ever weigh the opinion of the New York Times’ book critic against the mob of user ratings at Amazon and find them equal?
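I can’t pry open Amazon’s machinery, but the recommender-systems research of the time (the GroupLens project, for instance) describes user-to-user collaborative filtering, and a toy version gives a feel for what might be under the hood. Everything below is illustrative; the data is invented and no site is known to use exactly this:

```python
# Toy user-to-user collaborative filter (illustrative only; data invented):
# predict my rating for an unseen film from the ratings of users whose
# taste correlates with mine.
import math

ratings = {
    "me":   {"A": 9, "B": 2, "C": 8},
    "twin": {"A": 8, "B": 3, "C": 7, "D": 9},
    "anti": {"A": 2, "B": 9, "C": 3, "D": 1},
}

def mean(user_ratings):
    return sum(user_ratings.values()) / len(user_ratings)

def pearson(u, v):
    """Correlation of two users' scores over the films they share."""
    shared = sorted(set(u) & set(v))
    if len(shared) < 2:
        return 0.0
    mu = sum(u[i] for i in shared) / len(shared)
    mv = sum(v[i] for i in shared) / len(shared)
    du = [u[i] - mu for i in shared]
    dv = [v[i] - mv for i in shared]
    den = (math.sqrt(sum(x * x for x in du)) *
           math.sqrt(sum(x * x for x in dv)))
    return sum(x * y for x, y in zip(du, dv)) / den if den else 0.0

def predict(user, film):
    """Similarity-weighted average of neighbours' deviations from their means."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or film not in theirs:
            continue
        w = pearson(ratings[user], theirs)
        num += w * (theirs[film] - mean(theirs))
        den += abs(w)
    base = mean(ratings[user])
    return base + num / den if den else base

# "anti" hating film D, combined with a negative correlation to me,
# is itself evidence that I would like it.
print(round(predict("me", "D"), 1))
```

Notice that a filter like this doesn’t need anyone to be an honest critic; it only needs their biases to be consistent enough to correlate against.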

Let’s take Epinions as an example. When I ask it to list dramatic movies in order of rating, I get a very long list of 5-star choices. But I’m almost certain that the people who gave Schindler’s List the top rating were not the same group that elevated Anne of Green Gables to the same lofty place. I can’t be sure, but I’m trusting my gut on this one. I would hazard a guess that most people who take the time to rate their purchases online are a self-selecting group whose opinions tend toward one end of the spectrum or the other.
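There is at least one published counterexample to the flat 5-star pile-up: IMDb has described the weighted rating behind its Top 250, which pulls a film’s raw mean toward the site-wide mean until it has earned enough votes to stand on its own. Here is a sketch of that idea, with invented numbers standing in for Epinions data:

```python
# Sketch of a damped ("Bayesian") average in the spirit of the weighted
# rating IMDb has published for its Top 250; the constants here are
# invented, not Epinions' or IMDb's actual parameters.
def weighted_rating(raw_mean, votes, site_mean, damping_votes):
    """Pull a film's raw mean toward the site-wide mean until it has
    earned enough votes to stand on its own."""
    return (votes * raw_mean + damping_votes * site_mean) / (votes + damping_votes)

SITE_MEAN = 3.4   # hypothetical average across all ratings (out of 5)
DAMPING = 500     # hypothetical vote count at which raw mean and prior balance

# Two films with identical raw 5.0 averages: one from a dozen devoted
# fans, one from forty thousand voters.
print(weighted_rating(5.0, 12, SITE_MEAN, DAMPING))      # ≈ 3.44
print(weighted_rating(5.0, 40_000, SITE_MEAN, DAMPING))  # ≈ 4.98
```

Under a scheme like this, a film rated 5.0 by a dozen devoted fans no longer shares the shelf with one rated 5.0 by forty thousand voters.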

The interesting thing is how much more influential these algorithms have become since then, and how opaque they remain. Google’s search algorithm is the big one, but recent stories about the “black box” that is Yelp are also relevant. I wonder whether a discussion of these issues might still be interesting, or whether the question has already been settled.