Paul Resnick (presnick) wrote,

Personalized Filters Yes; Bubbles No

On Thursday, I gave the closing keynote for the UMAP conference (User Modeling, Adaptation, and Personalization) in Girona, Spain (Catalunya). I had planned to talk about my work on creating balanced news aggregators that people will prefer to use over unbalanced ones (see project site). But then Eli Pariser’s TED Talk and book on “Filter Bubbles” started getting a lot of attention. He’s started a trend of a little parlor game in which a group of friends all try the same search on Google and look on in horror when they see that they get different results. So I decided to broaden my focus a little beyond news aggregators. I titled the talk, “Personalized Filters Yes; Bubbles No.”

As you can perhaps guess from my title, I agree with some of Pariser’s concerns about bubbles. But I think he’s on the wrong track in attributing those concerns to personalization. Most of his concerns, I argue, come from badly implemented personalization, not from personalization itself. I’ve posted a copy of my slides and notes. For anyone who wants a summary of my arguments, here goes.

His first concern I summarize as “Trapped in the Old You”. I argued that personalization systems that try to maximize your long-term clickthrough rates will naturally explore to see what you like, not just give you more of the same. This is the whole point of the algorithmic work on multi-armed bandit models, for example. Moreover, good personalization systems will take into account that our interests and tastes may change over time, and that there is eventually declining marginal utility for more of the same (consider the 7,000th episode of Star Trek; even I stopped watching). Personalization systems that are designed to optimize some measure of user satisfaction (such as click-throughs, purchases, dwell time, or ratings) are going to be designed to give you serendipitous experiences, introducing you to things you didn’t know you would like. Moreover, even today’s systems often do that pretty well, in part because when they optimize on matching along one dimension (say, topic) they end up giving us some diversity along another dimension that matters to us (say, political ideology). From introspection, I think most people can recall times when an automated personalization system introduced them to something new that became a favorite.
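The explore/exploit idea behind those bandit models can be sketched in a few lines. Here is a toy epsilon-greedy learner; the story categories and click-through rates are invented for illustration, not drawn from any real system:

```python
import random

def epsilon_greedy(rewards, epsilon=0.1, steps=10_000, seed=0):
    """Epsilon-greedy bandit: usually exploit the best-known arm,
    but explore a random arm with probability epsilon."""
    rng = random.Random(seed)
    arms = list(rewards)
    counts = {a: 0 for a in arms}
    values = {a: 0.0 for a in arms}  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.choice(arms)           # explore: try something else
        else:
            arm = max(arms, key=values.get)  # exploit: best so far
        reward = 1.0 if rng.random() < rewards[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

# Hypothetical true click-through rates for three kinds of stories.
est = epsilon_greedy({"sports": 0.05, "politics": 0.12, "science": 0.08})
```

Because the learner keeps sampling every arm, it discovers the best category even though it starts out knowing nothing, rather than locking the user into whatever they clicked first.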

His second concern I summarize as “Reinforcing Your Baser Instincts”. Here, too, good personalization systems should take into account the difference between short-term and long-term preferences (entertainment vs. education, for example). We will need delayed indicators of long-term value, such as measuring which words from articles we end up using in our own messages and blog posts, or explicit user feedback after some delay (the next day or week). It may also help to offer opt-in features that nudge people toward their better selves (their long-term preferences). Here, I gave examples of nudges toward balanced news reading that you can see if you look at the slides.

His third concern I summarize as, “Fragmenting Society”, but there are two somewhat separable sub-elements of this. One is the need for common reference points, so that we can have something to discuss with colleagues at the water cooler or strangers on the subway. Here, I think if individuals value having these common reference points, then they will get baked into the personalized information streams people consume in a natural way. That is, people will click on some popular things that aren’t inherently interesting to them, and the automated personalization algorithms will infer that they have some interest in those things. Perhaps better would be for personalization algorithms to learn a model that assumes individual utility is a combination of personal match and wanting what everyone else is getting, with the system learning the right mix for each individual, or the individual actually getting a slider bar to control the mix.
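That proposed blend of personal match and popularity, with a user-controlled slider, might look something like this minimal sketch. The function names, story titles, and scores are all hypothetical, chosen only to illustrate the idea:

```python
def blended_score(personal_match, popularity, alpha):
    """Mix personal relevance with what everyone else is reading.
    alpha=1.0 -> purely personal; alpha=0.0 -> purely popular.
    Both inputs are assumed to be scores in [0, 1]."""
    return alpha * personal_match + (1 - alpha) * popularity

def rank(items, alpha):
    """items: list of (title, personal_match, popularity) tuples."""
    return sorted(items,
                  key=lambda it: blended_score(it[1], it[2], alpha),
                  reverse=True)

stories = [
    ("Niche hobby deep-dive", 0.9, 0.10),
    ("Story everyone is sharing", 0.3, 0.95),
]
```

Sliding alpha toward 0 surfaces the water-cooler story; sliding it toward 1 surfaces the personally matched one, so the same feed serves both goals at a mix the user controls.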

The second sub-concern is the fragmenting of the global village into polarized tribes. Here it’s an open question whether personalization will lead to such polarization. It hinges on whether the network fractures into cliques with very little overlap or permeability. But the small-world properties of random graphs suggest that even a few brokers, or a little brokering by a lot of people, may be enough to keep the average shortest path short. Individual preferences would have to strongly favor insularity within groups in order to produce an outcome of real fragmentation. My former doctoral student Kelly Garrett concluded that, with respect to political information, people like confirmatory information but have at best a mild aversion to challenge. Moreover, some people prefer a mix of challenging and confirmatory information, and everyone wants challenging information sometimes (like when they know they’re going to have to defend their position at an upcoming family gathering). Thus, it’s not clear that personalization is going to lead us to political fragmentation, or any other kind. Other forces in society may or may not be doing that, but probably not personalization. Despite that, I do think it’s a good idea to include perspective-taking features in our personalization interfaces: features that make it easy to see what other people are seeing. My slides include a nice example of this from the ConsiderIt work of Travis Kriplean, a PhD student at the University of Washington.
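The brokering point can be made concrete with a toy graph, not a model from the talk: take two completely insular ten-person cliques as a stand-in for polarized tribes, and add a single broker tie between them. One bridge is enough to connect everyone and keep the average shortest path under two hops:

```python
from collections import deque
from itertools import combinations

def avg_shortest_path(edges, nodes):
    """Mean BFS shortest-path length over all ordered, reachable node pairs."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total = pairs = 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != src:
                total += d
                pairs += 1
    return total / pairs

# Two insular 10-person cliques: a toy stand-in for polarized "tribes".
left = [f"L{i}" for i in range(10)]
right = [f"R{i}" for i in range(10)]
cliques = list(combinations(left, 2)) + list(combinations(right, 2))

# A single broker tie between L0 and R0 connects the whole network.
avg = avg_shortest_path(cliques + [("L0", "R0")], left + right)
```

In this toy network, no one is more than three hops from anyone else even though only one tie crosses the divide, which is the sense in which a little brokering goes a long way.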

The final point I’d like to bring up is that personalization broadens the set of things that are seen by *someone*. That means that more things have a chance to get spread virally, and eventually reach a broader audience than would be possible if everyone saw the same set of things. Instead of being horrified by the parlor game showing that we get different search results than our friends do, we should delight in the possibility that our friends will be able to tell us something different.

Overall, we should be pushing for better personalization, and transparent personalization, not concluding that personalization per se is a bad thing.

At the conference banquet the night before my talk, attendees from different countries were invited to find their compatriots and choose a song to sing for everyone else. (The five Americans sang, “This Land is Your Land”). Inspired by that, I decided to compose a song we could all sing together to close the talk and the conference, and which would reinforce some themes of my talk.  The conference venue was a converted church, and the acoustics were great. Many people sang along. The melody is “Twinkle, Twinkle, Little Star”.

(Update: someone captured it on video: )

The Better Personalization Anthem

User models set me free
                as you build the Daily Me

Yes exploit, but please explore
                could just be that I’ll want more

Broaden what my models know
                UMAP scholars make it so

 Words: Paul Resnick and Joe Konstan
 Melody: Wolfgang Amadeus Mozart
