Paul Resnick's Occasional Musings
Paul Resnick

moved to presnick.wordpress.com [17 Dec 2011|11:29am]
This blog has now moved to presnick.wordpress.com, where spam comments seem to be handled a little better.

I apologize to all of my past real commenters, whose comments are now hidden. The spam comments had overwhelmed your genuine contributions.

Personalized Filters Yes; Bubbles No [17 Jul 2011|02:32pm]

On Thursday, I gave the closing keynote for the UMAP conference (User Modeling and Personalization) in Girona, Spain (Catalunya). I had planned to talk about my work directed toward creating balanced news aggregators that people will prefer to use over unbalanced ones (see project site). But then Eli Pariser’s TED Talk and book on “Filter Bubbles” started getting a lot of attention. He’s popularized a little parlor game in which a group of friends all try the same search on Google and look on in horror when they get different results. So I decided to broaden my focus a little beyond news aggregators. I titled the talk, “Personalized Filters Yes; Bubbles No.”

As you can perhaps guess from my title, I agree with some of Pariser’s concerns about bubbles. But I think he’s on the wrong track in attributing those concerns to personalization. Most of his concerns, I argue, come from badly implemented personalization, not from personalization itself. I’ve posted a copy of my slides and notes. For anyone who wants a summary of my arguments, here goes.

His first concern I summarize as “Trapped in the Old You”. I argued that personalization systems that try to maximize your long-term clickthrough rates will naturally explore to see what you like, not just give you more of the same. This is the whole point of the algorithmic work on multi-armed bandit models, for example. Moreover, good personalization systems will take into account that our interests and tastes may change over time, and that there is eventually declining marginal utility for more of the same (consider the 7,000th episode of Star Trek; even I stopped watching). Personalization systems that are designed to optimize some measure of user satisfaction (such as click-throughs or purchases or dwell time or ratings) are going to be designed to give you serendipitous experiences, introducing you to things you like that you didn’t know you would like. Even today’s systems often do that pretty well, in part because when they optimize on matching on one dimension (say, topic) they end up giving us some diversity in another dimension that matters to us (say, political ideology). From introspection, I think most people can recall times when automated personalization systems did introduce them to something new that became a favorite.
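
For readers who haven't seen the bandit framing, here's a minimal epsilon-greedy sketch of the explore/exploit idea. The items and clickthrough rates are made up, and real systems are far more sophisticated:

import random

def epsilon_greedy(true_click_rates, rounds=10000, epsilon=0.1):
    """Minimal epsilon-greedy bandit: mostly show the item with the best
    observed clickthrough rate, but keep exploring so an unknown or
    changed taste can surface. A toy model, not any deployed system."""
    n = len(true_click_rates)
    clicks = [0] * n  # observed clicks per item
    shows = [0] * n   # times each item was shown
    for _ in range(rounds):
        if random.random() < epsilon:
            item = random.randrange(n)  # explore: try anything
        else:
            # exploit: best observed rate so far (optimistic 1.0 for unseen items)
            item = max(range(n), key=lambda i: clicks[i] / shows[i] if shows[i] else 1.0)
        shows[item] += 1
        if random.random() < true_click_rates[item]:
            clicks[item] += 1
    return shows

# The item the user would love (rate 0.3) gets discovered and dominates,
# even though the system started out knowing nothing about it.
print(epsilon_greedy([0.1, 0.3, 0.05]))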

His second concern I summarize as, “Reinforcing Your Baser Instincts”. Here, too, good personalization systems should take into account the difference between short-term and long-term preferences (entertainment vs. education, for example). We will need delayed indicators of long-term value, such as measuring which words from articles we end up using in our own messages and blog posts, or explicit user feedback after some delay (the next day or week). It may also be helpful to offer features that people can opt in to that nudge them toward their better selves (their long-term preferences). Here, I gave examples of nudges toward balanced news reading that you can see if you look at the slides.
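
To make the short-term/long-term blending concrete, here's a minimal sketch of the kind of objective I have in mind. The weights and signal names are mine, purely illustrative:

def long_term_score(immediate_click, delayed_signals, w_click=0.3):
    """Blend an immediate indicator (a click) with delayed indicators of
    long-term value, e.g., whether the reader reused the article's words
    a week later, or how they rated it after some delay. The weight is
    illustrative, not estimated from data."""
    if delayed_signals:
        delayed = sum(delayed_signals) / len(delayed_signals)
    else:
        delayed = 0.0
    return w_click * immediate_click + (1 - w_click) * delayed

# Clicked immediately, but a week later the reader rated it low and never
# reused its ideas: scores far below what a pure-click objective would say.
print(long_term_score(1.0, [0.2, 0.0]))  # 0.37 rather than 1.0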

His third concern I summarize as, “Fragmenting Society”, but there are two somewhat separable sub-elements of this. One is the need for common reference points, so that we can have something to discuss with colleagues at the water cooler or strangers on the subway. Here, I think if individuals value having these common reference points, then it will get baked into the personalized information streams they consume in a natural way. That is, they’ll click on some popular things that aren’t inherently interesting to them, and the automated personalization algorithms will infer that they have some interest in those things. Perhaps better would be for personalization algorithms to try to learn a model that assumes individual utility is a combination of personal match and wanting what everyone else is getting, with the systems learning the right mix of the two for the individual, or the individual actually getting a slider bar to control the mix.
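
Here's a minimal sketch of that mixed scoring idea; alpha is the slider, and all the scores are made up:

def blended_score(personal_match, popularity, alpha):
    """Score an item as a mix of personalized fit and what everyone else
    is seeing. alpha=1 is a pure "Daily Me"; alpha=0 is a pure shared
    front page. A slider (or learning) could set alpha per user."""
    return alpha * personal_match + (1 - alpha) * popularity

# Hypothetical (personal_match, popularity) scores for two items:
items = {"niche-hobby-post": (0.9, 0.1), "big-news-story": (0.4, 0.95)}
for alpha in (1.0, 0.6):
    ranked = sorted(items, reverse=True,
                    key=lambda name: blended_score(items[name][0], items[name][1], alpha))
    print(alpha, ranked)  # at alpha=0.6 the big shared story wins the top spot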

The second sub-concern is fragmenting of the global village into polarized tribes. Here it’s an open question whether personalization will lead to such polarization. It hinges on whether the network fractures into cliques with very little overlap or permeability. But the small-world properties of random graphs suggest that even a few brokers, or a little brokering by a lot of people, may be enough to keep the average shortest path short. Individual preferences would have to be strongly in favor of insularity within groups in order to get an outcome of real fragmentation. It turns out that people’s preferences with respect to political information, as my former doctoral student Kelly Garrett concluded, are that they like confirmatory information but have at best a mild aversion to challenge. Moreover, some people prefer a mix of challenging and confirmatory information, and everyone wants challenging information sometimes (like when they know they’re going to have to defend their position at an upcoming family gathering). Thus, it’s not clear that personalization is going to lead us to political fragmentation, or any other kind. Other forces in society may or may not be doing that, but probably not personalization. Despite that, I do think that it’s a good idea to include perspective-taking features in our personalization interfaces, features that make it easy to see what other people are seeing. My slides include a nice example of this from the ConsiderIt work of Travis Kriplean, a PhD student at the University of Washington.

The final point I’d like to bring up is that personalization broadens the set of things that are seen by *someone*. That means that more things have a chance to get spread virally, and eventually reach a broader audience than would be possible if everyone saw the same set of things. Instead of being horrified by the parlor game showing that we get different search results than our friends do, we should delight in the possibility that our friends will be able to tell us something different.

Overall, we should be pushing for better personalization, and transparent personalization, not concluding that personalization per se is a bad thing.

At the conference banquet the night before my talk, attendees from different countries were invited to find their compatriots and choose a song to sing for everyone else. (The five Americans sang, “This Land is Your Land”). Inspired by that, I decided to compose a song we could all sing together to close the talk and the conference, and which would reinforce some themes of my talk.  The conference venue was a converted church, and the acoustics were great. Many people sang along. The melody is “Twinkle, Twinkle, Little Star”.

(Update: someone captured it on video: )

The Better Personalization Anthem

User models set me free
                as you build the Daily Me

Yes exploit, but please explore
                could just be that I’ll want more

Broaden what my models know
                UMAP scholars make it so

 Words: Paul Resnick and Joe Konstan
 Melody: Wolfgang Amadeus Mozart

Yelp gets more reviews per reviewer than CitySearch or Yahoo Local [29 Jun 2011|03:04pm]
The author attributes this to the fact that reviewers are anonymous at CitySearch and Yahoo Local, but build up reputations on Yelp. Of course, there are also other differences between the sites.

Zhongmin Wang (2010), "Anonymity, Social Image, and the Competition for Volunteers: A Case Study of the Online Market for Reviews," The B.E. Journal of Economic Analysis & Policy: Vol. 10: Iss. 1 (Contributions), Article 44.
Available at: http://www.bepress.com/bejeap/vol10/iss1/art44

Abstract:
This paper takes a first step toward understanding the working of the online market for reviews. Most online review firms rely on unpaid volunteers to write reviews. Can a for-profit online review firm attract productive volunteer reviewers, limit the number of ranting or raving reviewers, and marginalize fake reviewers? This paper sheds light on this issue by studying reviewer productivity and restaurant ratings at Yelp, where reviewers are encouraged to establish a social image, and two competing websites, where reviewers are completely anonymous. Using a dataset of nearly half a million reviewer accounts, we find that the number (proportion) of prolific reviewers on Yelp is an order of magnitude larger than that on either competing site, more productive reviewers on all three websites are less likely to give an extreme rating, and restaurant ratings on Yelp tend to be much less extreme than those on either competing site.

Need recommender systems contest ideas [13 Sep 2010|05:32pm]
Do you have an idea or plan for a future challenge/contest that you think could move the field of Recommender Systems forward? I’d love to hear about your idea or plan, even if only in sketch form, and even if you’re not in a position to carry it out yourself. At this year’s RecSys conference in Barcelona, I’ll be moderating a panel titled, “Contests: Way Forward or Detour?” As part of that panel, I’d like to present brief sketches of several contest ideas for the panelists to respond to.

Please send me your ideas!

----------------------Abstract of the Session
Panelists:
Joseph A. Konstan, University of Minnesota, USA
Andreas Hotho, University of Würzburg, Germany
Jesus Pindado, Strands, Inc., USA

Contests and challenges have energized researchers and focused attention in many fields recently, including recommender systems. At the 2008 RecSys conference, winners were announced for a contest proposing new startup companies. The 2009 conference featured a panel reflecting on the then recently completed Netflix challenge.

Would additional contests help move the field of recommender systems forward? Or would they just draw attention from the most important problems to problems that are most easily formulated as contests? If contests would be useful, what should the tasks be and how should performance be evaluated? The panel will begin with short presentations by the panelists. Following that, the panelists will respond to brief sketches of possible new contests. In addition to prediction and ranking tasks, tasks might include making creative use of the outputs of a fixed recommender engine, or eliciting inputs for a recommender engine.

Gerhard Fischer paper at C&T [26 Jun 2009|10:56am]
"Towards an Analytic Framework for Understanding and Supporting Peer-Support Communities in Using and Evolving Software Products" at C&T

Participation in SAP's online community.

Before/After point system introduced in SAP:
Mean response time decreased (51 min vs. 34 min.)
Mean helper count increased (1.89 vs. 2.02)
Percentage of questions answered increased (12% vs. 30%)

Some evidence of gaming of the system—people just ask questions to gain points.

Karim Lakhani at C&T [26 Jun 2009|09:39am]
Karim Lakhani is giving a great keynote at C&T about tracking innovation. He has worked with MATLAB programming contests that have a fascinating format. There's a clear performance outcome; the source code of all entries is available to other people; leaders are tracked. Researchers can track which lines of code get reused.

What leads to displacing the currently leading entry?
--novel code
--novel combos of others' code
--NOT borrowed code
--complexity
--NOT conformance

What leads code to get reused in future (leading) entries?
--novel code
--novel combos of others' code
--borrowed code
--complexity
--conformance

Also did experiments with TopCoder.
One experiment used a computational biology contest problem.
Three conditions (random assignment?):
Fully collaborative vs. fully competitive vs. mixed (competitive first week, then all code shared)
Fully collaborative got the best performance
Best performing entries did better than state-of-the-art in computational biology

Universally Utility-Maximizing Privacy Mechanisms [13 Jun 2009|05:51am]
Interesting paper presentation by Tim Roughgarden.

He gave a nice introduction to the recent literature on provably privacy-preserving mechanisms for publishing statistical summaries such as counts of rows from databases satisfying some property (e.g., income > 100,000). Suppose a mechanism computes the actual count, and then reports something possibly different (e.g., by adding noise). The mechanism is p-private if, for every possible output (count) and every person (row), the ratio of the probability of that output with the row present to the probability of that output with the row omitted is always in the range [p, 1/p]. Intuitively, whatever the actual count, there's not much revealed about whether any particular person has high income.

One technique that works for counts, Laplace-p, is to add noise z to the correct count, where z is drawn from a Laplace distribution with density f(z) = ((-ln p)/2) * e^(|z| ln p), i.e., with scale parameter -1/ln p. For any reported count, there's some confidence interval around it, and the size of that confidence interval is independent of the count. Thus, for reported count 1, you can't really tell whether the correct count is 1 or 0, and thus you can't really tell whether a particular person has high income, *even if you have great side information about everyone else in the database*. On the other hand, if the reported count is 27,000, you still can't tell much about any one person, but you can be pretty sure that the correct count is somewhere around 27,000.
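
A minimal sketch of the Laplace-p mechanism as reconstructed above; the scale follows from the p-privacy bound, and the counts are toy inputs:

import math, random

def laplace_p_count(true_count, p):
    """Report true_count + z, where z is Laplace noise with scale
    b = -1/ln(p), matching the [p, 1/p] ratio bound described above.
    (My reconstruction; toy usage below.)"""
    b = -1.0 / math.log(p)  # ln(p) < 0 for 0 < p < 1, so b > 0
    # A Laplace draw is the difference of two independent exponentials.
    z = b * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + z

random.seed(0)
# Hard to tell a count of 0 from a count of 1; easy to see that
# roughly 27,000 rows qualify.
print(laplace_p_count(1, p=0.9), laplace_p_count(27000, p=0.9))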

Roughgarden's paper is about how much value you can get from the best count function (in terms of some loss function comparing the true result to the reported result) while still preserving the p-privacy requirement. It turns out that a mechanism very much like Laplace-p, but discretized, works to minimize the expected loss no matter what priors the user of the count has about the distribution of correct counts. It is in this sense that it is universal. This requires a little user-specific post-processing of the algorithm's output, based on the user's priors about the correct counts. For example, if the reported count is -1, we know that's not the correct count; it must really be 0 or something positive, and you can back out from the report and the user's prior beliefs to infer a belief distribution over correct counts.

Babaioff; Characterizing Truthful Multi-Armed Bandit Mechanisms [12 Jun 2009|10:46am]
At Economics of Search conference.

Moshe Babaioff presented an interesting paper.

Suppose that you're conducting an auction for adwords, where you want to rank the bidders based on expected revenue in order to allocate slots and determine prices for slots based on bids. But suppose you don't know what the clickthrough rate will be for the items.

In a multi-armed bandit model, there are multiple bandit slot machines and you have to decide which arms to pull. There is an explore/exploit tradeoff-- you need to explore (experiment) to estimate the clickthrough rates, including some experimentation with those you have low estimates for, in case that estimate is wrong. But over time you switch to more exploitation, where you pull the arm with the highest expected value.

The new twist in this paper is that you want advertisers to truthfully reveal their valuation for a click. If clickthrough rates are known, you can set prices essentially using a second-price mechanism based on bid*clickthrough. But if you're using a multi-armed bandit algorithm to determine clickthrough rates, the correct prices would depend on estimated clickthrough rates that you don't necessarily have, because you don't test all of them.
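
For concreteness, here's a sketch of the known-clickthrough-rate benchmark: rank by bid*ctr and charge each winner, per click, the lowest amount that would keep its slot. The numbers are made up, and the paper's whole point is what to do when the ctrs are not known:

def allocate_and_price(bids, ctrs, num_slots=2):
    """Rank advertisers by bid * estimated clickthrough rate; the winner
    of each slot pays the next-ranked bid*ctr divided by its own ctr,
    the smallest per-click price that preserves its rank. (A sketch of
    the known-CTR benchmark, toy numbers only.)"""
    order = sorted(range(len(bids)), key=lambda i: bids[i] * ctrs[i], reverse=True)
    results = []
    for rank in range(min(num_slots, len(order) - 1)):
        i, j = order[rank], order[rank + 1]
        results.append((i, bids[j] * ctrs[j] / ctrs[i]))  # (winner, per-click price)
    return results

print(allocate_and_price(bids=[2.0, 1.5, 1.0], ctrs=[0.10, 0.08, 0.12]))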

It's a theory paper. They prove that, with the requirement that the mechanism induce truthful bidding, there's always a pure exploration phase, where the selection of the winners can depend on previous clickthroughs but *not* on the bids; and then a pure exploitation phase, where the clickthroughs no longer affect allocation of slots in the next round. The best multi-armed bandit algorithms without the truth-telling requirement don't have that separation of phases. And, it turns out that the best algorithms without the truth-telling requirement have less "regret" relative to the best you could do if you magically knew the clickthrough rates at the beginning.

So now I'm curious what the best algorithms are without the truth-telling requirement. My guess is that they put more exploration into things that the best estimate so far has higher value for. We actually need to use an algorithm like this for the next version of our "conversation double pivots" work on drupal.org, where we're going to dynamically change the set of recommended items based on a combination of a prior generated from recommender algorithms and actual observed clicks. But we don't have any truthful revelation requirement, so we should be able to use the standard algorithms.

Rewards program for social networking activity [27 May 2008|02:44pm]
As part of the CommunityLab project, for the past five years I've been doing research related to incentives for participation in online communities. Now one of my colleagues, Yan Chen, is working with a startup company, urTurn, that has created a cross-platform rewards program. That is, you accumulate points for posting photos or making friend links in social network sites like Facebook and MySpace. Then you turn in the points for prizes.

I'm not quite sure what their business model will be (what do they get from having people accumulate points on their site?). But it will be interesting to see how motivating the points are for people, and how the company will prevent various attempts to game the system.

So, sign up, help Yan with her research (she has no financial stake in the company), and win valuable prizes!

How newsgroups refer to NetScan data [29 Jun 2007|02:55pm]
Reflections and Reactions to Social Accounting Meta-Data. Eric Gleave (U of Washington) and Marc Smith (Microsoft Research). At C&T.

In 18 months, there were about 5000 messages that explicitly referred to "netscan.research". Analyzed/coded 952 messages.

Basic findings:

  • Half discuss groups. 80% of those link to the Netscan report card for the group; 17% explicitly discuss the group's "health".

  • 22% discuss the message's author, such as saying that the author is #1 in the group.

  • 31% discuss others, including their stats; 5% of these are "troll checks"

  • 48% discuss the Netscan system itself



Some discussion points:

  • Helpful for comparisons between competing groups on similar topics

  • Reduces costs of monitoring and sanctioning

  • Facilitates construction and maintenance of status

  • Identifies people who are trolls

Rhythms of social interaction at Facebook [29 Jun 2007|02:20pm]
Rhythms of social interaction: messaging within a massive online network. Scott A. Golder, Dennis M. Wilkinson and Bernardo A. Huberman (HP labs).

Scott Golder presenting at C&T. Log analysis of Facebook messaging patterns, from 496 North American universities.

The college weekend goes Friday noon to Sunday noon. Message traffic follows the same pattern Mon-Thurs. Friday morning is same as Mon-Thurs. morning. Sunday afternoon/evening is same as Mon-Thurs. Saturday all day, plus Friday PM and Sunday AM, have much lower traffic.

45% of messages and pokes went to people at different schools. However, this percentage was much lower in the late night/early morning hours.

Perhaps the most surprising result is the seasonal variation in the percentage of messages that are within versus between schools. During vacations, the percentage of within-school messages increases! The authors give the plausible explanation that the messaging is substituting for in-person communication between the same people that would occur when school is in session. This seems surprising to me, however, as I would have thought that the complementarity effect would be stronger-- you send a poke or message to someone that you saw earlier today or expect to see later today. It would be interesting to see some future research that explores more directly the complementarity/substitution effects of various communication modalities with f2f meetings in everyday use.

Group Formation in Large Social Networks [28 Jun 2007|05:03pm]
L. Backstrom, D. Huttenlocher, J. Kleinberg and X. Lan. "Group Formation in Large Social Networks: Membership, Growth, and Evolution", Proceedings of KDD 2006.

Datasets on membership in LiveJournal groups and explicit "friend" relationships; and on publishing in conferences and explicit citations between authors.

Question 1: How does the probability of joining a group depend on the friends who are already in it?
A: 'The data suggest a “law of diminishing returns” at work, where having additional friends in a group has successively smaller effect but nonetheless continues to increase the chance of joining...' But if a greater percentage of the friends are linked to each other, the probability of joining is even higher. They note that a "strength of weak ties" argument would predict the opposite of this finding (you're more likely to find out new info from weak ties who don't know each other). But I think decisions about joining require much more than just finding out about the community. (See the next blog entry on what makes people commit to/stay in a community.)

Question 2: Which communities will grow over time?
A: Here the characteristics provide a little less predictive power. One obvious one, given the result above, is that if there are a lot of people who have a lot of friends in the group, then the group will have larger growth in the next time period. Somewhat more puzzling is that the more three-person cliques in the group, the less the group grows. This could reflect that stagnant groups eventually develop more links among members and hence more cliques.

Question 3: "given a set of overlapping communities, do topics tend to follow people, or do people tend to follow topics?"
A: More frequently, people active in a conference where a topic is hot start going to other conferences where the topic is already hot, rather than the transplantation of people causing the topic to become hot.

Kraut: Developing Commitment Through Conversation [28 Jun 2007|03:20pm]
Today I'm at the Communities and Technologies conference, at the workshop on studying interaction in online communities.

Bob Kraut is discussing some of the data analysis issues in his study in Usenet newsgroups of what independent variables predict whether a message would get responded to.

They first did some machine learning techniques to identify the signature of messages that have a "self-introduction". Then they used that as a regressor, along with some directly measurable variables like using first-person pronouns.

He and Moira Burke have a paper tomorrow where they did a controlled experiment. They found that the key ingredient is saying that you're part of the community, not that you share the interest/condition around which the group has formed.

Collusion-resistant, Incentive-compatible [17 Jun 2007|09:19am]
At EC-07, Radu Jurca presented a paper extending work on eliciting honest ratings to consider situations where a set of players may collude to increase their payments for ratings. The setting is the same as that of my paper with Nolan Miller and Richard Zeckhauser, "The Peer Prediction Method". That is, a set of raters are scored based on comparing their reports to the reports of other raters-- there is no ultimate ground truth of whether the item is "good" that can be used to evaluate the raters.

Our paper showed that it is possible to construct payments that make honest reporting a Nash equilibrium (i.e., the best thing to do if others are doing it) while creating an expected reward large enough to encourage the effort required for the raters to acquire a quality signal about the item. The technique is based on proper scoring rules, applied to the posterior distribution for a reference rater, computed from the prior distribution and the rater's report. Jurca and Faltings consider whether it's possible to make such incentive payments resistant to collusion (e.g., all raters agree to report that the item is good).

Interestingly, the authors find that it is useful to make incentive payments based on the ratings of more than one reference rater. Instead of just adding up the payments determined independently by each of their reports, which I assumed would be the most effective way to do it, the payments are tied to a count of the number of reference raters who report that the item is good. Consider, for example, if the implied probability distribution for each of the reference raters is that each will report "good" with probability 0.6. Then, the number who will report good follows a binomial distribution. By carefully choosing the points a rater gets for reporting "good" or "bad" when n other people report "good", it is possible to rule out some forms of collusion. For example, with 10 raters and a prior probability distribution that each will report "good" with probability 0.5, it is easy to see that we can make the payoff be 0 when either none or all report "good", yet make the payoff for 6 total "goods" when you report good be high enough that you will want to report "good" whenever you see it, if you think others will report honestly. Nolan Miller, Richard Zeckhauser and I had the basic intuition that we could punish all the raters if there was "more than the expected amount of agreement". This fleshes out that intuition with a concrete way of setting the incentive payments.
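
To make that concrete, here's a small numeric check in the spirit of their construction. The payment schedule and the posteriors are my own toy choices, not the paper's: payments are zero when none or all of the reference raters say "good" (so the everyone-reports-good collusion earns nothing), yet honest reporting still maximizes expected payment:

from math import comb

N = 10  # reference raters

def pay(report, n_good):
    """Payment when n_good of the N reference raters say "good".
    Zero at the extremes, so unanimous collusion earns nothing.
    (A hypothetical schedule, not the one from the paper.)"""
    if n_good == 0 or n_good == N:
        return 0.0
    return n_good / N if report == "good" else 1 - n_good / N

def expected_pay(report, q):
    """Expected payment if each reference rater independently says "good"
    with probability q."""
    return sum(comb(N, n) * q**n * (1 - q)**(N - n) * pay(report, n)
               for n in range(N + 1))

# Hypothetical posteriors: seeing "good" makes you expect others to say
# "good" with probability 0.6; seeing "bad", with probability 0.4.
for signal, q in (("good", 0.6), ("bad", 0.4)):
    honest = expected_pay(signal, q)
    lie = expected_pay("bad" if signal == "good" else "good", q)
    print(signal, round(honest, 3), ">", round(lie, 3), honest > lie)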

The most interesting result in this paper comes in section 7, which considers "sybil attacks". One person controls several raters, which I'll refer to as sybils (split identities of the person). They each acquire a real signal. The person is trying to maximize the sum of the expected payoffs of the raters. The authors find that, depending on the particular prior distribution, if one or just a few reference raters are assumed to act honestly, the incentive payoffs can be constructed so that even if the rest of the raters are sybils controlled by a single entity, they cannot do better than to report the same number of "good" ratings as they actually perceived. The technique is a brute-force approach (automated mechanism design) that just writes down each of the incentive compatibility constraints (for each possible number of good ratings perceived, the expected payoff, given the distribution of ratings from the honest raters, is higher for honest reporting than for any false report) and then solves the linear programming problem to find the smallest expected payment subject to those constraints. It would be nice to get some stronger intuitions about what kinds of payments will be selected by the brute-force approach. That is, how does it leverage the small number of honest raters to drive the colluding raters toward honest reporting? Still, I laud the authors for fine work in demonstrating that it generally is possible to resist such collusion, so long as a few honest raters can be expected to be around.
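
Here's a stripped-down sketch of that automated mechanism design step: write down the incentive compatibility constraints for a single rater and let a linear program find the cheapest payments. It omits the paper's sybil and collusion constraints and uses toy posteriors, so it only shows the shape of the approach, not their actual mechanism:

import numpy as np
from math import comb
from scipy.optimize import linprog

N = 3                                 # reference raters
REPORTS = ("good", "bad")
Q = {"good": 0.6, "bad": 0.4}         # hypothetical P(reference says "good" | my signal)
P_SIGNAL = {"good": 0.5, "bad": 0.5}  # hypothetical prior over my signal
MARGIN = 0.1                          # honesty must beat lying by this much

def pmf(n, q):
    """Probability that n of the N reference raters say "good"."""
    return comb(N, n) * q**n * (1 - q)**(N - n)

def var(report, n):
    """Index of decision variable pay[report][n] in the LP vector."""
    return REPORTS.index(report) * (N + 1) + n

num_vars = 2 * (N + 1)

# Objective: minimize expected payment under honest reporting.
c = np.zeros(num_vars)
for s in REPORTS:
    for n in range(N + 1):
        c[var(s, n)] += P_SIGNAL[s] * pmf(n, Q[s])

# IC constraints: for each signal, E[pay | lie] <= E[pay | honest] - MARGIN.
A_ub, b_ub = [], []
for s in REPORTS:
    lie = "bad" if s == "good" else "good"
    row = np.zeros(num_vars)
    for n in range(N + 1):
        row[var(lie, n)] += pmf(n, Q[s])
        row[var(s, n)] -= pmf(n, Q[s])
    A_ub.append(row)
    b_ub.append(-MARGIN)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print(res.x.reshape(2, N + 1).round(3))  # rows: payments when reporting good / bad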

Radu Jurca and Boi Faltings, "Collusion-resistant, Incentive-compatible Feedback Payments", Proceedings of ACM EC'07, P.200-209.

Recommenders and Sales Diversity [14 Jun 2007|02:16pm]
At the EC '07 conference, Kartik Hosanagar presented a paper modeling the impact of recommender systems on sales diversity. Do they contribute to a long tail, where lots of products get a few sales, or do they reinforce blockbusters? The paper suggests the latter.

There are actually two effects that we should expect from recommenders. One is discovery-- once one person discovers an item, some other people with similar tastes who would not have found that item do find it. The other is reinforcement-- an item that many people have sampled will be more likely to get recommended.

The paper provides a simple two-item, two-player, two-urn model in section 4. Unfortunately, it begins with an assumption that both players have the same probabilities of choosing the two items in the absence of a recommender. Without diversity in what people choose without the recommender, the model doesn't seem to capture the discovery effect of recommenders.

Section 5 seems to provide a more promising simulation framework. Consumers have different "ideal points" in the space, and thus are likely to select some distribution of items in the absence of a recommender. The recommender increases the salience of some items that are a little farther from consumers' ideal points. Even here, however, it doesn't quite seem to capture the phenomenon that the recommender makes salient an item that is in fact closer to the consumer's ideal than what the consumer would have found. It seems to me that you'd need a variant of the Hotelling model where there's a separate model of item salience that is not completely determined by the distance from the customer's ideal. Things that are already blockbusters would be more likely to be noticed and chosen, even if farther from the customer's ideal. That's kind of how the recommender is modeled, but I think it needs to be applied to the base choice model, not just the effect of the recommender system.
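
Here's a toy simulation of the variant I have in mind: choice probabilities fall with distance from the consumer's ideal point but rise with an item's accumulated sales (its salience). Every parameter is invented, just to show the snowball effect:

import math, random

def simulate(num_consumers=2000, items=(0.2, 0.5, 0.8), beta=1.0):
    """Hotelling-style choice with salience: the chance of picking an item
    decays with distance from the consumer's ideal point and grows with
    how often the item has already been chosen. Illustrative only."""
    random.seed(1)
    sales = [1] * len(items)  # small seed so salience is never zero
    for _ in range(num_consumers):
        ideal = random.random()
        weights = [math.exp(-5 * abs(ideal - x)) * sales[i] ** beta
                   for i, x in enumerate(items)]
        r = random.random() * sum(weights)
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                sales[i] += 1
                break
    return sales

print(simulate(beta=0.0))  # no salience feedback: sales just track tastes
print(simulate(beta=1.0))  # popularity feedback: early leaders snowball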

D. Fleder, K. Hosanagar "Recommender Systems and Their Impact on Sales Diversity", Proceedings of ACM EC '07, pp.192-199.

Peer Prediction Method with Reduced Payments [14 Jun 2006|02:34pm]
ACM EC 06. Jurca and Faltings

Work extends my "Peer Prediction" paper, written with Nolan Miller and Richard Zeckhauser, on eliciting honest reports, by comparing reports between people.

Automatically selects a scoring rule, with lower expected payments but still incentive compatible.

Has some mechanism for probabilistically filtering out unusual ratings. I'll have to look at the paper to see the details of this.

Claims that the honest reporting equilibrium is evolutionarily stable, meaning that small coalitions can't attack it. Again, I'll have to take a look at this.

Collaborative Filtering with Privacy [14 Jun 2006|02:21pm]
ACM EC '06. Presentation on privacy-preserving collaborative filtering.

Previous approaches:

  • secure multi-party computation to compute eigenvectors (Canny).

  • add noise to each rating



This paper shows that adding noise may not preserve as much privacy as you'd like. If the noise for each rating is a random draw from the same distribution, and if there is a finite set of possible ratings, then you can make a pretty good backward inference about what the original ratings were. The basic idea is...
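
I'd guess the backward inference works something like the following Bayesian inversion. This is my own toy illustration with Gaussian noise and a uniform prior, not the paper's actual argument:

import math

def posterior(noisy_value, ratings=(1, 2, 3, 4, 5), sigma=0.5):
    """If every rating gets noise from the *same* known distribution, and
    true ratings come from a small finite set, Bayes' rule yields a sharp
    posterior over the true rating. (Toy numbers, uniform prior.)"""
    likes = [math.exp(-(noisy_value - r) ** 2 / (2 * sigma ** 2)) for r in ratings]
    total = sum(likes)
    return {r: round(l / total, 3) for r, l in zip(ratings, likes)}

print(posterior(3.8))  # the posterior mass piles up on the true rating of 4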

The solution in this paper is to have users add a variable amount of noise to their ratings, not the same draw for each item.

I haven't had a chance to read the paper in detail yet, but it seems quite elegant. I hope I'll be able to use it in my recommender systems course this fall, though the math may be too advanced.

Sponsored Search Auction Mechanisms [13 Jun 2006|03:13pm]
The current session has several papers on auction mechanisms for deciding which ads will be displayed in sponsored search.


Lahaie, analysis of alternative auction designs, including Yahoo and Google's current mechanisms. Offers an overview of the design space.

Mahdian and Saberi, MSR. An online algorithm, meaning that you have to decide which advertiser gets each search without knowing how many more searches there will be. Based on picking a single price to charge all advertisers. I may be missing something, but the problem setup doesn't seem to match real advertising allocation problems, and the solution seems to unnecessarily restrict to a fixed price for all advertisers, rather than the kinds of mechanisms in the previous and next papers.

Aggarwal, Google presentation, Aggarwal et al. Current mechanism: the advertiser makes a per-click dollar bid (for a particular search keyword). Google orders the bids based on bid*estimated-clickthru-percentage. If you're in slot j, you pay a rate based on the bid of slot j+1. This seems like it might be a nice generalization of the 2nd price auction mechanism, but it's not-- it's not incentive-compatible. She presented a design for a new mechanism in which truthful bidding is best, assuming others are bidding truthfully. For some reason, she said you can't use a VCG mechanism unless a "separability" condition holds. But the actual mechanism she presented is, I think, a VCG mechanism. Perhaps I'm missing something, or perhaps she has a more restricted idea of what a VCG mechanism is. The mechanism she presents is only incentive-compatible if there are no budget constraints that tie different auctions together, and no repeated-game effects where revealing your preferences today impacts your opponents' behavior in tomorrow's auctions.
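
A quick toy example of why next-pricing isn't truthful (equal quality scores, made-up clicks and bids, in the classic style of counterexample):

def gsp_utility(my_bid, my_value, other_bids, clicks_per_slot=(100, 60)):
    """Utility in a next-price (GSP-style) auction with equal quality
    scores: slots go in bid order, and slot j pays slot j+1's bid per
    click. Toy numbers only."""
    bids = sorted(other_bids + [my_bid], reverse=True)
    rank = bids.index(my_bid)
    if rank >= len(clicks_per_slot):
        return 0.0
    next_bid = bids[rank + 1] if rank + 1 < len(bids) else 0.0
    return clicks_per_slot[rank] * (my_value - next_bid)

# My click is worth 10; rivals bid 8 and 4. Bidding my true value wins
# the top slot at price 8; shading to 5 wins the second slot at price 4
# and earns more. So truthful bidding is not optimal.
print(gsp_utility(10, 10, [8, 4]))  # 100 * (10 - 8) = 200
print(gsp_utility(5, 10, [8, 4]))   # 60 * (10 - 4) = 360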

Estimating click-through rates for ads, without actually paying the full cost of putting your ad up and measuring it. This estimate is useful for optimizing your bidding.

ACM EC 06: Fudenberg invited lecture [13 Jun 2006|12:24pm]
I'm at the ACM EC conference for the next couple days. Computer Science theory/algorithms/AI people looking at economic incentive issues.

This talk: "Stable Superstitions and Rational Steady State Learning", given by Drew Fudenberg (joint work with Levine)

(These are scattered notes taken during the actual talk. If it seems to the reader that it's getting at something interesting, you can probably get better intuitions about it, and more accurate characterization of results, from a paper, or a set of slides posted by Levine.)

Context: Learning in games. Anonymous random matching. Some history of previous papers that went too fast to capture.

"Self-confirming equilibrium"; less restrictive than Nash. No one can do better with "rational experimentation." Nash requires people to know what would happen if you deviate.

Agents off the equilibrium path play infrequently, so they have much less incentive to experiment. Wrong beliefs one step off the equilibrium path can't be stable, but wrong beliefs two steps off can.

Illustration: Hammurabi's second law. The accused person is thrown in the river. If the accused lives, the accuser is killed; if the accused dies, the accuser gets their property.
Superstition: the guilty are more likely to drown than the innocent. This superstition is stable because accusers rarely get to find out it's wrong: if they believe it, they won't accuse the innocent, so they never observe the innocent surviving.
Alternative superstition: the guilty will be struck by lightning. This superstition is not stable. Kids try petty crime and discover they're not struck by lightning.

Rational Steady-State Learning
Agent's decision problem: each agent in role i expects to play T times. The agent observes only the terminal node each time. The agent believes it faces a time-invariant distribution of opponents' strategies. (This is wrong, but hopefully a reasonable model of how people would actually be thinking.) Steady states are where people play strategies that are optimal given the information they have from the previous rounds.

Results focus on characterizing steady states as T tends to infinity-- most players have lots of observations of play (but only rational experimentation in those rounds of play), and there are few novices in the game in any round.

Asymptotic result for the Hammurabi case: there will be no crimes (in the limit of arbitrarily long lifetimes). With long but finite T, some crimes are committed, some false accusations take place, and people making false accusations learn that they work. But if there are few opportunities for being a witness, then there's no rational interest in experimenting with false accusation, because you won't get to do it very often even if you find out that the false accusation works.

The model highlights the role of experimentation in determining when a superstition is likely to survive.

David Parkes: question about applications to Sponsored Search design-- implications for encouraging experimentation or sharing information learned from experimentation.

NetSquared Human Rights Session [30 May 2006|06:47pm]
Patrick Ball, Benetech
Small organizations on the ground don't want to share their data-- it's their ticket of entry to policy discussions. They do need crypto and communication so they can get their data to a secure place even if their laptops are impounded.

Make it serve the local need of the person entering the data, and by the way have it do the stuff that's good for the organization and the long haul.

He has been doing statistical analysis to estimate the prevalence of human rights violations, based on counts and overlaps between sources.
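
"Counts and overlaps between sources" suggests capture-recapture style estimation. Here's a minimal two-source Lincoln-Petersen sketch with made-up numbers (the real multi-source work is far more careful):

def lincoln_petersen(count_a, count_b, overlap):
    """Two-source capture-recapture estimate of the total number of
    incidents: N is approximately count_a * count_b / overlap.
    Toy illustration only."""
    return count_a * count_b / overlap

# Source A documented 400 incidents, source B 300, and 60 appear in both:
print(lincoln_petersen(400, 300, 60))  # estimates about 2000 in total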

Dan McQuillen, Amnesty International
Mashups are a great publicity/marketing opportunity for human rights organizations.
The big human rights battles are about to be fought out on the Internet-- things like

Bryan Nunez, Witness
Trains human rights activists/defenders on use of video (cameras, editing, distribution). Helps them use the video as part of an action plan.
-------------------------------
Patrick is very concerned about Internet filtering. (Years ago he challenged me about PICS at a CFP conference. Now he's concerned about Google's community tagging and how it might be used by ISPs for filtering. Had an interesting conversation with him at lunch about this.)

NetSquared: state of Open Source Software for Nonprofits [30 May 2006|06:16pm]
Some audience questions before the start of the session:

  • Is Open Source relevant? Or are open APIs all that matters?

  • Are there underlying values for NPOs choosing tech, or is it just a question of picking what works best?



David Geilhufe's arguments for open source for the non-profit sector: avoid duplication of effort; encourage innovation.

OpenBRR (open business readiness rating)-- criteria for making decisions on open source that are more appropriate than the usual criteria articulated for commercial products.

Zack Rosen on the CivicSpace ecology.
Communities:

  • Drupal

  • CivicSpace Foundation

  • OpenNGO-- the CRM portion


Vendors/Service Providers

  • CivicSpace, Inc.

  • CivicActions

  • Echo Ditto

  • ...+20 more


In the CRM space, the biggest three vendors are Kintera, Convio, and GetActive. Then there's a long tail of little vendors. But if you aggregate all the vendors, the CiviCRM community is number two, and much more profitable. Tools are advancing exponentially faster. Vendors in the open source space are bidding 10-50% of what the commercial market leaders charge. One- and two-person shops are bidding against market leaders and winning.

The Mambo/Joomla fork. Major developers didn't like what the people in charge of Mambo did, so they left en masse, and were able to take the source code with them.

NetSquared, CitizenJournalism session [30 May 2006|03:12pm]
Dan Gillmor. Citizen journalism is becoming the norm. Eyewitness reports from disasters are just the beginning. Digg is the darling example now because it has ratings of news stories, though he also mentions Slashdot for rating the commentary. (Look for the new interface for reading comments on Slashdot, coming soon, which I've been working on with students Youn-ah Kang and Nathan Oostendorp!)

The OhmyNews story. Korea. Extremely successful; it has become one of the most influential publications in Korea. 43,000 citizen reporters ==> screening by the news Guerilla Desk. Mostly reviews and commentary. Also 65 staff reporters, mostly hard news and analysis. But there's a lot of blending between the two. Now trying an international version, and a partnership with a prestigious newspaper in Japan. 86 countries with 1,000 citizen reporters so far on the international version. Doubling about every 3 months.

Ethan Zuckerman, Global Voices. Story of Hao Wu, blogger detained without charge in China. Effort to publicize his case got much easier once Hao Wu's sister started blogging about the case. Lesson: "Don't speak. Point." Don't try to speak on behalf of others-- just point to those who are speaking on their own behalf.

Some notes from NetSquared, Session 1 [30 May 2006|03:12pm]
At NetSquared


  • Howard Rheingold: "Still need a residue of hierarchy, but it can be a pretty small one"

  • Paul Saffo: "The power of the whisper"

  • My summary of the morning: when the analysis gets complicated, just remember, "It's the participation, stupid."

  • From the floor: "a just society means 'not just my society' "

On to Calaveras for WineCamp [30 May 2006|12:47pm]
At the dinner after Online Community Camp, Greg Beuthin from CompuMentor told me about WineCamp, where geeks and non-profits were camping out for the weekend. Some of the people from CivicSpace were going to be there, and a major goal of my trip to NetSquared (starting Tuesday, today) was to connect with them. So off I went on Friday afternoon, after a day talking about my research at Yahoo!

Unlike the Online Community Camp, which had borrowed some un-conference ideas, this was the real deal. Saturday morning people introduced themselves, gave a few "tags" to describe themselves, and said what they hoped to get out of the conference. Libba Phillips, founder of Output for Hope, which helps people find missing persons who are "off the grid", said she was looking to upgrade their website to include a more easily searchable database, so that the project could scale up. Zack Rosen said his goal for the weekend was to build Libba's database. On my turn I piped in that I wanted to watch/help him do it. It became a big barnraising activity, with about 10 people involved by Sunday.

It actually turned out to be an informative dry run for the course I'm planning for winter semester, where teams of students will develop custom sites, using the drupal CMS platform, for non-profit organization clients. Saturday, when we had no power or connectivity, we did requirements analysis. On Sunday, indoors at a winery, we implemented. We only had about 3.5 hours. Zack, Tim Bonneman, and I transferred much of the content of the existing site. Several hackers from CiviCRM put together the database part, by using their tools to add custom fields to their basic person-data entity. WineCamp organizer Chris Messina made a new theme so that suddenly, two hours into the work, our generic drupal-themed site transformed to have the look and feel of the existing Output for Hope site that we were copying. I worked on adding help material to the site so that their web volunteer would be able to maintain it. We didn't quite get to a site they can roll out, but we got pretty close and there's a good chance that their web volunteer will be able to take it the rest of the way. Here's the work in progress.

I also connected with Laney from The Princess Project, which is trying to scale up or franchise or do a chapter model of some kind. In a quick brainstorm with Laney and David Geilhufe, we hatched the idea of an online kit that would allow people to self-organize in a new city, and have their progress tracked in various ways so that the national organization could provide appropriate resources at different points, and there could be peer to peer support among chapters. It's basically a franchising/chapter model of scalable organizing, but with some new twists made possible by technology and the peer-to-peer sharing ethos.

I think this peer-to-peer chapter organizing, coordinated by a central toolkit, could actually be the big idea about how IT can help rebuild social capital that I was supposed to come up with for the Saguaro Seminar, but never did. On the ride home, Zack pointed out that this new chapter/franchising model was pretty much what they had tried to do in the Dean campaign. It's also related to what Meetup has been trying to do. And it's sort of what BarCamp is already putting into practice.

It was also a truly wonderful experience for the senses. Wine from Ferriere vineyards, swimming through a cavern, sleeping under the stars, amazing vistas, yoga in the woods.

See photos from the Flickr feed (WineCampCalaveras):

Online Community Camp [25 May 2006|02:08pm]
I'm at an Online Community Camp.
"Camp" is the new word for conferences that are only loosely organized-- people propose topics at the beginning of the day and people go to whatever seems interesting.

Who's here:

  • Vendors

  • Consultants

  • Community managers, web producers at companies, non-profits, and media outfits

  • one student from Stanford, and me, representing academia

----------------
Topics they're interested in:


  • how to change platforms; how to select platforms

  • How to quantify ROI, to justify and get resources

  • Some interest in reputation

  • Using online communities for market research

  • Media wants audience to talk with each other, how to facilitate that

  • Multiple communities, how to not require multiple destinations, cross-site integration

  • Practical applications of Web 2.0-- what's hype vs. useful

  • How to apply social networking/myspace

  • Extracting/summarizing from online discussions
    --integration with corporate

  • Online/offline connection

  • Best practices across the board

  • Combining data from other sources about people with data from online communities.

  • Blogs and RSS vs. conventional discussion boards

A Lost Social Capital Opportunity? [09 Sep 2005|09:33pm]
Briefly, I thought there might be an opportunity in the aftermath of Hurricane Katrina for America to generate a huge amount of bridging social capital, as people with financial resources (enough to have a spare suite or apartment) offered assistance to people on a one-to-one basis. There's been a huge outpouring of offers. But the official response seems to be to prefer big shelters and commercial rentals for the longer term, rather than person-to-person aid. The adopt-a-family program I've heard about in Texas, where the adopting family is supposed to provide aid other than housing (like getting oriented, finding schools, etc.), seems like a nice exception.

Apparently, people with more resources and more connections have been leaving the shelter system to stay with friends and family. When they're ready for more permanent housing, some of them seem to be making use of individual offers of assistance, often found through friend-of-a-friend, sometimes with someone in the chain consulting Internet-based information resources.

It seems like a lost opportunity that the people with the fewest resources, who are still in the shelter system, are not getting connected to people of greater means at a time when those who are well-off are unusually open to a human connection across class lines. It could have had a positive long-term impact on the social fabric of America. Here's to hoping the connections still happen, perhaps if adopt-a-family or welcome-wagon programs take off in the coming weeks.

Mapping America's Generosity [09 Sep 2005|09:21pm]
We at the UM School of Information, with extraordinary help from people around the country, have been burning the midnight oil the past week, and have now launched a very cool interface to the many offers of housing that people have made online. See http://katrinahousing.net and click on "combined search".

Resistance to Private Housing Matches for People Displaced by Katrina [03 Sep 2005|11:21pm]
Here's a story about a flashpoint in the central control versus distributed self-organization drama that’s been playing out in many ways throughout society in recent years, enabled at least in part by the Internet.

There are many websites where people are offering private housing, free, to people displaced by Katrina. We’ve been creating an aggregator site for those various websites (katrinahousing.net). This led us to learn that the U.S. Government and professional relief organizations like the Red Cross are not completely comfortable with this person-to-person aid from strangers idea (hosts don’t know what they’re getting into and may not be prepared to do it well; some hosts or guests might be unsavory). We learned about this concern in an abstract way on Thursday from talking to an Ann Arbor Red Cross official, but now we have a very concrete example.

Bruce Vinkemulder, a minister from Battle Creek, MI, arranged to send a bus to a shelter in Mississippi, where displaced people had signed up to go on the bus. Apparently, he wanted to get official approval and kept getting bumped up the chain, until a regional Red Cross director gave a more direct “no”, and said they wouldn’t be letting people go until they’d been processed, which would probably take a week. He also got a similar answer from the National Guard in Battle Creek, which already has upwards of 1,000 people temporarily housed there, though they are letting him in to do Bible study tomorrow. [Note: turns out those people weren't there yet, but some did arrive later. --PR 9/9/05]

It’s not obvious to me whether the take-it-slow, do-it-the-professional-way approach is better than the people-to-people approach in this situation. Certainly, the human costs to people in the shelters of staying there a long time can be pretty high (even if they get access to professional counselors they wouldn’t get access to if they dispersed). On the other hand, if masses of people rely on the kindness of individual strangers, there are bound to be some bad outcomes that result. My assessment is that, on balance, given the numbers of people displaced in the current situation, it would be better to encourage person-to-person aid rather than try to put the brakes on it.

What’s new in the current situation is that our ability to coordinate that kind of person-to-person aid is far greater now, with the Internet, than it’s ever been in the past. We’ve been able to jump the boundaries of social networks, in order to connect resources that were socially distant. Consider, for example, how we hooked up with the Battle Creek minister. Someone we knew had posted an offer on an Internet site. Someone working with Bruce used the Internet to contact our friend. My wife, Caroline, ran into her on the street and put us in touch with the Battle Creek group. My wife then agreed to try to find housing for six families in Ann Arbor who weren’t spoken for in Battle Creek. We are using our social networks, Internet postings, and the newspaper to try to recruit those additional hosts.

Anyway, I think it would be interesting to investigate how widespread the resistance of the Red Cross and National Guard is to private home placements, and what impact that is having on the overall situation.

Youth Bulges' Impacts on Adolescent Civic Knowledge and Participation [03 Jun 2005|12:57pm]
Hart, D., R. Atkins, P. Markey and J. Youniss (2003). "Youth Bulges in Communities: The Effects of Age Structure on Adolescent Civic Knowledge and Civic Participation." Psychological Science 15(9): 591-597.

Three studies linking age structure (percentage of youth in the population) to civic outcomes. Conclusion: civic participation and civic knowledge patterns are transmitted socially, not just by the immediate household.

Starting point: youth are more likely to volunteer, but have less civic knowledge (empirical finding for U.S. population).

Question: are young people who live in communities with more youth saturation even more likely to volunteer (and less likely to have civic knowledge) than young people who live in communities with more adults?

Answer: Yes. Even controlling for demographic factors and for characteristics of the survey respondents' parents (e.g., education, income, whether they volunteer, etc.)

This result also seems to hold up internationally: greater youth population correlates with more volunteering and less civic knowledge (even controlling for GDP).

convincing people about optimal strategies [02 May 2005|02:47pm]
In some of the applied game theoretic mechanism design work that I do, and that I hear other people present, I've noticed a recurring concern. In some mechanism (for auctions, for example), the designer of the mechanism may be able to prove that some strategy (say, honest revelation of one's preferences) is part of an equilibrium (an optimal strategy given some assumption about others' strategies), or is even a dominant strategy (an optimal strategy no matter what everyone else does). But it might not be obvious to participants in the game that they cannot gain from deviating.

For example, in my paper with Nolan Miller and Richard Zeckhauser on eliciting effort and honest evaluations, honest reporting of one's evaluation of a product is a best response (in expected value terms) if everyone else is also reporting honestly. It's best in expected value terms, but in particular realizations, an individual may regret reporting honestly. And the proof that it's best in expected value terms depends on understanding: a) the logarithm function, and b) either Jensen's inequality, or the ability to take a derivative of the log function.
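
The underlying fact is easy to check numerically: with a logarithmic scoring rule, reporting your true probability maximizes your expected score. A tiny illustration of my own, with an arbitrary true probability:

import math

def expected_log_score(p_true, q_report):
    """Expected log-scoring-rule payoff when the event occurs with
    probability p_true and you report probability q_report."""
    return p_true * math.log(q_report) + (1 - p_true) * math.log(1 - q_report)

p = 0.7
grid = [i / 100 for i in range(1, 100)]
best = max(grid, key=lambda q: expected_log_score(p, q))
print(best)  # 0.7: the honest report maximizes the expected score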

The issue came up again today in a talk I saw incoming SI Ph.D. student John Lin give about lab experiments with different multi-unit auction formats.

Someone should do some research about how to convince users that non-strategic behavior is incentive compatible when mathematical analysis demonstrates that it is. If you know of any work related to this, I'd love to hear about it.

CHI Workshop: Beyond Threaded Conversation [04 Apr 2005|09:25am]
At the CHI conference, we held a workshop about new ways of organizing on-line conversation, beyond just grouping messages into "topics" and organizing them based on the "reply-to" structure.

Derek Hansen, one of my Ph.D. students, did most of the heavy lifting in organizing the workshop, and he did an excellent job of running the session. We had 25 really high-caliber participants and we seemed to have enough common background to not just talk past each other (though that happened occasionally).

The website with participants' position papers is currently password protected, but we will be removing the password protection soon.

Skoll World Forum on Social Entrepreneurship [01 Apr 2005|05:50pm]
I've been in England this week for the Skoll World Forum on Social Entrepreneurship. Before the meeting in Oxford, David Halpern of the Prime Minister's Strategy Unit hosted a meeting, and Will Davies invited a bunch of people involved in civic technology initiatives in the UK.

Sorry for the length of this post, but here are some highlights of interesting people I met on this trip:

  • I've been wanting to meet David Halpern for several years, because Bob Putnam has suggested it several times. Having met him, I now understand why. He's exactly what I'd want in my country for an academic-turned-public servant: thoughtful, open to new ideas, trying to get to the bottom of things, wanting to experiment but really learn from such experiments, but still action oriented. I hope I'll get to be part of some future advisory group that he assembles to design careful experimentation on civic technologies.
  • Will Davies wrote a very nice report last year on uses of technology for local communication.
  • Tom Steinberg, MySociety. Very interesting guy. He was kind enough to give me a little walking tour of the area around parliament after the Strategy Unit meeting, and also arranged to have drinks with William Perrin, Director of Strategy and Policy for the Cabinet Office e-Government unit, and also introduced me to several people during the Skoll Workshop. MySociety is incubating several interesting experiments, including PledgeBank, which allows people to "pledge" that they'll do some civic activity if enough other people join them. It sort of combines the idea of public RSVPs, which I think is the power behind eVite, with goal setting and "thresholds" or "provision points" for public goods. Tom, William, and I hatched some ideas about government support for a civic technologies platform, with open APIs and a free hosting service if you agree to open source your software-- once you've demonstrated sufficient usage of some service, you'd qualify for a proper government evaluation of the public benefits, which could then lead to further subsidies. It seems like a nice idea, because it would allow continual bubbling up of new initiatives, without anyone in government having to decide too early which seemed most promising.
  • Ellie Stonely, from UK Villages. This non-profit is a shoestring operation, with 3 staff, piecing together funding. But they've got a useful portal of local information. Several British sites have data indexed by postal code (which are very fine-grained, often identifying as few as a dozen houses), and they link into those. But they also allow anyone to post village notices. They again take advantage of geographic indexing to automatically show things in nearby towns using a distance-based search. In the U.S. these kinds of sites are all defined around hub cities, but here every town is its own hub, collecting notices from a radius around it.
  • Someone from the BBC's iCan project was there (didn't get cards, but Tom Loosemore and James Cronin were on the invite list for the event). This BBC-developed portal, still in beta, encourages people to organize all kinds of civic and political activities, and provides tools for coordinating them.
  • Alejandro Litovsky, from the Keystone project at AccountAbility. He's a colleague of David Bonbright who I met earlier this winter at a meeting in New York. David helped me connect the research I've been doing for a dozen years on recommender/reputation systems with the notion of democratizing accountability, which is an important social mission in its own right and something of increasing importance to the non-profit/civil society sector, where questions of legitimacy and accountability are rising to the fore. David is trying to figure out new accountability processes for civil society organizations that will also serve internally to enhance organizational learning. Alejandro is trying to organize a global dialogue on the question of civil society accountability and we had some interesting discussion about how to organize that dialogue in an inclusive way.
  • Ed Mayo from the UK National Consumer Council is thinking about recommender systems for products, especially big-ticket items and situations where there's no repeat interaction between a single customer and provider.
  • I caught up with Mark Moore, from Harvard's Kennedy School, who I knew from the Saguaro Seminar.
  • The most exciting connection (and that's saying a lot, as I look over the rest of the list) was with JB Schramm of College Summit, who I recognized because David Bornstein told a story about him during his lecture at Michigan earlier this winter. (David was also at the meeting, and I got to have dinner with him one night.) J.B. and I hatched an idea about how to use recommender/reputation systems to help more students from less elite high schools get into colleges. We played hooky from one session and spent a long time mapping out ideas and evaluation methods. Possible project pending.
  • Paul Hodgkin, of Primary Care Futures, was on the panel with me. He described a system for patient ratings of health care providers that will go into beta test in one region of England next year, nicely coinciding with a big move the National Health Service is making toward patient choice about which provider they'd like to go to for care. They've thought through many of the details quite nicely, including offering ongoing information to patients (e.g., reminders, directions to upcoming appointments) so that when they send a follow-up after the care, people will be more likely to fill it out. They're also offering patients the option, if a comment is positive, of having a note sent directly to care providers ("thanks to Nurse Mary on the third floor for providing such wonderful care during my recovery..."), which I'm guessing will be quite popular. They may suffer from the usual problem of grade inflation-- how to get people to express mild dissatisfaction, and not fear that others will overreact to it if it turns out not to be a pattern. I suggested the possibility of letting patients volunteer to act as on-line "guides" or "mentors" to future patients. They were already thinking about ways to facilitate support of various kinds (e.g., 10 questions that previous patients suggest you ask your doctor), and he thought that individual matching might be an interesting possibility as well. If it works, it would be an incredible example of using technology to convert potential social capital into real social capital.
  • Some possible connections for students interested in international information work:

    • James Fruchterman from Benetech, which develops software for human rights organizations and other tools to "help solve social problems with sustainable enterprises". (My high school soccer teammate Patrick Ball works there now. He's apparently the world's foremost statistical expert on human rights monitoring, and testified at Milosevic's trial.)
    • Rodrigo Baggio from CDI, which runs a whole network of community technology and learning centers in Brazil and elsewhere in Latin America. He was one of the "Skoll Awardees". They had a very nice ceremony, MCed by actor Ben Kingsley (from Gandhi). It was a genuinely moving ceremony: you couldn't help but be amazed by the things these people have accomplished. Rodrigo was the first to receive his plaque, and he got things off to a good start by raising his hands above his head in genuine celebration.
    • Martin Burt from Fundacion Paraguaya, which teaches entrepreneurial skills to youth in Paraguay. He was another Skoll Awardee.
    • Karen Tse from International Bridges to Justice, which is establishing legal assistance networks in China, Cambodia, and Vietnam. She's thinking about a knowledge management system to connect their participants.

  • The first night at dinner I was seated next to Sushmita Ghosh, President of Ashoka. (CEO and founder Bill Drayton was also there and gave a very good plenary speech, but I didn't get a chance to talk with him.) She had just heard about collaborative filtering recently, as a result of an Economist article, and was very interested. I was amazed that she was able to quote something from the article about the difference between item-to-item vs. person-to-person filtering methods. For someone to be far enough removed from the tech world to have not heard about collaborative filtering until this year, but to grasp that distinction and remember it a week or two later, that's one smart cookie. I'm not kidding-- she was the first to mention the term item-to-item, not me!
  • Someone from the Calvert Foundation (I think Tim Freundlich) was part of last night's dinner group. Calvert Foundation runs an investment fund that makes loans to community development organizations. Caroline and I are investors, so that was kind of fun. I was surprised that he knew our investment advisor by name and had met him (apparently, the advisors are a key way that funds get the word out, so they put some effort into marketing to them individually).
  • A report on Evaluation in the Field of Social Entrepreneurship was just released, and discussed at one of the sessions. It's available for download at Foundation Strategy Group's website.
  • Brian Trelstad of the Acumen Fund, a social venture capital fund. He's just finished an article for a Stanford business publication, evaluating the non-profit org evaluators like GuideStar. I'm looking forward to getting a copy.
  • Melanie Edwards, a Stanford lecturer, started MediaMobile, which gathers demographic and market data in Brazilian slums by having local residents go door-to-door surveying neighbors. Unlike in low-income communities in the U.S., where people are sometimes fearful of census takers and turned off by marketers, she says that in these communities people are generally so glad that anyone wants to hear their opinion that they answer very openly.
  • There were a couple interesting sessions with MBA students from elite business schools in the US and Europe. One of the emerging ideas from the session and discussion afterwards was that professional schools are really training people to be Chief Operating Officers for social enterprises, not to start them. If we want those organizations to be high-performance organizations, then COOs are going to be helpful. But that shouldn't be confused with training entrepreneurs, for which professional degrees aren't the best path. I met Beth Anderson, a faculty member at Duke's Fuqua school, after that session.

Pew report on Internet's impact on exposure to political arguments [01 Nov 2004|04:02pm]
A report issued last week shows that Internet users have more exposure to political arguments than non-users, and even have more exposure to arguments that challenge the positions they hold.

I co-authored the report with Ph.D. student Kelly Garrett and with John Horrigan from Pew. This is part of Kelly's Ph.D. thesis work. He's on the job market this year and will be a great catch for a comm studies or information program. Ask me if you're hiring...

New I-Neighbors service from MIT [18 Aug 2004|11:01am]
Keith Hampton at MIT is making publicly available his I-Neighbors service. It lets you find, or create, an online presence for your block or neighborhood. Unlike previous incarnations in the commercial realm, which were hot a few years back, this one seems to focus on person-to-person connections at a much finer granularity.

Many of the features will look familiar to people who have set up YahooGroups before, but there are a number of features that take advantage of geographic proximity information. Definitely worth a look if you'd like to create more social connections *or* more political activity in your neighborhood. It even has a carpool-finding feature, though it doesn't attempt real-time matching.

Keith developed this software for use in field studies of four Boston neighborhoods. He hasn't written up the results yet, at least not for public consumption, but he describes some of the results in an interview. Sounds pretty amazing!

When Reputation Systems Are Worse Than Useless [18 Aug 2004|10:00am]
A paper by Ely, Fudenberg, and Levine, titled When is Reputation Bad?, analyzes mathematical models of situations where public reputations make it harder, not easier, to sustain good behavior. I'll start with their example of a car mechanic who prefers to be honest but will occasionally be tempted to take an unfriendly action in order not to be mistaken in the long run for a crooked mechanic. Then I try to summarize their findings about the class of situations that lead to this kind of problem.

Suppose that a car mechanic can recommend either a tuneup or a new engine, and that half the cars that come to her need a tuneup, half a new engine. Customers prefer to have the correct repair done (even though new engines are expensive). For any particular car, a good car mechanic gains greater utility from being honest, but might be tempted to do otherwise because of long-run reputation effects, as we'll see. A bad mechanic has no morals and likes the extra revenue from engine replacements, so always recommends engine replacement. There are both good and bad mechanics out there, and customers know mechanics only from their reputation history, which is just the sequence of Tuneup/Replacement actions they took in response to previous customers.

Customers start with some initial beliefs about how likely it is that a mechanic is good. If that belief is high enough, a first customer will try the mechanic, and the game is underway. Even one tuneup in a mechanic's history will convince customers that the mechanic is good (bad mechanics always replace the engine; the same phenomenon could occur, I think, but would be more complicated if bad mechanics occasionally disguised themselves-- look for a future post about a paper by Cripps, Mailath, and Samuelson that gives some insights into that).

Suppose a mechanic has a string of engine replacements, with no tuneups. Each additional engine replacement makes customers more suspicious that the mechanic is bad (though it's always possible that it's a good mechanic who just happened to get a lot of cars that all needed new engines). Eventually, after some number K of engine replacements, customers are so suspicious that they stop going to that mechanic and the game is over.
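
To make the belief dynamics concrete, here is a minimal Python sketch of the customers' Bayesian updating. The prior (0.9) and the walk-away cutoff (0.2) are my own toy numbers, not the paper's, and the sketch assumes good mechanics behave honestly-- which is exactly the assumption the unraveling argument below undermines.

    def posterior_good(prior, k):
        # P(mechanic is good | k engine replacements in a row, no tuneups).
        # An honest good mechanic recommends an engine for half the cars;
        # a bad mechanic always recommends an engine replacement.
        like_good = 0.5 ** k
        like_bad = 1.0
        return prior * like_good / (prior * like_good + (1 - prior) * like_bad)

    def threshold_K(prior=0.9, cutoff=0.2):
        # Smallest run of engine replacements at which customers give up.
        k = 0
        while posterior_good(prior, k) >= cutoff:
            k += 1
        return k

    print(threshold_K())  # 6: after six straight engine jobs, belief falls to ~0.12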

Now consider what the good mechanic should do if she happens to get K cars in a row that all need new engines. On the Kth one, she knows that being honest will cause her to be mistaken for a bad mechanic and she'll get no future business, so she's tempted to recommend a tuneup even though she thinks the car needs a new engine. But customers, knowing that even a good mechanic will not be honest the next time after K-1 engine replacements, will not bring their cars to the mechanic in that situation. By an unraveling argument familiar in game-theoretic analysis, that means that the good mechanic will not be honest on her (K-1)st car if she's had all engine replacements up till then, and so on all the way back to the very first car. Thus, customers can't trust even the good mechanics to be honest, even on the first car, and no one uses the mechanics at all.

The moral of the story is that the public reputation system is creating the wrong incentives. The usual incentive effect of a reputation system is to cause a strategic player to do something that helps other people, in order to be "confused" with the type of player who really does like to help other people. Here, it's creating an incentive for a strategic player to do something that hurts other people, in order not to be confused with the type of player who really prefers those harmful actions.

The paper summarizes (p.7) the conditions that can lead to this kind of problem:

  1. Information about a player is revealed only when other players are willing to engage with that player, so that getting a sufficiently bad reputation is a black hole that you can't escape from.

  2. There are "friendly" actions; a high probability of friendly actions is what causes partners to be willing to play. (In the mechanics example, honesty is the friendly action.)

  3. There are bad "signals" or outcomes that occur more frequently with unfriendly actions but occur sometimes even with friendly actions. It is these signals/outcomes that will be made publicly visible in a reputation system. (In the mechanics example, the bad outcome is recommending an engine replacement.)

  4. There are "temptations", unfriendly actions that reduce the probability of bad "signals" and increase the probability of all the good signals. (In the mechanics example, the temptation is reporting the signal "tuneup" even when the car needs an engine replacement.)

  5. The proportion of player types who are committed to the friendly action regardless of its consequences is not too large. (These would be mechanics who would never say "tuneup" when you needed an "engine", even if it meant closing their business tomorrow.)



Note that these conditions can be met for the mechanics situation even if the public signals that are shared reflect whether the engine really did need to be replaced. For example (see p.30), suppose that the good mechanics get an imperfect reading of whether a car needs a tuneup or an engine replacement. But after they try one or the other, the truth is revealed and goes into their publicly visible reputation, along with the action they chose. In this case, a "bad signal" is when the mechanic turns out to be wrong in her recommendation [Note added after initial post-- the bad signal is really being wrong in a recommendation of "engine"-- see followup comments]. The friendly action of making an honest recommendation can still lead to a bad signal, though a bad mechanic who always recommends an engine replacement will still get a bad signal more frequently. Recommending a "tuneup" is still a temptation, to avoid being confused with the bad mechanics.

An earlier draft had a useful discussion of why not all "advice" processes will fit the criteria listed above, though I don't find it in the current draft. Perhaps most important is criterion 1, that getting a bad reputation is something you can't escape from. If a player can pay a fee to encourage customers to continue interacting with her, or if there are some customers who don't pay attention to reputations, or if there's some way to keep generating public signals without having any customers take a risk on you, then there can be an escape from the black hole, and thus the unraveling argument won't come into play (the temptation option is not so compelling just before your reputation is about to enter the black hole). In other situations, condition 4 may not apply: there may not be a temptation action available that reduces the probability of all the bad signals while increasing the probability of all the good signals.

Dynamic ride sharing in the Bay Area [02 Aug 2004|03:05pm]
A couple weeks ago, I spent some time with Dan Kirschner (see slideshow), a long-time ride-sharing activist from Berkeley, who is trying to launch dynamic, computer-assisted ride-matching on the I-80 corridor going into and out of San Francisco. Unfortunately, I see from his web site that he's put the service on hold for the time being-- not enough users, especially drivers.

Because of HOV lanes with less congestion and no tolls on the Bay Bridge into San Francisco, there has long been "casual car pooling" on the morning commute from Berkeley and other locations, but in the evening few drivers picked up riders and the riders relied on BART. A few years ago, the city put up destination signs on Beale Street, near the entrance to the Bay Bridge, and people going longer distances (e.g., Vallejo) line up and eventually get a ride. As you might expect in San Francisco, it's a diverse crowd (see slideshow). As I found watching a slug-line in DC, drivers, sometimes in fancy cars, take passengers on a first-come-first-served basis.

Dan's new computer-assisted service would have two primary advantages. First, morning pickups would be at locations other than the current pick-up spots, several of which have parking lots that fill up quite early in the morning. After those times, there are few riders (they can't park) and so it's hard for drivers to fill their cars. Second, in the afternoon, riders would be matched with a driver before leaving work, and thus would not have to wait in line.

The problems with the new service that I could pick up from talking to people in the lines as I distributed flyers were:

  • Inertia. The people in line find the current system at least minimally acceptable.

  • Coordination costs and loss of spontaneity. Though he calls the service "Ride Now", as actually designed, people have to decide at least an hour in advance in the morning and 30 minutes at night, and have to call each other to confirm. In analyzing why he has suspended the service, Dan wrote, "One lesson is that this has to be easier for people to use. A truly 'instant match' system would be more a one-step process: 'Hi, I'm here, give me a ride match.'"
  • Lack of backup. If a rider doesn't get a computer-assisted match at night, he can't just use the regular carpool line, because drivers take those passengers back to the old pickup spots (with the full parking lots), not the new pickup spots that Dan's service has. One interested woman who I gave a flyer to said she couldn't use the new service because there is express bus service from San Francisco to the existing parking lots as a backup if she doesn't get a ride, but no service or only local service to the new pickup points.


It's a hard problem getting this kind of service launched. It was interesting to see when passing out flyers that most people basically were interested and were willing to believe in the good intentions of the service launchers (after getting a few questions answered). One person even pointed to Dan and said, "He's the father of this carpool system", referring to Dan's role several years ago in assembling the initial critical mass of users for the current pickup spots on Beale Street. But there just weren't enough takers, at least yet, to make this particular version of the service go.

I'm still optimistic that Dan's fishing in the right pond, and I look forward to doing a little fishing there myself.

Kearns: Economics of Social Networks [09 Apr 2004|11:38am]
At the Aladdin workshop on "Web Structure and Algorithms", Michael Kearns is giving a more detailed presentation on part of what I reported on a few weeks ago.

He's generalizing from general equilibrium models such as Arrow-Debreu to situations where people only exchange with people they are connected to in a network. As an aside, Kearns said that it has only recently been shown that there is an efficient algorithm for computing equilibrium prices, DPSV 2002. (Arrow-Debreu's famous theorem from 1954 is that such prices always exist, if a few technical conditions are satisfied.)

Assume an undirected network restricting exchange. Assume no resale. (Could encode the network in the goods and utilities of the standard model: a virtual good is a good plus an initial holder; individual utility for a virtual good is 0 if there's no link in the graph to the holder of the good.) There can be variation in price for the same good, because people can only get the good locally. Equilibrium prices mean that each consumer satisfies *all* their demand from the neighbor with the lowest price-- the price has to be higher if that seller wouldn't have enough to meet your demand. Result from Kakade, Kearns, and Ortiz 04: equilibria always exist, and can compute equilibria in polynomial time (# users), exp(# goods).

Today's subject: marrying that work with preferential attachment model for buyer-seller networks. Preferential attachment means new node's probability of attaching to an existing node is directly proportional to the current degree of the existing node, so you get a power law of degree distribution for graph nodes. Buyers who have higher degree have more choices of trading partners, so generally have more market power, but it's even better if you have high degree to sellers who have low degree (they'll have no choice but to sell to you at low prices). Derives theorems about wealth distribution upper bound, price variation (as number of links per person increases, get less price variation). Simulations suggest that there is a power law distribution of wealth. (Note: not hard to derive some model that leads to power law distribution of wealth. Claim for novelty/interest here is that it's a process that comes from some notion of rational behavior of the participants. Low degree people not just giving money to neighbors with high degree. They have less market power.) Also did a behavioral experiment with students in a class, creating a few islands that were allowed to trade only within the island, but with imbalanced numbers of buyers and sellers in each island.
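
For anyone who wants to see the degree skew emerge, here is a quick Python sketch of preferential attachment (my own toy version, not code from the talk). The urn trick makes degree-proportional sampling easy: each node id appears in the list once per unit of degree.

    import random
    from collections import Counter

    def preferential_attachment(n=2000, m=2, seed=0):
        # Grow an undirected graph: each new node links to m existing nodes,
        # chosen with probability proportional to their current degree.
        rng = random.Random(seed)
        urn = [0, 1]          # node ids, repeated once per unit of degree
        edges = [(0, 1)]      # seed graph: a single edge
        for new in range(2, n):
            chosen = set()
            while len(chosen) < m:
                chosen.add(rng.choice(urn))   # degree-proportional pick
            for old in chosen:
                edges.append((new, old))
                urn.extend([new, old])        # both endpoints gain a degree unit
        return edges

    deg = Counter()
    for u, v in preferential_attachment():
        deg[u] += 1
        deg[v] += 1
    print(sorted(deg.values(), reverse=True)[:10])  # a handful of early hubs dominate

In the buyer-seller story, those early high-degree nodes are the ones with the extra market power.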

Garrison Keillor's take on ride sharing [02 Apr 2004|12:54am]
Garrison Keillor has a humorous take on ride sharing.

I'm starting to work on software infrastructure for the kind of ride sharing that I've speculated about elsewhere. Marc Smith from Microsoft Research has been an important collaborator in architecting it. No announcements to make yet, but it's the project I'm working on that I'm most excited about these days.

Processing social information and Social processing of information [31 Mar 2004|10:39pm]
I just spent two days at the Social Computing Symposium sponsored by Microsoft Research. It brought together a terrific mix of well known researchers, pundits, and "edge people" who are developing social software and integrating it into their lives in interesting ways. The pundits and edge people were referred to as "the cool people" or sometimes as "the digerati" at this conference.

The intellectual highlight for me was Tom Erickson offering a crisp (but not short) definition of social computing that helped make some useful distinctions among categories. My much shorter version is:

  • processing of social information that is distributed in social collectivities


Processing should be taken as a shorthand for collecting, distributing, aggregating, etc.
To say that the information is social is a shorthand for saying that it describes people or their relationships.
To say that it is distributed in social collectivities highlights the fact that there may be information asymmetries among the participants in the collectivity.

One reason that this definition is useful is that it distinguishes processing of social information from systems that permit collaborative processing of any old information. For example, Slashdot's threaded commenting systems would not count, but its moderation and karma systems would. Wikis would not count. LiveJournal's generic blogging and commenting features would not count, but its features that build on people specifying "friends" whose journals they want to follow would count.

It's not clear whether "social computing" is the term that ought to go with the category that Tom has articulated. It seems just as natural to apply it to social processing of information rather than processing of social information. But in any case, the distinction seems useful.

There was an IRC backchannel throughout the conference, populated by the pundits, edge people, and some researchers, mostly the younger ones. It was interesting to watch it for the first morning, but then I tuned it out-- too many in-jokes from a crowd I'm not really in with. The best line I noticed on the IRC channel came from David Weinberger, who said there were two groups of people in the room, each envying the other but for the wrong reasons. I wonder what the right reasons would be. One of the most interesting things for me to observe was some grad students who are living as edge people but training to become researchers. Danah Boyd, for example, has already been profiled in the New York Times, even though she's still relatively early in her Ph.D. program at Berkeley SIMS, I think. She did a nice presentation that suggested she is well on her way to understanding academic norms. I hope she can hold onto her enthusiasm and charisma during the dissertation process.

The most interesting new additions to my social network were Clay Shirky and Michael Cornfield. Both have an amazing mental quickness, and the ability to present ideas clearly and persuasively.

I had one bone to pick with Clay, however. He had good comments on lots of things throughout the event, but his main theme (along with David Weinberger) was the dangers of explicit representations of social information. But he was fighting a straw man at times, because the explicit representations of social information and the computations performed on them need not be models of how people process social information in their heads. eBay's reputation score for a seller need not accurately reflect what really happened in the transactions or how an individual would aggregate that information in informal, everyday life. It just needs to be informative and create the right incentives.

Michael Kearns on network models for game theory and economics [27 Feb 2004|05:23pm]
I just attended a fascinating seminar by Michael Kearns, an AI researcher at UPenn. In recent work on combinatorial auctions and allocation of network resources there has been a convergence of computer science theorists and economic theorists. Kearns (and some others) are bringing to that mix the AI focus on the properties of alternative representations in terms of the reasoning that they facilitate.

Kearns' basic idea is to impose some structure on game theoretic models involving large numbers of players. In Nash equilibrium models, suppose that each agent's payoffs depend on the actions of only a few of the other agents. That leads to a graph representation (players are nodes; a directed edge if another player's action affects this player's payoffs) from which it is possible to compute the Nash equilibria more efficiently. In correlated equilibria, suppose that each player's strategy is independent of most other players' strategies, contingent on the choices of just a few players. Those players can be connected with edges, and you still might have a sparse graph that allows certain computations to happen more quickly. In Arrow-Debreu general equilibrium models, suppose that only local exchanges are possible, and define a new form of equilibrium that requires all the local markets to be in equilibrium. The final idea builds on that to make use of recent work in the mathematical foundations of random and not-so-random graphs to generate plausible local exchange structures.
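
A toy illustration of the representational win (mine, not Kearns'): in an n-player binary-action game, a single player's full payoff table has 2^n entries, but if payoffs depend only on d neighbors, the table shrinks to 2^(d+1).

    from itertools import product

    def make_player(neighbors, payoff_fn):
        # Payoff table indexed only by (own action, neighbors' actions):
        # 2**(len(neighbors)+1) entries instead of 2**n for the full game.
        table = {profile: payoff_fn(profile)
                 for profile in product([0, 1], repeat=len(neighbors) + 1)}
        return {"neighbors": neighbors, "table": table}

    def payoff(player, me, global_profile):
        local = (global_profile[me],) + tuple(global_profile[j] for j in player["neighbors"])
        return player["table"][local]

    # 100 players on a ring, each wanting to match its two neighbors.
    n = 100
    players = {i: make_player([(i - 1) % n, (i + 1) % n],
                              lambda p: int(p[0] == p[1] == p[2]))
               for i in range(n)}
    print(payoff(players[0], 0, [1] * n))  # 1: everyone matching is a coordinated profile

The efficient-computation results exploit the same locality, working over the graph rather than enumerating all 2^n joint action profiles.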

This stuff is far enough from what I work on these days that it's fun to go to a well-presented summary of recent trends, but not to read all the individual papers or hear a talk on just one result. Kearns is a terrific presenter, so this was great!

The Culture of Fear [27 Jan 2004|04:02pm]
I recently read Barry Glassner's 1999 book "The Culture of Fear".

It describes many different arenas in which the media provide anecdotes about horrible things that make compelling stories but either didn't really happen or happen very rarely. So rarely, he argues, that purely from a risk management perspective, they are hardly things that most people ought to be paying attention to at all. But they have a great pull on our imaginations, either because they are symbolic of something else that we feel anxious or guilty about, or because they distract attention from things that we genuinely should be afraid of. In some cases, he argues, the very things we truly should fear are things we feel guilty about, and the distraction is dysfunctional because it ends up allowing the real problems to build up.

He debunks public scares about lots of things, from tainted Halloween candy to teen moms to road rage to plane crashes to crime to Internet pornography and predators. He does this with a compelling storytelling style that makes the book a good read.

I was a little disappointed, though, that the book didn't quite deliver on its promise to explain why we have a culture of fear that's making us too scared, and scared of the wrong things. While the book offered some explanations for why the media and the general public are so fascinated with particular fears, there wasn't always the same explanation for the different scares, and nowhere did he tie it all up with a general theory of why we're scared of the wrong things. Is it just that most people are really bad at risk assessment, and far more moved by anecdotes? Is it a defense mechanism? A conspiracy in aid of certain political and economic interests?

I was also disappointed that there was no mention of a topic that interests me right now: hitchhiking (or what we might now call ad hoc ride sharing, if we want to avoid the scary connotations of hitchhiking.) There is certainly a lot of fear about it. Before it all but died out, were there any good statistics on the risks? Did the scare stories follow the same pattern as the other fears he writes about? Or was it really as dangerous as I was told as a child? If anyone has citations to research on this, I'd love to know about it. (If you're curious why I'm so interested in this, see the last section of my paper on "Impersonal SocioTechnical Capital".)

New paper on provision of moderation at SlashDot [23 Jan 2004|04:32pm]
Cliff Lampe and I have just completed a paper to appear at the CHI04 conference.


Abstract
Can a system of distributed moderation quickly and consistently separate high and low quality comments in an online conversation? Analysis of the site Slashdot.org suggests that the answer is a qualified yes, but that important challenges remain for designers of such systems. Thousands of users act as moderators. Final scores for comments are reasonably dispersed and the community generally agrees that moderations are fair. On the other hand, much of a conversation can pass before the best and worst comments are identified. Of those moderations that were judged unfair, only about half were subsequently counterbalanced by a moderation in the other direction. And comments with low scores, not at top-level, or posted late in a conversation were more likely to be overlooked by moderators.

On to Sebastopol for FOOCamp [11 Oct 2003|08:45pm]
FOO-- Friends Of O'Reilly (the publisher of lots of books about technical topics, especially open source).
Camp-- sleep on the floor or pitch a tent on the back lawn of the O'Reilly Campus

Portable toilets and showers provided, and some food, plus electricity and WiFi connectivity. Birds-of-a-feather sessions only: sign up on the wall chart for a room and hope people will come.

So some of the same topics as the last two days, and also "invitation only" but otherwise a very different culture.

Esther Dyson's Release 1.0's next issue is going to be on reputation systems, and she led our lunch table in an interesting conversation about what role reputation systems might have in politics. Conclusion: most useful for more obscure political things that mainstream media aren't covering.

There was an interesting session on politics, where we speculated about options for the DeanForAmerica campaign to scale up its participatory web features if there is a huge influx of participants as the primaries approach. The options: SlashDot-style rating; invite people to join small groups rather than posting comments to the main blog, with small groups providing distilled commentary for larger viewing; invite people to form a network of friends whose comments they pay attention to, a la LiveJournal. David Weinberger was at the session and is acting as "Internet advisor" to the campaign. I showed my ignorance afterward by introducing myself to him and asking what he does when not advising Dean. I guess everyone else here knows him: he was one of the authors of The Cluetrain Manifesto.

Scott Heiferman, CEO of Meetup, was at the lunch table. I got excited about the possibility of doing research on whether meetup.com is attracting new people to getting together, and whether it has an impact on their activities or civic identities. They were inspired by Putnam's Bowling Alone argument when they launched the service. I asked about research possibilities to see whether they're actually solving that problem. Turns out Scott had a high energy meeting with Putnam last week, and moreover he wanted to talk to me about incorporating reputation system ideas into meetup. So perhaps something will come of this connection as well...

I'm sitting in the session on Blogdex and technorati. I guess I should check on how many links and friends my blog has. Not many: Only 8 links in (and only 3 people have me on their LiveJournal friends list).

Overall, a fun and productive side trip. Thanks, Marc Smith, for telling me about the event and convincing me I should come here for the day rather than renting a bike and going into the hills around Sonoma.

Online Community Summit, Sonoma, CA [11 Oct 2003|08:20pm]
Jim Cashel of Forum One Communications organized the second annual Online Community Summit. Perhaps 20 veteran community managers and "serial entrepreneurs" attended, along with about an equal number of newcomers looking to add community features to their sites (especially from NGOs and advocacy groups), and a handful of venture, angel, and philanthropic funders. About 60 people in all.

The mood was generally optimistic. Not only is there venture funding flowing in, but they're starting to see real revenue streams, from subscriptions, from contextualized advertising, from partners who benefit from the community member's business, and from selling summary data about the community as marketing reports.

Contrary to my father and my Saguaro Seminar colleague Tom Sander, people here generally agreed with me that flashmobs are cool, because they are art and because they are rehearsal or exploration of what's possible in coordinated action among strangers.

People were surprised that the political scene has emerged as a new source of innovation in online communities and social software, whereas it has lagged behind other sectors in using the Internet until recently. I was disappointed that Zephyr Teachout from Dean for America canceled at the last minute.

I got a lot of leads for my new 5-year NSF-funded research project (with co-PIs Bob Kraut, Sara Kiesler, Yan Chen, John Riedl, Loren Terveen, and Joe Konstan) on mining the social science literature for design ideas on how to motivate contributions to online communities. ePinions is trying to encourage contribution of more and better product reviews. Marc Smith at Microsoft wants to encourage contribution of bug reports and tech support answers in MS support forums, and I expect to follow up with both. There was some interesting conversation about the crowding out of internal motivations when explicit incentives and/or rewards are provided.

I also had some great conversations with Jed Miller about online civic dialogues. He's been working on plans for a very large scale dialogue on health care (500,000 people perhaps) and I'd love to use it as a testbed for the ideas I've been kicking around about how to distill dialogue with the participants doing the distilling in a distributed manner.

And I had several inquiries about advising various startups. In general, I like kibbitzing and brainstorming on projects (but not always having to do all the follow-through), so I may start doing more of this kind of work.

Did I mention that Sonoma is beautiful, with hills/mountains in the distance, sunshine and lots of nice restaurants? Mary Furlong, one of the founders of SeniorNet, saved her birthday celebration for a dinner with this group, because she liked the people so much last year. I, too, enjoyed the company at the dinners, where local wines were drunk...in moderation.

Information Cascades/Herding: Experimental Evidence, by Dominitz and Hung [10 Sep 2003|05:27pm]
On Monday, Angela Hung presented an interesting paper at the Heinz School seminar series. It's on "herding" or "information cascades". In the most clear-cut version, each person has some private information and has to make a binary choice with some payoffs resulting from that choice: information helps make the higher payoff choice. They can see what decisions previous people have made before making their own choice, but don't know the other people's private information. Through Bayesian updating, you can infer something about other people's information from their observed choices, and thus update your own priors about which is the better choice to make. If enough people make the same choice, then the "public" information from those choices swamps your own private information, and you just go along with the crowd. But that means that you don't reveal any of your private information through your choice, because your choice does not depend on what your private information is.
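
Here is a minimal simulation of that clear-cut version (my own sketch of the standard model: binary signals with accuracy q, and the usual tie-breaking rule of following your own signal). Pre-cascade, each choice exactly reveals the chooser's private signal; once the net count of revealed signals reaches two in either direction, everyone herds and choices stop being informative.

    import random

    def run_cascade(q=0.7, n=20, seed=1):
        # True state is 1; each private signal matches it with probability q.
        rng = random.Random(seed)
        lead = 0       # net count of high signals inferred from informative choices
        choices = []
        for _ in range(n):
            signal = 1 if rng.random() < q else 0
            if abs(lead) >= 2:
                choice = 1 if lead > 0 else 0   # cascade: public info swamps the signal
            else:
                choice = signal                 # the choice reveals the signal...
                lead += 1 if signal else -1     # ...so observers update accordingly
            choices.append(choice)
        return choices

    print(run_cascade())  # after an early run of agreeing choices, everyone herds

With q = 0.7 a cascade usually starts within the first few choosers, and some runs lock in on the wrong action.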

As usual, in experimental settings people are found not to behave in accordance with a perfect Bayesian equilibrium in this kind of setting. Some people seem to ignore or severely discount information revealed by other people's choices. Others seem to think that other people's choices continue to be informative even beyond the point where those choices should be determined by the public information that has already been revealed.

In this experiment, Hung and Dominitz gathered some extra information from subjects: they asked them after each revealed choice to assess what the other person's private signal was and to assign a probability to which is the best choice. They found that even after a cascade had started, people tended to think that people choosing consistent with the cascade were revealing information about their private signals, suggesting that they thought people would have broken the cascade if they had contrary signals. They also found that people were not updating their beliefs about the more profitable action sufficiently, given what they thought others' actions revealed about their signals.

Future work will have to develop a theory of how people interpret others' choices and revise their own beliefs. Seminar participants proposed many possible elements of such a theory. One plausible element is certainly that there are multiple types, some of whom simply reveal their signals and some of whom overinterpret others' signals. Some sophisticated types may be taking account of the existence of those other types in making their own choices.

Fear and Networking in Jerusalem and Haifa [03 Sep 2003|02:54pm]
I arrived Sunday afternoon, August 24, for a short visit to Israel, my first since 1986. Gustavo Mesch and Ilan Talmud, sociologists at the University of Haifa, invited me for a conference on computer networks and social networks. Click here for a photo slideshow from my trip.

I was a little worried about terrorist bombings. OK. More than a little. I avoided taking buses, and I rented an international cell phone to take with me so I could check in with my wife and son once a day and they could reach me. Contrary to my expectations, the flight was full and the hotels I stayed in were not deserted. Few were conventional tourists, however, as a taxi driver/guide explained. Many were on official solidarity missions, including a group of 40,000 French who had just finished their visit. It was striking that every restaurant had a security guard at its entrance, as did every building at the University. There were roadblocks at entrances to the University, on the highway to Jerusalem, and at the entrance to the airport. Israelis are used to it in the same way that Americans are used to airport security: it seemed to be pro-forma most of the time, but I did have to present my passport when I was riding in the back of a taxi, and had to empty my bag and turn on my computer once when entering a University building.

On Monday morning, I had just enough time to wander through the empty shouk in the Arab Quarter of the Old City, before it opened for business, to the Western Wall. The security guards at the first entrance I tried directed me to another entrance, down a small alley, where they directed me to a third entrance with an x-ray machine for my backpack. Once through security, I found hundreds of people, apparently ferried in through a different entrance by car. Apparently, Bar Mitzvahs at the Wailing Wall are a well-organized industry; there were about a dozen going on, replete with videographers.

My next stop was the offices of New Israel Fund and Shatil. NIF funds non-profit social change organizations. My 1986 visit was a study tour arranged by NIF and I started contributing money last year. Shatil is a 20-year-old arm of NIF that provides various management support services to NIF fundees. It has played a huge role in creating a vibrant civil society in Israel, rather than depending solely on government programs or international donor projects. I spoke with Shatil’s webmaster, Hanan Cohen, and NIF Associate Director Ellen Goldberg about the possibility of adding a technology planning/training/consulting arm to Shatil, along the lines of NPower or perhaps even affiliated with it.

Hanan is a very interesting guy. He grew up on a Kibbutz, didn’t finish high school but taught himself computers beginning on the Kibbutz’s PDP-11. He now lives in an urban Kibbutz, where 18 families pool their income, allocate it according to need, and own property collectively. Despite these great differences in our backgrounds, we are part of the same digital culture. He knows about the same tools, the same social trends, and even the same geek celebrities. Hanan attended the conference I spoke at, and he gave me a ride.

At the conference I had the pleasure of getting re-acquainted with Prof. Sheizaf Rafaeli, who I had met a couple of times on his frequent summers and sabbaticals at University of Michigan B-school. One of his students, Yael Levanon, made an interesting presentation about a study of email list usage for two suburbs. Sheizaf posed an interesting design challenge over lunch on the last day. Israeli High School students have a “volunteer” service requirement but there is not a good system for placing them. Currently, parents are arranging for their children to work at organizations they know about, but it’s not clear what the children of less well-connected parents do. What kind of information system could help? Further conversation revealed that further requirements analysis is needed: it’s not completely clear what the real stumbling blocks are. Not enough good placements? No way for students to find them? All the students vie for the same placement? At the end of the conversation, I hazarded a guess that a Hebrew version of friendster.com might be the best support tool, to basically rely on connections as before but with a slightly wider reach into one’s social network.

Bill Dutton made a case for thinking about societal changes not in terms of more or better information but rather reconfigured access to information, people, services, and technologies. I'd heard it before, but this time it clicked for me. I've been using the metaphor of "information flows", which captures the ideas of access to information and people, but not really the latter two. More on the information flow metaphor another time.

I also got to hear Barry Wellman’s networked individualism spiel one more time. The essence of it is that the bounded group is not a very useful unit of analysis, because people have multiple affiliations and the people they are connected to are not often connected to each other. We had some interesting discussion during the Q&A about what people might affiliate with to gain a sense of identity if not the traditional bounded groups. I suggested that we might expect an increase in allegiance to things like nationalities, races, religions, gender, etc., what Wenger would refer to as imagination-based modes of belonging. Heidi Campbell had an interesting suggestion that shared experiences rather than shared membership might become increasingly important, citing things like Raves and Flash Mobs.

Amalya Oliver had an interesting presentation about how social collectivities form boundaries based on membership identity markers and representations of unique knowledge domains. In her theory, a social collectivity begins with sites of difference, something that makes people similar to each other and distinct from others. From there, boundaries form through formalization of sites of difference, through two networking forces. The first, a centrifugal force of growth, pushes toward wider domain claims and member recruitment. The second, a centripetal force, pushes inward toward a crisper domain definition that ensures retention of existing members. When an equilibrium of these forces has been achieved, the boundaries are defined, though they may continue to evolve (presumably slowly).

After the conference, we had a quick tour of Haifa with a professional guide. She showed us the Arab neighborhood, Wadi Nisnas, which has been the site of annual public art efforts by both Jewish and Arab artists. The results of many years are visible as you walk down the street, which is otherwise an ordinary residential neighborhood. At this time of little hope for peace, the guide emphasized repeatedly that in Haifa there was co-existence between Arabs and Jews.

Flash Mobs and other coordination among strangers [14 Aug 2003|10:43am]
Like many others, I've been following with interest the emergence of flash mobs, where many people converge on a location at a particular time then disperse. The assumptions we have about what actions strangers can undertake together are going out the window. Until now, only large institutions or the mass media could coordinate action among strangers. People have always wanted to be part of something bigger than themselves. But for an increasing number of people, joining organizations seems to be either too inconvenient or too uncomfortable (what are the identity commitments that one makes by joining? better to remain a free individual...)

So far, flash mobs are just artistic events, performances that are fun to participate in. But this is just the beginning of experimentation with a new social possibility: coordinated activity among strangers without an institutional framework or mass media to coordinate it. For example, someone recently posted a pointer to a suggestion that distributed but coordinated political protests could be more effective than converging for a rally. It's not completely thought through yet, but an intriguing idea. In any case, it would be foolish to write this phenomenon off as purely whimsical.

This is a really exciting time we live in, if new social configurations get you excited. I view smart mobs (and flash mobs in particular) as just one more form of experimentation with new forms of social organizing that have been developing over the past decade or two. Recommender systems for movies and messages, eBay's reputation system, and geocaching all fit the bill as well. The common theme is that they involve action and interaction among strangers. Friendships sometimes develop out of the activity but they are not pre-requisite. People are developing trust and coordinating activity in large networks, without becoming friends or even acquaintances.

We might think of established friendships and institutions as a "just-in-case" form of social organization. Relationships are in place so that action can be taken as the need arises. We might think of these new ways of interacting in large networks as "just-in-time" social organization. Some relationships are in place in advance of activity, but they may be few and weak.

If neither friendship nor institutional membership is a pre-requisite for coordinated activity among strangers, what are the pre-requisites? I've coined the phrase "impersonal social capital" to refer to whatever those enablers are. It's some combination of networks of acquaintances, generalized trust, assurance through reputation or other accountability mechanisms, and a big dose of technology to reduce communication and coordination costs. I took a first stab at trying to sort this stuff out in a paper titled "Beyond Bowling Together: SocioTechnical Capital" a few years ago. It may be time to take a closer look again, now that there are more examples to draw from.

What should the ALA do after losing the CIPA case in the Supreme Court? [14 Aug 2003|10:10am]
The ALA issued a press release on July 25, outlining its response to the Supreme Court ruling upholding the CIPA requirement that public libraries install Internet filters.

Helping libraries make filtering software work as well as possible for their needs is in the statement as one of the things for ALA to do:
Gathering and making available information and research on the impact of CIPA and filtering on libraries and library users, including information and research on filtering software and evaluative information for libraries selecting and using filtering software,

But it's buried amid all of the other activities of documenting why CIPA is so bad, to enable future advocacy against CIPA. I could see ALA putting some effort into continuing the fight, but there really are two agendas at odds here-- making filtering work as well as possible, and documenting that it doesn't work well. To my taste, the agenda is tipped way too far toward the latter.

When Skip Auld, an ALA Council member, sent a message to the ALA Council listserv that suggested a focus on informing and serving libraries as they make filter choices, he cited me and the study of commercial filter error rates we conducted last year for the Kaiser Family Foundation, and suggested that I be invited to a meeting that ALA is convening to figure out its own plan of action. Someone found that so offensive that she publicly insulted me, calling me "addle-brained". I guess it could be worse. Larry Lessig once called a project I was working on "The Devil" (though he was careful not to call me the devil, and we subsequently ended up writing a paper together highlighting exactly where we agreed and disagreed.)

Lawn Signs vs. Meetup as Local Social Capital Builders [14 Jul 2003|12:54pm]
I had some interesting correspondence with Bob Putnam, Lew Feldstein, and Tom Sander about Internet sites as collectors of attention in order to make local matches. I've continued to correspond with them about ICTs and social capital since participating in the Saguaro Seminar on Civic Engagement a few years back.

I claimed, following the logic of my last two posts here, that in some cases the Internet is working better as an attention aggregator for matching than any physical mechanism could. I got some push back that if the goal is to stimulate social connections between geographically proximate actors, even the success stories on the Internet, like meetup.com and craigslist, are not anywhere near as good as the traditional mechanisms like political yard signs (and presumably block parties, though they didn't mention that). I think that may depend on how physically proximate one requires the connections to be.

Below, I try to separate out the push-pull dimension from the targeting dimension. I think the targeting of messages to their intended audiences is the key thing that the Internet can potentially do better than any off-line communication effort. And if we think there are some positive social capital externalities from social connections with "neighbors" who live a mile or two away, as well as those who live down the block, then some targeting will be valuable, and the continued development of matching services on-line is a hopeful sign for social capital in America.

Dimension 1: Push vs. pull. In "push" delivery, the sender exercises some control over the content or timing of message delivery; with "pull" the receiver exercises more control. This is really a continuum rather than a dichotomous variable, despite the usual punditry. You see a lawn sign or a bumper sticker or a pop-up Internet ad as you pass by, even if you were not looking for messages on the topic. That's push.

While push may be attractive to message senders, there are both privacy and information overload issues with it. The privacy problem occurs when the recipient is annoyed by the message's contents or its timing. Information overload occurs when there are a lot of messages pushed. As more messages vie for attention, the success of any individual message in attracting attention declines. Push can be effective only if there aren't very many different messages being pushed (there can be many repeated messages so long as there are only a few kinds, since information is conveyed visually by the sheer quantity of repeated messages, as when green political signs outnumber blue ones on a block).

Dimension 2: Targeting. There are two aspects of this (related to the ideas of type I and type II errors in binary classification). What percentage of the people who are exposed to the message are interested in it? What percentage of the people who would be interested in the message are exposed to it? In addition to the obvious communications efficiency advantages of more targeted messages, there are potential privacy advantages to the sender. Someone may be far more willing to reveal their interest in knitting (or radical libertarianism) only to people who also reveal that interest. Consider the staunchly pro-war neighborhood where the few opponents might like to find each other without exposing themselves to the ridicule (or worse) of most of their neighbors.
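
In binary-classification terms, those two percentages are precision and recall. A trivial sketch, with invented numbers:

    def targeting_quality(exposed, interested):
        # precision: share of those exposed who were actually interested
        # recall:    share of the interested who were actually exposed
        exposed, interested = set(exposed), set(interested)
        hit = exposed & interested
        precision = len(hit) / len(exposed) if exposed else 0.0
        recall = len(hit) / len(interested) if interested else 0.0
        return precision, recall

    # Hypothetical lawn sign: all 40 people on the block see it; 4 people in
    # town care about the message, 3 of whom happen to live on the block.
    print(targeting_quality(exposed=range(40), interested=[2, 17, 31, 99]))
    # (0.075, 0.75): low precision, decent recall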

Note that targeting and push are not necessarily mutually exclusive, even in the off-line world. Consider abortion protests outside of clinics, where the intended audience of the message are congregated. Still, it is often difficult to target a specific audience effectively with billboards, bumper stickers, and other geographically constrained message channels.

Usually, to get better targeting, pull modes work better. Putting a for-sale sign in your car window can't hurt, but to increase the percentage of car shoppers who might see your message, it's wise to place a classified ad in the paper. To reach an even larger percentage of potential buyers, you can try eBay Motors, but if you have a cheaper car that's not worth shipping to distant buyers, then you'd only want to reach local buyers and listing on eBay would have poor precision in message targeting (a small percentage of the people seeing the listing would be interested in your car). Similarly, lawn signs may be good for alerting everyone that you support Howard Dean for President, but they're not a very good way to find the small percentage of people in your town who would want to attend a fundraiser or help you leaflet at the state fair.

Clearly, there are more opportunities for targeting in communications that are brokered through the Internet. Meetup.com and that kind of Internet-based geo-matching may be more promising for finding the people who want to leaflet at the state fair. Conversely, there are fewer opportunities for effective push on-line (spam and pop-up ads notwithstanding), in part because people have fewer spare attention cycles while on-line. While in a traffic jam, people are often happy to be amused by a bumper sticker on the car in front. Not while reading email or surfing the web.

More on Global Organizing of Local Activity [09 Jul 2003|11:46am]
Following up on yesterday's thoughts, what are the success criteria for attempts to use the Internet to connect people who live near each other and are interested in some topic?

First, there needs to be a critical mass of interest before people will want to sign up. There's no value added if you go to a site and find you're the only one near you interested in the topic. So there are strong network effects: the more people are interested, the more likely you are to have a local match. Meetup.com attempts to be a central clearinghouse on a range of topics. But for most topics they seem to have at most a couple thousand people signed up worldwide. That means if you're not in a major city you're probably not going to find anyone near you. One technique that could help in this light is to have variable geographic boundaries. For some topics, and in some rural areas, people might be glad to be matched up with people even a few hundred miles away. As more people express an interest in some topic, you can be matched with people closer and closer to you (zip codes allow for pretty good distance calculations). Perhaps meetup.com will move in that direction. The Dean campaign is making use of meetup.com, but also has its own matching system, where they let the user set a maximum distance from their zipcode for events they'd be interested in. The system could probably suggest the distance threshold automatically, so that users would always get some search results.
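
A minimal sketch of that automatic-threshold idea (all names and numbers here are invented, and it assumes we have latitude/longitude centroids for people's zip codes): keep doubling the radius until there are enough matches to show.

    from math import radians, sin, cos, asin, sqrt

    def miles_between(a, b):
        # Great-circle (haversine) distance between two (lat, lon) points, in miles.
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 3956 * asin(sqrt(h))

    def matches_with_auto_radius(me, others, want=5, start=5.0, cap=300.0):
        # Widen the radius until at least `want` people match, or give up at the cap.
        radius = start
        while radius <= cap:
            nearby = [p for p in others if miles_between(me, p) <= radius]
            if len(nearby) >= want:
                return radius, nearby
            radius *= 2
        return cap, [p for p in others if miles_between(me, p) <= cap]

A rural user would end up with a wide radius automatically, while someone in a major city would see only nearby matches.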

Second, because of the need for critical mass, there has to be a focal point; most people need to guess the same web site to go to. For some topics, such as a presidential campaign, there is an official site that is the natural focal point. For others, it's not so clear. There may be strong network effects here, yielding a winner-take-all market. If meetup or someone else (YahooGroups if it wanted to get in the game?) gains enough attention, it could become the place that everyone would think to go on any topic. Once that happens, it would be hard for anyone else to get in the game, much as it's hard for any auction site to compete with eBay at this point.
