First, welcome to the Review, and thank you for a very thoughtful contribution as your first post.
QUOTE(Anne Sexton @ Wed 30th November 2011, 10:23am)
... the included assumption that status as measured by adminship is the only kind of status based reward available. It's easily measureable, true, but mighten't there be off-wiki status based rewards available to content contributors, if only internalized ones?
There are, no doubt, extrinsic as well as intrinsic rewards associated with Wikipedia editing. Perhaps the most obvious is the satisfaction of influencing others on a topic, especially a controversial one. There are also, to be sure, more positive or socially acceptable extrinsic rewards. But I doubt that even a significant minority of editors, even within a single topic area, can agree on what those rewards are. When they do agree, you get well-functioning "WikiProject" teams, of which there are a few.
QUOTE(Anne Sexton @ Wed 30th November 2011, 10:23am)
The most interesting thing to me ... is the discussion of the metric for deciding who's a content contributor. This:
http://arxiv.org/abs/1002.0561 ... proposes a metric for measuring quality of contributed content
Specifically, this para:
QUOTE
The quality of a contribution is measured in terms of Wnew, the number of new words added by a user to Wikipedia articles, such that the words were not present in any previous revisions of those articles. We found a high correlation between the number of new words that survive 5 revisions, and the number Wsurv that survive to the last revision of the article (> 0.97), consistent with previous analyses of edit persistence. We therefore constructed a simple metric by taking the proportion of new words introduced by the user that are retained in the last version of a sufficiently frequently edited article: Wsurv/Wnew.
I am pretty dubious about this metric. The other metrics in the article, such as "Best Answer" selections by peers, all seem better. Text surviving 5 revisions may indicate edit-warring, article ownership, or simply a preference for non-controversial articles. Frequent serial revisers (those who make 10 or 30 or 50 small revisions in a row) would score as "high quality" by this measure. And the paper makes no attempt to test the metric against a qualitative or reader-rated score of article quality. That is where I think the analysis falls down.
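To make the objection concrete, here is a minimal sketch of the Wsurv/Wnew calculation as I read the quoted paragraph. The paper does not spell out its tokenization or attribution rules, so this treats each revision as a bag of whitespace-separated words and credits a word to whichever editor's revision first contains it; all of that is my assumption, not the authors' code.

```python
def survival_ratio(revisions, user):
    """Wsurv/Wnew for one user, per the quoted definition (assumed semantics).

    revisions: chronological list of (author, full_article_text) pairs.
    A word is 'new' for an author if it appeared in none of the
    earlier revisions of the article.
    """
    seen = set()          # words present in any revision so far
    new_by_user = set()   # this user's Wnew candidates
    for author, text in revisions:
        words = set(text.split())
        if author == user:
            new_by_user |= (words - seen)   # first-ever appearances
        seen |= words
    if not new_by_user:
        return 0.0
    last_words = set(revisions[-1][1].split())
    return len(new_by_user & last_words) / len(new_by_user)

revs = [
    ("alice", "apples are red fruit"),
    ("bob",   "apples are red"),       # bob trims alice's word "fruit"
]
print(survival_ratio(revs, "alice"))   # 3 of alice's 4 new words survive: 0.75
```

Note what the sketch exposes: the metric only compares word sets against the last revision, so an owner who reverts everyone else, or a serial reviser whose small edits always end up "last", scores perfectly regardless of whether the prose is any good.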
Also, the measure of quality for an "encyclopedia" article is (or should be) substantially different from that for a self-help "how to fix your PC" online forum or the equivalent.
QUOTE(Anne Sexton @ Wed 30th November 2011, 10:23am)
I'm just thinking that maybe this kind of thing would let you ignore non-article space edits in your calculations, and decide who the content contributors are by ranking them according to amount of quality material added?
A measure of "contribution to article stability" might very well be interesting, but I don't think it is a proxy for "quality".