Lessons Learned (Again)
How much time should pass before you can say, with reasonable assurance, that the media has utterly failed to follow up on a story you thought was important?
It’s been over three weeks since the story of Ron Livingston’s lawsuit against “John Doe” - for using Wikipedia, Facebook, and other websites to spread a gay-rumor hoax - was plastered all over the internet. Maybe that isn’t enough time, but so far at least, the not-so-anonymous John Doe spreading the false rumor has gotten away with it completely. Livingston will probably drop the lawsuit, since the culprit presumably has no money, lives in a different country, and (having been caught) isn’t likely to continue his antics for the foreseeable future. And, by extension, Wikipedia will have gotten away with it too, despite having facilitated the whole thing for almost two years.
It’s fair to say that my own feelings in this regard constitute sour grapes. After all, our intention in researching this situation and in identifying Mark Binmore as the culprit (though WR member Tarantino had the scoop on that, not me) was to point out a serious weakness in Wikipedia’s BLP policy, and by extension, to shame Wikipedia into finally doing something substantive about the overall problem. Every little bit helps, but it looks like anyone who thought this case might be the straw that finally broke the camel’s back, prompting Wikipedia to implement preventative features against online defamation, was mistaken.
So what did we learn?
The Tiger Effect vs. The Streisand Effect
Perhaps the most obvious thing is that Tiger Woods is an effective shield against the Streisand Effect. If the Livingston lawsuit had been filed two months earlier, or maybe even today, I suspect media follow-up would have been more forthcoming. As it happened, all attention was focused on Woods, easily one of the ten most recognizable celebrities in the world. Sure, a few other stories were reported during this period of time that didn’t involve Woods, but investigative reporting requires time and effort, as opposed to simply reposting material off a newswire or RSS feed. Times are tough, and the media goes where the money is.
For Livingston, this may have been a good thing, because it effectively silenced the media with regard to his situation, but not so much that the story was overlooked by people (i.e., us) willing to check it out and find the perpetrator. And, to be fair to the media, the Woods story was far more salacious and interesting, there were lots of attractive women involved, and (not coincidentally) it didn’t bear the inherent problem of potentially offending gay people. So while I wouldn’t go so far as to say that B-list celebrities never get follow-up reporting done on their behalf, the fact remains that A-list celebrity scandals trump B-list celebrity scandals. B-list celebrities can probably do just about anything they want to (within reason) during the coverage cycle of an A-list scandal, with near-impunity. The gossip machine might whirr and sputter and pop out a few indexable web pages, but ultimately nobody will really notice.
Maybe we knew that already, though.
Google Hates Celebrities Too (i.e., not just you and me)
Another thing we learned (again) is that Google’s extract and ranking algorithms are far, far stupider and crueller than most people probably believe. We already knew, of course, that Google gives preferential rankings to Wikipedia, news sites, and several blogs that claim to be news sites - but most people probably assume that Google rankings haven’t rewarded obvious text-string repetition for years. (Which is really just another way of saying that most people probably assume that Google rankings haven’t rewarded obvious text-string repetition for years. Oh, and did I mention that Google rankings haven’t… ehh, never mind.)
In fact, SEO dogma seems to be that repeating the same phrase multiple times on the same web page will actually reduce the page’s Google ranking, and that Google won’t further reward this by using the repeating text in the short textual extracts that appear beneath each search result. Well, from where I’m sitting at least, the SEO dogma is dead wrong.
What Google’s algorithm apparently does do, and this is the scary part, is cross-reference each site with every other site that contains the same search terms, giving preferential weight to Wikipedia - and if your site says something different from the others, its PageRank is reduced. It’s probably safe to assume that this is meant to help suppress minority “fringe” opinions that, in the estimation of Google programmers, are likely to be the ravings of diseased minds. In fact, mostly what it does is reward scrapers and re-posters, penalize anyone who produces original content, and help reduce opinion-diversity on the internet. Admittedly, this is conjecture, since we can’t see the code they’re running - so how do we arrive at this conclusion?
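To make the conjecture concrete: if a ranking system cross-referenced pages by what they assert and demoted the dissenters, scrapers echoing the majority text would beat original reporting that contradicts it. This is purely a toy model of that hypothesis, not Google’s actual code; the site names, fields, and scoring are all invented for illustration.

```python
from collections import Counter

def rerank_by_consensus(pages):
    """Toy model of the conjectured cross-referencing: each page
    asserts a claim string; pages echoing the majority claim sort
    ahead of dissenters regardless of their base rank. The effect
    rewards scrapers that copy the (possibly false) prevailing text."""
    claims = Counter(p["claim"] for p in pages)
    majority, _ = claims.most_common(1)[0]
    ranked = sorted(
        pages,
        # Agreement with the majority outweighs the page's own rank.
        key=lambda p: (p["claim"] == majority, p["base_rank"]),
        reverse=True,
    )
    return [p["site"] for p in ranked]

# Hypothetical example: two scrapers repeat the hoax text, one
# higher-ranked original page contradicts it -- and sorts last.
pages = [
    {"site": "wikipedia-scrape-1", "claim": "rumor text", "base_rank": 5},
    {"site": "wikipedia-scrape-2", "claim": "rumor text", "base_rank": 4},
    {"site": "original-reporting", "claim": "debunking text", "base_rank": 9},
]
print(rerank_by_consensus(pages))
# → ['wikipedia-scrape-1', 'wikipedia-scrape-2', 'original-reporting']
```

Under this (assumed) scoring, the lone debunking page loses to the duplicated rumor even with the highest base rank, which is exactly the "reward scrapers, penalize original content" outcome described above.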
In what was, at that point, a 4-page discussion of the lawsuit here on Wikipedia Review, Daniel Brandt made this post on Page 2. In it, he quoted the same text string roughly 20 times (which I won’t quote, for fear of Google using that as the extract for this blog entry too), to indicate the enormous amount of scraping done from the Wikipedia article versions that contained the hoax. Not only did Google choose that specific page out of the 4-page thread for its extract, it used the repeated text as the extract itself (as pictured)! It was still using it when the thread reached 14 pages, which is when we swapped out the text shown to searchbots to lessen the damage. There’s no question that on Dec. 6, the repeated text - taken directly from Wikipedia and its scrapers - would have been seen by Google’s algorithms as the prevailing opinion on the subject at the time. To make matters worse, many of these were one-time scrapes, so they’re not likely to be updated any time soon.
As if that wasn’t enough, Google Images was even worse. The WR blog entry contained three photographs: one of a fat man in “tighty whiteys” captioned “Lee Dennison,” another of Binmore captioned “Mark Binmore,” and a publicity photo (taken from a news/gossip site) of Livingston and his wife captioned “Livingston with his wife, Rosemarie Dewitt.” Logically, you’d think that a Google Images search on the phrase “Lee Dennison” would display the tighty-whiteys image from the WR page, which (again) is captioned “Lee Dennison.” But you’d be wrong. Instead, it displayed the Livingston/Dewitt photo, and beneath that Google displayed the words “Lee Dennison Story!” - thus compounding the defamatory effect the hoaxster was trying to achieve! So, naturally, we swapped the photo with another fat-man shot, but well over a week later the same photo of Livingston and Dewitt was still there! Wonderful!
As of the time of this writing, the Livingston-Dewitt thumbnail has finally been replaced on Google Images - by the Wikipedia Review logo. So it’s better than it was, but seriously - they’re just laughing in our faces at this point, aren’t they?
[Editor’s Note: Oops, posted too soon - one day after posting this entry, the correct photo started to appear on Google Images.]
The New Antihomosexualism
I’ll get in lots of trouble for this, comment-wise, but another thing I believe we’ve learned from this incident is that homophobia is alive and well on the internet. Except that these days people aren’t afraid of what gay people represent, they’re afraid of the gay people themselves. After all, gay folks have computers and internet connections too - and they’re justifiably less-than-thrilled about the way they’ve been treated by society over the past, let’s say, 1,600 years or so. The internet gives them a way to fight back against bigotry and oppression from the comfort and safety of their own homes, apartments, and Venice Beach condos, in ways they could only dream about before. And who can blame them?
It would probably be unfair to draw any general conclusions about the current state of the Gay Rights movement from individual reactions to the Livingston lawsuit story, but suffice it to say that a disturbing number of self-identified gay commenters showed practically no concern whatsoever for Livingston, or for the ease with which his reputation, career, and recent marriage were threatened by an anonymous Wikipedia/Facebook goon on the other side of the world. Instead, they focused solely on the point that it should not be considered offensive, much less libelous or “actionable,” to spread false rumors suggesting that a male celebrity is gay when he actually isn’t. Their point was, and is, well-taken. But in making this point, many of them labeled Livingston a “homophobe,” a “tool,” and even a “whiner” - and perhaps predictably, some even suggested that his failure to simply ignore the nearly two years of incessant creepiness and cyberstalking meant that there must actually be some truth to the rumors. (And no doubt Google will pick up that phrase as the extract for this blog entry.)
It seems to me, at least, that the result of this kind of narrow perspective, if widely adopted, will not be true social equality or increased acceptance for gay people by straight people. It might help slightly in the long term, as increasingly marginalized bigots and moralist reactionaries slowly die off. But in the meantime, the comments I read suggested a form of reactionism in themselves, and as such these attitudes are more likely to result in simmering resentment among straights - who might increasingly turn to their own kinds of dirty-pool tactics, both online and off, to fight back. The resulting cycle of violence (or violins?) may not be of a physical nature, but it could still be damaging in many other ways.
The question of whether these tactics will be more sophisticated than inserting “ERIC IS NOT A FAG” into random Wikipedia articles remains to be answered.
Man, these grapes taste terrible!
…It is clear now, if it wasn’t before, that net advertising cannot support thorough investigation, reporting, and fact-checking… In-depth reporting is both irreplaceable and expensive, and it must be funded or it will perish.
Jack doesn’t know the half of it. Net advertising can’t support anything - it can’t even support itself! Why should businesses advertise on news sites at all, when most people are going to find the products and services they want just by typing a few words and phrases into Google, and going straight to the online retailers and providers who sell them?
The future doesn’t look so good, frankly. Once professional investigative reporting is gone, what happens? Will it even matter that people in the USA and other democracies have “freedom of the press,” if the press no longer exists to exercise that freedom?