The Wikipedia Review: A forum for discussion and criticism of Wikipedia
Wikipedia Review Op-Ed Pages



> Huggle is Evil
Neil
Post #1


Awesome member

Group: Regulars
Posts: 302
Joined:
From: UK
Member No.: 4,822



Anon user adds a note to the Jonesboro, Arkansas article saying the schools are becoming magnet schools.

An eager Huggle user reverts and slaps a warning template on the anon's talk page.

Anon user, understandably irate, drops a message on the Huggle user's talk page saying "if your telling me that I'm a bad writer, then i understand, i could have written that better but if your telling me I'm wrong than **** you!"

Huggle user reverts the IP and reports them for harassment and personal attacks to AIV, where the IP gets blocked.

Fortunately, in this case, I'd already spotted the AIV report, checked the diff, spent ten seconds typing "Jonesboro Arkansas Magnet Schools" into Google, and restored/cleaned up/referenced the edit. And got the IP unblocked.

When I discussed this piece of incompetency with the Huggle user, asking how adding an uncited but correct piece of information was vandalism, he responded with "There was no citation".

Given this staggering lack of understanding, I blank and protect his huggle.css page, and inform him. His response: "I OBVIOUSLY do know what vandalism is. Look at the quantity of vandalism I have reverted."

I bite my tongue.

I dread to think how many good-faith edits are reverted by the Huggle and Twinkle kids, driving people who are new to Wikipedia away from it. I'm sure this sort of thing is not new to many or most of you, but I needed to vent.

Oh, and now the Huggle user has retired, no doubt to start a new account to carry on playing.

This post has been edited by Neil:
 
Replies
Shalom
Post #2


Über Member

Group: Regulars
Posts: 880
Joined:
Member No.: 5,566



Someday I'd like to write a userspace essay titled "Why I don't use performance-enhancing tools." I've made about 25,000 edits on English Wikipedia, not counting alternate accounts, IP addresses, deleted edits, edits on other projects, etc. Every single edit was a real edit, except for rollback and page moves. I do not use automated tools, and I never have, and I probably never will (though I am free to change my mind at any time).

Even as a human vandalism patroller I am not immune to mistakes. On my RFA as Shalom I admitted that I made a mistake reverting a series of anon edits to the biography of Josh Hancock saying that he had died in a car accident without giving a source. I did take a few seconds to check Google (not Google News, just plain Google) to see if Josh Hancock had died, but I didn't find anything. It turns out this anon editor was a little ahead of the curve, and I found out he was right from two messages by established users on my talk page. I left an apology on the anon's talk page, but I don't think he read it. It's too bad.

I understand from reading old posts on this site that similar shenanigans have occurred with other biographies. The problem is that, from my experience as a patroller, and recalling that (to paraphrase Mark Twain) reports of Sinbad's death have been greatly exaggerated, in most cases where an anon edits a page to say somebody just died, the anon is lying and the person is alive and well. I assumed bad faith because, without a source to back up these edits, the default assumption was that Josh Hancock was still alive until I was certain that he was dead. That being said, the level 3 warning ("Please stop." etc.) I left for the user was a bit excessive, and I'm sorry I didn't use a level 2 warning instead. But that's a minor detail.

Gurch also reverted another user who re-added the same material, until the edits finally stuck after a third try by someone else. Such is life. I wish there were a smoother way for anons who don't know about sourcing issues to tell us that people have died, but there isn't. It's one of the flaws in the system that can't be resolved without semi-protecting all 250,000+ BLPs, and I'm not in favor of doing that.

Returning to the issue at hand, not using automated tools is not a foolproof bulwark against human error. But what editing by hand does for me is to make sure that every single one of my edits is a fully conscious decision, and that I am not trusting a piece of computer code to make any edits on my behalf. (Templates are another matter.) I made a mistake on the Josh Hancock article, but it was an honest mistake, made in the genuine belief, based on the information I had at the time, that my action was correct. In retrospect, it was not correct, but at least I can look back and know that I tried to make the right decision with full consideration of all available factors, and without relying on automated tools to make a decision on my behalf.

Just a couple of days ago I noticed that Addbot, operated by Addshore, had added the {{uncat}} template to a hundred "year in baseball" articles. I left him a note asking him to fix it, and to his credit, he responded quickly, undid all the edits, and diagnosed the problem. (The "category" link was based on a template and was not actually in the wiki-code for those pages, so the bot did not see it and thought there was no category.) Next time Addshore applies for RFA, I'll support him. (I opposed his first try.) So for something like that, it's okay to find a problem after the fact and correct it. For Neil's case, that approach doesn't work. An editor got blocked for no good reason, and we probably lost one more potential contributor, at least for a long while. We really have a way of shooting ourselves in the foot by welcoming users with one hand and chasing them away with the other. That's another discussion for another thread.
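
To illustrate the failure mode, here is a minimal, hypothetical sketch in Python (not Addbot's actual code; the function and the {{Year in baseball|1927}} template call are invented for the example). A bot that only scans the raw wiki-code for a literal category link never sees a category that a transcluded template adds at render time:

CODE
# Hypothetical illustration of the false positive, not Addbot's actual code.
import re

def has_explicit_category(wikitext):
    # True only if a [[Category:...]] link appears literally in the page source.
    return re.search(r"\[\[\s*Category\s*:", wikitext, re.IGNORECASE) is not None

# A "year in baseball" style page whose category is supplied by a template
# (the template call is invented for this example), not by a literal link.
page_source = "{{Year in baseball|1927}}\n'''1927 in baseball''' saw ..."

if not has_explicit_category(page_source):
    # A bot that inspects only the raw wiki-code takes this branch and tags
    # the page with {{uncat}}, even though the template adds a category
    # when the page is rendered.
    print("would tag {{uncat}} -- false positive")

Checking the rendered page's category list instead (for example via the MediaWiki API's prop=categories) would avoid this kind of false positive, at the cost of an extra query per page.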

I keep getting sidetracked. My point is that I would not feel terrible if automated anti-vandalism tools were simply shut down altogether. I'm not saying this in an effort to let the site be swamped with vandalism - God forbid. What I'm saying is, if you can't review edits by hand and make an informed decision about those edits, then you should not be using a bot to make an uninformed decision about those edits. I have no issue with ClueBot and its clones: they do a good job. But letting users undo edits without actually thinking carefully about those edits is not a good idea, and I have serious questions about whether these tools are really a net benefit to the project.

You may ask what might happen if these tools were disabled. Aside from the hurt feelings that Gurch and AmiDaniel and others would sustain, and I don't take that lightly, would the project be worse off? I think what would happen is people would start to realize that the vandalism problem really is becoming unmanageable. We're putting fingers in the dike by letting people use automated tools to do things they can only do half as fast by hand, but the flood of vandalism is starting to weaken the dike. I think Wikipedia as such is sustainable, in the sense that people will always be willing to contribute content and funds. I think the anti-vandalism model is not sustainable. As Requests for adminship becomes more difficult to pass, kids will start to lose their primary motivation for doing anti-vandalism work, and adults like me already understand that Wikipedia needs our content much more than it needs our mindless reverting of other people's junk. So in the end, it simply won't get done. We'll get more complaints from people wondering why a vandalism item remained on a page for three months, and then maybe we'll do something about it.

I think semi-protection should be used more liberally than it is currently. Semi-protection requests get refused because "there is not enough vandalism to justify it at this time" (I'm paraphrasing). How much vandalism is "enough"? This doubly applies to high-profile articles where if nobody vandalizes today, someone will vandalize tomorrow, or a week from now, or a month from now. I make no distinction between BLPs and other articles, except to say that BLP articles that are targets of vandalism should be semi-protected until the subject dies. For example, Joe Lieberman's biography is permanently semi-protected, as well it should be. Non-BLP articles can be given a little more latitude, but for an article about Kazakhstan, there's no reason to think that people will ever stop making jokes, and that's been semi-protected and move-protected, but it took a while to get there. (HAGGER?) There's no reason we can't extend this logic, which is working in practice, to any targets of vandalism. We don't choose which pages to semi-protect; the vandals do. And if the vandals want us to semi-protect every page in all of creation, then they can vandalize all five thousand random pages about numbered asteroids and we'll semi-protect those too.

I really have no patience for this blind tolerance for wasting valuable contributors' time. Once we get rid of the automated tools, people will start to see the real problem and will eventually do something about it. Flagged revisions is definitely a step in the right direction.

To make myself clearer: we tolerate admin-bots and anti-vandal-bots because humans can't do the jobs themselves. Rather than just letting anyone use these powerful bots, we should ask ourselves why we can't manage our problems by hand. Is there a way we can reduce the workload instead of automating the response? I think reducing the workload is the more sustainable solution in the long run, and it might help forestall the sort of misunderstanding that may arise from using automated anti-vandalism tools.

If Jimbo Wales really cares about letting anyone edit any page on Wikipedia, he can stop being a celebrity and start spending a few hours patrolling recent-changes or new-pages like everyone else does. Then he might understand why his project is getting out of control.
Jon Awbrey
Post #3


τὰ δέ μοι παθήματα μαθήματα γέγονε

Group: Moderators
Posts: 6,783
Joined:
From: Meat Puppet Nation
Member No.: 5,619



tl;dr

Jon

QUOTE(Shalom @ Fri 6th June 2008, 2:10pm) *

[Shalom's post #2, quoted in full]



This post has been edited by Jon Awbrey:
Jon Awbrey
Post #4


τὰ δέ μοι παθήματα μαθήματα γέγονε

Group: Moderators
Posts: 6,783
Joined:
From: Meat Puppet Nation
Member No.: 5,619



Damn! The old "TL;DR" article at ED used to be one of the funniest things on the Internet, the only page in all of ED that I routinely linked. And now some witless drudge has gone and deleted it, and even Xpunged the long version from the history.

Bleeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee!

Jon

QUOTE(Jon Awbrey @ Fri 6th June 2008, 2:12pm) *

tl;dr

Jon

QUOTE(Shalom @ Fri 6th June 2008, 2:10pm) *

[Shalom's post #2, quoted in full]




This post has been edited by Jon Awbrey:
Jon Awbrey
Post #5


τὰ δέ μοι παθήματα μαθήματα γέγονε

Group: Moderators
Posts: 6,783
Joined:
From: Meat Puppet Nation
Member No.: 5,619



What's Top-Posting?

And do they have a Bot for that?

Jon

QUOTE(dtobias @ Fri 6th June 2008, 2:47pm) *

QUOTE(Jon Awbrey @ Fri 6th June 2008, 2:38pm) *

Damn! The old "TL;DR" article at ED used to be one of the funniest things on the Internet, the only page in all of ED that I routinely linked. And now some witless drudge has gone and deleted it, and even Xpunged the long version from the history.


But do you really need to go quoting back, in full, the posting that you're saying is too long (twice already)? Top-posting / fullquoting is enough of a problem in e-mail lists … do you need to do it on forums too?


Hey! Maybe I can submit the dig infra to ED as a replacement for their old "TL;DR" article?

Jon

QUOTE(Jon Awbrey @ Fri 6th June 2008, 2:38pm) *

[Jon Awbrey's post #4, quoted in full, together with its nested quotes of post #3 and Shalom's post #2]





This post has been edited by Jon Awbrey:

Posts in this topic
Neil   Huggle is Evil  
guy   No doubt we need a National Huggle Association to ...  
Lar   No doubt we need a National Huggle Association to...  
No one of consequence   Holy crap, he had 18000 edits in just 2 months. A...  
Kato   Can you elaborate as to what Huggle is?  
Jon Awbrey   Can you elaborate as to what Huggle is? It...  
GlassBeadGame   Can you elaborate as to what Huggle is? Thank ...  
dtobias   Damn! The old "TL;DR" article at E...  
GlassBeadGame   Damn! The old "TL;DR" article at ...  
guy   tl;dr Make some time and read it. It's much...  
ByAppointmentTo   [url=http://www.encyclopediadramatica.com/TL;DR]t...  
Milton Roe   I think semi-protection should be used more liber...  
GlassBeadGame   No, it's more a matter of "Why do we n...  
thekohser   To make myself clearer: we tolerate admin-bots an...  
Somey   You may ask what might happen if these tools were ...  
maggot3   Huggle is basically an automated recent changes vi...  
GlassBeadGame   Huggle is basically an automated recent changes v...  
Shalom   Huggle is basically an automated recent changes v...  
michael   I occasionally use tools, although bots have usurp...  
dogbiscuit   Just how many kiddies are sitting there, day after...  

