
_ General Discussion _ Huggle is Evil

Posted by: Neil

Anon user adds a note to the http://en.wikipedia.org/wiki/Jonesboro%2C_Arkansas article noting the schools are becoming Magnet Schools.

An eager Huggle user reverts and slaps a warning template on the anon's talk page.

Anon user, understandably irate, drops a message on the Huggle user's talk page saying "if your telling me that I'm a bad writer, then i understand, i could have written that better but if your telling me I'm wrong than **** you!"

Huggle user reverts the IP and reports them for harassment and personal attacks to http://en.wikipedia.org/wiki/WP:AIV, where the IP gets blocked.

Fortunately, in this case, I'd already spotted the AIV report, checked the diff, spent ten seconds typing "Jonesboro Arkansas Magnet Schools" into Google, and restored/cleaned up/referenced the edit. And got the IP unblocked.

When I discussed this piece of incompetence with the Huggle user, asking how adding an uncited but correct piece of information was vandalism, he responded with "There was no citation".

Given this staggering lack of understanding, I blank and protect his Huggle CSS page, and http://en.wikipedia.org/wiki/User_talk:CanadianLinuxUser#Huggle. His response: "I OBVIOUSLY do know what vandalism is. Look at the quantity of vandalism I have reverted."

I bite my tongue.

I dread to think how many good faith edits are reverted by the Huggle and Twinkle kids, which leads to people new to Wikipedia turning away from it. I'm sure this sort of thing is not new to many/most of you, but I needed to vent.

Oh, and now the Huggle user has retired, no doubt to start a new account to carry on playing.

Posted by: guy

No doubt we need a National Huggle Association to match the National Rifle Association. "Huggle doesn't cause problems - people cause problems." tongue.gif

Posted by: No one of consequence

Holy crap, he had 18,000 edits in just two months. After two months I was still trying to figure out reliable sourcing and image policy. Vandalism needs to be reverted, but Wikipedia is not a first-person shooter.

I always used to wonder if Monicasdude had returned under a new name. He was sometimes bitter and acerbic but he was brilliant at rescuing articles from the idiot new page patrollers who slap speedy delete tags on obviously notable subjects just because the article did not spring forth fully formed in a single edit like Athena from the head of Zeus. Seeing stuff like this makes me realize Monicasdude must be gone forever because this sort of thing would have made him explode.

It's a fun hobby and all that but some of the people are real jerks.

(yes, that's stating the obvious)

Posted by: Kato

Can you elaborate as to what Huggle is?

Posted by: Jon Awbrey

QUOTE(Kato @ Fri 6th June 2008, 1:52pm) *

Can you elaborate as to what Huggle is?


It's a brand of disposable diapers.

Jon cool.gif

Posted by: GlassBeadGame

QUOTE(Kato @ Fri 6th June 2008, 11:52am) *

Can you elaborate as to what Huggle is?



Thank you, Kato, for asking that question. Although we might be sorry we wanted to know.

Posted by: Shalom

Someday I'd like to write a userspace essay titled "Why I don't use performance-enhancing tools." I've made about 25,000 edits on English Wikipedia, not counting alternate accounts, IP addresses, deleted edits, edits on other projects, etc. Every single edit was a real edit, except for rollback and page moves. I do not use automated tools, and I never have, and I probably never will (though I am free to change my mind at any time).

Even as a human vandalism patroller I am not immune to mistakes. On my RFA as Shalom I admitted that I made a mistake reverting a series of anon edits to the biography of Josh Hancock saying that he had died in a car accident without giving a source. I did take a few seconds to check Google (not Google News, just plain Google) to see if Josh Hancock had died, but I didn't find anything. It turns out this anon editor was a little ahead of the curve, and I found out he was right from two messages by established users on my talk page. I left an apology on the anon's talk page, but I don't think he read it. It's too bad. I understand from reading old posts on this site that similar shenanigans have occurred with other biographies. The problem is that, from my experience as a patroller, and recalling that (to paraphrase Mark Twain) reports of Sinbad's death have been greatly exaggerated, in most cases where an anon edits a page to say somebody just died, the anon is lying and the person is alive and well. I assumed bad faith because, without a source to back up these edits, the default assumption was that Josh Hancock was still alive until I was certain that he was dead. That being said, the level 3 warning ("Please stop." etc.) I left for the user was a bit excessive, and I'm sorry I didn't use a level 2 warning instead. But that's a minor detail. Gurch also reverted another user who re-added the same material until the edits finally stuck after a third try by someone else. Such is life. I wish there were a smoother way for anons who don't know about sourcing issues to tell us that people have died, but there isn't. It's one of the flaws in the system that can't be resolved without semi-protecting all 250,000+ BLPs, and I'm not in favor of doing that.

Returning to the issue at hand, not using automated tools is not a foolproof bulwark against human error. But what editing by hand does for me is to make sure that every single one of my edits is a fully conscious decision, and I am not trusting a piece of computer code to make any edits on my behalf. (Templates are another matter.) I made a mistake on the Josh Hancock article, but it was an honest mistake, which I made with full awareness, based on the information I had at the time, that my action seemed to be correct. In retrospect, it was not correct, but at least I can look back and know that I tried to make the right decision with full consideration to all available factors, and without relying on automated tools to make a decision on my behalf.

Just a couple of days ago I noticed that Addbot, operated by Addshore, had added the {{uncat}} template to a hundred "year in baseball" articles. I left him a note asking him to fix it, and to his credit, he responded quickly, undid all the edits, and diagnosed the problem. (The "category" link was based on a template and was not actually in the wiki-code for those pages, so the bot did not see it and thought there was no category.) Next time Addshore applies for RFA, I'll support him. (I opposed his first try.) So for something like that, it's okay to find a problem after the fact and correct it. For Neil's case, that approach doesn't work. An editor got blocked for no good reason, and we probably lost one more potential contributor, at least for a long while. We really have a way of shooting ourselves in the foot by welcoming users with one hand and chasing them away with the other. That's another discussion for another thread.
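A minimal sketch of that failure mode: a bot that scans raw wikitext for a literal category link will miss categories supplied by a transcluded template. The helper name and the {{YearInBaseball}} template are hypothetical illustrations, not Addbot's actual code.

```python
import re

def has_category_in_wikitext(wikitext: str) -> bool:
    """Naive check: look for a literal [[Category:...]] link in the raw
    wikitext. A page whose category comes from a transcluded template
    contains no such literal link, so this check wrongly reports it as
    uncategorized and the bot slaps {{uncat}} on it."""
    return re.search(r"\[\[\s*Category\s*:", wikitext, re.IGNORECASE) is not None

# A page that declares its category directly:
direct = "'''1927 in baseball'''\n[[Category:1927 in baseball]]"

# A page whose category is supplied by a template ({{YearInBaseball}} is
# a made-up name standing in for whatever template the articles used):
via_template = "'''1927 in baseball'''\n{{YearInBaseball|1927}}"

print(has_category_in_wikitext(direct))        # True
print(has_category_in_wikitext(via_template))  # False: looks "uncategorized"
```

The robust fix is to ask the parser rather than the raw text; the MediaWiki API's prop=categories query reports template-supplied categories as well.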

I keep getting sidetracked. My point is that I would not feel terrible if automated anti-vandalism tools were simply shut down altogether. I'm not saying this in an effort to let the site be swamped with vandalism - God forbid. What I'm saying is, if you can't review edits by hand and make an informed decision about those edits, then you should not be using a bot to make an uninformed decision about those edits. I have no issue with ClueBot and its clones: they do a good job. But letting users undo edits without actually thinking carefully about those edits is not a good idea, and I have serious questions about whether these tools are really a net benefit to the project.

You may ask what might happen if these tools were disabled. Aside from the hurt feelings that Gurch and AmiDaniel and others would sustain, and I don't take that lightly, would the project be worse off? I think what would happen is people would start to realize that the vandalism problem really is becoming unmanageable. We're putting fingers in the dike by letting people use automated tools to do things they can only do half as fast by hand, but the flood of vandalism is starting to weaken the dike. I think Wikipedia as such is sustainable, in the sense that people will always be willing to contribute content and funds. I think the anti-vandalism model is not sustainable. As Requests for adminship becomes more difficult to pass, kids will start to lose their primary motivation for doing anti-vandalism work, and adults like me already understand that Wikipedia needs our content much more than it needs our mindless reverting of other people's junk. So in the end, it simply won't get done. We'll get more complaints from people wondering why a vandalism item remained on a page for three months, and then maybe we'll do something about it. I think semi-protection should be used more liberally than it is currently. Semi-protection requests get refused because "there is not enough vandalism to justify it at this time" (I'm paraphrasing). How much vandalism is "enough"? This doubly applies to high-profile articles where if nobody vandalizes today, someone will vandalize tomorrow, or a week from now, or a month from now. I make no distinction between BLPs and other articles, except to say that BLP articles that are targets of vandalism should be semi-protected until the subject dies. For example, Joe Lieberman's biography is permanently semi-protected, as well it should be. 
Non-BLP articles can be given a little more latitude, but for an article about Kazakhstan, there's no reason to think that people will ever stop making jokes, and that's been semi-protected and move-protected, but it took a while to get there. (HAGGER?) There's no reason we can't extend this logic, which is working in practice, to any targets of vandalism. We don't choose which pages to semi-protect; the vandals do. And if the vandals want us to semi-protect every page in all of creation, then they can vandalize all five thousand random pages about numbered asteroids and we'll semi-protect those too. I really have no patience for this blind tolerance for wasting valuable contributors' time. Once we get rid of the automated tools, people will start to see the real problem and will eventually do something about it. Flagged revisions is definitely a step in the right direction.

To make myself clearer: we tolerate admin-bots and anti-vandal-bots because humans can't do the jobs themselves. Instead of just letting anyone use these powerful bots, we should ask ourselves why we can't manage our problems by hand. Is there a way we can reduce the workload instead of automating the response? I think reducing the workload is the more sustainable solution in the long run, and it might help forestall the sort of misunderstanding that may arise from using automated anti-vandalism tools.

If Jimbo Wales really cares about letting anyone edit any page on Wikipedia, he can stop being a celebrity and start spending a few hours patrolling recent-changes or new-pages like everyone else does. Then he might understand why his project is getting out of control.

Posted by: Jon Awbrey

http://www.encyclopediadramatica.com/TL;DR

Jon cool.gif

QUOTE(Shalom @ Fri 6th June 2008, 2:10pm) *

Someday I'd like to write a userspace essay titled "Why I don't use performance-enhancing tools." I've made about 25,000 edits on English Wikipedia, not counting alternate accounts, IP addresses, deleted edits, edits on other projects, etc. Every single edit was a real edit, except for rollback and page moves. I do not use automated tools, and I never have, and I probably never will (though I am free to change my mind at any time).

Even as a human vandalism patroller I am not immune to mistakes. On my RFA as Shalom I admitted that I made a mistake reverting a series of anon edits to the biography of Josh Hancock saying that he had died in a car accident without giving a source. I did take a few seconds to check Google (not Google News, just plain Google) to see if Josh Hancock had died, but I didn't find anything. It turns out this anon editor was a little ahead of the curve, and I found out he was right from two messages by established users on my talk page. I left an apology on the anon's talk page, but I don't think he read it. It's too bad. I understand from reading old posts on this site that similar shenanigans have occurred with other biographies. The problem is that, from my experience as a patroller, and recalling that (to paraphrase Mark Twain) reports of Sinbad's death have been greatly exaggerated, in most cases where an anon edits a page to say somebody just died, the anon is lying and the person is alive and well. I assumed bad faith because, without a source to back up these edits, the default assumption was that Josh Hancock was still alive until I was certain that he was dead. That being said, the level 3 warning ("Please stop." etc.) I left for the user was a bit excessive, and I'm sorry I didn't use a level 2 warning instead. But that's a minor detail. Gurch also reverted another user who re-added the same material until the edits finally stuck after a third try by someone else. Such is life. I wish there were a smoother way for anons who don't know about sourcing issues to tell us that people have died, but there isn't. It's one of the flaws in the system that can't be resolved without semi-protecting all 250,000+ BLPs, and I'm not in favor of doing that.

Returning to the issue at hand, not using automated tools is not a foolproof bulwark against human error. But what editing by hand does for me is to make sure that every single one of my edits is a fully conscious decision, and I am not trusting a piece of computer code to make any edits on my behalf. (Templates are another matter.) I made a mistake on the Josh Hancock article, but it was an honest mistake, which I made with full awareness, based on the information I had at the time, that my action seemed to be correct. In retrospect, it was not correct, but at least I can look back and know that I tried to make the right decision with full consideration to all available factors, and without relying on automated tools to make a decision on my behalf.

Just a couple of days ago I noticed that Addbot, operated by Addshore, had added the {{uncat}} template to a hundred "year in baseball" articles. I left him a note asking him to fix it, and to his credit, he responded quickly, undid all the edits, and diagnosed the problem. (The "category" link was based on a template and was not actually in the wiki-code for those pages, so the bot did not see it and thought there was no category.) Next time Addshore applies for RFA, I'll support him. (I opposed his first try.) So for something like that, it's okay to find a problem after the fact and correct it. For Neil's case, that approach doesn't work. An editor got blocked for no good reason, and we probably lost one more potential contributor, at least for a long while. We really have a way of shooting ourselves in the foot by welcoming users with one hand and chasing them away with the other. That's another discussion for another thread.

I keep getting sidetracked. My point is that I would not feel terrible if automated anti-vandalism tools were simply shut down altogether. I'm not saying this in an effort to let the site be swamped with vandalism - God forbid. What I'm saying is, if you can't review edits by hand and make an informed decision about those edits, then you should not be using a bot to make an uninformed decision about those edits. I have no issue with ClueBot and its clones: they do a good job. But letting users undo edits without actually thinking carefully about those edits is not a good idea, and I have serious questions about whether these tools are really a net benefit to the project.

You may ask what might happen if these tools were disabled. Aside from the hurt feelings that Gurch and AmiDaniel and others would sustain, and I don't take that lightly, would the project be worse off? I think what would happen is people would start to realize that the vandalism problem really is becoming unmanageable. We're putting fingers in the dike by letting people use automated tools to do things they can only do half as fast by hand, but the flood of vandalism is starting to weaken the dike. I think Wikipedia as such is sustainable, in the sense that people will always be willing to contribute content and funds. I think the anti-vandalism model is not sustainable. As Requests for adminship becomes more difficult to pass, kids will start to lose their primary motivation for doing anti-vandalism work, and adults like me already understand that Wikipedia needs our content much more than it needs our mindless reverting of other people's junk. So in the end, it simply won't get done. We'll get more complaints from people wondering why a vandalism item remained on a page for three months, and then maybe we'll do something about it. I think semi-protection should be used more liberally than it is currently. Semi-protection requests get refused because "there is not enough vandalism to justify it at this time" (I'm paraphrasing). How much vandalism is "enough"? This doubly applies to high-profile articles where if nobody vandalizes today, someone will vandalize tomorrow, or a week from now, or a month from now. I make no distinction between BLPs and other articles, except to say that BLP articles that are targets of vandalism should be semi-protected until the subject dies. For example, Joe Lieberman's biography is permanently semi-protected, as well it should be. 
Non-BLP articles can be given a little more latitude, but for an article about Kazakhstan, there's no reason to think that people will ever stop making jokes, and that's been semi-protected and move-protected, but it took a while to get there. (HAGGER?) There's no reason we can't extend this logic, which is working in practice, to any targets of vandalism. We don't choose which pages to semi-protect; the vandals do. And if the vandals want us to semi-protect every page in all of creation, then they can vandalize all five thousand random pages about numbered asteroids and we'll semi-protect those too. I really have no patience for this blind tolerance for wasting valuable contributors' time. Once we get rid of the automated tools, people will start to see the real problem and will eventually do something about it. Flagged revisions is definitely a step in the right direction.

To make myself clearer: we tolerate admin-bots and anti-vandal-bots because humans can't do the jobs themselves. Instead of just letting anyone use these powerful bots, we should ask ourselves why we can't manage our problems by hand. Is there a way we can reduce the workload instead of automating the response? I think reducing the workload is the more sustainable solution in the long run, and it might help forestall the sort of misunderstanding that may arise from using automated anti-vandalism tools.

If Jimbo Wales really cares about letting anyone edit any page on Wikipedia, he can stop being a celebrity and start spending a few hours patrolling recent-changes or new-pages like everyone else does. Then he might understand why his project is getting out of control.


Posted by: maggot3

Huggle is basically an automated recent-changes patrol tool. It gives you a queue of all the recent changes, shows the diff of each one, and lets you click a button to revert and warn the person.
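A rough sketch of that loop, assuming the response shape of the real MediaWiki recentchanges query (the page titles, users, and helper names here are invented for illustration):

```python
import json

# Canned response in the shape of a real MediaWiki API call
# (api.php?action=query&list=recentchanges&rcprop=title|ids|user);
# the edits themselves are made up.
CANNED = json.dumps({
    "query": {"recentchanges": [
        {"title": "Jonesboro, Arkansas", "revid": 2, "old_revid": 1, "user": "10.0.0.1"},
        {"title": "Kazakhstan", "revid": 9, "old_revid": 8, "user": "SomeVandal"},
    ]}
})

def build_queue(api_json: str):
    """Turn an API response into the review queue a Huggle-style tool shows."""
    return json.loads(api_json)["query"]["recentchanges"]

def review(change, looks_like_vandalism) -> str:
    """One click in the tool: either revert-and-warn or move on.
    The human (or, dangerously, habit) supplies the judgment call."""
    if looks_like_vandalism(change):
        return f"revert {change['title']} to r{change['old_revid']}; warn {change['user']}"
    return f"keep {change['title']}"

queue = build_queue(CANNED)
actions = [review(c, lambda c: c["user"] == "SomeVandal") for c in queue]
print(actions[0])  # keep Jonesboro, Arkansas
print(actions[1])  # revert Kazakhstan to r8; warn SomeVandal
```

The whole design question in this thread is what goes inside that looks_like_vandalism judgment: a few seconds of actual thought, or a reflexive button press.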

Posted by: GlassBeadGame

QUOTE(maggot3 @ Fri 6th June 2008, 12:15pm) *

Huggle is basically an automated recent-changes patrol tool. It gives you a queue of all the recent changes, shows the diff of each one, and lets you click a button to revert and warn the person.


Thank you, mag. Shalom lost me at "userspace essay."

Posted by: Shalom

QUOTE(maggot3 @ Fri 6th June 2008, 2:15pm) *

Huggle is basically an automated recent-changes patrol tool. It gives you a queue of all the recent changes, shows the diff of each one, and lets you click a button to revert and warn the person.


That's my other concern. Sometimes I revert and don't warn, or I'll choose to start with a level 1, 2, 3, or 4 warning at my discretion, or I'll use one of the more obscure warning templates such as {{uw-joke1}}. Huggle users just warn the way the software tells them to warn. They will give a level 1 warning to the sickest profanity and defacement imaginable. I like reducing the number of warnings so that if a vandal really won't stop, we don't have to spend four warnings giving ourselves permission to block. Two warnings is plenty. Three strikes, you're out.
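The difference between the two warning policies can be sketched as two small functions (the severity labels and function names are hypothetical; the real uw- templates simply come in levels 1 through 4):

```python
def huggle_style_next_level(prior_warnings: int) -> int:
    """Mechanical escalation: always the next level up, regardless of
    how bad the edit actually was."""
    return min(prior_warnings + 1, 4)

def discretionary_level(prior_warnings: int, severity: str) -> int:
    """Start wherever the edit deserves (hypothetical severity labels),
    so blatant vandalism doesn't get four polite notices first."""
    floor = {"test": 1, "joke": 2, "blatant": 3}.get(severity, 1)
    return min(max(prior_warnings + 1, floor), 4)

print(huggle_style_next_level(0))          # 1, even for the worst edit
print(discretionary_level(0, "blatant"))   # 3, skip straight to "Please stop."
```

With the discretionary version, two warnings really can be plenty before a block report is justified.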

Posted by: Jon Awbrey

Damn! The old "TL;DR" article at ED used to be one of the funniest things on the Internet, the only page in all of ED that I routinely linked. And now some witless drudge has gone and deleted it, and even Xpunged the long version from the history.

Bleeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee!

Jon cool.gif

QUOTE(Jon Awbrey @ Fri 6th June 2008, 2:12pm) *

http://www.encyclopediadramatica.com/TL;DR

Jon cool.gif

QUOTE(Shalom @ Fri 6th June 2008, 2:10pm) *

Someday I'd like to write a userspace essay titled "Why I don't use performance-enhancing tools." I've made about 25,000 edits on English Wikipedia, not counting alternate accounts, IP addresses, deleted edits, edits on other projects, etc. Every single edit was a real edit, except for rollback and page moves. I do not use automated tools, and I never have, and I probably never will (though I am free to change my mind at any time).

Even as a human vandalism patroller I am not immune to mistakes. On my RFA as Shalom I admitted that I made a mistake reverting a series of anon edits to the biography of Josh Hancock saying that he had died in a car accident without giving a source. I did take a few seconds to check Google (not Google News, just plain Google) to see if Josh Hancock had died, but I didn't find anything. It turns out this anon editor was a little ahead of the curve, and I found out he was right from two messages by established users on my talk page. I left an apology on the anon's talk page, but I don't think he read it. It's too bad. I understand from reading old posts on this site that similar shenanigans have occurred with other biographies. The problem is that, from my experience as a patroller, and recalling that (to paraphrase Mark Twain) reports of Sinbad's death have been greatly exaggerated, in most cases where an anon edits a page to say somebody just died, the anon is lying and the person is alive and well. I assumed bad faith because, without a source to back up these edits, the default assumption was that Josh Hancock was still alive until I was certain that he was dead. That being said, the level 3 warning ("Please stop." etc.) I left for the user was a bit excessive, and I'm sorry I didn't use a level 2 warning instead. But that's a minor detail. Gurch also reverted another user who re-added the same material until the edits finally stuck after a third try by someone else. Such is life. I wish there were a smoother way for anons who don't know about sourcing issues to tell us that people have died, but there isn't. It's one of the flaws in the system that can't be resolved without semi-protecting all 250,000+ BLPs, and I'm not in favor of doing that.

Returning to the issue at hand, not using automated tools is not a foolproof bulwark against human error. But what editing by hand does for me is to make sure that every single one of my edits is a fully conscious decision, and I am not trusting a piece of computer code to make any edits on my behalf. (Templates are another matter.) I made a mistake on the Josh Hancock article, but it was an honest mistake, which I made with full awareness, based on the information I had at the time, that my action seemed to be correct. In retrospect, it was not correct, but at least I can look back and know that I tried to make the right decision with full consideration to all available factors, and without relying on automated tools to make a decision on my behalf.

Just a couple of days ago I noticed that Addbot, operated by Addshore, had added the {{uncat}} template to a hundred "year in baseball" articles. I left him a note asking him to fix it, and to his credit, he responded quickly, undid all the edits, and diagnosed the problem. (The "category" link was based on a template and was not actually in the wiki-code for those pages, so the bot did not see it and thought there was no category.) Next time Addshore applies for RFA, I'll support him. (I opposed his first try.) So for something like that, it's okay to find a problem after the fact and correct it. For Neil's case, that approach doesn't work. An editor got blocked for no good reason, and we probably lost one more potential contributor, at least for a long while. We really have a way of shooting ourselves in the foot by welcoming users with one hand and chasing them away with the other. That's another discussion for another thread.

I keep getting sidetracked. My point is that I would not feel terrible if automated anti-vandalism tools were simply shut down altogether. I'm not saying this in an effort to let the site be swamped with vandalism - God forbid. What I'm saying is, if you can't review edits by hand and make an informed decision about those edits, then you should not be using a bot to make an uninformed decision about those edits. I have no issue with ClueBot and its clones: they do a good job. But letting users undo edits without actually thinking carefully about those edits is not a good idea, and I have serious questions about whether these tools are really a net benefit to the project.

You may ask what might happen if these tools were disabled. Aside from the hurt feelings that Gurch and AmiDaniel and others would sustain, and I don't take that lightly, would the project be worse off? I think what would happen is people would start to realize that the vandalism problem really is becoming unmanageable. We're putting fingers in the dike by letting people use automated tools to do things they can only do half as fast by hand, but the flood of vandalism is starting to weaken the dike. I think Wikipedia as such is sustainable, in the sense that people will always be willing to contribute content and funds. I think the anti-vandalism model is not sustainable. As Requests for adminship becomes more difficult to pass, kids will start to lose their primary motivation for doing anti-vandalism work, and adults like me already understand that Wikipedia needs our content much more than it needs our mindless reverting of other people's junk. So in the end, it simply won't get done. We'll get more complaints from people wondering why a vandalism item remained on a page for three months, and then maybe we'll do something about it. I think semi-protection should be used more liberally than it is currently. Semi-protection requests get refused because "there is not enough vandalism to justify it at this time" (I'm paraphrasing). How much vandalism is "enough"? This doubly applies to high-profile articles where if nobody vandalizes today, someone will vandalize tomorrow, or a week from now, or a month from now. I make no distinction between BLPs and other articles, except to say that BLP articles that are targets of vandalism should be semi-protected until the subject dies. For example, Joe Lieberman's biography is permanently semi-protected, as well it should be. 
Non-BLP articles can be given a little more latitude, but for an article about Kazakhstan, there's no reason to think that people will ever stop making jokes, and that's been semi-protected and move-protected, but it took a while to get there. (HAGGER?) There's no reason we can't extend this logic, which is working in practice, to any targets of vandalism. We don't choose which pages to semi-protect; the vandals do. And if the vandals want us to semi-protect every page in all of creation, then they can vandalize all five thousand random pages about numbered asteroids and we'll semi-protect those too. I really have no patience for this blind tolerance for wasting valuable contributors' time. Once we get rid of the automated tools, people will start to see the real problem and will eventually do something about it. Flagged revisions is definitely a step in the right direction.

To make myself clearer: we tolerate admin-bots and anti-vandal-bots because humans can't do the jobs themselves. Instead of just letting anyone use these powerful bots, we should ask ourselves why we can't manage our problems by hand. Is there a way we can reduce the workload instead of automating the response? I think reducing the workload is the more sustainable solution in the long run, and it might help forestall the sort of misunderstanding that may arise from using automated anti-vandalism tools.

If Jimbo Wales really cares about letting anyone edit any page on Wikipedia, he can stop being a celebrity and start spending a few hours patrolling recent-changes or new-pages like everyone else does. Then he might understand why his project is getting out of control.



Posted by: dtobias

QUOTE(Jon Awbrey @ Fri 6th June 2008, 2:38pm) *

Damn! The old "TL;DR" article at ED used to be one of the funniest things on the Internet, the only page in all of ED that I routinely linked. And now some witless drudge has gone and deleted it, and even Xpunged the long version from the history.


But do you really need to go quoting back, in full, the posting that you're saying is too long (twice already)? Top-posting / fullquoting is enough of a problem in e-mail lists... do you need to do it on forums too?

Posted by: Jon Awbrey

What's Top-Posting?

And do they have a Bot for that?

Jon cool.gif

QUOTE(dtobias @ Fri 6th June 2008, 2:47pm) *

QUOTE(Jon Awbrey @ Fri 6th June 2008, 2:38pm) *

Damn! The old "TL;DR" article at ED used to be one of the funniest things on the Internet, the only page in all of ED that I routinely linked. And now some witless drudge has gone and deleted it, and even Xpunged the long version from the history.


But do you really need to go quoting back, in full, the posting that you're saying is too long (twice already)? Top-posting / fullquoting is enough of a problem in e-mail lists … do you need to do it on forums too?


Hey! Maybe I can submit the dig infra to ED as a replacement for their old "TL;DR" article?

Jon cool.gif

QUOTE(Jon Awbrey @ Fri 6th June 2008, 2:38pm) *

Damn! The old "TL;DR" article at ED used to be one of the funniest things on the Internet, the only page in all of ED that I routinely linked. And now some witless drudge has gone and deleted it, and even Xpunged the long version from the history.

Bleeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee!

Jon cool.gif

QUOTE(Jon Awbrey @ Fri 6th June 2008, 2:12pm) *

http://www.encyclopediadramatica.com/TL;DR

Jon cool.gif

QUOTE(Shalom @ Fri 6th June 2008, 2:10pm) *

Someday I'd like to write a userspace essay titled "Why I don't use performance-enhancing tools." I've made about 25,000 edits on English Wikipedia, not counting alternate accounts, IP addresses, deleted edits, edits on other projects, etc. Every single edit was a real edit, except for rollback and page moves. I do not use automated tools, and I never have, and I probably never will (though I am free to change my mind at any time).

Even as a human vandalism patroller I am not immune to mistakes. On my RFA as Shalom I admitted that I made a mistake reverting a series of anon edits to the biography of Josh Hancock saying that he had died in a car accident without giving a source. I did take a few seconds to check Google (not Google News, just plain Google) to see if Josh Hancock had died, but I didn't find anything. It turns out this anon editor was a little ahead of the curve, and I found out he was right from two messages by established users on my talk page. I left an apology on the anon's talk page, but I don't think he read it. It's too bad. I understand from reading old posts on this site that similar shenanigans have occurred with other biographies. The problem is that, from my experience as a patroller, and recalling that (to paraphrase Mark Twain) reports of Sinbad's death have been greatly exaggerated, in most cases where an anon edits a page to say somebody just died, the anon is lying and the person is alive and well. I assumed bad faith because, without a source to back up these edits, the default assumption was that Josh Hancock was still alive until I was certain that he was dead. That being said, the level 3 warning ("Please stop." etc.) I left for the user was a bit excessive, and I'm sorry I didn't use a level 2 warning instead. But that's a minor detail. Gurch also reverted another user who re-added the same material until the edits finally stuck after a third try by someone else. Such is life. I wish there were a smoother way for anons who don't know about sourcing issues to tell us that people have died, but there isn't. It's one of the flaws in the system that can't be resolved without semi-protecting all 250,000+ BLPs, and I'm not in favor of doing that.

Returning to the issue at hand, not using automated tools is not a foolproof bulwark against human error. But what editing by hand does for me is to make sure that every single one of my edits is a fully conscious decision, and I am not trusting a piece of computer code to make any edits on my behalf. (Templates are another matter.) I made a mistake on the Josh Hancock article, but it was an honest mistake, which I made in the honest belief, based on the information I had at the time, that my action was correct. In retrospect, it was not correct, but at least I can look back and know that I tried to make the right decision with full consideration to all available factors, and without relying on automated tools to make a decision on my behalf.

Just a couple of days ago I noticed that Addbot, operated by Addshore, had added the {{uncat}} template to a hundred "year in baseball" articles. I left him a note asking him to fix it, and to his credit, he responded quickly, undid all the edits, and diagnosed the problem. (The "category" link was based on a template and was not actually in the wiki-code for those pages, so the bot did not see it and thought there was no category.) Next time Addshore applies for RFA, I'll support him. (I opposed his first try.) So for something like that, it's okay to find a problem after the fact and correct it. For Neil's case, that approach doesn't work. An editor got blocked for no good reason, and we probably lost one more potential contributor, at least for a long while. We really have a way of shooting ourselves in the foot by welcoming users with one hand and chasing them away with the other. That's another discussion for another thread.
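The Addbot failure described above is a classic wikitext-scanning bug: a bot that looks for category links in a page's raw source will never see categories that a transcluded template adds at render time. A minimal, hypothetical sketch (the function and page names here are illustrative, not Addbot's actual code):

```python
import re

def has_category(wikitext: str) -> bool:
    """Naive check: look for an explicit [[Category:...]] link in the raw source.
    This misses categories added by transcluded templates, which only appear
    in the rendered page, not in the wikitext itself."""
    return re.search(r"\[\[\s*Category\s*:", wikitext, re.IGNORECASE) is not None

# Page whose category comes from a template (the template emits the category):
page_via_template = "{{Year in baseball|1927}}\n'''1927 in baseball''' was ..."
# Page with an explicit category link in its source:
page_explicit = "'''1927 in baseball''' was ...\n[[Category:1927 in baseball]]"

print(has_category(page_via_template))  # False -- so the bot wrongly tags {{uncat}}
print(has_category(page_explicit))      # True
```

The robust fix is to check the page's rendered categories (e.g. via the API) rather than grepping the source, which is presumably what Addshore's correction amounted to.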

I keep getting sidetracked. My point is that I would not feel terrible if automated anti-vandalism tools were simply shut down altogether. I'm not saying this in an effort to let the site be swamped with vandalism - God forbid. What I'm saying is, if you can't review edits by hand and make an informed decision about those edits, then you should not be using a bot to make an uninformed decision about those edits. I have no issue with ClueBot and its clones: they do a good job. But letting users undo edits without actually thinking carefully about those edits is not a good idea, and I have serious questions about whether these tools are really a net benefit to the project.

You may ask what might happen if these tools were disabled. Aside from the hurt feelings that Gurch and AmiDaniel and others would sustain, and I don't take that lightly, would the project be worse off? I think what would happen is people would start to realize that the vandalism problem really is becoming unmanageable. We're putting fingers in the dike by letting people use automated tools to do things they can only do half as fast by hand, but the flood of vandalism is starting to weaken the dike. I think Wikipedia as such is sustainable, in the sense that people will always be willing to contribute content and funds. I think the anti-vandalism model is not sustainable. As Requests for adminship becomes more difficult to pass, kids will start to lose their primary motivation for doing anti-vandalism work, and adults like me already understand that Wikipedia needs our content much more than it needs our mindless reverting of other people's junk. So in the end, it simply won't get done. We'll get more complaints from people wondering why a vandalism item remained on a page for three months, and then maybe we'll do something about it. I think semi-protection should be used more liberally than it is currently. Semi-protection requests get refused because "there is not enough vandalism to justify it at this time" (I'm paraphrasing). How much vandalism is "enough"? This doubly applies to high-profile articles where if nobody vandalizes today, someone will vandalize tomorrow, or a week from now, or a month from now. I make no distinction between BLPs and other articles, except to say that BLP articles that are targets of vandalism should be semi-protected until the subject dies. For example Joe Lieberman's biography is permanently semi-protected, as well it should be.
Non-BLP articles can be given a little more latitude, but for an article about Kazakhstan, there's no reason to think that people will ever stop making jokes, and that's been semi-protected and move-protected, but it took a while to get there. (HAGGER?) There's no reason we can't extend this logic, which is working in practice, to any targets of vandalism. We don't choose which pages to semi-protect; the vandals do. And if the vandals want us to semi-protect every page in all of creation, then they can vandalize all five thousand random pages about numbered asteroids and we'll semi-protect those too. I really have no patience for this blind tolerance for wasting valuable contributors' time. Once we get rid of the automated tools, people will start to see the real problem and will eventually do something about it. Flagged revisions is definitely a step in the right direction.

To make myself clearer: we tolerate admin-bots and anti-vandal-bots because humans can't do the jobs themselves. Instead of just letting anyone use these powerful bots, we should ask ourselves why we can't manage our problems by hand. Is there a way we can reduce the workload instead of automating the response? I think reducing the workload is the more sustainable solution in the long run, and it might help forestall the sort of misunderstanding that may arise from using automated anti-vandalism tools.

If Jimbo Wales really cares about letting anyone edit any page on Wikipedia, he can stop being a celebrity and start spending a few hours patrolling recent-changes or new-pages like everyone else does. Then he might understand why his project is getting out of control.




Posted by: GlassBeadGame

QUOTE(dtobias @ Fri 6th June 2008, 12:47pm) *

QUOTE(Jon Awbrey @ Fri 6th June 2008, 2:38pm) *

Damn! The old "TL;DR" article at ED used to be one of the funniest things on the Internet, the only page in all of ED that I routinely linked. And now some witless drudge has gone and deleted it, and even Xpunged the long version from the history.


But do you really need to go quoting back, in full, the posting that you're saying is too long (twice already)? Top-posting / fullquoting is enough of a problem in e-mail lists... do you need to do it on forums too?


Irony.

QUOTE(Shalom @ Fri 6th June 2008, 12:10pm) *

Someday I'd like to write a userspace essay titled "Why I don't use performance-enhancing tools." [...]


Posted by: Lar

QUOTE(guy @ Fri 6th June 2008, 12:31pm) *

No doubt we need a National Huggle Association to match the National Rifle Association. "Huggle doesn't cause problems - people cause problems." tongue.gif

Er, sorry, that's what I wanted to say, actually. No tool by itself is evil (although some might be more prone to misuse than others)...

Posted by: thekohser

QUOTE(Shalom @ Fri 6th June 2008, 2:10pm) *

To make myself clearer: we tolerate admin-bots and anti-vandal-bots because humans can't do the jobs themselves. Instead of just letting anyone use these powerful bots, we should ask ourselves why we can't manage our problems by hand. Is there a way we can reduce the workload instead of automating the response? I think reducing the workload is the more sustainable solution in the long run, and it might help forestall the sort of misunderstanding that may arise from using automated anti-vandalism tools.

If Jimbo Wales really cares about letting anyone edit any page on Wikipedia, he can stop being a celebrity and start spending a few hours patrolling recent-changes or new-pages like everyone else does. Then he might understand why his project is getting out of control.


Strange that you'd rank someone "100" in the Board election who would work to halt anonymous IP editing and keep Jimbo in check on his increasingly-out-of-control project. Your choice, though. It must be shown respect.

Posted by: guy

QUOTE(Jon Awbrey @ Fri 6th June 2008, 7:12pm) *

http://www.encyclopediadramatica.com/TL;DR

Make some time and read it. It's much more worth reading than a good few thousand posts I can think of.

Posted by: Somey

QUOTE(Shalom @ Fri 6th June 2008, 1:10pm) *
You may ask what might happen if these tools were disabled. Aside from the hurt feelings that Gurch and AmiDaniel and others would sustain, and I don't take that lightly, would the project be worse off? I think what would happen is people would start to realize that the vandalism problem really is becoming unmanageable. We're putting fingers in the dike by letting people use automated tools to do things they can only do half as fast by hand, but the flood of vandalism is starting to weaken the dike. I think Wikipedia as such is sustainable, in the sense that people will always be willing to contribute content and funds. I think the anti-vandalism model is not sustainable.

And when all those "regular," "normal" people start showing up from the Middle East, Africa, and the rest of the third world, they'll probably be treated as "vandals" too, for trying to push their "third-world POV" on "stable" en.wikipedia articles... at which point they'll have to program bots for that too.

I actually thought that what I'd been calling the "lockdown phase" of Wikipedia's life-cycle was still three or four years away, but I'm beginning to think it's more like one year away. If they ever implement the "flagged revisions" feature, vandal-fighters might see some real improvement in their situation, at least until the vandals start setting up means of dealing with it... but then again, I suppose there are plenty of people who actually enjoy vandal-fighting as it's currently defined. Who knows, maybe even enough to derail the initial attempts to implement flagged revisions.

Posted by: michael

I occasionally use tools, although bots have usurped two of the three tasks I used them for - notification of CSD speedy tags and notification of PRODs. The other one, nominating images for deletion, has not yet been completely taken over. I find them to be useful and they make repetitious tasks much more bearable.

Posted by: ByAppointmentTo

QUOTE(Jon Awbrey @ Fri 6th June 2008, 7:12pm) *

http://www.encyclopediadramatica.com/TL;DR

Jon cool.gif

QUOTE(Shalom @ Fri 6th June 2008, 2:10pm) *

Someday I'd like to write a userspace essay titled "Why I don't use performance-enhancing tools." [...]

If Jimbo Wales really cares about letting anyone edit any page on Wikipedia, he can stop being a celebrity and start spending a few hours patrolling recent-changes or new-pages like everyone else does. Then he might understand why his project is getting out of control.



HA... that's exactly what I was going to say.

biggrin.gif

Posted by: dogbiscuit

Just how many kiddies are sitting there, day after day, pressing buttons VandalFighting ™ ?

The scary thought is the sheer volume of mindless tedium, and the sheer volume of vandalism that these robots might be hiding. The other scary thought is that intelligent people might be doing this.

Posted by: Milton Roe

QUOTE(Shalom @ Fri 6th June 2008, 2:10pm) *

I think semi-protection should be used more liberally than it is currently. Semi-protection requests get refused because "there is not enough vandalism to justify it at this time" (I'm paraphrasing). How much vandalism is "enough"? This doubly applies to high-profile articles where if nobody vandalizes today, someone will vandalize tomorrow, or a week from now, or a month from now. I make no distinction between BLPs and other articles, except to say that BLP articles that are targets of vandalism should be semi-protected until the subject dies. For example Joe Lieberman's biography is permanently semi-protected, as well it should be. Non-BLP articles can be given a little more latitude, but for an article about Kazakhstan, there's no reason to think that people will ever stop making jokes, and that's been semi-protected and move-protected, but it took a while to get there. (HAGGER?) There's no reason we can't extend this logic, which is working in practice, to any targets of vandalism. We don't choose which pages to semi-protect; the vandals do. And if the vandals want us to semi-protect every page in all of creation, then they can vandalize all five thousand random pages about numbered asteroids and we'll semi-protect those too. I really have no patience for this blind tolerance for wasting valuable contributors' time. Once we get rid of the automated tools, people will start to see the real problem and will eventually do something about it.


No, it's more a matter of "Why do we need the cotton gin when we have plenty of slaves?"

There is almost no problem of grunt work that could not be solved in the real world by having the bosses actually do some of the grunt work, until they decide that what the company really, really needs is an automatic gruntwork machine. Or that some kind of gruntwork prevention is needed. The disconnect happens because the people with exposure to the shit are not the ones making the decisions about how to handle it. Example: if you're a real estate agent in 2006, USA, and the guys you sell the house to don't have any income, and default on the loan 3 months after it's made, what's that to you? It's not your problem. It's not even your agency's problem. That loan is long gone. They probably converted it to collateralized mortgage-backed securities and sold it in little pieces to the French or Chinese by then. So you don't really ask for a W-2 or tax return, and your agency doesn't either. You really don't want to know. Meanwhile, you just continue to pour your used motor oil down the storm drains and hope you never see it again.

In the ideal world, when the military officer wants his men to dig a useless trench, the first thing that happens is he should be handed a shovel. This generally stimulates "strategic thought." But the military hardly ever works that way. And Wikipedia doesn't either.

We can fantasize. You have a vote on sprotection of all pages. Anybody who votes NO is given the task of fixing IP vandalism, and has to fix a certain number of these each day before making any other edits, including chat and talk page work. People who vote "yes" are never required to fix one again. ohmy.gif

laugh.gif I told you it was fantasy.

Milt

Posted by: GlassBeadGame

QUOTE(Milton Roe @ Fri 6th June 2008, 5:32pm) *



No, it's more a matter of "Why do we need the cotton gin when we have plenty of slaves?"



I hate to spoil your perfectly well-taken point, which I absolutely agree with in substance, but the cotton gin actually made slavery much more profitable in the South and contributed to the regional entrenchment of slavery in antebellum times. Let's not http://wikipediareview.com/index.php?s=&showtopic=18138&view=findpost&p=106263 over this fact.