HiveMind: 404 Not Found
anon1234
SqueakBox just noticed that the hivemind and hive2 pages have gone offline. SqueakBox suspects it demonstrates the success of the failed policy page Wikipedia:Attack sites. I am not so sure, as that policy didn't seem that relevant to Daniel Brandt.

The urls were:
http://www.wikipedia-watch.org/hivemind.html
http://www.wikipedia-watch.org/hive2.html
Daniel Brandt
QUOTE(anon1234 @ Sun 15th April 2007, 8:25pm) *

SqueakBox just noticed that the hivemind and hive2 pages have gone offline. SqueakBox suspects it demonstrates the success of the failed policy page Wikipedia:Attack sites. I am not so sure, as that policy didn't seem that relevant to Daniel Brandt.

The urls were:
http://www.wikipedia-watch.org/hivemind.html
http://www.wikipedia-watch.org/hive2.html

I also took down the IRC logs. As I move toward litigation, it has become clearer to me that Jimbo and the Foundation are responsible for the behavior of their editors, because they control the structure of Wikipedia. There is no point in confusing the judge and jury.
thekohser
QUOTE(Daniel Brandt @ Sun 15th April 2007, 11:29pm) *

I also took down the IRC logs. As I move toward litigation, it has become clearer to me that Jimbo and the Foundation are responsible for the behavior of their editors, because they control the structure of Wikipedia. There is no point in confusing the judge and jury.

I for one say we can all live without the Hive material for a number of months, or even years, if it means a clearer case against Wikipedia being properly presented to a jury and judge. You go, Daniel!

Greg
GlassBeadGame
QUOTE(Daniel Brandt @ Sun 15th April 2007, 9:29pm) *

I also took down the IRC logs. As I move toward litigation, it has become clearer to me that Jimbo and the Foundation are responsible for the behavior of their editors, because they control the structure of Wikipedia. There is no point in confusing the judge and jury.


Seems like a reasonable course of action to me too.
everyking
QUOTE(Daniel Brandt @ Mon 16th April 2007, 4:29am) *

QUOTE(anon1234 @ Sun 15th April 2007, 8:25pm) *

SqueakBox just noticed that the hivemind and hive2 pages have gone offline. SqueakBox suspects it demonstrates the success of the failed policy page Wikipedia:Attack sites. I am not so sure, as that policy didn't seem that relevant to Daniel Brandt.

The urls were:
http://www.wikipedia-watch.org/hivemind.html
http://www.wikipedia-watch.org/hive2.html

I also took down the IRC logs. As I move toward litigation, it has become clearer to me that Jimbo and the Foundation are responsible for the behavior of their editors, because they control the structure of Wikipedia. There is no point in confusing the judge and jury.


I assume this means you've decided not to continue the practice of identifying editors? If so, I think it's a very good move. I think it would have weakened your case to have those pages up, and it clearly wasn't having the effect of pressuring Wikipedia into backing down on your article--they were just using it as a pretext to take an even harder line against you.
BobbyBombastic
if WP interprets this as Brandt going "silent," dare I say that a silent Brandt at this point is scarier than a loud, clamoring Brandt. :D
gomi
Note that when Mr. Brandt does something, he does it right: there are no Google caches or archive.org copies of the pages.
Joseph100
Appeasement, and trying to work within the Wikipedia universe of codified lying, deceit, and vindictive power-tripping basement-dwelling administrators, is a pointless waste of time.

The issues with Wikipedia are so deep and systemic, and its culture so polluted with deceit, lying, and evil, that the only solution is to sue it for past and present wrongs. Suing Wikipedia is the only language that will get the undivided attention of those 1,100 or so administrators and the handful of top leaders, including Jimbo, who are responsible for that Orwellian cyber-nightmare. It would remind them that they are not God-Kings beyond the reach of the law, and that wiki policy ranks far below the Constitution of the United States.

Daniel Brandt's efforts are to be applauded, encouraged, and even given money. I can send a $10 PayPal contribution to your lawsuit.

Amen, bro. Give them hell.
GlassBeadGame
QUOTE
It's another veiled legal threat by Brandt ([14]) . Careful, he's trying to build a good argument, and he's probably setting us up for his proverbial "home run". Nothing pleases Brandt more than taking down a huge sum of information to preserve his own "privacy". // Sean William (PTO) 20:47, 16 April 2007 (UTC)
Retrieved from "http://en.wikipedia.org/wiki/User_talk:Jimbo_Wales"
---- From JW's user talk page


Threat? WTF?

He linked to us in that post. Was that a clever attempt to test limits?
bernie724
QUOTE(BobbyBombastic @ Mon 16th April 2007, 8:44am) *

if WP interprets this as Brandt going "silent," dare I say that a silent Brandt at this point is scarier than a loud, clamoring Brandt. :D


According to this Wikilawyer they (WP?) are "negotiating" with him; whatever that means.

"We are negotiating with him. Please do not modify his user pages. Fred Bauder 21:07, 16 April 2007 (UTC)"

http://en.wikipedia.org/w/index.php?title=...oldid=123301371
Somey
QUOTE(bernie724 @ Mon 16th April 2007, 4:44pm) *
...they (WP?) are "negotiating" with him; whatever that means.

Fred seems to have blanked Daniel's user and user-talk pages. Also, the article is now under (semi-?) protection, apparently. Rather decent of him, I suppose! :)

We should probably wait until this all plays out before rushing to any judgements about it, but in the meantime, maybe we should have our own "arbcom ruling" that goes something like this:

A website that engages in the practice of publishing objectively-presented information concerning the private life of any person, in such a way that the information can be edited anonymously, at any time, by anyone, in any way, without formal attribution, over the public and/or explicitly-stated objections of that person, will be regarded as an attack site whose users are not entitled to protection of their own privacy, assuming they themselves engage in such activity.

Does that about cover it? Or, maybe, too many qualifiers? Or too few...?
michael
I think Fred Bauder is referring to the email communication that he is initiating. Bauder also filed Daniel Brandt's appeal. A few other ArbCom members said that Brandt should raise his concerns via email, and I suppose that's what he's doing.
Nathan
Sounds good to me.
anon1234
I can say that this is the standard response of Wikipedia to real legal threats: they fold like a wet paper bag. I am almost positive there is a secret guide somewhere that says to be strong and not give in when presented with legal threats or anything else, but as soon as a real lawsuit is filed, they are under orders to make it go away by any means necessary. They really do not want Daniel Brandt to actually go to court, because any ruling could serve as a broad limitation on Wikipedia's operation; it is unlikely that any ruling would help Wikipedia. It is part of a general strategy of avoiding lawsuits that Wikipedia and other similar corporations follow.

For Daniel's own sake, he should involve a lawyer in these negotiations with Fred Bauder and others. In situations like this, Wikipedia is likely to make false claims that certain things are impossible, or to tell him he is likely to lose, and other such things designed to dissuade him from proceeding. These claims may be less true than they appear. Remember that Fred Bauder is a real lawyer. An individual should not have to negotiate with an experienced lawyer without representation of his own, because it places the individual at a major disadvantage (Daniel is probably better off than most, but it is still a disadvantage).
Daniel Brandt
CODE

How to get the complete set of history pages for an article.

1. Click on history, and click on 500.
2. In your address bar, change the URL to 5000 by adding a zero, and reload.
   You probably have the whole enchilada now on one page.
3. Save the page as a plain text source file, or download it with curl.
4. Write a script to extract the ID numbers, and insert these
   numbers into a curl command that runs on Linux.

Here's a little BASIC program I used to do this, which gives you an idea
of how the script should work:

' Read the saved history source (DBHIST.TXT) line by line, find each
' revision ID, and write one curl command per revision to IDOUT.
OPEN "DBHIST.TXT" FOR INPUT AS #1
OPEN "IDOUT" FOR OUTPUT AS #2
STARTIT:
IF EOF(1) THEN CLOSE : END
LINE INPUT #1, A$
C = INSTR(A$, "oldid=")
IF C = 0 THEN GOTO STARTIT
B$ = MID$(A$, C + 6)    ' the first digit of the ID follows "oldid="
C$ = LEFT$(B$, 9)
' the number could be either 8 or 9 digits; strip the last character if
' necessary (an 8-digit ID is followed by the closing quote, CHR$(34))
IF RIGHT$(C$, 1) = CHR$(34) THEN D$ = LEFT$(C$, LEN(C$) - 1) ELSE D$ = C$
PRINT #2, "curl -o ";
PRINT #2, D$;
PRINT #2, " -s http://en.wikipedia.org/w/index.php?title=Daniel_Brandt\&oldid=";
PRINT #2, D$
PRINT #2, "sleep 3"
GOTO STARTIT

That program will give you lines that look like this:

curl -o 123211686 -s http://en.wikipedia.org/w/index.php?title=Daniel_Brandt\&oldid=123211686
sleep 3
curl -o 123149927 -s http://en.wikipedia.org/w/index.php?title=Daniel_Brandt\&oldid=123149927
sleep 3
curl -o 122075839 -s http://en.wikipedia.org/w/index.php?title=Daniel_Brandt\&oldid=122075839
sleep 3

Run the script. If you are running curl from Windows, you need quotation marks
around the URL instead of the escape in front of the ampersand.

You end up with a file for each revision, where the filename is the ID number.
If you want to make the file appear the way it did on Wikipedia, you then have
to insert this in the header of each file: <base href="http://en.wikipedia.org/w/">

This line in the header allows you to pick up the extra fluff directly from Wikipedia
when you view the file from your local disk in your browser. The DBHIST.TXT that
you got in step 3 is a handy index to the files (with edit summaries, editor names,
and date stamps). You'll probably only want to print out the juicy ones (there were
2,444 files for my bio page). Be sure to save all the Talk archive pages too; those
are always juicy, and they are as "published" as the article pages, because Jimbo lets
them get crawled by the search engines.

Now the bad guys can wipe your history, and you still have it! I wonder if I could start
a business as a forensic archivist for victims of Wikipedia. I could charge outrageous
fees to lawyers!
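
For comparison, here is the extraction step as a short Python sketch. This is a minimal re-implementation of the BASIC program above, not Brandt's actual script: the filenames DBHIST.TXT and IDOUT are borrowed from that program, and the regex is my own assumption about what the saved history source contains.

CODE

# Minimal sketch of the oldid-extraction step. Assumes DBHIST.TXT is the
# saved source of a history page, e.g. one fetched from a URL like
# index.php?title=Daniel_Brandt&action=history&limit=5000 (step 2 above).
import re

with open("DBHIST.TXT", encoding="utf-8", errors="replace") as src, \
     open("IDOUT", "w") as out:
    for line in src:
        # findall catches every oldid on a line, not just the first
        for oldid in re.findall(r"oldid=(\d+)", line):
            out.write('curl -o %s -s "http://en.wikipedia.org/w/'
                      'index.php?title=Daniel_Brandt&oldid=%s"\n'
                      % (oldid, oldid))
            out.write("sleep 3\n")

Quoting the whole URL also sidesteps the Linux-versus-Windows ampersand difference mentioned above.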

Jonny Cache
QUOTE(Daniel Brandt @ Mon 16th April 2007, 9:13pm) *

How to get the complete set of history pages for an article.
...


Wouldn't it just be easier to subpoena the Akashic Records?

Jonny
interiot
QUOTE(Daniel Brandt @ Mon 16th April 2007, 8:13pm) *

CODE

How to get the complete set of history pages for an article.

1. Click on history, and click on 500.
2. In your address bar, change the URL to 5000 by adding a zero, and reload.
   You probably have the whole enchilada now on one page.
3. Save the page as a plain text source file, or download it with curl.
...


Wouldn't it be easier to just use http://en.wikipedia.org/wiki/Special:Export, and make sure "include only the current version" is unchecked?
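
For anyone following along, here is a rough sketch of what interiot is describing. The parameter names (pages, history) follow the standard MediaWiki export interface as I understand it; this is an illustration rather than a tested recipe, and the server caps how many revisions one request returns.

CODE

# Fetch an article's full revision history as XML via Special:Export.
# A custom User-Agent is set because Wikipedia may reject the default one.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"pages": "Daniel_Brandt", "history": "1"})
url = "http://en.wikipedia.org/w/index.php?title=Special:Export&" + params
req = urllib.request.Request(url, headers={"User-Agent": "export-sketch/0.1"})
with urllib.request.urlopen(req) as resp, open("Daniel_Brandt.xml", "wb") as out:
    out.write(resp.read())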
SqueakBox
QUOTE(Daniel Brandt @ Tue 17th April 2007, 1:13am) *
CODE
How to get the complete set of history pages for an article.

1. Click on history, and click on 500.
2. In your address bar, change the URL to 5000 by adding a zero, and reload.
   You probably have the whole enchilada now on one page.
3. Save the page as a plain text source file, or download it with curl.
4. Write a script to extract the ID numbers, and insert these
   numbers into a curl command that runs on Linux....

Nice one, Mr Brandt. I should have thought of posting the 5000 bit here myself. I wished Fred the very best with the negotiations, and being a Wikipedian, I'm following NPOV and wishing you the very best too.
Daniel Brandt
QUOTE
Wouldn't it be easier to just use http://en.wikipedia.org/wiki/Special:Export, and make sure "include only the current version" is unchecked?
I knew there was a Special:Export function, but I didn't know there was that page for history versions too. Anyway, I see two problems. One is that it only goes 100 deep, and I had 2,444 files to get. The other is that, from my previous experience with the XML versions, they're fine for the raw text, and the files are as small as you can get them, but I'm not sure how easy it is to convert them into the full-blown web page, with logo, sidebars, templates, pics, and the other stuff that you see in your browser. I used the XML download for my plagiarism study because all I wanted was the raw text for comparison purposes. If something ended up as one of my 142 "hits" that I needed to display, I went back and fetched the full page and inserted the highlighting manually.

But this time I wanted to be able to create a pretty screen shot for any history page. In other words, it should look just like the original did in your browser. The files I downloaded convert to the Real Thing with only the addition of that one "base" command in the headers. Also, most of the links in the file go where they are supposed to go originally, which is rather convenient.
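
The "base" insertion he describes is mechanical enough to script. A minimal sketch, assuming the revision files sit in the current directory under all-digit names (the naming the curl commands above produce) and that each saved page has a plain <head> tag:

CODE

# Patch each downloaded revision file with the <base href> tag so it picks
# up Wikipedia's styles and images when viewed from the local disk.
import os

BASE_TAG = b'<base href="http://en.wikipedia.org/w/">'

for name in os.listdir("."):
    if not name.isdigit():      # revision files are named by their oldid
        continue
    with open(name, "rb") as f:
        html = f.read()
    if BASE_TAG in html:
        continue                # already patched
    # Put the tag right after <head> so relative URLs resolve against it.
    with open(name, "wb") as f:
        f.write(html.replace(b"<head>", b"<head>" + BASE_TAG, 1))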

QUOTE
Nice one, Mr Brandt. I should have thought of posting the 5000 bit here myself. I wished Fred the very best with the negotiations, and being a Wikipedian, I'm following NPOV and wishing you the very best too.
I got the "5000" idea from your email — thanks, it saved me from having to concatenate a number of "500" pages. No one has been negotiating with me, although I did notice that Bauder blanked my user and user_talk pages. That's so the "banned user" sentence doesn't show up in the Google snippet as the number two link on a search for my name. I think Bauder must be negotiating with someone else. At least they're still thinking about it, which is better than nothing.
SqueakBox
QUOTE(Daniel Brandt @ Tue 17th April 2007, 4:04am) *
QUOTE
Nice one, Mr Brandt. I should have thought of posting the 5000 bit here myself. I wished Fred the very best with the negotiations, and being a Wikipedian, I'm following NPOV and wishing you the very best too.
I got the "5000" idea from your email — thanks, it saved me from having to concatenate a number of "500" pages. No one has been negotiating with me, although I did notice that Bauder blanked my user and user_talk pages. That's so the "banned user" sentence doesn't show up in the Google snippet as the number two link on a search for my name. I think Bauder must be negotiating with someone else. At least they're still thinking about it, which is better than nothing.

Thanks for that. When I told DennyColt the same thing, he claimed he already knew about it (which surprised me, as I only found out myself after asking for help with searching at the tech desk on the VP), but to be honest I would expect a more mature response from yourself. I am a great fan of Ctrl+F myself and only wish Wikipedia would extend to 50,000.
LamontStormstar
QUOTE(Daniel Brandt @ Mon 16th April 2007, 6:13pm) *

CODE
Now the bad guys can wipe your history, and you still have it! I wonder if I could start
a business as a forensic archivist for victims of Wikipedia. I could charge outrageous
fees to lawyers!


Or you can go to the Special:Export page and get it easily as long as that function isn't blocked.
Somey
QUOTE(SqueakBox @ Mon 16th April 2007, 11:11pm) *
Thanks for that. When I told DennyColt the same thing, he claimed he already knew about it (which surprised me, as I only found out myself after asking for help with searching at the tech desk on the VP), but to be honest I would expect a more mature response from yourself.

I guess I just assumed everyone knew you could change the parameters in the URL string manually, but hey, whatever... Still, what do you mean by "more mature response"? Are you saying that preferring not to concatenate multiple shorter results pages is immature? I suppose in a way, it might be... I hadn't really considered it before.

Meanwhile, "Denny" hasn't appeared on Wikipedia in over three days. (You can almost smell the improvement...) One of the last things he posted was to his user page:
QUOTE
Anyone who links to, supports, endorses, advertises, or promotes attack/hate sites that seek to hurt or "out" the real life identities of Wikipedians and that defends the right to link to such vile, hateful material is in my opinion of questionable morality.

Well, at least he didn't include those of us who actually participate in such sites, eh? Of course, we're not a "hate site," even if some of us detest some of the things they do. Still, I've gotten into the habit of stating my own versions of these kinds of demagogue-pronouncements:

Anyone who links to, supports, endorses, advertises, writes articles for, edits, or promotes attack/hate sites such as Wikipedia that seek to publish real-life personal information of private individuals in a pseudo-objective fashion, over their clear objections, and in such a way as to allow anyone with an IP address to anonymously alter that information, and that defends the right to link to such a vile, hateful website is in my opinion totally and completely immoral.

Trust me, I can keep this up for years if I have to! :)
thekohser
QUOTE(LamontStormstar @ Tue 17th April 2007, 12:35am) *

Or you can go to the Special:Export page and get it easily as long as that function isn't blocked.

I'm not seeing this as a historical option. For example, when I queried "Gregory Kohs" to try to find the article that once existed about myself, I got nada. Same for "Wikipedia Review", which is now just a redirect to the Criticism of Wikipedia article. Which makes about as much sense as redirecting an article about a particular asteroid to the article about solar flares.

Anyway, two contributors have now mentioned this Special:Export function, but I didn't find it functional for retrieving old versions whatsoever. Am I doing something wrong?

Greg
Jonny Cache
QUOTE(thekohser @ Tue 17th April 2007, 9:18am) *

QUOTE(LamontStormstar @ Tue 17th April 2007, 12:35am) *

Or you can go to the Special:Export page and get it easily as long as that function isn't blocked.


I'm not seeing this as a historical option. For example, when I queried "Gregory Kohs" to try to find the article that once existed about myself, I got nada. Same for "Wikipedia Review", which is now just a redirect to the Criticism of Wikipedia article. Which makes about as much sense as redirecting an article about a particular asteroid to the article about solar flares.

Anyway, two contributors have now mentioned this Special:Export function, but I didn't find it functional for retrieving old versions whatsoever. Am I doing something wrong?

Greg


Well, if you don't think your Karmic Credit Rating is up to petitioning the Court Of Akashic Records, then your next best option is asking Superman to zoom away from the Earth at tachyonic speeds until he catches up with the wavefront of light that was emitted in the process of typing in those articles. I am sure that the details of how to do this are recorded somewhere in that article about how Wikipedia Is In The Real World — well, if it hasn't been deleted already, in which case all you have to do is ask Superman ...

And so it goes ...

Jonny
guy
QUOTE(thekohser @ Tue 17th April 2007, 2:18pm) *

Anyway, two contributors have now mentioned this Special:Export function, but I didn't find it functional for retrieving old versions whatsoever. Am I doing something wrong?

Were these articles deleted? If so, they can only be recovered by an Admin.
SqueakBox
QUOTE(Somey @ Tue 17th April 2007, 5:34am) *
I guess I just assumed everyone knew you could change the parameters in the URL string manually, but hey, whatever... Still, what do you mean by "more mature response"? Are you saying that preferring not to concatenate multiple shorter results pages is immature? I suppose in a way, it might be... I hadn't really considered it before.


By "mature response" I simply meant that Mr Brandt doesn't pretend he knows it all.
Somey
Ahh, okay. I think I get it now, sorry about that...

I myself, of course, have never pretended to be "mature" in any realistic way whatsoever. Morally upright, perhaps, and occasionally conciliatory towards others, but certainly not mature...