
Science 2.0 Wikis As Neurons in a Collaborative Science Neural Net

Posted by: Milton Roe

Science 2.0: Wikis as the Neurons in an Artificial Neural Net

EVOLUTION AND THINKING. It has been noted that any evolutionary process is capable of creative activity, without central direction or central intelligence. This is the "blind watchmaker" in Darwin's scheme, and the "invisible hand" of the market in Adam Smith's scheme (which we know Darwin read). Such systems actually "think" in a very slow way. More on this later. But the key point is that all of them are sort of alike-- they only differ in how fast they run, and what they can build.

Now, in order to have a system that changes by Darwinian evolutionary processes, all you need is reproduction, variation, and selection. Actual preexisting intelligence can also be folded into, or added onto, otherwise brainless evolutionary schemes, and when that happens, it speeds the whole process up. A well-known instance is Adam Smith's original marketplace, where intelligence goes into the buying of products at stores, and this buying behavior drives a larger economic feedback loop which selects for better products. Many people doing this create an economy where nobody is in charge, but which functions anyway.
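
(For the programming-minded: that three-ingredient recipe fits in a few lines of code. Below is a minimal sketch in Python; the string-matching fitness function and the mutation rate are invented purely for illustration, not taken from any real system.)

CODE
import random

def evolve(population, fitness, mutate, generations=200, keep=0.5):
    """Generic Darwinian loop: selection, then reproduction with variation."""
    size = len(population)
    for _ in range(generations):
        # Selection: the fitter fraction survives.
        survivors = sorted(population, key=fitness, reverse=True)
        survivors = survivors[:max(1, int(size * keep))]
        # Reproduction with variation: refill from mutated copies of survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(size - len(survivors))]
    return max(population, key=fitness)

# Toy run: evolve random strings toward a target phrase.
TARGET = "science 2.0"
ALPHABET = "abcdefghijklmnopqrstuvwxyz .0123456789"
fitness = lambda s: sum(a == b for a, b in zip(s, TARGET))
mutate = lambda s: "".join(random.choice(ALPHABET) if random.random() < 0.05 else c
                           for c in s)
seed = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(60)]
print(evolve(seed, fitness, mutate))   # usually lands on "science 2.0"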

The same kind of thing happens in organic evolution when brains get involved with selection directly: we call these special brain-driven kinds of Darwinian selection "sexual selection" (mate-choosing brains) and "artificial selection" (breeder brains). Both make ordinary organic evolution run at super-speed. In fact, Darwin used artificial selection of fancy show-pigeons as one of his thought experiments for how the evolutionary process might work without any overall grand plan. And the brains doing this kind of intelligent selection don't need to be powerful: peahens apparently select peacocks much as human breeders select show-pigeons, and that's how peacocks got those ridiculous tails even before humans started to mess with them. Whatever peahens like, they eventually get-- welcome to the world of female choice in mates.

(There's an argument to be made that humans got their large brains in much the same way peacocks got their large tails, but that goes in a direction to be left for later.)

MEMETICS. Any cultural thought needs a repository or store of information to think with and on, and that repository needs to be able to "reproduce," and to be changed and selected, by minds or other factors. A simple example of such a repository is an idea (also called a meme). Think of a joke, or a catchy tune, for example. These are held in the memories of the people who tell them or whistle them, and the best ones are those that get reproduced.

This kind of thing can also be very complex, like a religion. Think of a Gideon Bible sitting in a hotel drawer as being like a crystallized virus, waiting to be opened and read. It isn't only true that an egg is a chicken's way of making more eggs; it's also true that the committed Christian is a bible's way of making more bibles. When you look at a Gideon bible, you're seeing a virus of the mind, but in inactive form. It's not really alive, any more than a computer worm sitting inert on a flash-drive is alive. It needs a processor before it can be active and reproductive, just as a computer virus needs a computer, or a biological virus needs a cell. You might not want to open the thing, if your brain is running Microsoft Outlook.

THINKING WITH DATABASE/FILE SYSTEMS. In the 1930s, and particularly in a 1945 article called "As We May Think", the American administrator and engineer Vannevar Bush described a simple linked filing system he called the "memex" (for "memory extender"). Bush's system was conceived as microfilm-based, for Bush then had no modern computer to build it on (of course he made the connection later). But the memex did already contain the idea of modern "hyperlinked" text: it recorded not only links between documents, but also "trails" of links made by previous searchers, along with comments they might have added during the search. Thus, the system was capable of Darwinian change. Bush wrote presciently in 1945:

QUOTE
Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client's interest. The physician, puzzled by a patient's reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology…. There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world's record, but for his disciples the entire scaffolding by which they were erected. --V. Bush. As We May Think (1945).


Such a system of records, and of records of the best trails between records, all of them changeable with time, is capable of evolving very much like an economy or an ecosystem, and in the same way. In fact, we see that all of these systems are information processors; they differ only in form. In any information-processing system, pieces of information are linked by some interaction, and some kind of record is made of these links. For processing to happen in a Darwinian way requires that "good" or "useful" links survive over non-useful links. This happens automatically, if being "useful" helps a link, or trail of links, survive.

In nature, an organism might be more likely to survive if, internally, links were made between receipt of a noxious or damaging stimulus and some behavior to avoid it. Similarly, in the world of knowledge, a review document, which is a record of links or citations to other documents, might be more likely to be reproduced, and thus "survive," if it is a useful and complete review which describes its links in a way that provides insight. In a knowledge web, there might then be another level of links to the best reviews of links, and so on. In this way, the system would "vote" on the best reviews, just as Google keeps track of how the web "votes" on the most useful websites, by looking to see which sites are most often linked. All of this is Bush's memex system in action.
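
(Again for the programming-minded, the "voting by linking" mechanism is almost embarrassingly simple. The sketch below just counts inbound links among some made-up documents; Google's actual PageRank is a weighted, recursive refinement of this same counting idea.)

CODE
from collections import Counter

# A made-up web of documents: each maps to the documents it cites or links.
links = {
    "review_A": ["paper_1", "paper_2", "paper_3"],
    "review_B": ["paper_2", "paper_3"],
    "paper_4":  ["review_A"],
    "paper_5":  ["review_A", "review_B"],
    "trail_X":  ["review_A"],       # a searcher's recorded memex-style trail
}

# Every link is a vote; the most-linked documents "win" the election.
votes = Counter(t for targets in links.values() for t in targets)
for doc, n in votes.most_common():
    print(doc, n)                   # review_A tops the poll with 3 votes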

THE WEB, SEEN AS LINKED DATABASE HOST. To be maximally useful as a target for human thinking, a given piece of information or document needs to be short enough to be read at one sitting, easily reproducible, cheaply available to all, and changeable through selection (that is, able to be edited or added to in some fashion), so that it may be improved as a target for selection by minds. On the web, the usual web-page does not really fit the bill, because the average reader of the page can't change it. Even self-published web-logs ("blogs") can't be changed much by the commentary added to them, although having commentary, like having blogs searchable with Google, is movement in the right direction.

But now we see a new idea for the internet, called "Web 2.0." Web 2.0 is not a piece of technology or particular software so much as it is an IDEA for how to use the old web: use the world wide web software in a more participatory way, with much more user-generated content, so that documents of any sort, including bits of thought, computer programs, works of art, etc., are available to be evaluated and changed by viewers (!). This kind of thing invites vandalism (one thinks of a bunch of brushes and paints sitting alongside a Van Gogh in a museum), but there are ways to deal with it. In the digital world in particular, a work can be fixed easily by having other viewers change it back to the original, if they don't like the change.

ENTER THE WIKI. As a piece of Web 2.0 in particular, we now welcome a thing called a "wiki." A wiki, as originally invented, is merely a set of hyperlinked internet web-pages, each of which is editable at will by any viewer who is given permission to edit. Basically, it is a memex for the general reader, hosted on the internet by a type of database software. A "wiki" (from a Hawaiian word for "quick") can refer either to one of the pages in such a linked net, or to an interlinked set of such pages. Previous versions and edits are recorded and available for each page, so that no information is lost, and anybody can change a bad edit back, or begin again from a previous, better version. Edits can add information, add links between pieces of information, or add links to other links.
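
(A sketch of how little machinery this takes-- what follows is not any real wiki engine's code, just the bare idea of an append-only history with revert:)

CODE
class WikiPage:
    """Bare-bones wiki page: every edit is kept, so no information is lost."""

    def __init__(self, text=""):
        self.history = [text]            # revision 0 is the original

    def edit(self, new_text):
        self.history.append(new_text)    # edits only ever append

    def revert(self, revision):
        self.edit(self.history[revision])   # undoing is itself a new edit

    def current(self):
        return self.history[-1]

page = WikiPage("Peacocks have large tails.")
page.edit("Peacocks have large tails because peahens prefer them.")
page.edit("PEACOCKS ARE FAKE")   # vandalism
page.revert(1)                   # any viewer can roll it back
print(page.current())            # the better version survives, history intact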

The best-known example of a set of wikis is "Wikipedia," a web encyclopedia in wiki form, started in 2001. But Wikipedia didn't invent wikis-- it merely uses them as a software device. Wikipedia started when philosopher Larry Sanger proposed to entrepreneur Jimmy Wales that they use wikis to get the general public to put together basic encyclopedia articles, to be used as feedstock for expert review. But the expert-review part never came about (Sanger is still trying to get it going, after ending his work for Wales), and Wikipedia went on without any experts at all. But we're getting ahead of the story.

The set of interconnected webpages called a WikiWikiWeb was actually invented in 1994 by Ward Cunningham, as "the simplest online database that could possibly work" (see http://en.wikipedia.org/wiki/Wiki ). And so it is. But just as the simplest computer can do anything the most complicated one can, so the simplest on-line database is capable of doing anything that any database can, including evolving in a Darwinian way. In particular, when the nodes of such a database (the wikis) are watched by minds, the complex of a WikiWeb on a server host is capable of evolution and thought.

A web of wikis, of which Wikipedia is but one example, has odd emergent properties. The reason is that any wiki can act as an input to other wikis (through a hypertext link), and can at the same time be the focus of many linking changes, by many editors. Thus, a wiki page acts in many ways like a neuron in a neural net. A neuron (either a real one or an artificial digital one) has many inputs, each of which can be changed in strength, and its output is the finished product. This output, in turn, forms just one link in a web of interconnected neurons. Again, links between neurons can evolve Darwinistically, if there is a way to reproduce them with variation, and to select among them. This is what editors do. In that case, the net product of neuron-linking is evolutionary, and in consequence is a slow kind of creative conversation and thinking.
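
(To make the neuron analogy concrete: below is a toy sketch in which a page's inbound links carry strengths, and passing "editors" reinforce the links they approve and weaken the rest. The update numbers are invented; the point is only that selection plus variation acts on connection strengths, much as it does in an artificial neural net.)

CODE
import random

class WikiNeuron:
    """A wiki page treated as a neuron: weighted inbound links, one output."""

    def __init__(self, name):
        self.name = name
        self.weights = {}                # linking page -> connection strength

    def link_from(self, source):
        self.weights[source] = 1.0       # a new hyperlink is a new synapse

    def editor_pass(self, approves):
        """One editor's visit: reinforce approved links, weaken the rest,
        with a little noise thrown in (the variation step)."""
        for source in list(self.weights):
            self.weights[source] *= 1.2 if approves(source) else 0.8
            self.weights[source] += random.gauss(0.0, 0.01)
            if self.weights[source] <= 0.0:
                del self.weights[source]     # dead links are pruned away

page = WikiNeuron("General_relativity")
for src in ("Einstein", "Hilbert", "pop_trivia"):
    page.link_from(src)
for _ in range(30):                          # thirty editors wander past
    page.editor_pass(lambda s: s != "pop_trivia")
print(page.weights)                          # approved links dominate; the rest fade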

[Indeed, in 1987 the Nobel Prize-winning immunologist Gerald Edelman boldly proposed the idea of "neural Darwinism". In this concept, links between real neurons in all brains are selected in a sort of Darwinistic way, both at the time the brain is being grown/built (when the links are grossly physical) and also later, as the brain learns (when the links consist of more subtle strengthening of connections already made). Our task with computers, then, is merely to model with artificial neurons what a human brain already does with 100 billion cellular neurons.]

Sure enough, ANY net of wikis will be found to be changing, and even drawing conclusions of a type about the information it contains AS a database. This happens even if a "point of view" about which links are best is supposed to be forbidden, as it is on Wikipedia. The reason is that a point of view (POV) is hardly possible to avoid in making links. Anything that functions as a POV ends up being selected for, and amplified, by those who make the connections between wikis, if the link is approved by others who run across it. Thus, any set of wikis, like any ecosystem or economy, is a kind of brain, though a crude and artificial one. One can try to keep such a brain from thinking, but any time a database is allowed to duplicate and connect, it will end up thinking Darwinistically about something, merely by making connections and reproducing the best ones. What will it think ABOUT? Why, whatever its users and makers are thinking about. It is audience-driven, like a stand-up comedy act being honed as it goes. At Wikipedia, the content seems to be mainly what's on TV or the newest game-box. But in theory, as with the memex, the subject can be anything.

WIKIS AS DELIBERATE THINKING NODES. Now, the most interesting use of wikis is in places where collaboration-- with point of view, and original work and research-- IS explicitly encouraged, as in any creative program. The wiki now becomes like a product on the market, but, like a joke that improves with telling, it's a product that anyone can change and reproduce. The computer keeps track of all this, so that things far more complex than jokes can be manipulated and edited by society. Wikis have been used in science, in government, in product design, and as the basis for all manner of collaborative projects between people. Indeed, there are wiki projects hosted by the Wikimedia Foundation (which also hosts Wikipedia) which are supposed to be creative in this way (as Wikipedia is not). Interestingly, most of them aren't working too well, possibly because the really creative and informed people are busy elsewhere.

The May 2008 edition of Scientific American describes wiki-driven scientific collaborations which turn out (not surprisingly, for those familiar with Wikipedia) to have startling emergent properties. The projects described vary, but one notably successful one is OpenWetWare, a collaborative wiki (or set of wikis, if you will) that is active as a database on the subject of how to get laboratory techniques to work. Other science-related topics are also discussed there, such as job-hunting. The site now has more than 6,000 pages on various topics, edited by 3,000 registered users. Yes, this is small by Wikipedia standards (Wikipedia has 10 million pages), but it does have some differences.

One of them is no anonymity. Unlike Wikipedia, which can be edited by anybody (including anonymous IP accounts and users with fake names, which are encouraged), OpenWetWare is visible to anyone, but ("unlike the oft-defaced Wikipedia," says Scientific American) you cannot edit it unless you've registered and proved to the site managers that you belong to a legitimate research organization. Thus, "We've never yet had a case of vandalism," says Jason Kelly, a graduate student who sits on the site's steering committee. Scientific American then adds something Wikipedia editors all know: even if damage were done, it could be rolled back with the click of a mouse. Like Wikipedia, the site retains records of all previous versions of all pages. (Wikipedia users also know that this, by itself, is not enough to win a war with determined vandals.)

The OpenWetWare site has proven interesting for those used to the scientific process. The science research paper is also a sort of artificial neuron, serving as the focus of a database with many inputs and some outputs. But a typical science paper has limited inputs-- usually audience comments when the work is pre-presented at meetings, comments by professors and the paper's few authors, and (of course) comments by the handful of "peer reviewers" who go over the paper before it's re-edited for publication (assuming it does get published). After that, the editing is over forever, except for a few later-published errata notes, if these are needed. The paper is reproduced as journal copies and database abstracts and even in on-line journals, but it's never again alterable. The best that can happen is that it becomes part of a review, or of the discussion in another published paper that cites it. But the cycle time for this is long-- often six months to a year. On the web, the same process could in theory take only days, or even hours.

PROBLEMS. Are there still problems with using this for science collaboration projects, in which new material is produced? Yes. Two of Wikipedia's most notorious problems (vandalism, and input by complete ignoramuses) have been somewhat solved in Science 2.0 collaborative sites, simply by doing something that Wikipedia has so far refused to do: checking the credentials and identity of contributors. But there are still other problems. Here are two:

[1]. One problem is that you can never have enough credentials. The Science 2.0 systems that now exist still do not have a good reputation-management system which tracks the reputation of individual posters. Professor X, for example, might have a wonderful CV, but half the time these days he may be a bit senile, and he's bad-tempered besides. So how do you know to avoid reading, or taking seriously, anything he says?
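
(What the missing reputation system might minimally look like, sketched with an invented scoring rule-- the point is that reputation is earned edit by edit, not read off a CV:)

CODE
from collections import defaultdict

class ReputationLedger:
    """Score contributors by outcomes, not credentials: edits that survive
    peer scrutiny raise a score; edits that get reverted lower it."""

    def __init__(self):
        self.score = defaultdict(float)

    def record(self, who, survived):
        # Invented rule: surviving edits earn +1, reverted edits cost -2,
        # so being right only half the time still trends you downward.
        self.score[who] += 1.0 if survived else -2.0

    def trustworthy(self, who, threshold=5.0):
        return self.score[who] >= threshold

ledger = ReputationLedger()
for _ in range(10):
    ledger.record("Professor_X", survived=False)   # splendid CV, bad edits
    ledger.record("grad_student", survived=True)
print(ledger.trustworthy("Professor_X"))    # False
print(ledger.trustworthy("grad_student"))   # True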

[2]. The second problem is that of credit. This is not as bad as it may seem, for anybody can see what you said that was new in a science collaboration on a wiki, and you'll get credit for it. What's harder to control is that if you choose to participate in such a system, you no longer have infinite time to tackle, and be the first to analyze, your own data, if you're foolish enough to post it. That may lead to somebody making the same insight you would have, but a day sooner. If your collaborator gets to the mountaintop a step before you do, does it count? Sherpa Tenzing Norgay, for example, was probably only the second person to set foot on the very top of Mt. Everest, but does that really matter? Everyone knows that Norgay and Hillary were an equal team. In the usual scientific paper with a string of authors, place in line does not "count," at least for the first couple of places. In unusual collaborations (such as papers from giant accelerator projects), even scores of people are included as equals. But what if the data is let out before everyone can put their names on it at the same time? Then separate publications do cause problems. Credit for important discoveries has turned on a day or two of difference in submission times to a journal.

Here's an example of what might happen in a more extended and less formal collaboration, of the kind that might take place online. Famously, Albert Einstein, who'd been working for several years on a differential-geometric theory of gravity called "general relativity," had the bad idea in 1915 of telling the differential geometer David Hilbert what the new theory needed to do, and what it needed mathematically to look like. With Einstein's lead, the genius mathematician Hilbert then figured it all out in a month or two, but by then Einstein had worked it out also. By sheer luck, Einstein beat Hilbert to publication by only a couple of days. One can imagine that it might have gone the other way-- then who would properly have gotten credit for general relativity? Hilbert could not possibly have done it had Einstein not laid out what the math needed to do. But Hilbert was by far the better mathematician, as everyone including Einstein admitted. No Nobel prize was involved here, but we all know how important general relativity is. This kind of thing can be expected to happen more and more often in Science 2.0.

PAYOFFS. For all these problems, Science 2.0 offers a landscape of collaboration which could only have been dreamed of a dozen years ago, as the internet was appearing for most people in the West and beginning to offer us email and searchable papers on-line, as "Web 1.0." With Science 2.0, not only are you "competing" in real time against all the scientists in the world interested in your subject, but you can also potentially collaborate with them. This means that people might put suggestions and wisdom into a work pre-publication that you never could have gotten otherwise. They will help find mistakes. They will suggest connections. They will act like super-peer-reviewers with power to help, but less power to harm. Some of these people will be tenured already, and won't care about stealing ideas or about who gets credit for them (think of open-source software development). So Science 2.0 can be like perpetually being able to present your work in a "plenary" talk at a scientific meeting, before a large hall of the best experts in your field, any time you like. Even if you're a mere first-year graduate student.

As any scientist knows, THAT opportunity is like gold. Because the questions and comments and judgments from the audience are guaranteed to be knowledgeable, piercingly insightful, and creative. And free.

-- Milt

Posted by: Jon Awbrey

It's the Summer of '42 all over again …

Jon cool.gif

Posted by: Milton Roe

QUOTE(Jon Awbrey @ Mon 21st April 2008, 7:22am) *

It's the Summer of '42 all over again …

Jon cool.gif

Meaning? Vannevar Bush's cute little idea finally did get mechanized, you know. You're using it. And it did change the world. And it's not done yet.

Posted by: Moulton

That's a first class essay, Milton.

One tiny suggestion...

QUOTE(Milton Roe @ Mon 21st April 2008, 3:16am) *
Now, in order to have a system that uses evolution, all you need is reproduction and selection.


In order to have a system that evolves, you need reproduction with perturbations (random or intentional) and selection.

Posted by: Jon Awbrey

QUOTE(Milton Roe @ Mon 21st April 2008, 3:43am) *

QUOTE(Jon Awbrey @ Mon 21st April 2008, 7:22am) *

It's the Summer of '42 all over again …

Jon cool.gif


Meaning? Vannevar Bush's cute little idea finally did get mechanized, you know. You're using it. And it did change the world. And it's not done yet.


Uncle Milty,

I wrote my first programs for learning and reasoning agents somewhere in the late (19)70's and spent pretty much the whole subsequent decade developing and integrating various modules of what I would come to call an "Architecture For Inquiry". The research that I carried out for an M.A. in Quantitative Psych in 1989 acquainted me with the entire literature on adaptive, behaviorist, clonal selection, competitive-cooperative, connectionist, embryological, evolutionary, genetic, and just plain ordinary learning models, including the work of http://nobelprize.org/nobel_prizes/medicine/laureates/1972/edelman-bio.html on Models Of Clonal Selection (MOCS) applied to the cognitive domain. I returned to grad school in Systems Engineering in the (19)90's as a way of working on unfinished business in AI, and spent the next decade developing a research programme on "Inquiry Driven Systems".

So I do have a perspective on these topics.

Jon cool.gif

Posted by: Jon Awbrey

A previous discussion along these lines began with this post.

Jon cool.gif

Posted by: Milton Roe

QUOTE(Moulton @ Mon 21st April 2008, 12:46pm) *

That's a first class essay, Milton.

One tiny suggestion...

QUOTE(Milton Roe @ Mon 21st April 2008, 3:16am) *
Now, in order to have a system that uses evolution, all you need is reproduction and selection.


In order to have a system that evolves, you need reproduction with perturbations (random or intentional) and selection.

Wups, yes, for sure, as otherwise there are no alternatives to "select" between. It can't be perfect reproduction: there must either be intentional change, or "noise," or both.

Some wag long ago suggested that if our genome were a triple helix like collagen, with an error-correction system that took any 2-out-of-3 vote among the strands, we'd probably all still be microbes. (With such voting, an error sticks only when two strands err together at the same site, so per-site error rates would fall roughly from p to order p-squared-- too little variation left for selection to work with.) smile.gif

QUOTE(Jon Awbrey @ Mon 21st April 2008, 1:28pm) *

QUOTE(Milton Roe @ Mon 21st April 2008, 3:43am) *

QUOTE(Jon Awbrey @ Mon 21st April 2008, 7:22am) *

It's the Summer of '42 all over again …

Jon cool.gif


Meaning? Vannevar Bush's cute little idea finally did get mechanized, you know. You're using it. And it did change the world. And it's not done yet.


Uncle Milty,

I wrote my first programs for learning and reasoning agents somewhere in the late (19)70's and spent pretty much the whole subsequent decade developing and integrating various modules of what I would come to call an "Architecture For Inquiry". The research that I carried out for an M.A. in Quantitative Psych in 1989 acquainted me with the entire literature on adaptive, behaviorist, clonal selection, competitive-cooperative, connectionist, embryological, evolutionary, genetic, and just plain ordinary learning models, including the work of http://nobelprize.org/nobel_prizes/medicine/laureates/1972/edelman-bio.html on Models Of Clonal Selection (MOCS) applied to the cognitive domain. I returned to grad school in Systems Engineering in the (19)90's as a way of working on unfinished business in AI, and spent the next decade developing a research programme on "Inquiry Driven Systems".

So I do have a perspective on these topics.

Jon cool.gif

That's extremely impressive! So what happened to your programme? It sounds like GOOGLE should want to hire you, and I'm not surprised that you ran afoul of Wikipedia. Or academia, quite frankly, if that is what happened. We do not live in a culture of direct selection for performance most of the time, as you will have noticed, except sometimes by accident (or by parallel selection for some co-segregating marker of performance, like social influence). How Edelman managed to get the Nobel must be a story in and of itself.

Posted by: Milton Roe

QUOTE(Milton Roe @ Mon 21st April 2008, 7:16am) *

Science 2.0: Wikis as the Neurons in an Artificial Neural Net

I re-edited this to add the Vannevar Bush material, and to make it readable for people new to Wikipedia. Johnny Cache: You did your work on evolutionary systems before wikis were invented. But only a wiki is a fully "instant feedback mega-meme" for the web, like a joke or (even better) like a standup comedy routine or performance or sermon honed before many audiences. If you're going to do distributed processing on the web in anything like a decent cycle time, you can't wait for the standard scientific pre-publication review process. So the wiki acts like the presentation of an abstract at a scientific congress. It's really the first database software which allows this kind of thing. So I think we need to mark it as a significant piece of progress in the field, even if you've "seen it all before" from other views. After all, it's still memex on steroids. But speed brings emergent properties, as we saw with Deep Blue.

M.

Posted by: Jon Awbrey

QUOTE(Milton Roe @ Thu 24th April 2008, 10:40pm) *

QUOTE(Milton Roe @ Mon 21st April 2008, 7:16am) *

Science 2.0 : Wikis as the Neurons in an Artificial Neural Net


I re-edited this to add the Vannevar Bush material, and to make it readable for people new to Wikipedia. Johnny Cache: You did your work on evolutionary systems before wikis were invented. But only a wiki is a fully "instant feedback mega-meme" for the web, like a joke or (even better) like a standup comedy routine or performance or sermon honed before many audiences. If you're going to do distributed processing on the web in anything like a decent cycle time, you can't wait for the standard scientific pre-publication review process. So the wiki acts like the presentation of an abstract at a scientific congress. It's really the first database software which allows this kind of thing. So I think we need to mark it as a significant piece of progress in the field, even if you've "seen it all before" from other views. After all, it's still memex on steroids. But speed brings emergent properties, as we saw with Deep Blue.

M.


If you want to have a real discussion about this stuff then maybe we can.

Right now my impression is that there's a whole lot of Artificial Intellectual History that you just don't seem to know.

So I'm presently uncertain where to begin.

Some of my Summae are currently accessible from http://www.wikipediareview.com/Directory:Jon_Awbrey at Wikipedia Review, though some of them are still in the process of being re*formatted for the umpteenth time.

This could be one starting point:
  • http://www.wikipediareview.com/Directory:Jon_Awbrey/Papers/Differential_Logic_and_Dynamic_Systems
This could be another:
  • http://www.wikipediareview.com/Directory:Jon_Awbrey/Papers/Inquiry_Driven_Systems
There is also this somewhat curious mid-wood summary of how I saw the trees just before I went back to grad school in 1993, but only the first part has been wikified yet, and I haven't gotten around to retyping the bibliography:
  • http://www.wikipediareview.com/Directory:Jon_Awbrey/Essays/Prospects_For_Inquiry_Driven_Systems

Jon cool.gif

Posted by: JohnA

Shouldn't this be on the meta-forum?

Posted by: Milton Roe

QUOTE(Jon Awbrey @ Fri 25th April 2008, 3:05am) *

If you want to have a real discussion about this stuff then maybe we can.

Right now my impression is that there's a whole lot of Artificial Intellectual History that you just don't seem to know.

So I'm presently uncertain where to begin.

Some of my Summae are currently accessible from http://www.wikipediareview.com/Directory:Jon_Awbrey at Wikipedia Review, though some of them are still in the process of being re*formatted for the umpteenth time.

This could be one starting point:
  • http://www.wikipediareview.com/Directory:Jon_Awbrey/Papers/Differential_Logic_and_Dynamic_Systems
This could be another:
  • http://www.wikipediareview.com/Directory:Jon_Awbrey/Papers/Inquiry_Driven_Systems
There is also this somewhat curious mid-wood summary of how I saw the trees just before I went back to grad school in 1993, but only the first part has been wikified yet, and I haven't gotten around to retyping the bibliography:
  • http://www.wikipediareview.com/Directory:Jon_Awbrey/Essays/Prospects_For_Inquiry_Driven_Systems
Jon cool.gif

Agggh, gak, gak. Links to a couple of hundred pages of math research are not going to help me any. They just make me want an executive summary, or (failing that) some real-world working examples of the ideas (as in, some code, or a link to something coded, so I can see it work). I have read some of the papers you cite, even the 1956 one in which the number of words the Klingons have for "kill" is discussed. Of course there's a whole lot of AI theory that anybody in the field doesn't know: it's a huge field, and people don't talk to each other. Also, it's not my field, so of course I don't have a great grasp of it. I certainly didn't know Peirce actually contributed to trying to solve Boolean network problems. I think his work must have been rediscovered, since ideas don't work nearly as well, cultural-evolutionarily, when stuck in a manuscript trunk for 50 years in some old dusty library, as when posted as a wiki.

Did YOU know that kybernetika (I'm not looking up how to post the Greek letters here) is the word not only for a rudder but for steersmanship, and that "cybernetics" was first used as a word for the science of control, from this same root, by Ampere (the electricity/math guy) prior to 1843 (in an essay published after his death), and that Wiener independently came up with both the same word and the same idea later, and didn't know about Ampere until told, after his book came out?

And yet, even if I read all your stuff (which I did not), we'd still be talking past each other, because I did read enough to see that you're talking about pure AI (no humans involved, no human-use-of-human-beings). I'm not. Use of wikis already assumes there's intelligence in the system, and those brains already "know" how to frame hypotheses, etc. They are already intelligent by themselves, and the question is how best to make them hyper-intelligent as a distributed system. That's quite different, though some of the same problems arise. Obviously, for example, intelligence may arrive de novo from distributed non-intelligent systems, since all the real-world examples of intelligence have! Just tell me the kind of mind you're talking about, and I'll tell you the scale you need to look at to find it "distributed."

I note the fact of your posting this stuff on a wiki at Wikipedia Review. For what purpose, may I ask, if you don't already have some idea of what I'm talking about?

One of the (many) ways to look at intelligence (AI or otherwise) is as the ability to connect dots on a map of some kind, but (as you note) this problem is very different if you know all about the map than if you know only a bit of it around point A and perhaps nothing at all about the territory around point B. There have been a lot of Darwinian (memetic, genetic) and population-driven search-optimization algorithms aimed at the metaheuristics problem, and these are more along the line of what happens with the memex/wiki approach (in which individual people mostly do their own metaheuristics, but some of it also arrives emergently from the collaboration, without any formal mathematical understanding at all-- this is swarm behavior). Even in cases where something reasonable is known of the entire 3-D map, as in a topo map (of course you can never have a complete picture), we've had some remarkable progress just lately in computer vision and control systems (the DARPA Grand Challenges for bot-driven vehicles made gigantic progress over three recent years; since they are held near where I live, I got to watch them in person)*. So these things are evolving and getting better. Are you perhaps working on any of them?
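
(To be concrete about what I mean by "population-driven search": below is a bare-bones particle swarm on an invented landscape. Each agent mixes its own best find with the swarm's shared best; no agent ever sees the whole map, yet the swarm finds the minimum. The constants are conventional choices, not from any particular paper.)

CODE
import random

def swarm_search(f, dim=2, agents=30, steps=200):
    """Bare-bones particle swarm: each agent mixes its own best find with
    the swarm's shared best. No agent ever sees the whole map."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    best = [p[:] for p in pos]            # each agent's personal best position
    gbest = min(best, key=f)[:]           # the swarm's shared best position
    for _ in range(steps):
        for i in range(agents):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (best[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i][:]
                if f(best[i]) < f(gbest):
                    gbest = best[i][:]
    return gbest

# Invented landscape with its minimum at (3, -2); the swarm homes in on it.
print(swarm_search(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2))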

Milt

* I'm told that, historically, the first determinant Hessian blob detector was actually Ichabod Crane…