From Social Intelligence, Vol. 1, No. 2, pp. 87-120; an edited
version of a paper originally submitted to the Hypertext 87 conference.
Reprinted with permission of the author.
WWW conversion by Russell Whitaker.
Released to the WWW on 20 October 1995
Last updated: 23 September 1996
Moved from www.asiapac.com/ to www.foresight.org/ on March 27, 1997
Hypertext Publishing and the Evolution of Knowledge
Media affect the evolution of knowledge in society.
A suitable hypertext publishing medium can speed the evolution of knowledge
by aiding the expression, transmission, and evaluation of ideas. If one
aims, not to compete with the popular press, but to supplement journals
and conferences, then the problems of hypertext publishing seem soluble
in the near term. The direct benefits of using a hypertext publishing medium
should bring emergent benefits, helping to form intellectual communities,
to build consensus, and to extend the range and efficiency of intellectual
effort. These benefits seem numerous, deep, and substantial, but are hard
to quantify. Nonetheless, rough estimates of benefits suggest that development
of an adequate hypertext publishing medium should be regarded as a goal
of first-rank importance.
The evolution of knowledge
Knowledge is valuable and grows by an evolutionary process. To gain valuable
knowledge more rapidly, we must help it evolve more rapidly.
Evolution proceeds by the variation and selection of replicators. In the
evolution of life, the replicators are genes; they vary through mutation
and sexual recombination and are selected through differential reproductive
success. In the evolution of knowledge, the replicators are ideas; they
vary through human imagination and confusion and are likewise selected through
differential reproductive success - that is, success in being adopted by
new minds. (These ideas are memes, in Dawkins' terminology.)
Evolutionary epistemology maintains
that knowledge grows through evolution. Animals - and even plants - can
be said to know of certain regularities in their environments;
this knowledge, embodied genetically, certainly evolved. Like genes, folk
traditions are passed on from generation to generation; surviving traditions
tend to embody knowledge that aids survival. Karl
Popper describes science in evolutionary terms, as a process of conjecture
and refutation, that is, of variation and selection.
The scientific community evolves knowledge with unusual effectiveness because
it has evolved traditions and institutions that foster the effective replication,
variation, and selection of ideas. Teaching, conferences, and journals replicate
ideas; the lure of recognition helps bring forth new ideas; peer review,
refereeing, calculation, and direct experiment all help select ideas for
acceptance or rejection. Every community evolves ideas, but science is distinguished
by unusually rigorous and reality-based mechanisms for selection - by the
nature of its critical discussion.
To improve critical discussion and the evolution of knowledge, we can seek
to improve the variation, replication, and selection of ideas. To aid variation,
we can seek to increase the ease and expressiveness of communication. To
aid replication, we can seek to speed distribution, to improve indexing,
and to ensure that information, once distributed, endures. To aid selection,
we can seek to increase the ease, speed, and effectiveness of evaluation
and filtering. The nature of media affects each of these processes, for
better or worse.
The nature of a medium can clearly affect critical discussion and hence
the evolution of knowledge. Consider how the lack of modern print media
would hinder the process: Imagine research and public debate in a world
where all publications took ten years to appear, or had to contain at least
a million words apiece. Or imagine a world that never developed the research
library, the subject index, or the citation. These differences would hinder
the evolution of knowledge by hindering the expression, transmission, and
evaluation of new ideas. If these changes would be for the worse, then a
medium permitting faster publication of shorter works in accessible archives
with better indexing and citation mechanisms should bring a change for the
better. The naive idea that media are unimportant in evolving knowledge
- that only minds matter - seems untenable.
The effects of media on variation, replication and selection can be described
in more familiar terms as effects on expression, on transmission,
and on evaluation. These categories provide an analytical framework
for examining how media affect critical discussion and the evolution of
knowledge.
The newest major medium is television, but it seems poorly suited for
critical discussion. Its cost limits access, limiting the range of ideas
expressed; political regulation worsens the problem. Its nature - a stream
of ephemeral sounds and images pouring past on multiple channels - does
not lend itself to the expression of complex, interconnected bodies of information.
Transmission of new information is often very fast, but in a form awkward
to file, index, and retrieve. Viewers cannot easily or effectively correct
televised misinformation. It is hard to imagine researching anything by
watching television, save television itself. Similar remarks apply to radio.
The medium of paper publishing does better. It is relatively open, inexpensive,
and expressive. Paper books and journals have been the medium of choice
for expressing humanity's most complex ideas. Published items endure, and
they can be copied, filed, and quoted. Paper books and journals, however,
suffer from sluggish distribution and awkward access.
Paper publishing's greatest weakness lies in evaluation. Here, refereed
journals are best - but consider the delay between having a bad idea and
receiving public criticism, that is, the cycle-time for public critical
discussion:
An author's (bad) idea leads to a write-up, then to submission, review,
rewriting, resubmission, publication, and distribution: only then (after
months' delay) does it become public. This then leads to reading by a critic,
an idea for a refutation, write-up, submission, publication, and distribution:
only then (after further months) has the idea received public criticism.
This cycle can easily take a year or more, though the total thinking-time
required may be only a matter of days. And even then the original publication
exists in a thousand libraries, unchanged and unmarked, waiting to mislead
readers.
The sluggishness of paper publishing forces heavy reliance on communication
in small groups. There, cycles of expression, transmission, and evaluation
are fast and flexible, but operate within the narrow bounds of the community.
This limits both the criticism of bad ideas and the spread of good ones.
Computer conferencing systems aim to combine the speed of electronic media
with the text-handling abilities of paper media. They can combine some of
the virtues of small-group interactions with those of wide distribution.
The better computer conferencing systems have much in common with hypertext
publishing systems, though all presently lack one or more essential characteristics.
Since they are diverse and rapidly evolving, it seems better to describe
what they might become than to try to take a snapshot of their present state.
A hypertext publishing medium is a system in which readers can follow
links across a broad and growing body of published works. Hypertext
publishing therefore involves more than the publication of isolated hypertexts,
such as HyperCard
stacks. This paper follows Jeff Conklin 
in taking 'a facility for machine support of arbitrary cross-linking between
items' as the primary criterion of hypertext.
Hypertext publishing systems can provide an open, relatively inexpensive
medium having the expressiveness of print augmented by links. Electronic
publication of reference-links, indexes, and works will speed the transmission
of ideas; criticism-links and filtering mechanisms will speed their evaluation.
The nature and value of such systems are the topic of the balance of this
paper.
Randy Trigg has stated an ambitious long-term goal for computer media and
scholarly communication:
In our view, the logical and inevitable result will be the transfer
of all such activities to the computer, transforming communication within
the scientific community. All paper writing, critiquing, and refereeing
will be performed online. Rather than having to track down little-known
proceedings, users will find them stored in one large distributed computerized
national paper network. New papers will be written using the network, often
collaborated on by multiple authors, and submitted to online electronic
journals.
In spirit, this embraces a broader goal: transforming communication within
the community of serious thinkers, including those outside the scientific
community. It also embraces a narrower goal: transforming communication
within smaller communities which still must use paper media to publicize
their results. None of these goals entail competing with local newspapers,
glossy magazines, or popular books; they aim only at providing better tools
for communities of knowledge workers.
Kinds of hypertext
With these goals in mind, it may help to distinguish among several sorts
of hypertext:
Full vs. semi-hypertext: Full hypertext supports links, which can
be followed in both directions; semi-hypertext supports only pointers or
references, which can be followed in only one direction. As we shall see,
true links are of great value to critical discussion, and hence to the evolution
of knowledge.
Fine-grained vs. coarse-grained hypertext: This embraces two issues.
First, can one efficiently publish short works, such as brief comments on
other works? Second, can a critic link to paragraphs, sentences, words,
and links - or only to author-defined chunks of text? Fine-grained linking
has value chiefly in a critical context: given fine-grained publishing,
authors can structure their work to match their ideas, but critics will
often want to pick nits or blast small, vital holes in parts of
an author's structure - parts that may not be separate objects. To do so
neatly requires fine-grained linking.
Public vs. private hypertext: A public hypertext system will be
a hypertext publishing system - if it is any good. A public system must
be open to an indefinitely large community, scalable to large sizes, and
distributed both geographically and organizationally; no central organization
can control access or content. Closed or centrally-controlled systems are
effectively private. Public systems will aid public discussion.
Filtered vs. bare hypertext: A system that shows users all local
links (no matter how numerous or irrelevant) is bare hypertext.
A system that enables users to automatically display some links and hide
others (based on user-selected criteria) is filtered hypertext.
This implies support for what may be termed social software, including
voting and evaluation schemes that provide criteria for later filtering.
'Hypertext publishing': This paper will use the terms hypertext
publishing and hypertext medium as shorthand for filtered,
fine-grained, full-hypertext publishing systems. The lack of any of these
characteristics would cripple the case made here for the value of hypertext
in evolving knowledge. Lack of fine-grained linking would do injury; lack
of any other characteristic would be grievous or fatal. Most important is
that the system be public: the difference between using a small, private
system and using a large, public system will be like the difference between
using a typewriter and filing cabinet and using a publisher and a major
library.
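The distinctions above can be sketched as a toy data model (all class and field names here are hypothetical, not drawn from any system the paper describes): a semi-hypertext would store only outgoing pointers, a full hypertext also indexes incoming links so that readers of a target see them regardless of its author's wishes, and a fine-grained link addresses a span within a document rather than the document as a whole.

```python
# Toy sketch of full, fine-grained hypertext; all names are hypothetical.

class Doc:
    def __init__(self, doc_id, text):
        self.doc_id = doc_id
        self.text = text

class Store:
    """A full, fine-grained hypertext store: links are indexed in both
    directions and may target a character span, not just a document."""
    def __init__(self):
        self.docs = {}
        self.out_links = {}   # source doc_id -> links (all a semi-hypertext has)
        self.in_links = {}    # target doc_id -> links (what makes it *full*)

    def publish(self, doc):
        self.docs[doc.doc_id] = doc

    def link(self, source_id, target_id, span=None, note=""):
        link = {"source": source_id, "target": target_id,
                "span": span, "note": note}
        self.out_links.setdefault(source_id, []).append(link)
        # The backlink index lets readers of the target see the link
        # without the target author's cooperation.
        self.in_links.setdefault(target_id, []).append(link)
        return link

store = Store()
store.publish(Doc("paper", "A disputed claim, stated with confidence."))
store.publish(Doc("critique", "The claim fails for the following reason."))
# Fine-grained: the critique targets characters 0-16 of the paper.
store.link("critique", "paper", span=(0, 16), note="disputed claim")

# A reader opening "paper" sees the incoming criticism:
incoming = store.in_links.get("paper", [])
```

The `in_links` index is the essential difference: with only `out_links`, the system would be semi-hypertext, and the critique would remain invisible to readers of the criticized paper.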
To support the evolution of knowledge effectively, a hypertext publishing
medium must meet a variety of conditions. Nelson [6,7]
and Hanson have specified some
of them; the following describes an overlapping set and relates it to the
evolution of knowledge. Several conditions are included because they conflict
with common practice in computer systems administration, yet seem necessary
for a functioning publishing system.
Must support effective criticism
Hypertext publishing must support links across a distributed network of
machines, and these links must be visible regardless of the wishes of the
linked-to author. The resulting medium can greatly enhance the effectiveness
of critical discussion. Since this conclusion is pivotal to the argument
of this paper, it deserves detailed consideration. Consider how the critical
process works in paper text, the current medium of choice, and how it may
be expected to work in hypertext:
In each case, we start with a published paper making a plausible statement
on an important issue - but a statement that happens to be wrong. Imagine
the results in the medium of paper text and in hypertext. In both, some
readers see that the statement is wrong. In both, a few know how to say
why, clearly and persuasively. Then the cases diverge.
Faced with a paper publication, these critics may (1) fume, (2) complain
to an officemate or spouse, (3) scribble a cryptic note in the margin, or
(4) write a critical letter that may (5) eventually be published in a subsequent
issue. Steps (1-3) contribute little to critical discussion in society:
they fail to reach a typical reader of the offending paper and leave no
public record. Step (4) is an expensive gamble in time and effort: it demands
not only the effort of handling paper and addressing an envelope, but that
of describing the context, specifying the objectionable points, and stating
what may (to the critic) seem a stale truism that everyone should know
already. Depending on editorial whim, step (5) then may or may not
result. Even at best, readers won't see the critical letter until weeks
or months after they have read and absorbed the offending paper.
In a hypertext publishing medium, critics can be more effective for less
effort. Those who wish to can write a critical note and publish it immediately.
They can avoid handling papers and envelopes because the tools for writing
will be electronic (and at hand). They can avoid describing the context
and the objectionable points because they can link directly to both. They
can quote a favorite statement of the truism by linking to it, rather than
restating it; if its relevance is clear enough, they needn't even write
an explanatory note. And not only is all this easier than in paper text,
but the reward is greater: publication is assured and prompt, and links
will show the criticism to readers while they are reading the erroneous
document, rather than months later.
In short, criticism will be easier, faster, and far more effective; as a
consequence, it will also be more abundant. Abundant, effective criticism
will decrease the amount of misinformation in circulation (thereby decreasing
the generation of further misinformation). Abundant, effective criticism
of criticism will improve its quality as well. Reflection on the ramifying
consequences of this suggests that the improvement in the overall quality
of critical discussion could be dramatic.
Must serve as a free press
To maximize the effectiveness of criticism, a hypertext publishing system
must serve as a genuine free press. In addition to being scalable, open,
and having diverse ownership, it should allow anonymous reading (and perhaps
authoring under partially-protected pseudonyms). These conditions all facilitate
broad participation with a minimum of constraints, aiding expression and
transmission.
As Ithiel de Sola Pool notes, in the U.S., restrictions on free speech in
new media have typically stemmed from their identification as tools of commerce,
rather than as forms of speech or publication.
To reduce the chance of bad legal decisions regarding First
Amendment rights in hypertext publishing, we should recognize that the
participants are authors, publishers, libraries, and readers; we should
avoid commercial terms such as information providers, vendors, and buyers.
Must handle machine-use charges
To have a free press, it seems that one must charge for machine use. Computer
time and storage space have become cheap and abundant, but not free and
unlimited. Even cheap and abundant resources must be rationed - imagine
a hacker deciding to store the integers from one to infinity on a 'free'
system. The choice is not whether to ration, but how. One can ration machine
resources by reserving them for free use by a small, subsidized elite that
is implicitly subject to strong social controls: this is a solution used
by institutions on the ARPANET. One can ration storage space by having a
privileged editor delete authors' material: this is a solution used by many
computer conferences and bulletin boards. One can ration by imposing wasteful
costs on people, making them wait in lines long enough to cut demand to
match supply. Or, one can charge what the service costs, so that additional
users will pay for additional machines, allowing indefinite expansion and
access without editing or discrimination. Charging is the solution that
has made on-line services available to high-school kids and retired farmers.
It is worth noticing just how low those charges can be. The cost of long-term
storage of data on a spinning disk drive is now in the range of cents per
kilobyte - this makes text cheaper to store than to write, even if one
types at full speed without thinking and values one's time at minimum wage.
The cost of an hour's rental of a processor and a megabyte of RAM is again
a fraction of minimum wage. (Telecommunications is a greater expense, but
its charges are harder to fudge.) In short, the main cost of using computers
(telecommunications aside) is already the value of the time one spends.
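The storage-versus-writing comparison can be made concrete with rough late-1980s figures; the disk cost and typing speed below are illustrative assumptions, and the wage is the 1987 U.S. federal minimum.

```python
# Rough check of the claim that text is cheaper to store than to write.
# All figures are illustrative assumptions in 1987 terms.

storage_cents_per_kb = 2.0      # "cents per kilobyte" of disk (assumed)
minimum_wage_per_hour = 3.35    # U.S. federal minimum wage, 1987 ($)
words_per_minute = 60           # fast, non-stop typing
bytes_per_word = 6              # ~5 letters plus a space

kb_typed_per_hour = words_per_minute * 60 * bytes_per_word / 1024
writing_cents_per_kb = minimum_wage_per_hour * 100 / kb_typed_per_hour

# Even at full typing speed and minimum wage, writing a kilobyte
# costs several times more than storing it.
ratio = writing_cents_per_kb / storage_cents_per_kb
```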
On the whole, charging will increase openness and convenience, as it does
in the free-press system of conventional publishing.
Must handle royalties
To have the familiar incentives of a free press, hypertext publishing must
handle royalties. Royalties can eventually enable people to make a living
as writers, and will encourage the production of boring but valuable works,
such as indexes. The experience of conventional publishing suggests that
royalties will be inexpensive for readers: if a hardcover book costs twenty
dollars and takes six hours to read, typical author's royalties amount to
roughly fifty cents per reading-hour. Paperback royalties and magazine
writers' earnings are less.
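The fifty-cents-per-reading-hour estimate can be reconstructed as follows; the 15% royalty rate is an assumed figure typical of hardcover contracts, not one stated in the text.

```python
# Reconstructing the royalty estimate from the text's figures.

hardcover_price = 20.00   # dollars (from the text)
reading_hours = 6         # hours to read the book (from the text)
royalty_rate = 0.15       # assumed typical hardcover author royalty

royalty_per_copy = hardcover_price * royalty_rate     # $3.00 per copy
royalty_per_hour = royalty_per_copy / reading_hours   # ~$0.50 per hour
```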
Must support flexible filtering
An open publishing medium with links presents a major problem: garbage.
If anyone can comment on anything, important works will become targets for
hundreds or thousands of links, most bearing comments that readers will
regard as worthless or redundant. A bare hypertext system would become useless
precisely where its content is most interesting.
To deal with this problem, authors must have exclusive rights to unique
names, so readers can use those names as indicators of quality. Readers
must be able to rate what they read, so that their judgments can aid later
readers' choices. Readers must be able to use automatic filters (configured
to match their preferences) to sift sets of links and choose which are worth
displaying. Making it easy for readers to send each other pointers to documents
would aid personal recommendation. Further, readers should be able to attach
triggers to items - for example, a trigger that sends a message whenever
a (highly-rated) item appears in a place of special interest. This could
dramatically reduce the effort of scanning and re-scanning the key writings
in a field to find links to relevant advances.
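A minimal sketch of the filtering and trigger mechanisms described above (the names, rating scale, and thresholds are all hypothetical): readers rate links, a filter hides links whose mean rating falls below a reader-chosen threshold, and a trigger reports highly-rated links appearing at a watched location.

```python
# Toy sketch of filtered hypertext; all names and values are hypothetical.

links = [
    {"target": "key-paper", "note": "decisive counterexample", "ratings": [5, 5, 4]},
    {"target": "key-paper", "note": "me too",                  "ratings": [1, 2]},
    {"target": "key-paper", "note": "useful index",            "ratings": [4, 4]},
]

def mean_rating(link):
    return sum(link["ratings"]) / len(link["ratings"])

def filtered(links, threshold):
    """Show only links whose mean reader rating meets the threshold."""
    return [l for l in links if mean_rating(l) >= threshold]

def trigger(watched_target, threshold):
    """Report highly-rated links at a watched place, sparing the reader
    from re-scanning it by hand."""
    return [f"new link at {watched_target}: {l['note']}"
            for l in filtered(links, threshold)
            if l["target"] == watched_target]

visible = filtered(links, threshold=3.5)   # the low-rated "me too" link is hidden
alerts = trigger("key-paper", threshold=4.0)
```

The same rating data serve both mechanisms: the filter consults it when a reader arrives at a document, the trigger when a new link appears at a place the reader watches.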
Without such mechanisms, critical discussion would choke on masses of low-quality
material. With them, as we shall see, effective processes seem possible.