This Web version is generated by filtering some 75 files marked up for the troff typesetting system.
A raw file looks like this.
The filter, which translates the
markup and produces a variety of indexes and other pages,
consists of several scripts.
At the center of the electronic catalogue is a 3.4 MB file consisting of
all the catalogue entries, in the order in which they appear in the printed version, with one entry per line.
Each entry is marked up with HTML
and preceded by a four-digit number (the quad), with the first entry getting 0001 and the last 3354; i.e., the lines are in sorted numerical order.
An entry marked up for HTML
looks like this.
Since the main file is sorted, entries can be retrieved by binary search on the
quad. Retrieval times are essentially the same for all entries.
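The binary search just described can be sketched as a seek-based search over the sorted file. Everything here (the function name, the sample entries, the exact scan logic) is illustrative, not taken from the actual scripts:

```python
def get_entry(f, quad):
    """Find the line whose four-digit quad prefix equals `quad` in a
    sorted file opened in binary mode.  Repeatedly seeks to the middle
    of the remaining region, discards the partial line landed in, and
    compares quads, so every lookup costs O(log filesize) reads."""
    f.seek(0, 2)                       # measure the file
    lo, hi = 0, f.tell()
    while lo < hi:                     # narrow to the neighbourhood of quad
        mid = (lo + hi) // 2
        f.seek(mid)
        f.readline()                   # discard the partial line
        line = f.readline()
        if line and line[:4] < quad:
            lo = mid + 1
        else:
            hi = mid
    f.seek(lo)
    if lo > 0:
        f.readline()                   # realign on a line boundary
    for line in f:                     # short forward scan to the match
        if line[:4] == quad:
            return line.rstrip(b"\n")
        if line[:4] > quad:
            break
    return None
```

Because the search works on byte offsets rather than line counts, it never needs an in-memory index of the 3.4 MB file.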
A similar file holds the bibliographic references.
Retrieval and delivery of the entries are controlled by several CGI scripts.
The main script, getent,
takes a quad or string of quads as input, looks up the associated lines, and surrounds the returned string with the HTML
tags necessary to output a Web page.
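A minimal sketch of that wrapping step, with invented names and page boilerplate (the real getent surely emits more):

```python
def build_page(entries, title="Catalogue entry"):
    """Strip the quad prefix from each retrieved line and surround the
    result with the CGI header and tags needed for a complete Web page."""
    body = "\n".join(line[5:] for line in entries)   # drop "NNNN "
    return ("Content-type: text/html\n\n"
            "<html><head><title>%s</title></head>\n"
            "<body>\n%s\n</body>\n</html>\n" % (title, body))
```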
The various indexes (geographic, phylogenetic, alphabetic) consist of names together with the associated quad packaged in the appropriate HTML anchor.
That is, index lines have the form:
<a href = "cgi-bin/getent?2961">Ventricaria ventricosa</a>
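Index lines of that form are easy to generate mechanically; a hypothetical helper:

```python
def index_line(name, quad):
    """Package a name and its quad as an index line: an HTML anchor
    whose href invokes the getent CGI script with the quad."""
    return '<a href = "cgi-bin/getent?%s">%s</a>' % (quad, name)
```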
Selecting this link initiates the following sequence: the quad is handed to getent; getent retrieves the entry, consulting a
DBM file bound to an associative array; and the entry is wrapped in the
HTML code that permits it to be displayed as a single
WWW page with links to preceding and following pages (not yet generated), to searching scripts, and to the table of contents.
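Binding a DBM file to an associative array is a one-liner in most scripting languages; a Python sketch of the idea (the path and entry contents are invented for the example):

```python
import dbm
import os
import tempfile

# Build the DBM file once, offline: quad -> HTML-coded entry.
path = os.path.join(tempfile.mkdtemp(), "entries")
with dbm.open(path, "c") as db:
    db["2961"] = "<b>Ventricaria ventricosa</b> entry text"

# At request time, bind the file to an associative array and look up
# the quad taken from the query string.
with dbm.open(path, "r") as db:
    entry = db["2961"].decode()
```

The DBM binding gives constant-time lookup without loading the whole entry file into memory, which matters when the script is re-executed on every request.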
getent can delete the records or include them as appropriate. By default the records are not displayed, since that results in a smaller and less distracting page. Each recordless page carries a link that reloads the page without excising the records.
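The excision could work roughly like this; the comment markers delimiting a record are pure invention, since the actual markup is not shown:

```python
import re

def render_entry(entry, show_records=False):
    """Return the entry with its records excised unless the page was
    requested with the records flag.  Assumes (hypothetically) that
    each record is bracketed by <!--rec--> ... <!--/rec--> comments."""
    if show_records:
        return entry
    return re.sub(r"<!--rec-->.*?<!--/rec-->", "", entry, flags=re.S)
```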
When getent loads, it reads the addenda file into an associative array, with the quad representing a name being the hash key and the
HTML-coded update information being the value. When
getent is assembling the page, an addendum comment string is inserted if one is available (see Coelarthrum boergesenii for an example).
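The addenda mechanism amounts to a small hash populated at startup and consulted during page assembly. A sketch, with an assumed one-addendum-per-line file format (quad, a space, then the HTML):

```python
def load_addenda(path):
    """Read the addenda file into an associative array mapping the quad
    for a name to its HTML-coded update information."""
    addenda = {}
    with open(path) as f:
        for line in f:
            quad, html = line.rstrip("\n").split(" ", 1)
            addenda[quad] = html
    return addenda

def with_addendum(entry_html, quad, addenda):
    """While assembling the page, append the addendum string when one
    is available for this quad."""
    extra = addenda.get(quad)
    if extra is None:
        return entry_html
    return entry_html + "\n" + extra
```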
Several considerations shaped the design. First, the catalogue is maintained in troff markup. Second, we wanted the
WWW version to mirror the printed version so that we could get feedback that we could incorporate in print. [Thanks a lot to those who used the electronic version to improve the printed product.] Third, we wanted to reduce loading time and other transit delays. Fourth, we did not want to restrict accessibility to only certain browsers. There are no images (well, one map) and no browser-specific constructs (with minor exceptions);
getent, the central
CGI script, hobbles itself by avoiding frames, which are not (were not, anyhow) handled by all browsers.