The application of Molecular Engineering (ME) to spacecraft information systems will require a revolution in software engineering techniques to deal with levels of system complexity so great they must be considered different in kind from present systems. This will be exacerbated by the blurring or erasure of the line between hardware and software. Agoric systems are examined as a possible means of stabilizing and controlling a vast, continuously changing field of information entities. Genius materials will complete the revolution, begun by smart materials, in the ability of information systems to interact with the environment. Very capable Virtual Reality (VR) systems become possible, and VR is suggested as a palliative for the man/machine interface bandwidth problem.
At a macroscopic level any manned spacecraft has similarities to a living entity. It has a brain consisting of crew members; it supports that brain by supplying nutrients and oxygen and removing wastes; it has a nervous system consisting of wire and optic fibre; it senses and responds to its environment, sometimes at a reflex level; it is mobile; and it excretes wastes. In a ship with active sensing, analysis and effecting systems distributed through the very materials of the vessel, and in which the crew is tightly coupled to the ship through VR, the spacecraft is a single interconnected information system and the biological model is a useful paradigm for design.
The rate of advance in computer science is such that attempts to predict future capabilities are problematic at best. The computer science literature is replete with vast over- and under-expectations. One need only look as far as the early work in speech, machine translation and vision for the former. Many of us need only look at the powerful workstations on our desks to exemplify the latter.
How then does one approach an already difficult job of prediction when the additional wild card of ME is included? Our only choice is to presume trends will continue. The increase in computing power per monetary unit has been exponential for ninety years and seems likely to remain so for the foreseeable future. In this context ME is just another step on our march towards the very small.
We will assume mature, robust technologies develop in a number of areas of computer science. Key among these are machine perception, speech understanding, real time animated 3D graphics, real time expert decision making and pattern recognition. For the more advanced possibilities we require the existence of a black box molecular assembler and disassembler technology.
Important advances in ME have occurred in the last few years: notably protein folding by design and the use of Scanning Tunneling Microscopes (STM) and lasers to manipulate atoms and groups of atoms. Single atomic bonds have been selected and broken.
A great deal can be surmised about future systems without detailed knowledge of the technology used to implement them, the architecture of the spacecraft, the form of propulsion or even the specific mission. Figure 1 is an information systems view of spacecraft systems. The interfaces define the possible flows of matter and energy between subsystems.
It matters little at our level of analysis whether the computing, storage and communication structures are biological, based on Josephson junctions or other quantum mechanical devices, novel materials like diamond, optical principles or even mechanical. ME will make processors smaller but the basic nature of a computing element does not change. Anything which is computable can be computed by a Turing machine; likewise any machine architecture can be simulated on any other (there are authors who plead a special case for wetware). Information flow and information processing are independent of the underlying hardware except for performance measures.
No architecture can serve all purposes. Single Instruction Single Data (SISD) machines are useful for simple tasks of all sorts. Single and Multiple Instruction Multiple Data (SIMD and MIMD) parallel processors are appropriate for simulation and modeling of large decomposable systems. A 64K element SIMD machine (Thinking Machines' Connection Machine) already exists, 1024K element machines are under consideration, and future machines may have millions or billions of computing elements. It is not a foregone conclusion that any of the current architectures are scalable. It is also not yet apparent how to program such machines to use the available power efficiently. Data flow machines are good for text database searches, and optic/holographic techniques for image data searches. Neural nets work well when pattern discrimination learning or complex stimulus response learning is required.
If assembler and disassembler technology allow construction and destruction of processors on demand, the line between process and processor vanishes. It is an open question what level of such dynamic reconfiguration of inter-element connections and of the capabilities and numbers of the processing elements themselves will be feasible or controllable.
Systems design begins with a definition of precisely what is to be done. We suggest the following range of requirements for advanced manned spacecraft:
Life support: Keep the crew alive and well, mentally and physically. This includes temperature, humidity, pressure, gas concentrations; waste management; food and water supplies; medical records and support; entertainment, provision of an interesting and changing ship's environment using visual, auditory and olfactory cues; monitoring of trace chemicals, radiation levels and other environmental hazards.
Science support: Collect, process, archive and analyze sensor data; design and implement experiments; literature searches; modeling, theorizing, visualizing and correlation.
Engineering support: Control, monitor, plan and project consumables use; monitor and control the energy and motive source(s); monitor aspects of the external environment that interact with ship's systems; communications links; engineering data on science and life support systems; predict, detect, locate, bypass and repair faults; develop, override, upgrade and add subsystems.
Command support: Management data systems including crew records, manpower requirements and allocation, consumables budgeting; control and monitoring of ship attitude, relative velocities, acceleration and location; course planning, execution and monitoring; generation of scenarios for crew and ship training exercises.
Defense support: Threat analysis including meteors, large energy or particle flows, proximity of other vessels; weapons inventory, selection, targeting and tracking; internal warnings and evasive maneuvers. Detection of hostile programs and computer viruses. Enforcement of information security and privacy. Internal security.
These capabilities may be reduced to a small number of architectural requirements based on the primary information flows and processing needs.
Crew Interface: With the crew "sipping from a firehose" of information flow potentially beyond the 2 GB/day of the currently planned Earth Observation System (EOS), the human/machine interface bandwidth will need to be drastically increased.
Sensor Effector Net (SEN): The ship must respond to its environment quickly and efficiently. Much of this should be homeostatic and not normally brought to the attention of the crew. Information flows are primarily vertical: data flows upward and control downward. Feedback and reflex loops should operate at the lowest level in the system at which the appropriate data and control are available.
Archive and Retrieval: Massive quantities of data must be stored and readily retrievable by the crew or subsystems. The data should be distributed and held redundantly. Mission critical data may be stored with even higher redundancy.
Modeling and Simulation: The ability to use data resources, particularly for a science mission and the crew interface, requires powerful means of analysis, synthesis and presentation. In many cases a loss of data is not critical and faults may be handled by reconfiguration and restart. Time and mission critical operations may require shadowing, as in the current space shuttle.
If the human capacity for integrating the global patterns of an environment and responding to threats and opportunities is to be used effectively, a crew must be tightly coupled to the ship's information systems. Ultimately it is the crew which must define policies, set goals and plan strategies. To do so they must have better means of examining data than those extant. New man/machine interface devices and research in VR systems are beginning to break this logjam. Even with VR there must be innovation in presentation methods and in extraction of key features.
An integrated VR system will engage as many human sensory modalities as possible. Today's systems use audio, visual and elementary tactile inputs; future ones will include sophisticated tactile feedback and perhaps olfactory and kinesthetic channels as well. The vast array of chemical traces required for control of olfactory input almost certainly will require sophisticated ME. Motion, muscular, tactile, pressure, texture and temperature sensations may also be supplied.
The primary outputs from the crew are voice and gesture. Recent advances in speech understanding have been significant, and some research in gestural interfaces has been occurring. Highly interactive graphical interfaces have been a hot research topic for many years.
A secondary, subconscious output channel is carried by body language and such indicators of stress and psychological state as galvanic skin response, vital signs, pheromones and sweat. There are three classes of VR:
Internal VR: A direct brain interface is a total immersion VR indistinguishable from reality and not limited by it. Recent work with silicon to neuron interfaces shows such an interface is possible in a very limited sense. ME is probably necessary for the basic research preliminary to any suggestion of feasibility. We will leave this possibility in the realm of pure conjecture.
Boundary VR: The data suit, or an ME version derived from Drexler's space suit musings, controls reality at the skin. Some elements already exist. Kinesthetic sensations are limited to what can be simulated with pressure at appropriate locations on the body; the use of tricks as in flight simulators; and the actual freedom of motion of the crew member at the time. Walk-around molecules are already being suggested as a research tool.
External VR: A "value-added" reality. Crew activities are monitored and the ship responds via conventional output devices, holographic projections (computer generation of color holographic film is already possible), and conceivably assembler/disassembler mediated reconfiguration of the ship. The VR is limited to what can be added to what is already there. A great deal of the work to date has been done as an artistic endeavor; with some in virtual control panels and Computer Aided Engineering and Design (CAE/CAD).
The more sensory modalities brought into play, the more information a human can process at one time. Relying on one sensory modality can cause an overload, or a failure to attend to important cues. In systems from aircraft to nuclear power plants this is overcome by using color, flashing lights, audio frequency, timbre, etc. VR can combine sensory inputs in unusual ways to indicate warning, data uncertainty, etc. Any sensory input data, raw, analyzed or a combination thereof, can be mapped onto any sense modality if doing so helps extract patterns from that data.
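The idea of redundantly mapping one data stream onto several modalities can be sketched as follows. This is a minimal illustration, not a proposed implementation; the sensor, its units, and the output ranges are all assumptions.

```python
# Hypothetical sketch: mapping one raw data stream onto several sense
# modalities at once, so a single reading is perceived as pitch, colour
# and vibration. All ranges and names here are illustrative assumptions.

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] onto [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

def map_to_modalities(radiation_level):
    """Render one sensor value (0-100, arbitrary units) redundantly."""
    return {
        # audio pitch in Hz: higher reading, higher tone
        "pitch_hz": scale(radiation_level, 0, 100, 220.0, 880.0),
        # display hue in degrees: green (safe) through red (danger)
        "hue_deg": scale(radiation_level, 0, 100, 120.0, 0.0),
        # tactile vibration intensity for a data suit, 0..1
        "vibration": scale(radiation_level, 0, 100, 0.0, 1.0),
    }

rendering = map_to_modalities(75.0)
```

A reading of 75 thus becomes simultaneously a high tone, an orange-red hue and a strong vibration; any one channel may be ignored without the warning being lost.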
The ship information systems will contain numerous expert systems. Some will be a part of subsystem operations and will rarely interact directly with the crew. Others will be "consultants" with expert knowledge in given fields. Given the rapid advances in 3D animation, audio generation and speech understanding, personifications of these programs in the VR may be one of the most effective means of interacting with them. That simulated personalities need not be terribly sophisticated to be useful is exemplified by the Eliza program.
A sufficiently good VR system allows teleoperation indistinguishable from being there. This flexibility allows control (or control assistance) of the spacecraft or exploratory drones from any point close enough to satisfy response delay criteria. For example, a landing crew could retain full control from the ground. Predictive VR could allow control from longer ranges in some cases.
It is apparent the full merger of sensor, effector and computing element technologies is rapidly approaching. Integrated circuit size sensors and mechanical devices have already been built. Within a few years processing and conversion between analog and digital domains will be integral with sensing and effecting elements.
ME opens up a wide range of possible sensor and effector technologies. Work on smart or adaptive materials is underway, and they are thought to be in use on advanced military aircraft along with many other 'black' technologies. Self healing materials are already under discussion. Optimistically, the entire fabric of the vessel will be a 'genius' material woven with microsensors and microeffectors. ME opens the possibility of materials that can grow subsystems, heal damage and maintain homeostasis like living systems.
It will be possible to monitor and react to an unheard of variety of signals. Small size allows us to interleave elements that are optimal for different tasks. The entire volume and surface of the vessel is available. For short wavelength or high amplitude phenomena, volumetric sensor arrays can deliver both spatial and temporal variations. For low amplitude or low frequency phenomena, the volume or surface can be used as a single antenna. The large air shower complex at Dugway Proving Ground is representative of this distributed, sensing-in-depth approach. Sensors are distributed on the surface and 3 metres underground. Surface units have individual processors and communicate with their neighbors.
Fields: Monitor or generate static and dynamic electric and magnetic fields across a wide spectrum of frequency, amplitude, phase, and direction; acoustic waves across a wide spectrum of frequency, amplitude, phase, and direction; and stress fields in materials. Monitor gravitational field properties.
Matter: Monitor microenvironments inside and outside the spacecraft. This includes the direction, mass and energy of particles, dust and meteors; pressures and temperatures; detection and release of trace chemicals; incoming and outgoing projectiles; monitor and control life support systems.
Position: Control and continuous monitoring of acceleration and attitude.
If assembler and disassembler technology is available, SENs may reconfigure their physical structure based on current priorities. A ship need not waste volume or surface on a table or a microwave antenna when it is not required.
In figure 2 we show a general high level model of an SEN. Recognizers are processing nodes whose primary job is the extraction of information from one or more sensor data streams. The data streams may be any combination of raw sensor data or processed data from inferior (lower level) Recognizers. The Recognizer passes the abstracted and processed data stream upwards to its superior Recognizer and laterally to a motor Ganglion on its peer level. Each level in the tree is a higher level of abstraction in both the sensor and effector worlds.
Ganglia are controllers of the effector or 'motor' systems. The higher up the effector hierarchy a Ganglion is, the more complex its 'motor' output. It receives abstract commands from superior motor Ganglia and generates a series of less abstract commands to inferior Ganglia. The peer relations between Recognizers and Ganglia allow reflex and feedback control to occur at the lowest possible level in the system at which sufficient information and motor control are accessible.
There are times when normal filtering of data must be overridden, and for this reason data flow is bidirectional. A Recognizer will supply data on demand. In some cases Recognizers will declare data to be "interesting" and pass it directly to a node further up the tree. Motor Ganglia report exceptional conditions.
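A two-level fragment of such a Recognizer/Ganglion hierarchy might be sketched as below. The node names, the threshold, and the abstraction step are illustrative assumptions; the point is only the wiring: reflexes fire laterally at the lowest level with sufficient information, while an abstracted summary passes upward.

```python
# A minimal sketch of the Recognizer/Ganglion scheme described above.
# Names, thresholds and the summary rule are illustrative assumptions.

class Ganglion:
    """Effector controller: turns abstract commands into motor output."""
    def __init__(self, name):
        self.name = name
        self.log = []          # record of commands issued

    def command(self, action):
        self.log.append(action)

class Recognizer:
    """Extracts features from a data stream. Reflexes fire at this
    level via the peer Ganglion; abstracted data passes upward."""
    def __init__(self, name, peer_ganglion, superior=None, threshold=50.0):
        self.name = name
        self.peer = peer_ganglion      # lateral link for reflexes
        self.superior = superior       # next Recognizer up the tree
        self.threshold = threshold

    def sense(self, reading):
        if reading > self.threshold:
            # reflex: act at the lowest level with enough information
            self.peer.command(("damp", self.name, reading))
        if self.superior is not None:
            # pass an abstracted (here, simply attenuated) summary upward
            self.superior.sense(reading * 0.1)

# wire a fragment: a hull-vibration sensor under a structural node
top_g = Ganglion("structural")
top_r = Recognizer("structure", top_g, threshold=50.0)
hull_g = Ganglion("hull-patch")
hull_r = Recognizer("hull-vibration", hull_g, superior=top_r)

hull_r.sense(80.0)   # strong reading: local reflex fires, summary goes up
```

Here the strong reading triggers a damping reflex at the hull patch without involving the structural node, whose own threshold is not reached by the attenuated summary.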
Distributed hierarchical processor networks resembling this have been built and work quite well in large scale automation systems. Since behaviors are generated at a low level and actions occur in parallel, there is a great deal of similarity between the process control approach, the recently defined subsumption architecture in robotics, and vector force fields of neuron activation in the control of chordate limb movement. The primary difference is scale. Where existing process systems consist of a few hundred or a few thousand macroscopic sensing and effecting elements with a tree depth of three or four, the envisioned ME spacecraft systems may contain many millions of microscopic elements and much greater tree depths.
The uppermost Ganglion is the highest reflex and control point and is the interface with the crew. Like the lower brain it is a filter which decides what sensory data must be attended to. It also responds to crew commands (voice or gestural commands, etc). In a fully robotic craft it is the final decision maker.
Due to its distributed nature, this architecture is robust in the face of data loss and node faults. Retries and reconfiguration are satisfactory means of handling fault tolerance. Even if a node or set of nodes becomes isolated, it will still be able to carry out much of its normal duties.
Learned reflex capabilities will make each spacecraft unique. The longer it is in space, the better adapted it becomes. Whether by learning from experience, training or specific programming, this adaptation represents valuable knowledge. The first exploratory vessel on a given mission could short circuit its successor's learning curve with a download.
Idle systems might be placed in a 'dreaming' state in which they learn to respond to imaginary scenarios, both likely and fanciful. Replacement of the environment by an ME based simulator will create an accurate ship and crew training ability.
The archive combines a digital library with what are currently the flight and scientific data recorders. The importance of the library portion will vary with the mission: a deep space exploratory mission will obviously have different requirements than a commercial Low Earth Orbit (LEO) shuttle. A client/server architecture with multiple database and supercomputing units seems appropriate. These units should be placed in the central core of the ship for maximum radiation shielding.
ME makes it possible to store national libraries worth of information in a small space. If data exists in digital form anywhere, duplicating it costs little. History shows that if storage is available it will be filled, so it is likely that a significant portion of human knowledge will be on every spacecraft.
The day of the computer archive as a simple numeric data repository is long gone. Workstations are already capable of displaying high resolution, high quality color images and stereo CD quality sound. The Joint Photographic Experts Group (JPEG) standard has tamed the storage requirements for these. Capacity for 3D and stored video is starting to appear, the latter enabled by the Moving Pictures Experts Group (MPEG) compression standard. Entire archives of photographic data and major reference works are becoming available on line or CD. Large businesses are leading the way to the all digital world. It is likely only a few decades will pass before all written, audio and image archives are available in digital form.
Storage is not an end in itself. Vannevar Bush's Memex paper contains basic requirements relevant to this day. Sophisticated means of connecting data items, such as hypertext and object oriented databases (OOD), and of navigating in the data space will be needed.
True hypertext client-servers such as Xanadu are commercially available. But even hypertext is not enough when a search for specific information or patterns over a large domain is required. ME vastly increases the computing power available, but brute force cannot solve all problems. What if one wants to request the system to search all planetary image data to date for structures matching particular verbal criteria? This is not a simple problem.
A spacecraft will not carry a crew large enough to have domain experts in all areas of human knowledge, so hypertext is also insufficient in this case. Domain expert systems (library assistants) in a VR could assist with the data navigation task and enforce information privacy and security requirements.
One must be able to display, analyze and model systems. These are some of the most compute intensive tasks in science: ones whose practitioners continuously cry out for more and more powerful supercomputer technology. ME puts massively parallel supercomputing into tiny packages and makes it available as a shipboard facility. But if there are not significant advances in parallel languages and algorithms this power will be wasted.
Onboard supercomputing will allow the spacecraft to model its own interactions with the environment, whether for aerobraking or predictive modeling of engine and power plant parameters.
Close coupling of modeling, expert systems and hypertext data retrieval in HyperIntelligence and Knowledge Management systems is needed if all these resources are to be fully utilized.
Complexity is perhaps the most difficult issue in computer science and one which the capabilities of ME simply exacerbate. Complexity is the limiting factor on software systems from user friendly spreadsheets to banking systems to SDI.
Object Oriented Programming Systems (OOPS) help control complexity in the engineering of individual processes and allow reuse and replacement of objects within those processes. To a great extent they solve the complexity problem at the single process level.
Client-server computing supplies a formalism for systems design at the level of cooperating processes. But what happens when the number of client and server processes rises to millions (or billions and billions) and is continuously in flux (whether on computer or human time scales), with new services, changing system requirements, upgrades and bug fixes appearing at random intervals?
The NeXTstep interface is one of the first to attack this problem. When a new server process is loaded, it declares its willingness to provide a service to all comers. It thereafter automatically appears in the Services menu of all 'App' client processes. Client and server need have no a priori knowledge of each other. A standard communication mechanism guarantees they can exchange data if they share any standard types. This is a step forward but it evades the decision making issue by leaving it in the hands of the user. There is no indication of how processes might understand the utility of new services and make trade offs among service providers with varying performance, capability and resource utilization.
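The registration-and-discovery half of this idea can be sketched as a simple registry keyed on data types. This is loosely modeled on the NeXTstep services concept, not its actual API; the provider names and type strings are assumptions.

```python
# A sketch of type-keyed service discovery: a newly loaded server
# advertises what it converts, and any client can later find it with
# no a priori knowledge of it. Names and types are illustrative.

class ServiceRegistry:
    def __init__(self):
        self._services = {}   # (in_type, out_type) -> list of providers

    def advertise(self, provider, in_type, out_type):
        """A newly loaded server declares its service to all comers."""
        self._services.setdefault((in_type, out_type), []).append(provider)

    def find(self, in_type, out_type):
        """A client asks who can transform its data; several may answer."""
        return list(self._services.get((in_type, out_type), []))

registry = ServiceRegistry()
registry.advertise("spectro-1", "raw-spectrum", "element-table")
registry.advertise("spectro-2", "raw-spectrum", "element-table")

providers = registry.find("raw-spectrum", "element-table")
```

Note that when `find` returns more than one provider, the choice among them is exactly the unaddressed decision-making issue: nothing here ranks providers by performance, capability or resource utilization.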
Agoric systems research may be another part of the way forward, at least in so far as identifying, evaluating and optimizing the cost and price of services. If research brings about mature systems, large programming systems will be artificial ecologies or markets. This seems an excellent match to the capabilities of ME, particularly if assembler and disassembler technology erase the dividing line between process and processor.
Cooperative hinting algorithms have been explored recently for problems involving large problem spaces. Processes share a blackboard where hints are left as to where to look or not to look.
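The blackboard mechanism amounts to very little code. The following sketch shows cooperating searchers posting and consulting hints; the region names and verdict vocabulary are illustrative assumptions, not drawn from any particular hinting system.

```python
# A minimal blackboard-hinting sketch: searchers over a large problem
# space post hints about regions found barren, so peers skip them.
# Region names and the verdict vocabulary are assumptions.

class Blackboard:
    def __init__(self):
        self.hints = {}   # region -> "promising" | "barren"

    def post(self, region, verdict):
        self.hints[region] = verdict

    def should_search(self, region):
        # search anything not explicitly marked barren
        return self.hints.get(region) != "barren"

board = Blackboard()
board.post("sector-7", "barren")      # one process reports a dead end
board.post("sector-9", "promising")

# a cooperating process consults the board before expending effort
plan = [r for r in ["sector-7", "sector-8", "sector-9"]
        if board.should_search(r)]
```

The second process never visits sector-7; the saving grows with the number of cooperating processes sharing the board.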
Research seems not to have addressed the formalism by which a client 'decides' to use a class of service that it was not originally coded to use. In many cases this is not important: a process that requires data conversion of a particular sort need only find the best performance bargain that fulfills its requirements. Anything more probably implies the client is (or is front ended by) some form of expert system.
In a market driven system each process will attempt to provide the best service (in quality and delivery time) possible at any given time and place. To do so it must minimize input costs, maximize input quality and timeliness of delivery, and maximize output price. This mini-max problem should lead to behaviors comparable to those of economic entities. The following are some of the major factors in decision making:
Transaction costs: The mix of closely coupled or "merged" processes versus outside suppliers is controlled by the cost of decision making and contracting.
Communication costs: The physical location of a process should minimize the sum of communications costs with clients and other servers. Bandwidth on a communication link is a limited resource and its usage must be optimized.
Processor costs: Speed and reliability must be optimized. For some applications a massively parallel processor in the radiation safe core of the spacecraft is more desirable real estate than an SISD processor on the hull. There are optimizations regarding whether a process should build or rent a processor; add to an existing one; share a processor with other processes; or even compile itself into hardware.
Storage costs: Access speed and reliability must be optimized. There are higher costs associated with safe storage in the core versus local RAM with a higher risk of damage from ionizing radiation. Local storage may be enlarged by assemblers, or a process may move to a facility with larger or closer mass storage.
Energy costs: All processes utilize energy. If there are differences in quality or cost of energy that vary from point to point, that will be part of the location optimization equation. Energy is a limited resource and use must be optimized such that the highest value to the crew and mission goals is gained.
Matter costs: Since a ship has limited stores of matter, that matter must be used to provide the highest value to the crew and mission goals. This is not only an issue of consumables: there may be a limited number of atoms of critical isotopes. If prices of a critical material rise, some processes may profit by substituting the use of higher for lower valued resources.
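A toy version of the resulting decision problem can be sketched as follows: a client process totals each bidder's price together with its own transaction, communication and energy costs, and contracts with the cheapest bidder meeting a quality floor. All figures, field names and the quality measure are illustrative assumptions.

```python
# A toy sketch of market-driven supplier selection over the cost
# factors listed above. All numbers and names are assumptions.

def total_cost(bid):
    """Sum the bid price with the client's own overheads for this bid."""
    return (bid["price"] + bid["transaction_cost"]
            + bid["comm_cost"] + bid["energy_cost"])

def choose_supplier(bids, min_quality):
    """Cheapest eligible bidder, or None if no bid meets the floor."""
    eligible = [b for b in bids if b["quality"] >= min_quality]
    return min(eligible, key=total_cost) if eligible else None

bids = [
    {"name": "core-array", "price": 10, "transaction_cost": 1,
     "comm_cost": 4, "energy_cost": 2, "quality": 0.9},
    {"name": "hull-node", "price": 8, "transaction_cost": 1,
     "comm_cost": 1, "energy_cost": 3, "quality": 0.7},
]

winner = choose_supplier(bids, min_quality=0.6)
```

Here the hull node wins despite lower quality because its communication cost is far lower; raising the quality floor would shift the contract to the core array, exactly the kind of trade off a market mechanism is meant to discover.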
Many decisions have hysteresis because there is a cost associated with a change of strategy. A process cannot move to a new processor with every slight change in client/server relationships and volume of business. One can imagine a highly successful new server starting in a shared processor, moving to a private processor and finally to a special purpose processor (and vice versa for the competition). A step in either direction is delayed in time because the gain in efficiency must be sufficient to overcome the cost of the changeover. This could allow systems to freeze into suboptimal configurations. Hysteresis can lead to hill-climbing phenomena: the distance from a local maximum to the global maximum may not be crossable. It is possible that the equivalent of business cycles caused by changing crew requirements will keep the system in a sufficient state of flux that resources will move quickly to their highest value usage. This could be modeled by simulated annealing with a time dependent minimum energy state. The system could be jostled into new configurations by biasing the simulated temperature close to the freezing point and randomly changing it.
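The jostling idea can be illustrated with a toy annealing run on a one-dimensional cost landscape: the configuration mostly hill-climbs at a temperature near freezing, but the temperature is periodically raised so the system can hop out of a local minimum. The landscape, schedule and parameters are all illustrative assumptions, not a model of any real resource allocator.

```python
import math
import random

# Toy simulated annealing with a periodically 'jostled' temperature,
# as suggested above. Landscape and schedule are assumptions: there is
# a shallow local minimum at x=2 and the global minimum at x=8.

def cost(x):
    return min((x - 2) ** 2 + 1.0, (x - 8) ** 2)

def anneal(start, steps, rng):
    x = start
    for t in range(steps):
        # hold the system near freezing, with periodic bursts of heat
        temp = 0.1 + (2.0 if t % 50 < 10 else 0.0)
        candidate = x + rng.uniform(-1.0, 1.0)
        delta = cost(candidate) - cost(x)
        # Metropolis rule: always accept downhill, sometimes uphill
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
    return x

rng = random.Random(1)
final = anneal(start=2.0, steps=500, rng=rng)
```

At a constant near-zero temperature the walk would stay trapped near x=2; the heat bursts give it repeated chances to cross the ridge toward the deeper basin, after which the cool intervals let it settle.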
Systems become particularly interesting when we allow assembly and disassembly. Goals and requirements are not constant with time, so we will have cycles of boom and bust based on the mission plan. A switch from interplanetary to planetary science observations will cause a massive turnover in processes and resource utilization. Processes that cannot adapt to the new 'economic' environment will die off, and new processes will be instantiated and thrive. Mass and energy utilization will shift with priorities.
This is a system in which a free market exists solely to supply the needs of an aristocracy. If the crew is the ultimate landlord to which rents accrue and is the ultimate consumer of all services, an agoric system will optimize around crew goals. Rents must be returned to the crew so the money supply remains stable and the crew remains the primary financial input. It would not do at all to have the Chess Server corner the financial system and optimize the ship to defeat the Milky Way Galaxy chess grandmaster. Monetary flow is like a potential field over the network of processors. Crew control of the money supply biases the system to supply the needs of the crew rather than optimize around a random local field fluctuation.
Adding a new server is much like releasing a fish in a pond: either it finds a niche and competes successfully or it dies. Where it ends up and whether it reproduces are not wholly predictable in advance. This may be difficult for those schooled in current engineering dogma to accept. It can only be said that all the truly complex systems in nature work on these chaotic principles.
We have discussed a wide range of technological possibilities for future spacecraft information systems. At the conservative end of the prediction spectrum the technological revolution is mostly invisible. Capabilities are great, but an astronaut or cosmonaut would find things recognizable. Intelligent systems give the vessel autonomy and reliability far beyond that of the present. Smart materials and subsystems heal themselves to a great extent and give very specific alerts when there is trouble. The interior of the ship is comfortable but pragmatic. A crew sits at control panels which are recognizable as such although they are viewing panels controlled by voice and gesture rather than the familiar physical buttons, knobs and toggles.
The optimistic prediction is an entirely different matter. The coming together of all the technologies discussed in this paper give us a spacecraft which is more living organism than vessel. The crew controls every facet of operations through VR and can couple so tightly into the vessel systems that they effectively become the ship. Physical controls are unnecessary but the ship will obligingly grow them on demand should they be required. The interior is dedicated to the comfort of the crew and is filled with ever changing art, music, color. The ship is a jinn and the crew are its masters.
Thanks for discussions and comments are due to Professor R. H. Perrott, Pat Crookes and John Flanagan of Queen's University. Also many thanks to Dr. K. Eric Drexler and Christine Peterson for stimulating discussions over many years, without which this paper would never have been written.