May 18 10:00 1998 Page 1
1. INTRODUCTION
Even as the first digital computers were being designed, the founders
of computer science were remarking on the differences between artificial and natural systems and worrying about the limitations imposed by the artificial systems' need for reliability and precision. Speaking in 1948, John von Neumann cautioned:

    It is unlikely that we could construct automata of a much higher complexity than the ones we now have, without possessing a very advanced and subtle theory of automata and information. A fortiori, this is inconceivable for automata of such complexity as is possessed by the human central nervous system. This intellectual inadequacy certainly prevents us from getting much farther than we are now. ... A simple manifestation of this factor is our present relation to error checking.

    John von Neumann, "The General and Logical Theory of Automata" (1948), in Jeffress, Lloyd A. (Ed.), Cerebral Mechanisms in Behavior, Wiley (New York), 1951.

The experience of the past fifty years shows that von Neumann was too
pessimistic. He believed that expanding from computers of a few million logical elements to computers of several billion elements would require major advances in the theory of computation. Yet we routinely construct such machines today, and they operate on the same foundations that von Neumann himself erected. But over the next fifty years -- and even over the next ten years --
two new technologies will force us to address von Neumann's concern in earnest. Each of these technologies will enable us to construct systems composed of myriads of information-processing units at almost no cost, provided that we do not insist that all the elements work correctly, and that we impose no requirements for precise interconnection or precise geometrical arrangement of the parts. To exploit the potential of these systems, we must learn to obtain desired results reliably by engineering the cooperation of vast numbers of unreliable parts that are interconnected in unknown, irregular, and time-varying ways. We call this the challenge of {\em amorphous computing}.

One technology calling for amorphous computing is microfabrication.
Microelectronic mechanical components are becoming so inexpensive that we could economically mix information-processing units, actuators, and sensors into bulk materials such as paints, gels, and concrete. Imagine coating a building or a bridge with smart paint that reports on traffic loads and wind loads, monitors the integrity of the structure, or even heals small cracks by shifting material around. A wall that could sense vibrations and move slightly on its own could monitor the premises for intrusion or cancel noise. A room lined with active skins could push dirt and dust into a corner for removal.
[[footnote: Progress in microfabrication ... what's a reasonable reference?]]
The other technology arises from the astonishing progress in
fundamental biology, where we are learning enough about the chemical mechanisms of cells that we may be able to harness them to support information processing. Cells reproduce themselves and obtain power from their environment, which enables the creation of copies with little manufacturing effort. Imagine a discipline of cellular engineering that tailor-makes programmable cells to function as sensors, as delivery vehicles for pharmaceuticals, or as chemical factories for the assembly of nanoscale structures. [[footnote: As an example, Knight and Sussman [ref] describe a
biochemically plausible mechanism for constructing digital logic signals and gates within living cells. The idea is to represent signals by concentrations of naturally-occurring DNA-binding proteins, and to implement the nonlinear amplification required to maintain the digital abstraction by in vivo DNA-directed protein synthesis. Also reference ISAT study?]] There remain significant hurdles to overcome before we can
fabricate such systems. Mixing microelectronics into bulk materials requires learning how to supply power and communication to the elements; and cellular engineering requires knowing a great deal more about the mechanisms of protein synthesis than we do at present. Yet these are challenges in active fields where there has been rapid progress, and it seems plausible to expect continued progress. [[footnote on approaches to power, and on progress in protein
synthesis ...]] Beyond the technological challenges of fabricating these systems,
however, lies the intellectual challenge of programming and controlling them. To make progress here, we must invent novel software methodologies. We must understand what kinds of languages are appropriate for describing amorphous computational processes. We must isolate appropriate primitive computational mechanisms and develop means for combining and abstracting them. This paper presents some initial ideas about methods and
organizational principles for programming amorphous systems. Section 2 describes a simple model of an amorphous computing system. Section 3 describes some primitive computing mechanisms that are well-suited to the amorphous context, and outlines how combining these primitives could produce a program for monitoring a bridge covered with smart paint. Section 4 presents two programming paradigms for amorphous systems, one inspired by plant growth and one inspired by the differentiation and growth of tissues and organs. For each paradigm, there is a corresponding higher-level language that abstracts away the detailed mechanisms of the amorphous model and permits programmers to focus on the organizational metaphor. Notably, the plant-growth language can express programs that direct an amorphous computing system to create structures with any prespecified topology, and the differentiation language ....
2. A MODEL FOR AMORPHOUS COMPUTING
To better delineate the study of amorphous computing, we contrast it
with two related but distinct enterprises. The first is the study of emergent behavior and self-organizing systems, which has shown how some coherent behaviors of large-scale systems can ''emerge'' from purely local interactions of individual elements. [[footnote: Nobel laureate Ilya Prigogine and his associates have studied these phenomena and shown that they are important in physical and chemical as well as biological systems~\cite{Prigogine}. extend this footnote and make it better]] Some of the phenomena of emergent behavior may be useful for engineering amorphous systems, but it is not our goal to investigate the principles of self-organization {\em per se}. As engineers, we want to construct systems that have prespecified, well-defined behaviors, not just to study whatever behavior happens to evolve. [[do we count activation-inhibition as an emergent-behavior phenomenon?]]
A second closely related area is the study of cellular automata, which notably also traces back to von Neumann. [[footnote: comment on vN's work on self-reproducing automata]]
Like amorphous computing, cellular-automata models postulate massive numbers of identically programmed elements that operate in parallel and communicate locally. Unlike amorphous models, however, cellular models generally assume that the elements are arranged on a regular grid and that computation proceeds synchronously, whereas the mechanisms for amorphous computing must be independent of the detailed configuration, timing, and reliability of the elements. For example, smart paint should be able to determine geometric properties of the surface that it coats without initial knowledge of the positions of the paint's computational elements. [[take a look at the papers on inhomogeneous cellular automata, and perhaps cite them as exceptions. also, some of the general phenomena of CAs may also occur in the amorphous case, so mention this as a possibility]] [[note that the following is a rewording of stuff written by
radhi and newts]] For the investigations in this paper, we adopt a model that
postulates a large number of computing elements distributed on a surface or throughout a volume. We assume the distribution to be random but homogeneous. In principle, elements may contain sensors and actuators in addition to processors and memory, but we will make no use of these in the examples in this paper. [[footnote on the possibility of inhomogeneous mixes; on the
possibility of processors moving around?]]
The elements are all identically manufactured and programmed. They
have no {\em a priori} globally unique identifiers, but they do have random-number generators (not pseudorandom) with which they can generate (probabilistically) distinct identifiers under program control. The processors are asynchronous in the sense that they have similar clock speeds, but they do not operate in lockstep. Elements are unreliable; we adopt a failure model in which elements may stop executing at any time, but are otherwise well-behaved. [[footnote: This failure model is known as {\em stopping failures},
add something about how in general things could be worse, but this is very hard]] Each element communicates with the elements within a fixed distance of
it (the element's {\em neighbors}). We assume that this distance is much smaller than the dimensions of the entire system, and that the density of the distribution is high enough that each element has about the same number of neighbors. Processors communicate by broadcast. A sender has no {\em a priori} assurance that a message has been received, although higher-level protocols with message acknowledgement can be built upon this substrate. [[footnote: In the context of microfabrication technology, the
particular communication mechanism we have in mind is wireless broadcast in which the processors share a single channel. Thus, collisions may occur when two processors with overlapping broadcast regions send messages simultaneously. We assume that the receiver can detect a collision because it receives a garbled message, but the sender cannot detect the collision. This model is similar to that of broadcast networks with overlapping broadcast regions, such as packet radio networks \cite{Tanenbaum}. In the context of cellular engineering we might imagine that cells
transmit chemical signals ..... ????? ]] show wave propagation and do a bit of analysis on it.
show how it gives local coordinates, via countdown.
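To make the wave-propagation note above concrete, here is a minimal simulation sketch of the model (the element count, radius, seed choice, and variable names are illustrative assumptions, not part of the model): an anchor element floods a hop count outward, each element keeps the smallest count it hears and relays that count plus one, and the resulting count, scaled by the communication radius, gives a rough local coordinate (distance to the anchor).

```python
import math
import random

random.seed(1)
N, R = 1000, 0.1                 # element count; radius << system size

# Elements scattered randomly but homogeneously over the unit square.
points = [(random.random(), random.random()) for _ in range(N)]
neighbors = [[j for j in range(N)
              if j != i and math.dist(points[i], points[j]) <= R]
             for i in range(N)]

# Wave propagation: the anchor (element 0) starts a count at 0; each
# element remembers the first (smallest) count it hears and relays
# count + 1 to its neighbors.
hops = [None] * N
hops[0] = 0
frontier = [0]
while frontier:
    nxt = []
    for i in frontier:
        for j in neighbors[i]:
            if hops[j] is None:      # first message wins
                hops[j] = hops[i] + 1
                nxt.append(j)
    frontier = nxt

# One hop covers at most distance R, so hops[i] * R upper-bounds the
# true Euclidean distance to the anchor -- a crude coordinate.
```

Since hop counts only quantize distance to multiples of R, this motivates the averaging/regularization primitive discussed in the next section.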
3. SOME USEFUL PRIMITIVE MECHANISMS
* averaging (regularizing?) to refine coordinates, so can do
better than just normal wave prop. do some analysis in terms of the parameters of the model. Has Newts written this up? what other types of regularization are there?
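One sketch of the averaging idea (the one-dimensional setup, iteration count, and parameters are our own illustrative choices, not an analysis of the general mechanism): on a line of elements, integer hop counts form a staircase; if each element repeatedly replaces its estimate with the average of its neighbors' estimates while the two extreme elements hold theirs fixed, the staircase relaxes toward a smooth gradient, refining the coordinate below one-hop resolution.

```python
import random

random.seed(2)
N, R = 400, 0.05
xs = sorted(random.random() for _ in range(N))   # elements along a line
nbrs = [[j for j in range(N) if j != i and abs(xs[i] - xs[j]) <= R]
        for i in range(N)]

# Integer hop counts from the leftmost element (plain wave propagation;
# the line is dense enough that the wave reaches every element).
hops = [None] * N
hops[0] = 0
frontier = [0]
while frontier:
    nxt = []
    for i in frontier:
        for j in nbrs[i]:
            if hops[j] is None:
                hops[j] = hops[i] + 1
                nxt.append(j)
    frontier = nxt

# Regularization: repeated neighbor averaging, with the two extreme
# elements pinned, smears the integer staircase into a smooth ramp.
est = [float(h) for h in hops]
for _ in range(100):
    new = [sum(est[j] for j in nbrs[i]) / len(nbrs[i]) for i in range(N)]
    new[0], new[-1] = est[0], est[-1]            # boundary values fixed
    est = new
```

Because each update is a convex combination of neighboring values, the refined estimates stay within the range of the original hop counts while acquiring fractional precision.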
* filtered? wave propagation (wave accumulation?); gated propagation
* leader election, talk about the idea of organizing elements into clubs and ref
paper by rad and newts
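A sketch of the clubs idea from the leader-election bullet above (this serializes the asynchronous random-delay race for simplicity; the counts, radius, and names are illustrative assumptions): each element draws a random delay, and an element whose delay expires before it has heard from any nearby leader declares itself a leader, recruiting its still-undecided neighbors into its club. No two leaders end up as neighbors, so the clubs cover the surface in small, roughly evenly spaced groups.

```python
import math
import random

random.seed(3)
N, R = 300, 0.12
pts = [(random.random(), random.random()) for _ in range(N)]
nbrs = [[j for j in range(N) if j != i and math.dist(pts[i], pts[j]) <= R]
        for i in range(N)]

# Serialize the random-delay race: visit elements in order of delay.
delay = [random.random() for _ in range(N)]
club = [None] * N                      # club[i] = index of i's leader
for i in sorted(range(N), key=lambda k: delay[k]):
    if club[i] is None:                # heard no leader yet: take over
        club[i] = i
        for j in nbrs[i]:
            if club[j] is None:        # undecided neighbors join
                club[j] = i

leaders = [i for i in range(N) if club[i] == i]
```

Membership is never overwritten, so each element belongs to exactly one club, and every member is within one communication hop of its leader.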
* activation-inhibition processes. should we include this here? It would be good if we could demonstrate how to use it for something. or is there a way to tie it to Newts's growing point stuff?
* ??? [can we work out some sort of regularization mechanism???]
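If activation-inhibition is included, one toy demonstration (an illustrative local-activation/lateral-inhibition rule of our own devising, with arbitrary weights and ranges; not necessarily the process intended above) runs on a ring of elements: active neighbors within a short range excite an element, active elements slightly farther away inhibit it, and iterating the rule from a random start tends to organize the ring into alternating active and inactive patches.

```python
import random

random.seed(4)
N = 200
state = [random.randint(0, 1) for _ in range(N)]   # random initial states

def step(s):
    """One synchronous round: nearby activity excites, distant inhibits."""
    out = []
    for i in range(N):
        field = 0.0
        for d in list(range(-8, 0)) + list(range(1, 9)):
            # weight +1.0 within 3 cells, -0.6 from 4 to 8 cells away
            field += (1.0 if abs(d) <= 3 else -0.6) * s[(i + d) % N]
        out.append(1 if field > 0 else 0)
    return out

for _ in range(30):
    nxt = step(state)
    if nxt == state:                   # pattern has stabilized
        break
    state = nxt
```

The update is purely local, so it fits the amorphous model; an asynchronous variant would update elements one at a time in random order instead of in synchronous rounds.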
* end this section by showing how to do region definition, as in the amorph-comp manifesto
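Region definition can be sketched directly on top of wave propagation (the seed choice, hop budget, and names here are illustrative assumptions): a seed element launches a wave carrying a hop budget K that is decremented at each relay; every element that hears the wave joins the region and, while its budget remains positive, passes the wave on. The region that forms is exactly the set of elements within K hops of the seed.

```python
import math
import random

random.seed(5)
N, R, K = 1000, 0.1, 5                 # K = region radius, in hops
pts = [(random.random(), random.random()) for _ in range(N)]
nbrs = [[j for j in range(N) if j != i and math.dist(pts[i], pts[j]) <= R]
        for i in range(N)]

# Countdown wave: the seed announces a budget of K hops; a receiver
# joins the region, remembers the best budget it has heard, and
# relays that budget minus one until the budget runs out.
budget = [None] * N
budget[0] = K
frontier = [0]
while frontier:
    nxt = []
    for i in frontier:
        if budget[i] > 0:
            for j in nbrs[i]:
                if budget[j] is None or budget[j] < budget[i] - 1:
                    budget[j] = budget[i] - 1
                    nxt.append(j)
    frontier = nxt

region = [i for i in range(N) if budget[i] is not None]
# Every member lies within K hops, hence within distance K * R, of
# the seed; elements beyond K hops never hear the wave.
```

Adjusting K trades region size against the number of messages sent, and multiple seeds with distinct region names could define overlapping or abutting regions in the same way.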
4. ORGANIZING METAPHORS FOR AMORPHOUS COMPUTING
Intro paragraph:
metaphors come from biology (but we make no claims this is how biology
works)
Useful primitive elements. But can we do anything to build structure? One place to look is biology. Developmental biology does some of this.
design in terms of these metaphors and we create languages to express these designs.
* topological control via growing points. overview of Newts's work. growing points as a way to serialize the parallel process; show compilation into the lower-level language
* differentiation and growth. Ron's work.