Artificial-Life Simulators and Their Applications

Howard Gutowitz

École Supérieure de Physique et de Chimie Industrielles
Laboratoire d'Électronique
10 rue Vauquelin
75005 Paris, France

and

The Santa Fe Institute
1399 Hyde Park Road
Santa Fe, NM 87501

hag@santafe.edu

January 30, 1995

Abstract

Artificial Life (Alife) is a rapidly growing field of scientific research linking biology and computer science. It seeks to understand how life-like processes can be embodied in computer programs. Advances in this area promise to illuminate fundamental questions both in biology ("What is life?") and in Computer Science ("How to make robust and adaptable computer programs?").

Much of the work in artificial life is directed toward building computer simulations of artificial creatures and the artificial worlds in which they live. This report surveys major efforts in this area, with attention to developments likely to lead to practical applications in the short to middle term. This document is intended to be at once a critical introduction to the field and a resource guide for those who wish to explore further.

Contents

1 Introduction
2 Connections between Alife and Traditional Computer Science
  2.1 Generalities
  2.2 Specific Problem Domains
3 The Fundamental Algorithms of Artificial Life
  3.1 Neural Networks
  3.2 Evolutionary Algorithms
    3.2.1 Variant evolutionary algorithms
  3.3 Cellular Automata
4 Selected Packages in Depth
  4.1 A Radically Bottom-Up Approach: Tierra
    4.1.1 Evolvable instruction set
    4.1.2 Mutability
    4.1.3 Emergent fitness function
    4.1.4 Conclusions
  4.2 Ants as a Model Organism: MANTA
    4.2.1 Reactive agents
    4.2.2 Agents and Classes
    4.2.3 Abstract and concrete classes
    4.2.4 A typical experiment
    4.2.5 Conclusions
  4.3 Artificial Biophysics: LEE
    4.3.1 The simulator
    4.3.2 The code
    4.3.3 A typical experiment
    4.3.4 Conclusions
  4.4 A General-Purpose Simulator: Swarm
    4.4.1 The prototype
    4.4.2 Future versions of Swarm
    4.4.3 Conclusions
5 Conclusions
6 Research Groups in Artificial Life
  6.1 Research Groups in France
  6.2 Foreign Groups
7 World Wide Web Sources for Artificial Life Simulation
  7.1 General
  7.2 Genetic Algorithms
  7.3 Genetic Programming
  7.4 Neural Nets
  7.5 Alife Simulators and Research Groups
  7.6 Computer Science and Autonomous Agents
8 Program List
9 Internet Resource Guide
10 References

1 Introduction

According to its founder, "artificial life is a field of study devoted to understanding life by attempting to abstract the fundamental dynamical principles underlying biological phenomena, and recreating these dynamics in other physical media-such as computers-making them accessible to new kinds of experimental manipulation and testing" [Lan91].

Artificial life (Alife) cuts broadly across several established fields: biology and computer science at the core, and more peripherally physics, sociology, psychology, and philosophy. Any overly general presentation will be empty. Still, our emphasis here on the role of simulators in Alife allows us to bring out a number of common themes, and to concentrate on the relationship between Alife and computer science.

We proceed as follows. We first sketch in the background by means of a quick survey of the connections between Alife and traditional computer science. This allows us to appreciate the range of computational issues which might be addressed by Alife simulators. We then describe some of the "fundamental algorithms" used almost universally in artificial life simulations. The fundamental algorithms are methods to simulate biological evolution and learning in a population of artificial creatures. These creatures, in turn, are usually represented as objects in an object-oriented programming paradigm. The way these algorithms and objects are used together in simulation packages is the subject of the next section: an in-depth analysis of several representative simulation platforms. We conclude with an extensive bibliographic section. For some, this bibliographic section represents the main value of this report. It is meant to supply concrete links for those seeking to connect their own research with artificial life. At the same time, these references are the best proof of our central assertion: that artificial life is experiencing explosive growth, making broad connections across traditional disciplines, and, increasingly, to practical applications in industry.

Responding to the need for rapid diffusion of information, most current work is reported first, and sometimes exclusively, via the World Wide Web (WWW). The WWW is a collection of hypertext documents residing on a network of thousands of computers scattered across the globe. A user with access to the internet and appropriate software can read these documents and follow the links they contain to other, related, documents. This report can itself be processed as hypertext(1). This is the preferred way to read this document. In hypertext format all the "Universal Resource Locator" (URL) links become active; that is, a user need merely click on a reference of interest in order to obtain the full text. In printed form, the URL references appear as footnotes.

______________________________
(1) http://alife.santafe.edu/topics/simulators/dret/dret.html

References

General access to the Alife literature may be obtained through a series of conference proceedings, e.g. [For91, MW91, LTFR91, DGN+93, BM94, HS94]. There are a few popular introductions to Alife, notably [Lev92] and, in French, [Heu94, BT94]. An extensive and well-annotated bibliography on Alife can be found in [Gro]. Finally, there is a new journal, "Artificial Life" [CL4], published by MIT Press.
2 Connections between Alife and Traditional Computer Science

Below we undertake a detailed examination of a number of representative Alife simulators, packages which provide a set of tools for building simulations of particular processes and phenomena. Many of the technical issues which must be addressed in Alife simulation are related to issues in traditional computer science, artificial intelligence in particular. This section seeks to place Alife simulation efforts in a computer-science context.

2.1 Generalities

In the field of artificial intelligence, Alife-related work proceeds under the general heading of autonomous agents. An autonomous agent is a program which contains some sort of sensor and effector system. The agent operates within a software environment such as an operating system, a database, or a computer network. The sensors are used to observe features of this external environment. The effectors may alter the state of the environment or the state of other agents. Software agents pursue goals such as acquiring information about the environment or modifying its state, either individually or in teams. They do so without continuous intervention of a programmer/user. (A schematic sketch of such an agent is given below.)

In part, the autonomous-agent community breaks into sub-communities along geographic lines. While in North America this work is treated under the heading "distributed artificial intelligence" (DAI), in Japan one refers to "Multi-Agent and Cooperative Computing" (MACC) and in Europe to "Modeling Autonomous Agents in a Multi-Agent World" (MAAMAW). CKBS (cooperating knowledge-based systems) is an international subfield. CKBS systems typically have a stronger emphasis on real-world problems, stressing performance and reliability issues over theoretical concerns [Dee92, Dee93b, Dee94a]. More specifics on some of these subfields and applications are given below.

The subset of artificial life work which is conducted in artificial intelligence departments is typically distinguished by several hallmark features:

- It is tightly coupled to practical, industrial problems.
- While biological metaphors abound in this literature, no serious attempt is made to connect with empirical natural science.
- While the autonomous-agent movement within artificial intelligence is a movement toward sub-symbolic, physically grounded (situated) computation, the traditional artificial intelligence concern to explain higher-order cognitive processing remains in clear evidence.

With respect to artificial life simulators, the autonomous-agent literature is a rich source of inspiration, methodology, and software. While simulators built in the strict context of Artificial Life research are typically small, ad-hoc programs written by at most a few individuals over a short period of time (on the order of a year), autonomous-agent simulators may represent the investment of tens or hundreds of man-years and may be refined to the point where they can serve in large, sensitive applications, such as air traffic control [N.R94].

General simulation platforms

SOAR is a major artificial intelligence simulation platform [LRN86, LNR87]. It is designed to simulate the behavior of collections of expert systems. A related system is CLIPS; though originally a single-agent expert system, a multi-agent extension has been developed [CLI].
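To make the notion of an autonomous agent concrete, the following is a minimal sketch of the sense-decide-act loop described above. It is not drawn from SOAR, CLIPS, or any other system cited in this section; the environment, the sensor, the effector, and the clean-up goal are all invented for the illustration.

    class FileCountSensor:
        """Illustrative sensor: observes one feature of a software environment."""
        def read(self, environment):
            return len(environment["files"])

    class CleanupEffector:
        """Illustrative effector: alters the state of the environment."""
        def act(self, environment):
            if environment["files"]:
                removed = environment["files"].pop()
                print("removed", removed)

    class AutonomousAgent:
        """Sense-decide-act loop, run without user intervention."""
        def __init__(self, sensor, effector, threshold=3):
            self.sensor = sensor
            self.effector = effector
            self.threshold = threshold  # goal: keep the file count below this value

        def step(self, environment):
            observation = self.sensor.read(environment)   # sense
            if observation > self.threshold:              # decide
                self.effector.act(environment)            # act

    environment = {"files": ["tmp%d.log" % i for i in range(6)]}
    agent = AutonomousAgent(FileCountSensor(), CleanupEffector())
    for _ in range(10):
        agent.step(environment)
    print(environment["files"])  # three files remain; the goal is satisfied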
General references

[GBH87, GH89, GB92] [BG88] [HMSB87] [IEE91] [WD92]

2.2 Specific Problem Domains

Without attempting an exhaustive survey, this section considers some particular problem domains in which Alife and traditional computer science meet.

Learning

There are extensive interconnections between the fields of neural networks and artificial intelligence. These are explored for example in [FH87, Hin90, Kni89, Wat91, Dor94]. For some engineering applications see [DKS91].

Learning in a multi-agent setting provides numerous challenges for theories originally developed to explain learning in isolated individuals; see e.g. [LS87, SW89, Gre91, Sia91, Dal93, BM91, Wei93].

Virtual Reality

Virtual Reality (VR) is a burgeoning field of computer science with widespread practical applications and tight connections with artificial life. Both Virtual Reality and artificial life practitioners seek to use the computer to represent life-like processes operating in artificial, but life-like, worlds. There are marked differences in style between the two fields: the user of a VR simulator is often a participant in the activities of the artificial world, while this is seldom the case in an Alife simulator. Verisimilitude is typically a raison d'être for a VR simulator, while Alife simulators typically allow for radical departures from natural law. The creators of one simulation system in particular, VEOS [BC93], are very explicit about the Alife influences on the design of their system. [HD92] is a useful guide to VR applications, ranging from aerospace to visualization, through education, games, military, and telecommunications applications.

Autonomous Agent Psychology/Robotics

T. Tyrrell [Tyr] and P. Maes [Mae90, Mae91] have written large agent-based simulators aimed at understanding "action selection," the mechanisms by which an organism (real or artificial) selects which among a variety of (often mutually incompatible) behaviors to execute at a given moment. Such work has immediate practical implications for robots, as well as providing an experimental platform for the evaluation of psychological theories. While Tyrrell and Maes focus on the behavior of individuals in a single generation, K. von Roeschlaub's simulator [vR] is aimed at the study of the evolution of behavior, in particular in predator-prey competition.

In traditional robotics, programmers attempted to anticipate and explicitly control every aspect of the action of the robot. Such control systems tend to be "brittle", that is, they tend to fail when the robot is placed in some unanticipated situation. By contrast, decentralized, adaptive control of robot motion, championed notably by Brooks [Bro86, Dor95], seeks to build robotic controllers which continuously learn and adapt to changing environments. One of the problems which has been most successfully treated in this domain is the construction of walking robots [Bro89, JMD94].

Traffic Control

A typical application for distributed artificial intelligence is in the control of traffic. Traffic control may concern physical vehicles [MTD93], or simply the flow of information packets in a network [FD93a, FD93b, Fle93a, Fle93b]. Air traffic control, in particular, has been intensively studied [Ndo93]. The issue is to promote cooperation among air traffic controllers themselves, and between the controllers and aircraft.
Smooth cooperation is obviously required in this environment in order to achieve a safe, orderly and expeditious movement of traffic in airspace. The autonomous-agent approach, in which, for instance, a software agent is assigned to assist and represent each of the players (controllers and aircraft), has proved fruitful.

Intelligent Manufacturing

This is yet another domain in which monolithic, centralized control structures are giving way to distributed systems of agents. In this approach, which sometimes goes under the name of "Holonic Manufacturing", each machine or process is endowed with an autonomous-agent controller. The agent monitors the state of its machine and tries to satisfy its "needs" in terms of raw material and the like, possibly competing with other agents for resources. See [Dee93c, Dee93a, Dee94b, WD93, KD93].

Education

Artificial intelligence has been used extensively as the basis for computer-aided instruction. A number of Alife simulators have been developed to teach principles of biology, especially to children. Mitchell Resnick [RM, Res94] is a leader in this area. Some of these programs are commercially available as educational games. SimLife, in particular, is a well-made program which has achieved significant commercial success [KB92, Inc].

Computer Viruses

A computer virus may be viewed as a kind of autonomous agent. It is a computer program which attempts to satisfy an agenda without continuous human intervention in its operation. In practice, viruses are distinguished from autonomous agents in that they are generally rather simple in construction, and generally have but one major aim: to reproduce and spread copies of themselves to many computers. All computer programs depend on other programs (such as the operating system) to execute. However, viruses are distinguished by the fact that they often integrate their code directly into that of other programs, such that execution of the host program causes execution of the viral program.

Further, viruses often have destructive effects on their host computers. These destructive viruses are well known in personal computing, and protection of computers against destructive viruses has become a major industry. Viruses hold particular interest for Alife since they have properties very similar to those of biological viruses. It is with respect to these viral programs that the claim of "strong" Alife, i.e. the claim that some computer programs are truly "alive", may be best defended. Viruses may also be designed to have constructive effects. For example, a virus programmed to seek out and destroy anomalies in a database could be used as a distributed method for maintaining the integrity of that database. For further information see [Spa91, Spa94, FJK92].

3 The Fundamental Algorithms of Artificial Life

The motor propelling most artificial life simulations is an algorithm which allows artificial creatures to evolve and/or adapt to their environment. Each of these algorithms is a major topic in itself, with widespread scientific and industrial applications. These subjects are well treated elsewhere; here we need only a brief sketch and a few meta-references.

The fundamental algorithms fall into two dominant categories: learning algorithms, typified by neural networks, and evolutionary algorithms, typified by genetic algorithms.
3.1 Neural Networks

Many Alife researchers, especially those concerned with higher-order processes such as learning and adaptation, endow their organisms with a neural net which serves as an artificial brain. Neural networks are learning algorithms. They may be trained, for example, to classify images into categories; a typical task is to recognize to which letter a given hand-written character corresponds.

A neural net is composed of a collection of input-output devices, called neurons, which are organized in a (highly connected) network. Normally the network is organized into layers: an input layer which receives sensory input, any number of so-called hidden layers which perform the actual computations, and an output layer which reports the results of these computations. Training a neural network involves adjusting the strengths of the connections between the neurons in the net. The field of neural nets saw impressive growth in the 1980s, and continues to develop rapidly.

References

For a general theoretical orientation toward neural networks see [RM86, AR91, HN90, HKP91, Hin89, Tou91], or [Jod94a, Jod94b] in French. The internet resources [FUN, Pol] are useful for tracking the latest developments in the field. See also section 7.4. There have been tremendous efforts made to build effective, general-purpose neural-net simulators. Many of these are public domain, and available over the internet. [Mur] provides a complete review of these simulators. Increasingly, there are attempts to combine genetic-algorithm and neural-net approaches, i.e. to use the genetic algorithm to evolve neural nets. Some examples may be found in [VO91, Bes93, BDM94].

3.2 Evolutionary Algorithms

The other major type of biologically inspired fundamental algorithm is the evolutionary algorithm. While neural networks are metaphorically based on learning processes in individual organisms, evolutionary algorithms are inspired by evolutionary change in populations of individuals. Relative to neural nets, evolutionary algorithms have only recently gained wide acceptance in academic and industrial circles. Established references are correspondingly less abundant, and we must be more complete here.

Evolutionary algorithms are iterative. An iteration is referred to as a "generation". The basic evolutionary algorithm begins with a population of randomly chosen individuals. In each generation, the individuals "compete" among themselves to solve a posed problem. Individuals which perform relatively well are more likely to "survive" to the next generation. Those surviving to the next generation may be subject to small, random modifications. If the algorithm is correctly set up, and the problem is indeed one subject to solution in this manner, then as the iteration proceeds the population will contain solutions of increasing quality.

The most popular evolutionary algorithm is the genetic algorithm of J. Holland [Hol75]. The genetic algorithm is widely used in practical contexts (financial forecasting [Das], management science [Nis93]). It is particularly well adapted to multivariate problems whose solution space is discontinuous ("rugged") and poorly understood. To apply the genetic algorithm, one must define 1) a mapping from the set of parameter values into the set of (0-1) bit strings, and 2) a mapping from bit strings into the reals, the so-called fitness function. A set of randomly chosen bit strings constitutes the initial population.
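The two mappings just described can be illustrated with a toy example. It is a sketch only: the choice of a single parameter x in [0, 1], the 16-bit encoding, and the fitness function are invented, and are not taken from any particular application.

    import random

    BITS = 16  # assumed genome length for a single parameter

    def encode(x):
        # Mapping 1: from a parameter value in [0, 1] to a (0-1) bit string.
        value = round(x * (2**BITS - 1))
        return [int(b) for b in format(value, "0%db" % BITS)]

    def decode(bits):
        # Helper: recover the parameter value from a bit string (inverse of mapping 1).
        value = int("".join(map(str, bits)), 2)
        return value / (2**BITS - 1)

    def fitness(bits):
        # Mapping 2: from a bit string into the reals (the fitness function).
        # Toy objective, maximized at x = 0.5.
        x = decode(bits)
        return x * (1 - x)

    # A randomly chosen initial population of bit strings.
    population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(20)]
    print(max(fitness(g) for g in population))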
In the basic genetic algorithm, a cycle is repeated during which:

1. The fitness of each individual in the population is evaluated.
2. Copies of individuals are made in proportion to their fitness.
3. Individuals in the population of copies are altered by mutations and recombinations between pairs of individuals.

(A minimal code sketch of this cycle is given at the end of this section.) As in the case of neural nets:

- Evolutionary algorithms vary widely in their degree of biological realism.
- There is a wide range of implementation details which can have a profound effect on the outcome.
- While theory to explain the behavior of evolutionary algorithms exists, it is far from complete.
- While most effort in this area is directed toward applications (finding ever more efficient algorithms and applying them to industrial problems), the interest for artificial life remains in the source of these algorithms in biological metaphor.

3.2.1 Variant evolutionary algorithms

There are numerous variants of the evolutionary paradigm beyond the genetic algorithm, all of which find employment in Alife simulation. These are, in brief:

Genetic Programming

Genetic Programming is a variant evolutionary algorithm in which the genome is represented by a Lisp expression. Thus evolution operates on computer programs, rather than on bit strings as in the case of the usual genetic algorithm. For further information see the books and conference proceedings [Koz92, Koz94, KEK94]. For specific applications see [Tac93]. The genetic programming repository [JM] contains a good collection of articles and software for genetic programming. WWW sources are found in section 7.3.

Evolutionary Programming

Evolutionary programming is an early evolutionary method, mainly distinguished from the genetic algorithm of Holland in that it does not use crossover as an operator. For an overview of evolutionary programming, see [Atm94]. For a comparison with genetic algorithms see [DB93].

Classifier Systems

Classifier systems, like the genetic algorithm, are a brain-child of J. Holland [HHNT88]. Classifier systems are fairly complicated relative to genetic algorithms; the details cannot be discussed here. For the relationship between classifier systems and genetic algorithms, see [BGH89]. Holland himself has been active in bringing classifier systems into contact with autonomous-agent/Alife simulation through his ECHO modeling platform [Hol93]. R. Demeur [Dum94] has also developed a sophisticated simulation system involving populations of classifier systems.

Lindenmayer Systems

Lindenmayer systems are systems of production rules which are used to model the growth and development of organisms. They can be found at the heart of a number of Alife simulators, e.g. [VOH91, Dum94]. They have also been extensively used in computer animation [Lee].

References

An excellent source for information on the state of evolutionary algorithm research in France is [Lut94]. For a general introduction, see [Dav89, Dav91, Gol89, Raw91, Dum94]. For some specific applications in biology and computer science see [SMM+91, PFB93, PFB94]. The internet is a superb source for material on genetic algorithms, e.g. software [Nav], bibliographies [GMT] and preprints (see Section 7.2). [Heib] is a rather complete, up-to-date overview of genetic algorithms and related approaches. The bibliography by Saravan [Sar93] collects references not only on genetic algorithms, but also on genetic programming, evolutionary programming, classifier systems and the like.
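The following is the minimal sketch of the basic genetic-algorithm cycle promised above. It is an illustration rather than a reference implementation: the population size, mutation rate, and count-ones fitness are arbitrary choices, and a real application would decode each bit string into problem parameters as in the earlier sketch.

    import random

    GENOME_LENGTH = 32
    POPULATION_SIZE = 50
    MUTATION_RATE = 0.01
    GENERATIONS = 100

    def fitness(genome):
        # Toy fitness: count the 1-bits in the genome.
        return sum(genome)

    def select_parents(population):
        # Fitness-proportional ("roulette wheel") selection of two parents.
        weights = [fitness(g) + 1e-9 for g in population]
        return random.choices(population, weights=weights, k=2)

    def crossover(a, b):
        # Single-point recombination between two parents.
        point = random.randrange(1, GENOME_LENGTH)
        return a[:point] + b[point:]

    def mutate(genome):
        # Flip each bit with a small probability.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in genome]

    def evolve():
        population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                      for _ in range(POPULATION_SIZE)]
        for _ in range(GENERATIONS):
            # 1. evaluate fitness, 2. copy in proportion to fitness,
            # 3. alter the copies by recombination and mutation.
            population = [mutate(crossover(*select_parents(population)))
                          for _ in range(POPULATION_SIZE)]
        return max(population, key=fitness)

    best = evolve()
    print(fitness(best))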
3.3 Cellular Automata

A cellular automaton is a discrete dynamical system. Space, time, and the states of the system are discrete. Each point in a regular spatial lattice, called a cell, can have any one of a finite number of states. The states of the cells in the lattice are updated according to a local rule. That is, the state of a cell at a given time depends only on its own state one time step previously, and the states of its nearby neighbors at the previous time step. All cells on the lattice are updated synchronously. Thus the state of the entire lattice advances in discrete time steps.

Cellular automata are not, strictly speaking, algorithms for learning or evolution, but rather a modeling framework for a good number of Alife simulations. Indeed, one may place the origin of the field with the demonstration by von Neumann [vN66, Lan84, Lan86] that cellular automata are capable of self-reproduction. The self-reproduction theme has been recently pursued by Langton [Lan84, Lan86, Lan87], who shows that self-reproducing machines can be built much more simply than in the von Neumann construction. Langton is also responsible for the development of a general-purpose cellular-automaton simulation tool, Cellsim [Heia]. Additional cellular-automaton simulators, as well as an extensive bibliography, can be found in [Gut].

[EEK93] provides a good overview of other applications of cellular automata to biological modeling. For connections of cellular automata to other disciplines, see [Gut91].

4 Selected Packages in Depth

The ideal general-purpose Alife simulator would allow the user to choose from a variety of fundamental algorithms, to easily design populations of creatures, and to easily collect and analyze data. It would have a good graphical display and run without modification on different types of machines. No existing package meets all of these desiderata, though Swarm (see section 4.4) does or will come close. The packages we have chosen to consider in detail below represent various poles of Alife research and illustrate the range of demands which would be placed on a truly general Alife simulator.

4.1 A Radically Bottom-Up Approach: Tierra

Tierra is the most advanced platform for the study of the evolution of artificial organisms at the level of the genome. It was developed by Tom Ray of the University of Delaware and the ATR Human Information Processing Research Laboratories in Kyoto (see Section 6.2). Tierra aims to provide an environment in which Darwinian evolution can proceed within a computer, without explicit direction or intervention from a human operator.

Ray [Ray94] draws a distinction between simulation and instantiation of artificial life. In a simulation, populations of data structures in a computer program are used to represent populations of biological entities (predators and prey, ants, cells, and the like). Most of the programs discussed in this report are of this type. In an instantiation of artificial life, populations of data structures do not explicitly represent any living organism or process, but rather obey artificial laws abstractly related to the natural laws governing living processes. Proponents of strong Alife, such as Ray, would claim that an instantiation of artificial life is indeed living. To appreciate this distinction we need some structural and operational details of Tierra.

4.1.1 Evolvable instruction set

In Tierra, organisms are machine-language computer programs. An organism is thus a linear string of instructions.
The organism is executed by moving an execution pointer along the organism, executing in sequence the instructions encountered. The major conceptual advance in Tierra is the construction of a robust machine-language instruction set. A program (organism) written in this instruction set may be altered by random mutation or recombination and yet remain executable. Thus organisms may evolve under the action of genetic operators. This is not the case for the machine-language instruction sets used for traditional programming. It is worthwhile to see how this works in detail.

To create an evolvable instruction set, Ray began with a traditional instruction set (for the Intel 80x86 processor) and removed all instructions taking numeric arguments. This move reduced the size of the instruction set dramatically, but created the problem of addressing instructions which do not follow each other sequentially in the code, such as is required for a loop or branch. This problem was handled by defining a pair of instructions, "no_operation_0" and "no_operation_1", which can serve both as (null) instructions and as binary numbers. Ray then defines a "jump" instruction which transfers execution to an address defined by the binary complement of the bit string immediately following the occurrence of the jump instruction. Concretely, if the execution pointer encounters the instruction sequence "jump no_operation_0 no_operation_0 no_operation_1" it will continue moving along the code, without executing the instructions encountered, until it encounters the complementary sequence "no_operation_1 no_operation_1 no_operation_0", at which point execution of the organism resumes. In practice, a maximum search distance is defined. If this distance is exceeded before a complementary string is encountered, then the jump instruction is simply ignored. A complementary instruction "jump_backward" is defined which causes the execution pointer to move backward up the organism in search of a complementary string. It is clear that "jump" and "jump_backward" in combination allow for the creation of loops.

It is best to think of Tierra as an artificial biosphere ruled by its own (artificial) laws, rather than as a direct model of any real biological function.

A number of consequences issue from this trick of conflating numbers with instructions. Among the most important are mutability and emergent fitness.

4.1.2 Mutability

Tierra's machine code consists of 32 instructions. Each is specified by a 5-bit number, and a program is simply a concatenation of these numbers. Thus any string of 5-bit numbers is a valid (executable) program. A program may be converted into another program through application of a genetic operator. For simplicity, let us consider the point mutation as the genetic operation. Flipping a bit in a program produces a new program. The new program may, for instance, contain a "no_operation" instruction where the old program had an active operation such as a move or jump. This poses no danger at the level of the operating system: the program may still be executed, though it may now produce no useful or sensible result or action.

This mutability property is exploited by Tierra's operating system to drive evolution. The operating system allocates CPU cycles and memory to each of a population of organisms under its control. At some (user-specifiable) rate, the operating system randomly introduces mutations (errors) into organisms.
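This closure of the instruction set under mutation is easy to illustrate. The sketch below is not Tierra code; it merely shows that flipping any bit of a genome built from 5-bit opcodes always yields another string of valid opcodes. The placeholder instruction names are invented.

    import random

    # 32 placeholder instruction names, one for each 5-bit opcode (0..31).
    INSTRUCTION_SET = ["op_%02d" % i for i in range(32)]

    def point_mutate(genome, bits_per_instruction=5):
        # Flip one randomly chosen bit of one randomly chosen opcode.
        mutant = list(genome)
        position = random.randrange(len(mutant))
        mutant[position] ^= 1 << random.randrange(bits_per_instruction)
        return mutant

    # A genome is a string of 5-bit opcodes; any such string is executable.
    genome = [random.randrange(32) for _ in range(20)]
    mutant = point_mutate(genome)
    assert all(0 <= opcode < 32 for opcode in mutant)   # still a valid program
    print([INSTRUCTION_SET[opcode] for opcode in mutant])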
In a typical simulation run, the operating system is "infected" with a single organism capable of self-reproduction (making a copy of itself at some other location in memory). The operating system repeatedly executes the organism, allowing it to reproduce itself. Errors are occasionally introduced into the offspring, usually producing organisms which can no longer reproduce, but occasionally producing offspring which reproduce faster than their ancestors.

When the memory becomes full, some organisms must be eliminated; otherwise the simulation halts. The operating system chooses which organisms to eliminate according to how well the program executes. For instance, some instructions generate error conditions in certain contexts. An organism receives a demerit from the operating system each time it generates an error condition. An organism is eliminated when it has accumulated enough demerits (relative to other organisms in the population). The result is that organisms which reproduce quickly and execute relatively without error come to dominate the population. Normally a wide diversity of organisms persists in the population. Basic evolutionary phenomena, such as parasitism, can be exhibited.

4.1.3 Emergent fitness function

A basic task of evolutionary theory is to take account of the mechanisms underlying "survival of the fittest". Usually, one is led to define a real-valued "fitness" function which takes as arguments properties of organisms. This is the case in the genetic algorithm, for instance. Organisms with high values of fitness reproduce at a higher rate than those with low values of fitness. The deep difficulty is that by defining a fitness function researchers impose the operation of the very phenomenon they wish to study: the way in which fitter organisms survive better.

Tierra is an artificial environment in which fitter organisms survive better without the need for an explicitly defined fitness function. In Tierra there are two resources: compute time ("energy") and computer memory ("territory"). Organisms which by some means or other manage to capture more of these resources than other organisms have differentially higher survival rates.

There is nonetheless wide latitude in the criteria used to eliminate organisms. For instance, organisms can be eliminated only on the basis of the error conditions they produce, or they can also be "rewarded" when they are able to execute particularly difficult instructions, reducing their mortality [Ray91].

4.1.4 Conclusions

Tierra represents an extreme bottom-up approach to general Alife simulation. Its unique advantages, such as mutability and emergent fitness, are bought at the price of making implementation of higher-order functions such as intelligence, perception, and communication difficult. In a sense, attaining this kind of functionality will require geological-time-scale evolution in the Tierran biosphere. Relatively low-level phenomena, such as multicellularity, have proven a major challenge to obtain in Tierra [Ray94].

User-interface features remain fairly undeveloped in Tierra. While there are some facilities in the MS-DOS version of Tierra for visual display of data, even in the latest version (4.0) there is no X Windows interface or toolbox.
There are thus many examples of Alife simulators devoted more or less exclusively to the modeling of the behavior of ant colonies. Ants occupy a central place in artificial life due to their relative individual sim- plicity combined with their relatively complex group behavior. Ant colonies have evolved means of performing collective tasks which are far beyond the capacities of their constituent components. They do so without being wired together in any specific architectural pattern, without central control, and in the presence of strong intrinsic noise. Ants can create architectural struc- tures dynamically when and where they are needed, such as trails between nest and food sources, or "living bridges" when swarms of ants migrate in the rain-forest. For further information on the biology of ants, see the landmark book [HW90 ]. The consensus is that comprehension of emergent complexity in ant colonies will serve as a good basis for the study of emergent, collective behavior in more advanced social organisms, as well as leading to new practical methods in distributed computation. In this section we will consider a typical ant simulation system, MANTA (Modeling an ANTnest Activity), written by Alexis Drogoul in the labora- tory of J. Ferber at the University of Paris, VI [Dro93 ]. MANTA is intended to provide a software environment in which questions concerning collective, social computation can be addressed. Drougoul appeals heavily to the for- malism of object-oriented (or better, agent-oriented) programming in the creation of this environment. 4.2.1 Reactive agents Drogoul models ant colonies as a collection of reactive agents. A reactive agent is a type of autonomous agent which fflcan send and receive messages. ffldoes not (in general) learn or "think"; it simply reacts to messages it receives in a stereotyped way. 18 Perhaps the most interesting aspect of Drogoul's MANTA simulator is the way in which he conceptualizes the relationship between individual ant agents and their physical environment. He is working within a strict agent- oriented programming paradigm. Thus, every entity in the system is an agent, including physical processes such as "light". 4.2.2 Agents and Classes MANTA contains three classes of agents fflThe "assistants": The queen, the workers, the males. fflThe "assisted" agents: larva at different stages of development, co- coons. ffl"Physical" agents: light, humidity, garbage, dead ants. The characteristics of the agents are determined in a hierarchical manner. The properties are collected into a programming entity called a class. A class lower in the hierarchy inherits properties of the classes higher in the hierarc* *hy. In the simulation, all agents: fflHave the same abstract form of behavior. (Class EthoBehavior) fflHave a graphic representation (icon). (Class InterfaceBehavior) fflAre localized in space. (Class LocatedBehavior) fflMay Mature. (can change state as a function of time) (Class Matur- ingBehavior) Thus not only are biological and physical agencies treated in the same hierarchy of representation, so are explicitly software functionalities, such as interfaces to the computer screen. For instance, any agent belonging to the class InterfaceBehavior "knows" how to communicate its physical position to the user-interface so that it can be properly displayed on the computer screen. The programmer need merely declare that an agent belongs to this class in order to obtain all the relevant functionality. 
Some salient characteristics of the class structure:

- Some agents, like eggs, need to be taken care of in order to mature; others, like Light or Food, do not. So the MaturingBehavior class has to be refined: those that need to be taken care of are in the sub-class CuringBehavior (an abstract class).
- Some agents, like Eggs and Cocoons (concrete classes), are completely described by their CuringBehavior. Others need to obtain food in order to mature, so they are in the abstract class FeedingBehavior.
- Larvae, in particular, are completely described by their FeedingBehavior, but other agents need to Move and Sense and so on, and thus depend on further abstract classes.

4.2.3 Abstract and concrete classes

An important distinction is made between abstract and concrete classes. In short, abstract classes provide a collection of methods and abstract structures. These can be tuned to represent the specific characteristics of concrete actors in the system.

An abstract class has a variety of methods which are used by every object in the class. For example, each has a method called "readEnvironment" which is called at each cycle of the simulation and does things like reduce the amount of food, or increase the age of the agent. What the method "readEnvironment" actually does depends on which type of agent is "reading" the environment. For instance, if the agent is of type CuringBehavior:FeedingBehavior, then "readEnvironment" decreases the amount of food in the environment.

Concrete classes, on the other hand, specify the parameters which control particular instantiations of abstractly described behaviors. Consider, for instance, one of the simplest concrete classes, that which contains EggAgents. An EggAgent has three "stimuli":

- #egg: indicates the presence of an egg;
- #cureEgg: the needs of the egg in terms of care and stimulation;
- #maturingEgg: becomes positive when the egg has matured.

These parameters set up the qualities which characterize Egghood in the simulation.
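To make the abstract/concrete distinction concrete, here is a small sketch in the spirit of the classes just described. It does not reproduce MANTA's actual class library; the method names, the care rule, and the maturation parameter are simplified assumptions.

    from abc import ABC, abstractmethod

    class MaturingBehavior(ABC):
        """Abstract class: anything that changes state as a function of time."""
        def __init__(self):
            self.age = 0
            self.mature = False

        def read_environment(self, environment):
            # Called once per simulation cycle for every agent.
            self.age += 1
            if self.age >= self.maturation_age():
                self.mature = True

        @abstractmethod
        def maturation_age(self):
            ...

    class CuringBehavior(MaturingBehavior):
        """Abstract refinement: agents that must be cared for in order to mature."""
        def __init__(self):
            super().__init__()
            self.care_received = 0

        def read_environment(self, environment):
            # Only age when care has been received this cycle.
            if environment.get("care", 0) > 0:
                self.care_received += 1
                super().read_environment(environment)

    class EggAgent(CuringBehavior):
        """Concrete class: supplies only parameters, no new mechanisms."""
        MATURATION_AGE = 20  # illustrative value, not taken from MANTA

        def maturation_age(self):
            return self.MATURATION_AGE

    egg = EggAgent()
    for _ in range(25):
        egg.read_environment({"care": 1})
    print(egg.mature)  # True, after enough cared-for cycles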
It remains to be seen whether these observations will stand up to careful statistical analysis. In any case, these experiments are suggestive of how powerful simulation tools could be used to study very complex processes in nature. 4.2.5 Conclusions MANTA illustrates some of the difficulties resulting from the cross-disciplinary nature of Alife. The program is rather sophisticated from the computer- science point of view. It is a struggle to make the biology and physics of an artificial world conform to the of agent- and object-oriented program- ming paradigm. In this paradigm, the perfect program is one in which all attributes are organized in strict hierarchical format. One may argue that in many instances conceptually distinct behavior is improperly forced into the same box by this approach. However, the achievement of MANTA is that it brings into being a fully developed expression of often-voiced idea: that 21 ant societies should be represented in software as hierarchies of agents. It thus provides a concrete example showing the strengths and limitations of the approach. Thus far, the natural science content of this project is weak compared to the computer-science content. This could be remedied by closer collaboration with natural scientists. 4.3 Artificial Biophysics: LEE The Latent Energy Environments (LEE) package of Fillipo Menzcer and Rik Belew of the University of California, San Diego, combines several of the themes developed above: neural networks, genetic algorithms, and au- tonomous agents. Further, it continues in the direction charted by MANTA: to connect the physics of an artificial environment to the behavior of the artificial organisms which live within this environment. 4.3.1 The simulator The environment The LEE environment is a two-dimensional torus. On this torus is distributed a collection of "atoms" of various types (A,B,C...). Some atoms can be "combined" by organisms to release energy, as if in a chemical reaction. The energy is latent in the environment and does not become "kinetic" until acted on by the organisms. The energy released in a chemical reaction can be positive or negative, and chemical byproducts may be released as well. The Creatures The creatures performing on this metaphorically chemical stage are simple in structure. They have a "gut" into which they can put one atom at a time. They also have sensors, either contact or ambient. Contact sensors are capable of indicating the presence or absence of an atom directly adjacent to the sensor. Ambient sensors, on the other hand, can perform local, directional averages of atom density. Ambient sensors can be used to support behaviors in which the creature approaches atoms which it requires, while contact sensors are only active in behaviors executed on the actual encounter of the creature with an atom. In addition to these external sensors, there is an internal sensor which reports the presence and type of atom in the gut of the organism. LEE creatures have a collection of "motors" which allow then to move forward, or turn. Finally, they are endowed with neural 22 network brains (of a classic feed-forward structure, with back-propagation of error). The brain coordinates the interaction between the sensory and motor systems. When a creature encounters an atom it can combine it with the atom in its gut if there is one. This may release energy which then becomes available to drive further motion of the agent. 
The Life Cycle

In the main loop in LEE, organisms execute the following actions in order:

- Gather information about their surroundings and internal state via their sensors.
- Use neural computation to map sensory information into motor action.
- Make a movement in space by actuating the motors.
- Optionally, feed back the result of the motion to the neural net and adjust weights.
- Compute the energy budget. Movement during this cycle may have cost energy, and chemical reactions may have either released or consumed energy. If the energy remaining is zero, the organism dies and is removed from the population; if it is high enough, the organism reproduces asexually; if it is intermediate, the organism simply continues to the next cycle.

Conceptually, these actions are executed in parallel for all organisms in the environment. Reproduction is governed by parameters in the genetic algorithm. Fitness is measured either according to the rate of energy uptake or the rate of offspring production. As an energy threshold must be achieved in order to produce offspring, these measures are obviously related. Upon reproduction, offspring are subject to mutation. The genome specifies parameters of the sensors, motors, and neural nets.

4.3.2 The code

LEE consists of about 7000 lines of C code. It is straightforward in design. LEE runs on Unix and Macintosh platforms, with a multi-window interface in each case.

The physics of the LEE world consists of a two-dimensional matrix of cells. In each cell is a linked list of the atoms and organisms present at that spatial location. The chemistry, on the other hand, is described in a "reaction table" matrix which specifies which reactions between atoms are possible, the energy loss or gain associated with each reaction, and all possible byproducts of the reaction.

The population of creatures is represented by a linked list of structures. Each structure contains data describing the genotype and phenotype of the individual. The genotype contains, for instance, the specification of the sensory-motor system configuration. The phenotype contains information modifiable during the lifetime of the individual, such as network weights, rate of energy uptake, age, and so on.

4.3.3 A typical experiment

A LEE organism needs to learn what kinds of reactions are beneficial (yield energy), which reactions are harmful (require energy), and which reactions among the possible atoms in the world simply do not occur. Its task then is to seek out beneficial reactions, avoid deleterious reactions, and ignore non-reactions. In [FR94] a simple environment is considered with three atoms, A, B, and C, such that a reaction A+B releases energy, A+A or B+B requires energy, and C does not participate in any reaction. The authors' goal in these experiments was to compare learning and evolution as methods for obtaining appropriate sensory-motor couplings to accomplish this task. They find, as one might expect, that learning combined with evolution produces better results than either alone. One surprising result, however, is that prediction learning is ineffective relative to reinforcement learning in producing individuals with advanced capabilities to exploit the energy in their environment. That is, in prediction learning, an organism attempts to anticipate the situation in which it will find itself if it executes a particular action. It is rewarded if its prediction corresponds to the situation which actually obtains when the action is executed.
In reinforcement learning, by contrast, the organism is rewarded simply on the basis of the results it obtains (an increase or decrease of its energy balance) when it executes an action. Prediction learning might be considered to be more intelligent, and thus likely to produce higher fitness than brute reinforcement learning, but the data support rather the opposite conclusion.

4.3.4 Conclusions

While LEE is significantly more general in its approach than Tierra or MANTA, it is bound to a number of ad-hoc choices in its architecture due to the specific research interests of its creators. It does not allow for easy changes in the topology of the space, or for overlays of many spatial variables, for instance. The generality of LEE, such as it is, results from the view of its authors that the processes of learning and evolution are tightly interconnected, and connected as well with the physics of the environment. LEE provides one or a few ways in which these elements can be assembled, and explores the consequences of this assembly. It has been useful for us to consider this work here as it is a typical example of a variegated artificial world which a general-purpose Alife simulator would need to support. For further information on LEE, see [FR93a, FR93b, FR94], as well as section 7.5. Some related systems are [Ada94, AB94, Hol93, Dum94].

4.4 A General-Purpose Simulator: Swarm

The Santa Fe Institute's Swarm project(2) is aimed at the development of a fully general-purpose artificial-life simulator. The primary goal of the Swarm simulation system is to save researchers from having to deal with all of the computer-science issues involved in the implementation of concurrent, distributed artificial worlds. Swarm provides a wide spectrum of "generic" artificial worlds populated with "generic" agents, a large library of design and analysis tools, and a kernel to drive the simulation. These artificial worlds can vary greatly in their properties, from 2-D spatial worlds in which agents move about, to graphs representing telecommunication networks through which static agents trade messages and commodities. Whatever the specific "physical" characteristics of the universe of discourse, Swarm provides a general, uniform framework allowing researchers to concentrate on their specific system of interest, to directly compare scientific results with other users of Swarm, and to eliminate wasteful duplication of basic simulation functions from model to model.

______________________________
(2) This section is based on the document "An Overview of the Swarm System", by N. Minar, H. Gutowitz, R. Burkhart, and C. Langton.

For example, a researcher interested in simulating a colony of ants could select a generic 2-dimensional world from the Swarm library, along with a generic class of "agents" which already know about 2-D worlds and how to communicate with other such agents. Then the researcher could add attributes and rules to both the "physical" world and the agents. To create an ant simulation, for example, one might:

- allow all of the sites in the world to take on a concentration of pheromone;
- add a rule to diffuse the pheromone between sites;
- locate a "nest" at a particular place in space;
- endow the agents with the ability to sense the concentration of pheromone in their neighborhood;
- endow the agents with the ability to move in the direction of the highest concentration.

A schematic sketch of such a pheromone-following simulation is given below. A prototype version of Swarm currently exists, and major revisions are underway.
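Here is the minimal, free-standing sketch of such a pheromone world promised above. It is not Swarm code; the grid size, diffusion rate, emission rate, and sensing rule are invented for the illustration.

    import random

    SIZE = 20           # assumed grid dimensions (toroidal, as in many Alife worlds)
    DIFFUSION = 0.1     # assumed fraction of pheromone a site shares with its neighbors
    NEST = (10, 10)

    pheromone = [[0.0] * SIZE for _ in range(SIZE)]

    def neighbors(x, y):
        # Four-neighborhood on the torus.
        return [((x + dx) % SIZE, (y + dy) % SIZE)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    def diffuse(field):
        """One diffusion step: each site shares a fraction of its pheromone."""
        new = [[0.0] * SIZE for _ in range(SIZE)]
        for x in range(SIZE):
            for y in range(SIZE):
                share = field[x][y] * DIFFUSION
                new[x][y] += field[x][y] - share
                for nx, ny in neighbors(x, y):
                    new[nx][ny] += share / 4
        return new

    class Ant:
        def __init__(self):
            self.x, self.y = random.randrange(SIZE), random.randrange(SIZE)

        def step(self, field):
            # Move toward the neighboring site with the highest pheromone.
            self.x, self.y = max(neighbors(self.x, self.y),
                                 key=lambda pos: field[pos[0]][pos[1]])

    # The nest emits pheromone each time step; ants climb the gradient.
    ants = [Ant() for _ in range(10)]
    for _ in range(200):
        pheromone[NEST[0]][NEST[1]] += 10.0
        pheromone = diffuse(pheromone)
        for ant in ants:
            ant.step(pheromone)

    near_nest = [NEST] + neighbors(*NEST)
    print(sum(1 for a in ants if (a.x, a.y) in near_nest))  # ants gathered at the nest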
Here we will describe salient features of the prototype, many of which will be carried over into the mature version. We then consider briefly the types of improvements which are envisioned.

4.4.1 The prototype

The prototype version of Swarm was written at the Santa Fe Institute by Dave Hiebeler, under the influence of Chris Langton. Howard Gutowitz and Nelson Minar also contributed various code modules and design influences. In its prototype form, Swarm is an operating system for handling populations of interacting, autonomous agents such as those typically found in Artificial Life simulations. Swarm also attempts to make the researcher's job easier by providing tools for user interface and data analysis.

Though the prototype is written in pure C, it is object-oriented in style; everything in Swarm is an object. Objects communicate with other objects by sending them messages. The simulation is driven by a special object, the Object List Manager (OLM), which maintains a list of the active object population and sends step messages when it is time for objects to update themselves.

All inhabitants of the artificial world (bugs, economic agents, molecules, etc.) are objects. In addition, the environment (or space) the objects live in is itself an abstract object. A space object defines the geometry of a space and manages a set of spatial variables associated to it by the user. The space object defines methods for actions which depend only on spatial geometry, such as the movement of an agent, and does so in such a way that user-defined agents can take immediate advantage of these methods. A space object also makes available to the user a set of general functions operating on spatial variables.

The interface to the prototype Swarm consists of a collection of tools for analysis and visualization. Data-analysis objects (such as objects that compute averages) exist within the Swarm world; these objects then talk to user-interface objects to present X Windows displays of data, as well as to store data to files for batch-time data collection. Swarm provides the interface; researchers spend their time developing the simulation.

The Prototype Design

The prototype Swarm allows the user to create, modify, and destroy objects. Each object has a few standard attributes managed by Swarm, as well as user-specified private data. Objects possess a private data store and are at liberty to make data in this store public.

Swarm computations are organized around regular, global time steps. The computations performed by an object are described in its step function and post-step function. At each time step, Swarm invokes the step function of each object under its command, causing each object to perform some (usually small) amount of computation. The regular triggering of the individual step functions does not entail synchronous activity of the objects. Objects may be designed, for instance, to respond at irregular intervals to the global trigger. After the step functions of all objects have executed, Swarm invokes the post-step functions of the objects. The need for both step and post-step functions for objects is already evident in simulations of simple dynamical systems such as cellular automata. In a cellular automaton, each cell (each object) computes its next state based on its own current state and the current states of its neighbors.
To simulate this synchronous process on an asynchronous machine, each object must compute its next state and save it in some private store, only posting its new current state when all other objects have finished computing theirs. This distinction between step and post-step may have utility beyond the simulation of synchronous processes, allowing the simulator to be easily ported to a distributed-computation environment, for instance.

Much of the flexibility of the Swarm system derives from its message-passing mechanism for modeling the action of one object on another. By placing symbolic communication between objects and "mechanical" action of one object on another in the same framework, the representation of numerous simulation activities is greatly simplified. In particular, it simplifies the implementation of an "artificial physics" describing the world in which the autonomous agents operate.

Physics in Swarm

Swarm is designed to handle situations in which agents interact not only with each other, but also with their "physical" environment. Often, the physical environment can be represented by a set of spatial variables. In Swarm these spatial variables are handled just like any other object. To see this, consider a simple autonomous-agent model of the economics of pollution which has been implemented in Swarm. In this simulation agents have the choice of a "red" or a "green" product. The red products are polluting; they emit some pollution into the environment at every time step. This pollution then diffuses through space. This works in detail as follows: at each simulation step, all agents possessing a "red" product send a message to the space module instructing it to "pollute". The space module in turn sends a message to the pollution-field object giving instructions on where the pollution variable should be incremented and by how much. After all agents have contributed their updates to the pollution field, a "diffuse" module sends instructions to the "space" module telling it how the pollution field should be updated to reflect one step of a diffusion process. The advantage of this arrangement is that the "diffuse" module need not encode any information about the geometry of space; this is handled by the space module. In turn, the space module need know nothing about the mechanisms of diffusion; this is handled by the "diffuse" module. The result of this arrangement is a very flexible mechanism for altering physics to suit the experimenter.

4.4.2 Future versions of Swarm

In summer 1994, the Swarm programmers began a major conceptual redesign of Swarm, based on the experiences gained from the prototype. This work is being conducted with major funding from ARPA.

In order to better express the object-oriented design of Swarm simulations, Swarm is being rewritten within a more formal object-oriented framework. The language chosen is Objective C, as a compromise between the ubiquity and efficiency of C and the flexibility provided by a runtime object-oriented environment. Some of the design goals for the new version of Swarm are:

- Make Swarm recursive, supporting a hierarchy of functional levels. Thus, an agent at one level of the hierarchy may itself be a whole Swarm operating system consisting of multiple agents interacting with each other one level down in the hierarchy.
- Allow agents to associate and dissociate into new agents, so as to permit emergent structural and functional entities.
4.4.2 Future versions of Swarm

In the summer of 1994, the Swarm programmers began a major conceptual redesign of Swarm, based on the experience gained from the prototype. This work is being conducted with major funding from ARPA. In order to better express the object-oriented design of Swarm simulations, Swarm is being rewritten within a more formal object-oriented framework. The language chosen is Objective C, as a compromise between the ubiquity and efficiency of C and the flexibility provided by a runtime object-oriented environment. Some of the design goals for the new version of Swarm are:

- Make Swarm recursive, supporting a hierarchy of functional levels. Thus, an agent at one level of the hierarchy may itself be a whole Swarm operating system consisting of multiple agents interacting with each other one level down in the hierarchy.

- Allow agents to associate and dissociate into new agents, so as to permit emergent structural and functional entities.

- Provide a flexible update scheme, allowing a wide variety of synchronous and asynchronous, time-stepped and event-driven simulations.

- Supply general-purpose, efficient algorithms for common simulation needs, such as finding nearest neighbors in spaces of various geometries.

- Supply a basic collection of evolution and learning algorithms.

- Provide a convenient, powerful user interface, building on the Tk/Tcl library to allow both graphical interaction and batch-mode processing.

- Supply a "Physics Toolkit" which allows users to simply specify the kinds of Space and Time operating in their artificial world. In particular, give the user 0-, 1-, 2-, ...-dimensional geometries which they can plug and play, as well as choices for continuous- or discrete-time updating in each of several sub-phases of a simulation.

- Develop a general-purpose scheme for querying data from sub-collections of agents. Make this query language flexible enough that agents themselves can emit queries and build sub-collections of other agents.

The core Swarm team will continue to concentrate on basic architecture and common needs, and will test the Swarm design across a wide spectrum of potential applications. Long-term goals include making Swarm portable to small environments such as Windows and the Macintosh, as well as taking advantage of parallel resources such as a CM-5 or a network of workstations. In addition, higher-level programmer interfaces are being considered to further simplify the Swarm user's work.

References: For further information about Swarm, send mail to swarm-request@santafe.edu. The prototype is described in detail in [Gut93].

4.4.3 Conclusions

Swarm is a project to watch as an indicator of trends in artificial life simulation. The project director, C. Langton, is largely responsible for setting the direction of artificial life research. If artificial life is an effort to abstract the necessary features of life from the study of alternate embodiments of life-like processes, Swarm is an effort to abstract the necessary features of artificial life from the study of alternate embodiments of Alife simulation. In its present state, Swarm is far from being the user-friendly, portable system its designers envision. A user must be a motivated, seasoned C programmer in order to access any but the most elementary features of Swarm. This difficulty is keenly felt by the researchers associated with the project, and solutions are in development. A number of fundamental modules are still missing, notably for genetic algorithms and neural nets. It should be clear from this report, however, that good basic packages exist, and these can simply be attached to the Swarm operating system.

5 Conclusions

Science proceeds in waves. Over the last 20 years we can distinguish three related and overlapping currents, each beginning in the avant-garde and ending in the mainstream, with the following (very approximate) dates: dynamical systems, notably chaos theory, 1975-1985; neural networks, 1983-1990; artificial life/autonomous agents, 1987-present. The purpose of this report has been to illuminate some facets of this latest wave. Like those that preceded it, this wave will inevitably shift in emphasis from theoretical, exploratory work to more directed, applications-oriented research. We have endeavored to highlight aspects of the theoretical developments and to point to some emerging applications. This is a field in the course of rapid mutation and growth.
Still, the concern with simulators is likely to remain central. The major message of this report is this: an enormous amount of effort is currently being spent in the Alife community on the creation of yet another agent-based simulator. There are broad commonalities in the design of these simulators which could be built upon to standardize language and eliminate duplication of effort. Swarm, in particular, is a thrust in this direction. On the other hand, any implementation carries with it the biases of its creators, and over-standardization could result in an insalubrious freezing of ideas. The needs addressed by Tierra are likely to remain sufficiently distinct from the needs addressed by SOAR, for instance, that no single simulator will ever satisfy the entire Alife community. Nonetheless, the very effort to comprehend what a fully general-purpose simulator might consist of is an effort toward deeper understanding of the true and unique character of this field of research.

6 Research Groups in Artificial Life

6.1 Research Groups in France

In Europe, and in France in particular, research in artificial life has not, for the most part, reached the stage of maturity where there are a significant number of research groups devoted entirely to artificial life. Most artificial life work is done in research teams nominally devoted to more traditional concerns. That is, there are any number of groups working on genetic algorithms, neural nets, robotics, or artificial intelligence, but with an artificial life slant. The work of these groups is often reported at a sequence of major, well-established artificial life conferences in Europe (ECAL, the European Conference on Artificial Life: Paris 1990, Brussels 1993 [DGN+93]). There is also a sequence of meetings devoted to adaptive behavior [MW91, Mae91]. There are in addition numerous smaller meetings, sometimes exclusively French or French-speaking, for instance the recent "Journees Francophones de l'Evolution Artificielle" (Toulouse, Sep. 1994).

- Universite de Paris VI. Group headed by J. Ferber. Autonomous agents, applications to biology. Laforia-IBP; Universite Paris 6-C169; 75252 Paris cedex 05; France. For recent work of this group see [Dro93, Fer94].

- Ecole Normale Superieure, Departement de Physique Statistique. Gerard Weisbuch heads a small group working on complex systems, including Alife approaches to immunology and economics. Laboratoire de Physique Statistique; Ecole Normale Superieure; 24 rue Lhomond; F 75231 Paris cedex 05, France.

- Ecole Polytechnique. Work on neural nets and genetic algorithms. Centre de Mathematiques Appliquees; Ecole Polytechnique; 91128 Palaiseau Cedex.

- Ecole Normale Superieure, Adaptive Computation Group. Head: J-A. Meyer. French center for work in adaptive computation. Groupe de BioInformatique-URA686 CNRS; Ecole Normale Superieure; 46, rue d'Ulm, 75230 Paris Cedex 05. Some of the work of this group can be obtained by ftp: ftp.ens.fr:`/pub/reports/biologie'.

- LIFIA, Laboratoire d'Informatique Fondamentale et d'Intelligence Artificielle, of the IMAG institute (Informatique et Mathematique Appliquee de Grenoble). This large computer science department supports work in autonomous agents and virtual reality, among many other subjects [LIF]. CNRS-IMAG/LIFIA; 46 ave. Felix Viallet; 38031 Grenoble Cedex, France.

- Dassault. Some work in genetic programming and autonomous agents.
Dassault-Aviation; Artificial Intelligence Department; DGT/DEA/IA2; 78 quai Marcel Dassault; 92214 Saint-Cloud, France.

- Equipe Synthese d'Images. Virtual reality, autonomous agents, emergence of behavior. Universite Paul Sabatier; 118 Rte de Narbonne; 31062 Toulouse, France.

- Group headed by Paul Bourgine, one of the leaders in Alife work in France. CEMAGREF, Laboratoire de IA et Vie Artificielle; Parc de Tourvoie; 92185 Antony cedex, France.

- Telecom. Telecom does work in various aspects of Alife, including evolution, co-evolution, and autonomous agents. Telecom Bretagne, Laboratoire de IA et Sciences Cognitives; B.P. 832, F-29285 Brest cedex, France.

- CNET. Work on autonomous agents and the simulation of collective intelligence. CNET Lannion B-RIO/TNT; 22301 Lannion Cedex, France.

- CNRS-URA 1837. Work on Alife modeling of social insect behavior. CNRS-URA 1837; Laboratoire d'Ethologie et de Psychologie Animale; Universite Paul Sabatier; 31062 Toulouse, France.

6.2 Foreign Groups

Beyond Europe, the major centers of activity in Alife are the United States and Japan. Japan, in particular, has been investing heavily and at an increasing pace in potential industrial applications of Alife. It is significant that the next international Alife conference will be held in Japan (Kyoto, 1996). The following list, hardly exhaustive, includes research groups, academic, industrial, and military, with significant efforts in Alife simulation.

- Evolutionary Systems Department, Human Information Processing Research Laboratories, Advanced Telecommunications Research Institute International (ATR), Kansai Science City. Head: Katsunori Shimohara. Many Alife projects, pure and applied, including Tierra. Evolutionary Systems Department; ATR Human Information Processing Research Laboratories; 2-2 Hikaridai, Seika-cho, Soraku-gun; Kyoto, 619-02 Japan.

- Laboratory for Biological Informatics and Theoretical Medicine (BITMed), UC San Diego. Head: Hans B. Sieburg. Work in autonomous-agent approaches to medicine and to information retrieval. Department of Psychiatry, M-003-H, University of California, San Diego, La Jolla, CA 92093.

- Rowland Institute, Cambridge, Mass. Numerous pure and applied projects in Alife and in evolutionary and adaptive computation. The Rowland Institute for Science; Cambridge, MA 02142, USA.

- University of Sussex. Evolutionary robotics. School of Cognitive and Computing Sciences; University of Sussex; Brighton BN1 9QH, England.

- UCLA, Artificial Life Group. Head: David Jefferson. One of the most well-established American groups in Alife. Department of Computer Science, University of California, Los Angeles; Los Angeles, CA 90024.

- MIT is a major center for work related to Alife. It hosted the last Alife conference (Alife IV). Much of the activity is centered at the Media Lab, including:
  - the Epistemology and Learning Group, concerned with using Alife methods to teach children [RM];
  - the Robotics Group, concerned with building robot colonies [Bro86, Bro89].
  The Media Lab; 20 Ames Street; Cambridge, MA 02139, USA.

- Bioinformatica, Utrecht. Head: Pauline Hogeweg. Alife pioneer, notably in the use of simulation [Hog89]. Bioinformatica, Padualaan 8, 3584 CH Utrecht, The Netherlands.

- Free University of Brussels. Group head: J-L. Deneubourg. Both theoretical and experimental studies of collective computation in social insects [DGN+93]. Center for Non-Linear Phenomena and Complex Systems; CP 231, Universite Libre de Bruxelles; Bld.
de Triomphe, 1050 Brussels, Belgium.

- The Santa Fe Institute. Chris Langton, head of the Alife group. Influential think tank for Alife and complex systems [LTFR91]. 1399 Hyde Park Road; Santa Fe, NM 87501, USA.

- Xerox PARC, Palo Alto. Bernardo Huberman heads a group which publishes widely on Alife-related subjects [HG93]. Dynamics of Computation Group; Xerox Palo Alto Research Center; Palo Alto, CA 94304, USA.

- Tokyo University. Group head: Kuni Kaneko. Kaneko, a leading researcher in dynamical systems, has turned work in his group toward various Alife subjects; see e.g. [KI92]. Department of Pure and Applied Sciences; University of Tokyo; Komaba, Meguro-ku; Tokyo 153, Japan.

- Adaptive Systems Theory Section, Defence Research Agency, Worcestershire, United Kingdom. Some work on military applications of Alife.

- Cognitive Computer Science Research Group. Richard K. Belew, director. Computational approaches to artificial and natural systems, in particular adaptive knowledge representations. Dept. of Computer Science and Engineering, 0114; University of California, San Diego; La Jolla, CA 92093-0114, USA; Fax: (619) 534-7029; http://www-cse.ucsd.edu:80/users/rik/

7 World Wide Web Sources for Artificial Life Simulation

Note: this section, in particular, is much more useful when processed as hypertext.

7.1 General

- Chaos Network: applications of chaos theory to social systems. http://www.prairienet.org/business/ptech/chaos.html

- CS bibliographies: Computer Science bibliographies. ftp://ftp.cs.umanitoba.ca/pub/bibliographies/index.html

- Complex Adaptive Systems: complex (adaptive) systems information, including many Alife sources. http://www.seas.upenn.edu/~ale/cplxsys.html

- Complexity: bibliography of measures of complexity. http://149.170.198.4/combib/combib.html (alternate: http://alphard.cpm.aca.mmu.ac.uk/combib/combib.html)

- Cellular Automata: http://alife.santafe.edu/alife/topics/cas/ca-faq/ca-faq.html

7.2 Genetic Algorithms

- Evolution Catalog: catalog of information about evolution. http://golgi.harvard.edu/biopages/evolution.html

- GA repository: Illinois Genetic Algorithms Repository (major source). http://gal4.ge.uiuc.edu/

- Interactive genetic art (evolves according to user preferences). http://robocop.modmath.cs.cmu.edu:8001/htbin/mjwgenformI

- Genetically programmed music. http://nmt.edu/~jefu/notes/notes.html

- FAQ: Frequently Asked Questions about Genetic Algorithms. http://www.cs.cmu.edu:8001/afs/cs.cmu.edu/project/ai-repository/ai/html/faqs/ai/genetic/top.html

- Cybernetica: Principia Cybernetica server; various topics, including evolutionary theory and algorithms. http://pespmc1.vub.ac.be/

7.3 Genetic Programming

- Various sources for genetic programming: http://www.salford.ac.uk/docs/depts/eee/genetic.html, http://infopad.eecs.berkeley.edu/~burd/gpp/cpu.html, http://www.cs.wisc.edu/~smucker/smucker.html

- C++ code for genetic programming: ftp://ftp.cc.utexas.edu/pub/genetic-programming/code (alternate: ftp://ftp.salford.ac.uk/pub/gp)

7.4 Neural Nets

- UofA: neural networks and robotics at the Department of Autonomous Systems, University of Amsterdam. http://carol.fwi.uva.nl/~smagt/neuro

- Florence: neural nets at the Dipartimento di Sistemi e Informatica (University of Florence, Italy). Links to other NN servers. http://www-dsi.ing.unifi.it/neural/home.html
- Technical Reports: a searchable index of Computer Science technical reports, including Neuroprose. http://cs.indiana.edu/cstr/search

- Bibliographies: search of a large collection of neural network BibTeX bibliographies. http://glimpse.cs.arizona.edu:1994/bib/Neural/

- Georgia Tech: Cognitive Science at Georgia Tech. http://www.gatech.edu/cogsci/cogsci.html

- Virtual Library: the World-Wide Web Virtual Library: Cognitive Science. http://www.cog.brown.edu/pointers/cognitive.html

7.5 Alife Simulators and Research Groups

- Jim Clark: Alife simulation code by Jim Clark (requires the Harvard hvision package). ftp://metatron.harvard.edu/pub/alife/ (hvision: ftp://metatron.harvard.edu:pub/hvision/HVision.tar.Z)

- Fishwick: "Computer Simulation: Growth Through Extension", a paper by Paul A. Fishwick concerning Alife models. ftp://ftp.cis.ufl.edu/cis/tech-reports/tr94/tr94-015.ps (see also http://www.cis.ufl.edu/~fishwick/book/book.html)

- Simulation: the Simulation Digest Project. gopher://gopher.cis.ufl.edu/11/cis/simulation

- Maes: the Autonomous Agents/Alife Group at MIT. http://agents.www.media.mit.edu/groups/agents/

- Alife Groups: links to various Alife groups. http://www.krl.caltech.edu/~brown/AL-groups.html

- LEE Home Page: the Latent Energy Environments project. http://www-cse.ucsd.edu:80/users/fil/

- ECAL 95: information on the 1995 European Conference on Artificial Life, Granada, Spain, 4-6 June. http://kal-el.ugr.es

7.6 Computer Science and Autonomous Agents

- UMass: distributed artificial intelligence at the University of Massachusetts Computer Science Department. http://dis.cs.umass.edu/research/cig.html

- Stanford: knowledge sharing at Stanford. http://logic.stanford.edu/knowledge.html

- UMBC: UMBC Agents Home Page. http://www.cs.umbc.edu/agents/

- KQML-UMBC: UMBC KQML Home Page. http://www.cs.umbc.edu/kqml/

- DAI-London: Distributed Artificial Intelligence Research Unit of the Department of Electronic Engineering, Queen Mary & Westfield College, University of London. http://www.elec.qmw.ac.uk/dai.html

- Software Agents: software agents mailing list. http://hitchhiker.space.lockheed.com/pub/AGENTS/htdocs/agent-home.html (ftp: ftp://hitchhiker.space.lockheed.com/pub/AGENTS/htdocs/agent-home.html)

- Pattern: pattern recognition. http://galaxy.ph.tn.tudelft.nl:2000/PRInfo.html

- Web Agents: intelligent agents for the World Wide Web. http://hcrl.open.ac.uk/

- Robotics: http://web.nexor.co.uk/mak/doc/robots/robots.html

- KQML: KQML (Knowledge Query and Manipulation Language), a high-level agent communication language. This page also contains pointers to information on other languages, protocols, and frameworks in the autonomous agent/distributed artificial intelligence area. http://www.cs.umbc.edu/kqml/

- Meeting: the 1995 International Conference on MultiAgent Systems (ICMAS-95), June 12-14, 1995, San Francisco. Major meeting related to Alife and autonomous agents. http://ICMAS.cs.umass.edu/ICMAS

8 Program List

Following is an annotated list of Alife programs, most available by ftp. Some of these programs are treated more fully in the text.

[Bal] A. Ballim. Viewgen.
Viewgen (Viewpoint Generator) is a Prolog program that implements an agent modelling tool allowing the generation of arbitrarily deeply nested belief spaces based on the agent's own beliefs and on beliefs held by groups of agents. ViewGen is available by anonymous ftp from crl.nmsu.edu:`pub/ViewFinder'. For the theory behind Viewgen see A. Ballim's dissertation, crl.nmsu.edu:`ViewFinder-A4/A5/US.tar.Z'.

[fBIB] The Laboratory for Biological Informatics and Theoretical Medicine (BITMed). CDM-DS. The Cellular Device Machine Development System (CDM-DS), equipped with its own object-based Simulation LANGuage, SLANG. Version 3.1 is available for UNIX and Macintosh platforms. Simulation of complex systems. Current applications include modelling of the immune and neuroendocrine systems in mice, database mining, HIV disease progression, and molecular biology. bitmed.ucsd.edu:`/pub/simulators'.

[GEN] GENESIS. General neural simulation system. GENESIS (GEneral NEural SImulation System) is a general-purpose simulation platform which supports the simulation of neural systems ranging from complex models of single neurons to simulations of large networks made up of more abstract neuronal components. GENESIS is available by anonymous ftp from genesis.cns.caltech.edu. Before using ftp, you must telnet to genesis.cns.caltech.edu and log in as the user 'genesis' (no password required) to register.

[Hei] David Hiebeler. Cellsim. A well-thought-out simulator for cellular automata. ftp.santafe.edu:`/pub/misc/cellsim_2.5.tar.Z'.

[Inc] Maxis Inc. "Civilization", "SimLife", "SimCity", and "SimAnt" programs for Mac and PC. A suite of "serious games" for learning about artificial life, the management of resources, and so on.

[Kee94] Richard Keene. Darwin, an Alife program. ftp.krl.caltech.edu:`/pub/alife/programs/darwin.tar.Z', 1994.

[Lee] Jon Leech. L-systems. lsys is a program for generating artificial 'plants' based on production systems known as L-systems. Contact: Jon Leech.

[Men] Filippo Menczer. LEE release 1.0, Latent Energy Environments. A program for the study of the "bioenergetics" of artificial life. Available from cs.ucsd.edu.

[MUM] MUME. Multi-Module neural computing Environment. MUME is a simulation environment for multi-module neural computing. It provides an object-oriented facility for the simulation and training of multiple nets with various architectures and learning algorithms. An overview is available from 129.78.13.39:`/pub/mume-overview.ps.Z'. MS-DOS version of the program: brutus.ee.su.oz.au:`/pub/MUME-0.5-DOS.zip'.

[NAS93] NASA. CLIPS 6.0. Technical report, NASA, 1993. CLIPS 6.0 is a general-purpose expert-system programming language written in ANSI C by NASA. The CLIPS inference engine includes truth maintenance, dynamic rule addition, and customizable conflict-resolution strategies. An extension to CLIPS, called Dyna-CLIPS, permits the creation of populations of expert systems which can exchange rules and information with each other. To subscribe to the CLIPS mailing list, send a message to the list server with the message body SUBSCRIBE CLIPS-LIST. For Dyna-CLIPS, anonymous ftp: ftp.cs.cmu.edu:`user/ai/software/expert/clips/dyna/dynaclips_v1.tar.gz'.

[Ray] Tom Ray. Tierra. Tierra is a system for studying the evolution of digital organisms.
Source code and documentation are available by anonymous ftp at tierra.slhs.udel.edu:`/tierra'. To be added to either the tierra-announce (official announcements only) or tierra-digest (moderated discussion plus announcements) mailing lists, send a request by email.

[Res] M. Resnick. *Logo. The cher.media.mit.edu:`ftp/pub/starlogo' directory contains information about *Logo (pronounced star-logo), the massively parallel version of the Logo language. This program has been used, in particular, for Alife educational purposes.

[Tyr] Toby Tyrrell. Animals. ANIMALS is an Alife simulator directed at the study of action selection mechanisms, i.e. mechanisms by which an animal chooses from a variety of (in general mutually exclusive) actions at each moment in time. It comprises both a simulated environment (described by various stimuli impinging on the animal) and simulated animals (described by a set of possible actions to take and a set of internal stimuli). Approximately 35,000 lines of C code. ftp.ed.ac.uk:`pub/lrtt/se.tar.Z'.

[Uni93] Carnegie Mellon University. Soar. An artificial intelligence approach to Alife, known as an Integrated Agent Architecture. Supports learning through chunking (a "chunk" is a summary of the processing required to produce a subgoal in an expert-system task). The SOAR system allows for the creation of multiple SOAR agents. ftp.cs.cmu.edu:`/afs/cs.cmu.edu/project/soar/public/Soar6', 1993.

[vR] W. Kurt von Roeschlaub. Creature evolver. This program is used to study the evolution of chemotaxis (the location of an object by chemical senses) by an organism with a neural net. Turbo C++. ics.uci.edu:`/pub/origins/software/creavolv/creaexe.zip'.

9 Internet Resource Guide

The most up-to-date information on Alife simulators and related fields is found on the Internet. The following is an annotated list of some of the most relevant sources. These are in addition to those which may be obtained via the World Wide Web using a network browser such as NCSA Mosaic (Section 7). Documents and software not served by WWW are available by anonymous ftp. Throughout this report ftp references are given in the form machine.name:`filename'. To get, for example, alife.santafe.edu:`pub/topics/cas/ca-faq.ps':

- type "ftp alife.santafe.edu"
- answer "anonymous" at the "user" prompt
- enter your email address at the "password" prompt
- type "get /pub/topics/cas/ca-faq.ps"

References listing an email address can be obtained by sending a request to that address. Note that this reference list contains a number of pointers to electronic discussion groups, some in the official usenet hierarchy, some not. These groups provide a forum for discussion of various technical issues. Some groups maintain their own databases. These are often superb sources of current and complete information.

References

[Age] Software Agents. Software agents mailing list. To join this list, send email with a message body containing 'subscribe agents' to the list server.

[ASS] ASSA. Advances in Systems Science and Applications. An electronic journal devoted to general systems theory. For information, use anonymous ftp to assa.math.swt.edu or inquire by email.

[Bon93] Eric Bonabeau. Electronic mailing list "intelligence collective" (collective intelligence). Route de Tregastel, 22301 Lannion Cedex; tel: 96 05 31 07, 1993.

[CKB94] CKBS. CKBS mailing list. A Cooperating Knowledge Based System (CKBS) is an applied multi-agent system.
One major goal is the development of generic models for the solution of real-world problems. This mailing list is for the distribution of conference and paper announcements in this area. To subscribe, send mail to the list server, 1994.

[Cl] COR-list. Computational research in organization theory, analysis, and design. A mailing list for discussion of computational organization research.

[CLI] CLIPS. CLIPS user group. Discussion of the CLIPS system. Email with subject: SUBSCRIBE CLIPS-LIST.

[com] comp.ai.alife. Artificial life usenet group. A group in the usenet hierarchy devoted to artificial life. A FAQ list is in progress.

[Con] X Consortium. x-agent mailing list, for discussion of X Windows autonomous agents.

[Das] Bhaskar Dasgupta. A bibliography on neural nets and genetic algorithms in financial forecasting. Approximately 130 references.

[Dem] Yves Demazeau. Mailing list for discussion of modelling autonomous agents in a multi-agent world.

[ELB] ELBA. E.L.B.A. (Electronics and Biotechnology Advanced). Information concerning the use of biological materials and biological architectures for information processing systems and new devices. Includes information on neural networks and cellular automata. To subscribe, send email to the list server.

[?] Frequently asked questions about distributed artificial intelligence. In preparation (Nov. 1994).

[FUN] FUNIC. The FUNIC neural net archive. A large collection of neural network papers and public-domain software gathered from FTP sites in the US. funic.funet.fi:`/pub/sci/neural'.

[GD] Germany-DAI. VKI-Rundmail, a German DAI mailing list. Primary medium for information exchange in the German distributed artificial intelligence community.

[GMT] D. Goldberg, K. Milman, and C. Tidd. Genetic algorithms: A bibliography. Illinois Genetic Algorithms Laboratory, UIUC. Massive: 1200 references.

[Gro] UCLA Alife Research Group. A bibliography on artificial life. Annotated bibliography in BibTeX format containing nearly 500 references on all aspects of Artificial Life. ftp.cognet.ucla.edu:`ftp/pub/alife/ALife.bib'.

[Gut] Howard Gutowitz. Frequently asked questions about cellular automata. Text plus over 600 references on CA. alife.santafe.edu:`/pub/topics/cas/ca-faq.ps'.

[Hei] Joerg Heitkoetter. The hitch-hiker's guide to evolutionary computation: A list of frequently asked questions. Latest information on evolutionary computation. rtfm.mit.edu:`pub/usenet/news.answers/ai-faq/genetic/part?'

[HH94] Bernardo A. Huberman and Tad Hogg. Communities of practice: Performance and dynamics. parcftp.xerox.com:`pub/dynamics/communities.ps', 1994.

[Huh] Michael N. Huhns. DAI mailing list. DAI-List is a moderated mailing list devoted to research and practice in distributed artificial intelligence and multiagent systems. To subscribe, send mail to the list administrator. Back issues of DAI-List can be obtained via anonymous ftp from the DAI Archives at ftp.einet.net (192.147.157.225).

[JM] James McCoy, U. Texas. The genetic programming repository. ftp.cc.utexas.edu:`pub/genetic-programming'. Papers and source code, including software from Koza's book [Koz93].

[KQM] KQML. Knowledge Query and Manipulation Language mailing list. For information, send email with the subject line 'help'.

[Lea] Machine Learning. The machine learning mailing list.

[LIF] LIFIA.
FTP server for LIFIA, Laboratoire d'Informatique Fondamentale et d'Intelligence Artificielle, of the IMAG institute (Informatique et Mathematique Appliquee de Grenoble). Contains papers and technical reports on a wide variety of subjects surrounding Alife. imag.fr:`/pub/LIFIA'.

[Mur] Jaap Murre. A review of neurosimulators. An excellent review of roughly 40 simulation packages for neural nets, both commercial and academic. Includes pointers for internet access. ftp.mrc-apu.cam.ac.uk:`pub/nn/neurosim1.ps.Z'.

[Nav] U.S. Navy. The genetic algorithms repository. Contains GA software and a software survey (in `/pub/galist/information/ga-software-survey.txt': ftp.aic.nrl.navy.mil).

[Nis93] Volker Nissen. Evolutionary algorithms in management science. Very complete report on applications in this area. Over 230 references. gwdu03.gwdg.de:`pub/msdos/reports/wi', 1993.

[Pol] J. Pollack. OSU Neuroprose archive. This directory contains technical reports, preprints, and bibliographies on neural networks. archive.cis.ohio-state.edu:`/pub/neuroprose'.

[RM] M. Resnick and F. Martin. Children and artificial life. Discussion of Alife in education. E&L memo 10, November 1990, MIT Media Lab. cher.media.mit.edu:`pub/el-publications/Memos/memo10.PS.Z'.

[Sar93] N. Saravan. References in evolutionary computation. List of references in BibTeX format in the area of evolutionary computation (GA/ES/EP/GP); the file currently has nearly 600 entries. magenta.me.fau.edu:`/pub/ep-list/bib/EC-ref.bib.Z', 1993.

[Tyr] Toby Tyrrell. Animals. Ph.D. thesis describing a simulator for the study of action selection. ftp.ed.ac.uk:`pub/lrtt/as.[1-7]'.

[Uni] Washington University. Archive for technical reports in the philosophy/neuroscience/psychology program. thalamus.wustl.edu:`pub/pnp'.

References

[AB94] Chris Adami and C. Titus Brown. Evolutionary learning in the 2D artificial life system 'Avida'. In Rodney A. Brooks and Pattie Maes, editors, Artificial Life IV. MIT Press/Bradford Books, 1994.

[ABN94] Laurent Atlan, Jerome Bonet, and Martine Naillon. Learning distributed reactive strategies by genetic programming for the general job shop problem. In Seventh Annual Florida Artificial Intelligence Research Symposium, FLAIRS-94, 1994.

[Ada94] Chris Adami. On modelling life. In Rodney A. Brooks and Pattie Maes, editors, Artificial Life IV. MIT Press/Bradford Books, 1994.

[AR91] J.A. Anderson and E. Rosenfeld. Neurocomputing: Vols. 1 and 2. MIT Press, Cambridge, MA, 1988, 1991.

[Atm94] W. Atmar. Notes on the simulation of evolution. IEEE Trans. Neural Networks, 5(1):130-148, 1994.

[BC93] William Bricken and Geoffrey Coco. The VEOS project. Technical report, Human Interface Technology Laboratory, University of Washington FJ-15, Seattle 98195, 1993. Connections between virtual reality and Alife.

[BDM94] Pierre Bessiere, Eric Dedieu, and Emmanuel Mazer. Representing robot/environment interactions using probabilities: the 'beam in the bin' experiment. In P. Gaussier and J-D. Nicoud, editors, PerAc'94: From Perception to Action Conference, Lausanne, Switzerland. IEEE Computer Society Press, 1994. ISBN 0-8186-6482-7.

[Bes93] Pierre Bessiere. Genetic algorithms applied to formal neural networks. In P. Bourgine, editor, ECAL91, 1st European Conference on Artificial Life, Paris. MIT Press, Bradford Books, 1993.

[BG88] Alan H. Bond and Les Gasser. Readings in Distributed Artificial Intelligence. Morgan Kaufmann, San Mateo, CA, 1988.

[BGH89] L.B. Booker, D.E. Goldberg, and J.H. Holland. Classifier systems and genetic algorithms.
Artificial Intelligence, 40(1-3):235-282, Sep 1989.

[BGS+91] P. Brazdil, M. Gams, S. Sian, Torgo, and W. van de Velde. Learning in distributed systems and multi-agent environments. In Y. Kodratoff, editor, Machine-Learning-EWSL-91, LNCS 482, 1991.

[BKG91] Gerald O. Barney, W. Brian Kreutzer, and Martha J. Garrett. Managing a Nation: The Microcomputer Software Catalog. Westview Press, Boulder, 1991. Catalogues all sorts of software tools for future studies and national planning; it discusses data sources and has a good chapter by Sterman titled 'A skeptic's guide to computer models'.

[BM91] P. Brazdil and S. Muggleton. Learning to relate terms in a multiple agent environment. In Y. Kodratoff, editor, Machine-Learning-EWSL-91, LNCS 482, 1991.

[BM94] Rodney A. Brooks and Pattie Maes. Artificial Life IV. Bradford Books, MIT Press, 1994. ISBN 0-262-52190-3.

[Bro86] R. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2:14-23, Apr 1986.

[Bro89] Rodney A. Brooks. A robot that walks: Emergent behaviour from a carefully evolved network. Neural Computation, 1(2), 1989.

[BT94] Eric Bonabeau and Guy Theraulaz. Intelligence Collective. Hermes, 1994.

[CL4] Chris Langton, editor-in-chief. The Artificial Life Journal. MIT Press, 1994-.

[Dal93] R. Daley. Multi-agent learning: Theoretical and empirical studies. In G. Brewka and K.P. Jantke, editors, Nonmonotonic and Inductive Logic. Springer-Verlag, 1993.

[Dav89] Lawrence Davis. Genetic Algorithms and Simulated Annealing. Morgan Kaufmann, 1989.

[Dav91] Lawrence Davis. Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York, 1991. ISBN 0-442-00173-8.

[DB93] D.B. Fogel. On the philosophical foundations of evolutionary algorithms and genetic algorithms. In D.B. Fogel and W. Atmar, editors, Proc. of the Second Annual Conf. on Evolutionary Programming, pages 23-29, La Jolla, CA, 1993. Evolutionary Programming Society.

[DB94] D.B. Fogel. Evolutionary programming: An introduction and some current directions. Statistics and Computing, 4:113-129, 1994.

[DB95] D.B. Fogel. Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE Press, Piscataway, NJ, 1995.

[Dee92] S.M. Deen, editor. Proc. of the CKBS-SIG Workshop 1992, number 1, Keele University, Staffordshire, ST5 5BG, U.K., Sept 1992. DAKE Centre, ISBN 0 9521789 0 7. To obtain, send a message to bairdcs.keele.ac.uk.

[Dee93a] S.M. Deen. A CKBS approach to holonic manufacturing systems. Technical Report no. DAKE/-/TR-93009.0, Data and Knowledge Engineering Centre, Keele University, Staffordshire, ST5 5BG, U.K., 1993.

[Dee93b] S.M. Deen. A general framework for coherence in a CKBS. Journal of Intelligent Information Systems, 2:83-107, Jun 1993. This is also DAKE Centre Technical Report DAKE/-/TR-92012.1.

[Dee93c] S.M. Deen. Systems characteristics of holons for intelligent manufacturing systems. Technical Report no. DAKE/-/TR-93008.0, Data and Knowledge Engineering Centre, Keele University, Staffordshire, ST5 5BG, U.K., 1993.

[Dee94a] S.M. Deen. An architectural framework for some CKBS applications. Technical Report no. DAKE/-/TR-94001.0, Data and Knowledge Engineering Centre, Keele University, Staffordshire, ST5 5BG, U.K., 1994.

[Dee94b] S.M. Deen. Cooperation issues in holonic manufacturing systems. In Y. Yoshikawa and J. Goossenaerts, editors, Proc. of the Design of Information Infrastructure Systems for Manufacturing 1993. Elsevier, 1994.
This is also DAKE Centre Technical Report DAKE/-/TR-93007.0.

[DGN+93] J.L. Deneubourg, S. Goss, G. Nicolis, H. Bersini, and R. Dagonnier, editors. ECAL '93: European Conference on Artificial Life. Addison-Wesley, 1993.

[DKS91] Cihan H. Dagli, Soudar R.T. Kumara, and Shin, editors. Intelligent Engineering Systems through Artificial Neural Networks. ASM Press, New York, 1991.

[Dor94] Georg Dorffner, editor. Neural Networks and a New AI. Chapman & Hall, London, 1994.

[Dor95] Marco Dorigo, editor. Special issue of IEEE Transactions on Systems, Man and Cybernetics (IEEE-SMC) on Learning Approaches to Autonomous Robots Control, 1995.

[Dro93] Alexis Drogoul. De la Simulation Multi-Agents a la Resolution Collective de Problemes. PhD thesis, L'Universite Paris VI, 1993.

[Dum94] Renaud Dumeur. Synthese de Comportements Animaux Individuels et Collectifs par Algorithmes Genetiques. PhD thesis, Universite de Paris VIII, 1994.

[EEK93] G. Bard Ermentrout and Leah Edelstein-Keshet. Cellular automata approaches to biological modeling. Journal of Theoretical Biology, 160:97-133, January 1993.

[F94] F. Menczer. Changing latent energy environments: A case for the evolution of plasticity. Technical Report CS94-336, Jan 1994.

[FA92] D.B. Fogel and W. Atmar. Proceedings of the First Annual Conference on Evolutionary Programming. Evolutionary Programming Society, San Diego, CA, 1992.

[FA93] D.B. Fogel and W. Atmar. Proceedings of the Second Annual Conference on Evolutionary Programming. Evolutionary Programming Society, San Diego, CA, 1993.

[FD93a] Martyn Fletcher and S.M. Deen. Design considerations for optimal intelligent network routing. In S.M. Deen, editor, Proc. of the CKBS-SIG Workshop 1992, pages 19-42. DAKE Centre, Keele University, Staffordshire, ST5 5BG, U.K., 1993. This is also DAKE Centre Technical Report DAKE/-/TR-92015.2.

[FD93b] Martyn Fletcher and S.M. Deen. Multi-agent design issues in congestion management. Technical Report no. DAKE/-/TR-93010.0, Data and Knowledge Engineering Centre, Keele University, Staffordshire, ST5 5BG, U.K., Sept 1993.

[Fer94] J. Ferber. Simulating with reactive agents. In E. Hillebrand and J. Stender, editors, Many-Agent Simulation and Artificial Life. IOS Press, 1994.

[FH87] Scott Fahlman and Geoffrey Hinton. Connectionist architectures for artificial intelligence. IEEE Computer, 20(1):100-109, Jan 1987.

[FJK92] P. Fites, P. Johnson, and M. Kratz. The Computer Virus Crisis. Van Nostrand Reinhold, 1992.

[Fle93a] Martyn Fletcher. Implementing an intelligent network routing model in a CKBS architecture. Technical Report no. DAKE/-/TR-93001.0, Data and Knowledge Engineering Centre, Keele University, Staffordshire, ST5 5BG, U.K., 1993.

[Fle93b] Martyn Fletcher. Some further design considerations for the congestion management mechanism Menthol. Technical Report no. DAKE/-/TR-93005.0, Data and Knowledge Engineering Centre, Keele University, Staffordshire, ST5 5BG, U.K., 1993.

[For91] S. Forrest. Emergent Computation. MIT Press, 1991.

[FOW66] L.J. Fogel, A.J. Owens, and M.J. Walsh. Artificial Intelligence Through Simulated Evolution. John Wiley and Sons, NY, 1966.

[FR93a] F. Menczer and R.K. Belew. Latent energy environments: A model for artificial life complexity. Technical Report CS93-298, July 1993.

[FR93b] F. Menczer and R.K. Belew. Latent energy environments: A tool for artificial life simulations. Technical Report CS93-301, July 1993.

[FR94] F. Menczer and R.K. Belew. Latent energy environments.
In Plastic Individuals in Evolving Populations, Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley, 1994.

[GB92] Les Gasser and Jean-Pierre Briot. Object-based concurrent computation and DAI. In N.M. Avouris and L. Gasser, editors, Distributed Artificial Intelligence: Theory and Praxis. Kluwer Academic Publishers, 1992. The basic argument of this paper is that to implement DAI systems we should find complementary relations between the theory of social organization chosen for DAI problem-solving and the theory of modeling and implementation used for system construction.

[GBH87] L. Gasser, C. Braganza, and N. Herman. MACE: a flexible testbed for distributed AI research. In M.N. Huhns, editor, Distributed Artificial Intelligence, London, 1987. Pitman.

[GD94] Nigel Gilbert and Jim Doran. Simulating Societies: The Computer Simulation of Social Phenomena. UCL Press, London, 1994. ISBN 1-85728-082-2.

[GGRVG85] J. Grefenstette, R. Gopal, B. Rosmaita, and D. Van Gucht. Genetic algorithms for the traveling salesman problem. In Proc. of the 1st International Conference on Genetic Algorithms and Applications, pages 160-168, 1985.

[GH89] Les Gasser and Michael N. Huhns. Distributed Artificial Intelligence, Volume II. Morgan Kaufmann, 1989.

[Gol89] David E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989. ISBN 0-201-15767-5.

[Gre91] J.J. Grefenstette. Lamarckian learning in multi-agent environments. In Proc. 1991 Conference on Genetic Algorithms, pages 303-310, 1991.

[Gut91] Howard Gutowitz. Cellular Automata: Theory and Experiment. MIT Press/Bradford Books, Cambridge, Mass., 1991. ISBN 0-262-57086-6.

[Gut93] Howard Gutowitz. A tutorial introduction to Swarm. Technical report, The Santa Fe Institute, 1993. Santa Fe Institute Preprint Series.

[HD92] S.K. Helsel and S.D. Doherty. Virtual Reality Market Place. Meckler Publishing, London, 1992. ISBN 0-88736-795-X.

[Heu94] Jean-Claude Heudin. La Vie Artificielle. Hermes, Paris, 1994.

[HG93] B. Huberman and N.S. Glance. Social dilemmas and fluid organizations. In J.L. Deneubourg, S. Goss, G. Nicolis, H. Bersini, and R. Dagonnier, editors, ECAL '93: European Conference on Artificial Life, page 496. Addison-Wesley, 1993.

[HHNT88] J.H. Holland, K.J. Holyoak, R.E. Nisbett, and P.R. Thagard. Induction: Processes of Inference, Learning, and Discovery. MIT Press, 1988.

[Hin89] Geoffrey E. Hinton. Connectionist learning procedures. Artificial Intelligence, 40(1-3):185-234, 1989.

[Hin90] G.E. Hinton. Connectionist Symbol Processing. MIT Press, 1990.

[HKP91] J. Hertz, A. Krogh, and R.G. Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley, 1991. ISBN 0-201-51560-1.

[HMSB87] M.N. Huhns, U. Mukhopadhyay, L.M. Stephens, and R.D. Bonnell. DAI for document retrieval: the MINDS project. In M.N. Huhns, editor, Distributed Artificial Intelligence, London, 1987. Pitman.

[HN90] Robert Hecht-Nielsen. Neurocomputing. Addison-Wesley, 1990. ISBN 0-201-09355-3.

[Hog89] P. Hogeweg. Simplicity and complexity in mirror universes. BioSystems, 23, 1989.

[Hol75] J.H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, 1975. Reprinted by MIT Press, 1992.

[Hol93] John Holland. Echoing emergence: Objectives, rough definitions, and speculations for Echo-class models. In G. Cowan, editor, Integrative Themes, SFI Studies in the Sciences of Complexity, Vol. XIX. Addison-Wesley, 1993.

[HS94] E. Hillebrand and J. Stender.
Many-Agent Simulation and Artificial Life. IOS Press, 1994.

[Huh87] Michael N. Huhns. Distributed Artificial Intelligence. Morgan Kaufmann, 1987.

[HW90] B. Hölldobler and E.O. Wilson. The Ants. Belknap/Harvard University Press, 1990. ISBN 0-674-04075-9.

[IEE91] IEEE, editor. Special Issue on Distributed AI, volume 21(6), Nov/Dec 1991.

[JMD94] Michael Patrick Johnson, Pattie Maes, and Trevor Darrell. Evolving visual routines. In Rodney Brooks and Pattie Maes, editors, Artificial Life IV, pages 198-209. MIT Press, 1994.

[Jod94a] Jean-Francois Jodouin. Reseaux de neurones: Principes et definitions. Editions Hermes, Paris, 1994.

[Jod94b] Jean-Francois Jodouin. Reseaux neuromimetiques: Modeles et applications. Editions Hermes, Paris, 1994.

[KB92] K. Karakotsios and M. Bremer. SimLife: The Official Strategy Guide. Prima Publishers, 1992. ISBN 1-55958-190-5.

[KD93] Y. Kitamura et al. and S.M. Deen. A cooperative search scheme for dynamic problems. In Proc. of the IEEE Systems, Man and Cybernetics Conference. IEEE, 1993.

[KEK94] Kenneth E. Kinnear, Jr. Advances in Genetic Programming. MIT Press/Bradford Books, 1994. For a description see ftp.cc.utexas.edu:`/pub/genetic-programming/papers/AiGP.atoc.txt'.

[KI92] Kunihiko Kaneko and Takashi Ikegami. Homeochaos: dynamic stability of a symbiotic network with population dynamics and evolving mutation rates. Physica D, 56:406-429, 1992.

[Kni89] Kevin Knight. A gentle introduction to subsymbolic computation: Connectionism for the AI researcher. Technical Report CMU-CS-89-150, Carnegie Mellon University, School of Computer Science, Pittsburgh, PA, May 1989.

[Koz92] John R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, 1992. ISBN 0-262-11170-5.

[Koz94] John R. Koza. Genetic Programming II: Automatic Discovery of Reusable Subprograms. MIT Press, 1994. ISBN 0-262-11189-6.

[Lan84] C.G. Langton. Self-reproduction in cellular automata. Physica D, 10(1-2):135-144, 1984.

[Lan86] C.G. Langton. Studying artificial life with cellular automata. Physica D, 22:120-149, 1986.

[Lan87] C.G. Langton. Virtual state machines in cellular automata. Complex Systems, 1:257-271, 1987.

[Lan91] C. Langton. Preface (to Artificial Life II). In C. Langton, C. Taylor, J.D. Farmer, and Rasmussen, editors, Artificial Life II, volume XI. Addison-Wesley, Redwood City, CA, 1991.

[LdB94] Loet Leydesdorff and Peter Van den Besselaar. Evolutionary Economics and Chaos Theory: New Developments in Technology Studies. Pinter, London, 1994. ISBN 1 85567 198 0.

[Lev92] S. Levy. Artificial Life. Pantheon, New York, 1992.

[LJ94] L.J. Fogel. Evolutionary programming in perspective: The top-down view. In J.M. Zurada, R.J. Marks, and C.J. Robinson, editors, Computational Intelligence: Imitating Life, pages 135-146, Piscataway, NJ, 1994. IEEE Press.

[LNR87] John E. Laird, Allen Newell, and Paul S. Rosenbloom. Soar: An architecture for general intelligence. Artificial Intelligence, 33(1):1-64, 1987.

[LRN86] J.E. Laird, P.S. Rosenbloom, and A. Newell. Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1:1-46, 1986.

[LS87] S. Lee and Y.G. Shin. Multiple agent cooperative problem solving with axiom-based negotiation. In Proceedings IEEE International Symposium on Intelligent Control, 1987.

[LTFR91] C. Langton, C. Taylor, J.D. Farmer, and Rasmussen. Artificial Life II, volume XI. Addison-Wesley, Redwood City, CA, 1991.

[Lut94] Evelyne Lutton. Rapport d'expertise SGDN: Etat de l'art des algorithmes genetiques.
INRIA-Rocquencourt, 1994.

[Mae90] Pattie Maes. How to do the right thing. Connection Science, 1(3):291-323, 1990.

[Mae91] Pattie Maes. Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back. MIT Press, 1991.

[Mic92] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, New York, 1992.

[MTD93] S. Hamada, M. Takizawa, and S.M. Deen. Vehicle transactions. In V. Marik, J. Lazansky, and R. Wagner, editors, Database and Expert Systems Applications, pages 611-614. Springer-Verlag, 1993.

[MW91] Jean-Arcady Meyer and Stewart W. Wilson. From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior (1990, Paris, France). MIT Press, Cambridge, MA, 1991.

[Ndo93] Baird Ndovie. Multi-agent cooperation in air traffic control: A functional analysis. Technical Report no. DAKE/-/TR-93006.0, Data and Knowledge Engineering Centre, Keele University, Staffordshire, ST5 5BG, U.K., 1993.

[N.R94] N.R. Jennings. Cooperation in Industrial Multi-Agent Systems. World Scientific Publishing Company, 1994. ISBN 981-02-1652-1.

[OKJ+91] Setsuo Ohsuga, Hannu Kangassalo, Hannu Jaakkola, Koichi Hori, and N. Yonezaki, editors. Information Modeling and Knowledge Bases: Foundations, Theory, and Applications. IOS Press, Amsterdam, 1991.

[OSH87] I.M. Oliver, D.J. Smith, and J.R.C. Holland. A study of the permutation crossover operators on the traveling salesman problem. In 2nd International Conference on Genetic Algorithms, pages 224-230, 1987.

[OV93] Setsuo Ohsuga and Jari Vaario. A study of artificial life as a model of automatic model building. 1993. (To appear in the proceedings of the European-Japanese Seminar on Information Modeling and Knowledge Bases 1993.)

[PFB93] Rebecca Parsons, Stephanie Forrest, and Christian Burks. Genetic algorithms for DNA sequence assembly. In Proc. of the 1st International Conference on Intelligent Systems in Molecular Biology. AAAI Press, July 1993.

[PFB94] Rebecca Parsons, Stephanie Forrest, and Christian Burks. Genetic algorithms, operators, and DNA fragment assembly. Machine Learning, 1994. To appear.

[Raw91] G. Rawlins. Foundations of Genetic Algorithms. Morgan Kaufmann, 1991.

[Ray91] T.S. Ray. An approach to the synthesis of life. In C. Langton, C. Taylor, J.D. Farmer, and Rasmussen, editors, Artificial Life II, volume XI, pages 371-408. Addison-Wesley, Redwood City, CA, 1991.

[Ray94] T.S. Ray. An evolutionary approach to synthetic biology: Zen and the art of creating life. Artificial Life, 1(1), 1994.

[Res94] Mitchel Resnick. Learning about life. Artificial Life, 1/2, 1994.

[RM86] D.E. Rumelhart and J.L. McClelland. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations; Vol. 2: Psychological and Biological Models. MIT Press, Cambridge, MA, 1986.

[RZ94] Jeffrey S. Rosenschein and Gilad Zlotkin. Rules of Encounter. MIT Press, ISBN 0-262-18159-2, 1994. Rules of Encounter applies the general approach and the mathematical tools of game theory in a formal analysis of the rules (or protocols) governing the high-level behavior of interacting computer systems. The authors point out that adjusting the rules of public behavior (the rules of the game) by which the programs must interact can influence the private strategies that designers set up in their machines, shaping design choices and run-time behavior.
Applications of this protocol design might be, for example, to create the mechanisms by which software agents negotiate with one another.

[SF94] A.V. Sebald and L.J. Fogel. Proceedings of the Third Annual Conference on Evolutionary Programming. World Scientific, River Edge, NJ, 1994. ISBN 981-02-1810-9.

[Sia91] S.S. Sian. Extending learning to multiple agents: issues and a model for multi-agent machine learning (MA-ML). In Y. Kodratoff, editor, Machine-Learning-EWSL-91, LNCS 482, 1991.

[SMM+91] T. Starkweather, S. McDaniel, K. Mathias, D. Whitley, and C. Whitley. A comparison of genetic sequencing operators. In 4th International Conference on Genetic Algorithms, pages 69-76, 1991.

[Spa91] Eugene H. Spafford. Computer viruses: a form of artificial life? In C. Langton, C. Taylor, J.D. Farmer, and Rasmussen, editors, Artificial Life II, volume XI, pages 371-408. Addison-Wesley, Redwood City, CA, 1991.

[Spa94] Eugene H. Spafford. Computer viruses as artificial life. Artificial Life Journal, 1(3), 1994.

[SW89] M.J. Shaw and A.B. Whinston. Learning and adaptation in distributed artificial intelligence systems. In Distributed Artificial Intelligence 2, 1989.

[Tac93] Walter A. Tackett. Genetic programming for feature discovery and image discrimination. In Stephanie Forrest, editor, Proceedings of the Fifth International Conference on Genetic Algorithms. Morgan Kaufmann, 1993.

[Tou91] D.S. Touretzky. Neural Information Processing Systems, volumes 1-4. Morgan Kaufmann, 1988-1991.

[Vaa93a] Jari Vaario. Artificial life primer. Technical Report TR-H-033, ATR Human Information Processing Laboratories, Kyoto, Japan, September 29, 1993.

[Vaa93b] Jari Vaario. Emergent intelligence. Japanese Artificial Life Newsletter, 1993.

[Vaa93c] Jari Vaario. An Emergent Modeling Method for Artificial Neural Networks. PhD thesis, The University of Tokyo, 1993.

[Vaa93d] Jari Vaario. The role of environment in evolutionary computation. In Workshop: Evolutionary Computation and its Applications. 1993 Australian Joint Conference on Artificial Intelligence, 1993.

[Vaa94a] Jari Vaario. Artificial life as constructivist AI. Journal of SICE (Society of Instrument and Control Engineers), 33(1), 1994.

[Vaa94b] Jari Vaario. From evolutionary computation to computational evolution. Informatika, 1994. (Submitted.)

[Vaa94c] Jari Vaario. Modeling biological adaptation. Japan-U.S.A. Symposium on Flexible Automation, Bionic Systems and Artificial Life, July 11-18, 1994.

[VHO94] Jari Vaario, Koichi Hori, and Setsuo Ohsuga. Toward evolutionary design of autonomous systems. The International Journal in Computer Simulation, special issue on highly autonomous systems, 1994. (To appear.)

[vN66] J. von Neumann. In A.W. Burks, editor, Theory of Self-Reproducing Automata. University of Illinois Press, Urbana, 1966.

[VO91] Jari Vaario and Setsuo Ohsuga. Adaptive neural architectures through growth control. In Intelligent Engineering Systems through Artificial Neural Networks, pages 11-16, 1991.

[VO92] Jari Vaario and Setsuo Ohsuga. An emergent construction of adaptive neural architectures. Heuristics: The Journal of Knowledge Engineering, 5(2), 1992.

[VO94] Jari Vaario and Setsuo Ohsuga. On growing intelligence. In Neural Networks and a New AI. Dorffner-93, 1994.

[VOH91] Jari Vaario, Setsuo Ohsuga, and Koichi Hori. Connectionist modeling using Lindenmayer systems. In Information Modeling and Knowledge Bases: Foundations, Theory, and Applications, pages 496-510, 1991.

[Wat91] Mark Watson.
Common Lisp Modules: Artificial Intelligence in the Era of Neural Networks and Chaos Theory. Springer-Verlag, 1991. Includes code written in Macintosh Common Lisp and uses the Mac graphical interface.

[WD92] A. Werner and C. Demazeau. Decentralized Artificial Intelligence. Elsevier, 1992.

[WD93] Mark Walsh and S.M. Deen. A study of some multi-agent application design strategies with a view to enhancing performance. In S.M. Deen, editor, Proc. of the CKBS-SIG Workshop 1992, pages 75-88. DAKE Centre, Keele University, Staffordshire, ST5 5BG, U.K., 1993. This is also DAKE Centre Technical Report DAKE/-/TR-92014.2.

[Wei93] G. Weiss. Learning to coordinate actions in multi-agent systems. In IJCAI-93, pages 311-316, 1993.