An Iterated Function Systems Approach to Emergence



Douglas A. Hoskins (1)

Abstract

An approach to action selection in autonomous agents is presented. This approach is motivated by biological examples and the operation of the Random Iteration Algorithm on an Iterated Function System. The approach is illustrated with a simple three-mode behavior that allows a swarm of 200 computational agents to function as a global optimizer on a 30-dimensional multimodal cost function. The behavior of the swarm has functional similarities to the behavior of an evolutionary computation (EC).

1 INTRODUCTION

One of the most important concepts in the study of artificial life is emergent behavior (Langton 1990; Steels 1994). Emergent behavior may be defined as the global behaviors of a system of agents, situated in an environment, that require new descriptive categories beyond those that describe the local behavior of each agent. Most biological systems exhibit some form of self-organizing activity that may be called emergent. Intelligence itself appears to be an emergent behavior (Steels 1994), arising from the interactions of individual "agents," whether these agents are social insects or neurons within the human brain.

Many approaches have been suggested for generating emergent behavior in autonomous agents, both individually (Steels 1994; Maes 1994; Sims 1994; Brooks 1986) and as part of a larger group of robots (Resnick 1994; Colorni et al. 1992; Beckers et al. 1994; Hodgins and Brogan 1994; Terzopoulos et al. 1994). These concepts have also been applied to emergent computation, where the behavior of the agent has some computational interpretation (Forrest 1991). Exciting results have been achieved using these approaches, particularly through the use of global, non-linear optimization techniques, such as evolutionary computation, to design the controllers.
Almost all control architectures for autonomous agents implicitly assume that there exists a single, best action for every agent state (including sensor states), although most include some form of wandering behavior. In this paper, an approach is presented that explicitly assumes the opposite — that there are many "correct" actions for a given agent state. In doing so, we hope to gain access to analytical methods from Iterated Function Systems (Barnsley 1993), ergodic systems theory (Elton 1987) and the theory of impulsive differential equations (Lakshmikantham et al. 1989; Bainov and Simeonov 1989), which may allow qualitative analysis of emergent behaviors and the analytical synthesis of behavioral systems for autonomous agents. Many important applications of emergent behavior, such as air traffic control, involve problems where system failure could lead to significant loss of life. These systems, at least, would seem to require some ability to specify bounds on the emergent behavior of the system as a function of a dynamically changing environment.

Section 2 motivates the approach with two examples from biology. Section 3 reviews the concepts of Iterated Function Systems (IFSs). A new action selection architecture, the Random Selection Rule, is presented in section 4. This approach is applied to an emergent computation in section 5. These results are discussed in section 6. Concluding remarks are offered in section 7.

(1) Boeing Defense & Space Group and the University of Washington, Seattle, WA. E-mail: hoskinsd@bcsaic.boeing.com. To appear in "Evolutionary Computation IV: The Edited Proceedings of the Fourth Annual Conference on Evolutionary Programming," J. R. McDonnell, R. G. Reynolds and D. B. Fogel, Eds., MIT Press.

2 BIOLOGICAL MOTIVATION

Chemotaxis in single-celled organisms, such as E. coli, is one of the simplest behaviors that might be called intelligent.
Intelligence in this case is the ability of the organism to control its long-term distribution in the environment. E. coli exhibits two basic forms of motion — 'run' and 'tumble' (Berg 1993). Each cell has roughly six flagella, which can be rotated either clockwise or counter-clockwise. A 'run' is a forward motion generated by counter-clockwise rotation of the flagella. This rotation causes the flagella to "form a synchronous bundle that pushes the body steadily forward." A 'tumble' results from rotating the flagella clockwise. This causes the bundle to come apart, so that the flagella move independently, resulting in a random change in the orientation of the cell. A cell can control its long-term distribution in its environment by modulating these two behaviors. This is true even though the specific movements of the cell are not directed with respect to the environment. The cell detects chemical gradients in its environment and modulates the probability of tumbling, not the orientation of the cell after the tumble. By tumbling more frequently when the gradient is adverse, the cell executes a biased random walk, moving (on average) in the favored direction.

Intelligent behavior is also seen in more complex animals, such as humans. In this case, the behavior of the organism is determined, in large part, by motor commands generated by the brain in response to sensory input. The brain is comprised of roughly 10 billion highly interconnected neurons (Kandel and Schwartz 1985). The overall patterns of these interconnections are consistent between individuals, although there is significant variation from individual to individual. Connections between neurons are formed by two structures: axons and dendrites. Synapses are formed where these structures meet. Signals are propagated between neurons in a two-step process: "wave-to-pulse" and "pulse-to-wave" (Freeman 1975).
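The run-and-tumble strategy described above can be sketched as a biased random walk. The following sketch is illustrative only: the step size, tumble probabilities, and function names are assumptions, not parameters taken from the paper or from Berg (1993).

```python
import random

def chemotaxis_walk(steps, concentration, p_tumble=0.1, p_adverse_tumble=0.5, seed=0):
    """1-D run/tumble sketch: tumble more often when the gradient is adverse.

    `concentration` maps position -> attractant level. The cell never steers;
    it only modulates its tumble probability, yet it drifts uphill on average.
    """
    rng = random.Random(seed)
    x, direction = 0.0, 1
    for _ in range(steps):
        x_new = x + 0.1 * direction          # 'run' segment
        adverse = concentration(x_new) < concentration(x)
        if rng.random() < (p_adverse_tumble if adverse else p_tumble):
            direction = rng.choice([-1, 1])  # 'tumble': random reorientation
        x = x_new
    return x

# With a linear attractant gradient, the long-term drift is up the gradient,
# even though each individual movement is undirected.
final_position = chemotaxis_walk(5000, concentration=lambda x: x)
```

Note that the control variable is only the probability of tumbling, mirroring the biological mechanism described above.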
The first step is the generation of action potentials at the central body of the neuron. These essentially all-or-none pulses of depolarization are triggered when the membrane potential at the central body of the neuron exceeds a threshold value. They propagate outward along the axon, triggering the release of neurotransmitter molecules at synapses. This begins the second step, pulse-to-wave conversion. The neurotransmitters trigger the opening of chemically gated ion channels, changing the ionic currents across the membrane in the dendrite of the post-synaptic neuron. These currents are a graded response that is integrated, over both space and time, by the dynamics of the membrane potential. The pattern, sense (excitatory or inhibitory), and magnitude of these connections determine the effect that one neuron has on the remainder of the network. The wave-to-pulse conversion of action potential generation and propagation constitutes a multiplication and distribution (in space and time) of the output of the neuron, while the pulse-to-wave conversion performed by membrane dynamics and the spatial distribution of the dendrites acts to integrate and filter the inputs to a neuron. Action potential generation in a neuron appears to act like a random process, with the average frequency determined by the inputs to the neuron.

The "neurons" in artificial neural networks are not intended to be one-for-one models of biological neurons, especially in networks intended for computational applications. They are viewed as models of the functional behavior of a mass, or set, of hundreds or thousands of neurons with similar properties and connectivity. In these models, the wave-to-pulse and pulse-to-wave character of communication between many biological neurons is abstracted out and replaced by a single output variable. This variable is taken as modeling the instantaneous information flow between the neural masses.
This output variable is a function of the state(s) of the artificial neuron. This most common approach to modeling artificial neurons makes the implicit assumption that the information state of a collection of biological neurons is representable by a fixed-length vector of real numbers, e.g. points in R^n. Other information states may be accessible to a collection of neurons interacting with randomly generated all-or-none impulses. These "states" would be emergent patterns of behavior, exhibited in the metric spaces "where fractals live" (Barnsley 1993).

3 ITERATED FUNCTION SYSTEMS

Iterated Function Systems (IFSs) have been widely studied in recent years, principally for computer graphics applications and for fractal image compression (Barnsley 1993; Hutchinson 1981; Barnsley and Hurd 1993). This section summarizes some key results for Iterated Function Systems, as discussed in Barnsley (1993). Four variations on an Iterated Function System are discussed: an IFS, the Random Iteration Algorithm (RIA) on an IFS, an IFS with probabilities, and a recurrent IFS. In addition, the convergence of the behavior of these systems to unique limits in derived metric spaces will be discussed, and an approximate mapping lemma will be proved.

An IFS is defined as a complete metric space, (X, d), together with a finite collection of contractive mappings, W = {w_i(x) | w_i : X → X, i = 1, ..., N}. A mapping, w_i, is contractive in (X, d) if d(w_i(x), w_i(y)) ≤ s_i d(x, y) for some contractivity factor, s_i ∈ [0, 1). The contractivity factor for the complete IFS is defined as s = max_{i=1,...,N}(s_i). The following example illustrates the operation of an IFS.

[Figure 1: Clockwise from upper left: the sets A_0 through A_4 in the generation of a Sierpinski triangle. A_0 is a circle inside a square in this example. The limiting behavior is independent of A_0.]
Consider three mappings of the form w_i(x) = 0.5(x − η_i) + η_i, where η_i = (0, 0), (1, 0), or (0.5, 1.0) for i = 1, 2, 3, respectively, and x, η_i ∈ X. If this collection of mappings is applied to a set, A_k, a new set is generated: A_{k+1} = W(A_k) = w_1(A_k) ∪ w_2(A_k) ∪ w_3(A_k). Repeating, or iterating, this procedure generates a sequence of sets, {A_k}. This process is illustrated in figure 1, where the set A_0 is a circle and square. The first four elements of the sequence are also shown. Each application of W generates three modified copies of the previous set, each copy scaled and translated according to one of the w_i. The sequence converges to a unique limit set, A. This limit does not depend on the initial set, and is invariant under W; that is, A = W(A). This set is referred to as the attractor of the IFS. It is a single element, or point, in the Hausdorff metric space, (H(X), h). Points in H(X) are compact sets in X, and the metric, h, on H(X) is based on the metric d on X. The attractor for the IFS in this example is sometimes called the Sierpinski triangle, or gasket.

The RIA generates a sequence, {x_k}, k = 0, 1, 2, ..., by recursively applying individual mappings from an IFS. This sequence, known as the orbit of the RIA, will almost always approximate the attractor of the IFS. At each step, a mapping is selected at random from W and then used to generate the next term in the sequence. That is, we generate {x_k} by x_k = w_{σ_k}(x_{k−1}), where x_0 is an arbitrary point in X and σ_k is chosen randomly from {1, ..., N}. The infinite sequence of symbols, z = σ_1 σ_2 σ_3 σ_4 ..., is a point in the code space, Σ, sometimes known as the space of Bernoulli trials. Each (infinite) run of the RIA defines one such point, z. The first 10,000 points of such a run are shown in figure 2.

[Figure 2: The first 10,000 elements in the sequence generated by an RIA on the IFS for the Sierpinski triangle.]
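The Random Iteration Algorithm described above takes only a few lines of code. This sketch uses the three maps w_i(x) = 0.5(x − η_i) + η_i from the example; the variable names and the starting point are ours.

```python
import random

ETAS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]  # fixed points eta_i of the three maps

def w(eta, x):
    """One contraction w_i(x) = 0.5*(x - eta_i) + eta_i, applied coordinate-wise."""
    return tuple(0.5 * (c - e) + e for c, e in zip(x, eta))

def ria_orbit(n, x0=(0.3, 0.7), seed=1):
    """Orbit {x_k} of the RIA: at each step one map is chosen uniformly at random."""
    rng = random.Random(seed)
    x, orbit = x0, []
    for _ in range(n):
        x = w(rng.choice(ETAS), x)
        orbit.append(x)
    return orbit

# Plotting these 10,000 points reproduces a figure like figure 2.
points = ria_orbit(10_000)
```

Because each map halves distances and has its fixed point at a triangle vertex, the orbit stays inside the unit square and rapidly approaches the Sierpinski gasket.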
The points generated by the RIA converge to the Sierpinski triangle. The distance, d(x_k, A), from the point to the set is reduced at each step by at least the factor s, the contractivity of the IFS. Moreover, the sequence will "almost always" come within any finite neighborhood of every point on the attractor as k → ∞.

The RIA suggests the concept of an IFS with probabilities. This type of IFS associates a real number, p_i, with each w_i, subject to the constraint that Σ_{i=1}^N p_i = 1, p_i > 0. This gives different "mass" to different parts of the attractor of the IFS, even though the attractor in H(X) does not change. This is illustrated in figure 3. The attractors for the two RIAs illustrated in figure 3 contain exactly the same set of points in the plane. They differ only in the distribution of "mass" on the attractor. Just as the limiting set of an IFS is a point in H(X), these limiting mass distributions are points in another derived metric space, P(X). P(X) is the space of normalized Borel measures on X. The metric on this space is the Hutchinson metric, d_H (Barnsley 1993; Hutchinson 1981).

[Figure 3: RIA on an IFS with probabilities, for two different sets of probabilities. The probabilities are {2/3, 1/6, 1/6} and {4/9, 4/9, 1/9} for the left and right figures, respectively.]

The mapping, W, on a set in H(X) was the union of its constituent mappings. Its analog in P(X) is the Markov operator, M(ν), given by

    M(ν) = p_1 ν ∘ w_1^{−1} + ... + p_N ν ∘ w_N^{−1}

Thus an IFS with probabilities generates a sequence of measures, {ν_k}, in P(X), with the invariant measure, µ = M(µ), as its unique limit.

The recurrent IFS is a natural extension of the IFS with probabilities to Markov chains. It applies conditional probabilities, p_ij, rather than independent probabilities, p_i, to the mappings in W. Here p_ij may be interpreted as the probability of using map j if map i was the last mapping applied.
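The "mass" redistribution described above is easy to observe numerically. In the Sierpinski IFS, a point whose most recent map was w_1 lands in the lower-left copy of the attractor, so the visit frequency of that region should approach p_1. The region choice, names, and sample sizes below are ours.

```python
import random

ETAS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def weighted_orbit(n, probs, seed=2):
    """RIA on an IFS with probabilities: map i is selected with probability probs[i]."""
    rng = random.Random(seed)
    x, orbit = (0.5, 0.5), []
    for _ in range(n):
        eta = rng.choices(ETAS, weights=probs)[0]
        x = tuple(0.5 * (c - e) + e for c, e in zip(x, eta))
        orbit.append(x)
    return orbit

def lower_left_mass(orbit):
    """Fraction of the orbit in w_1's copy of the attractor, [0, 0.5] x [0, 0.5]."""
    return sum(px <= 0.5 and py <= 0.5 for px, py in orbit) / len(orbit)

# Same attractor, different invariant measure: the visit frequency of the
# lower-left subtriangle tracks the probability assigned to w_1.
heavy = lower_left_mass(weighted_orbit(20_000, (2/3, 1/6, 1/6)))
light = lower_left_mass(weighted_orbit(20_000, (1/6, 1/6, 2/3)))
```

This visit-frequency interpretation of µ is exactly the content of the corollary to Elton's theorem quoted later in the text.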
It is required that Σ_{j=1}^N p_ij = 1 for each i, and that there exist a non-zero probability of transitioning from i to j for every i, j ∈ {1, ..., N}. An example of an RIA on a recurrent IFS is shown in figure 4. The IFS has seven available mappings. Three of these mappings, played independently, generate a Sierpinski triangle, while the other four fill in a rectangle. The matrix of transition probabilities, {p_ij}, for the Markov chain is:

    {p_ij} = | 0.3   0.3   0.3   0.025  0.025  0.025  0.025 |
             | 0.3   0.3   0.3   0.025  0.025  0.025  0.025 |
             | 0.3   0.3   0.3   0.025  0.025  0.025  0.025 |
             | 0.03  0.03  0.04  0.225  0.225  0.225  0.225 |
             | 0.03  0.03  0.04  0.225  0.225  0.225  0.225 |
             | 0.03  0.03  0.04  0.225  0.225  0.225  0.225 |
             | 0.03  0.03  0.04  0.225  0.225  0.225  0.225 |

The two sets of mappings may be thought of as two "modes," each of which is an IFS with probabilities, that have a low (10%) probability of transitioning from one mode to the other. The resulting distribution combines both modes. In the limit, each mode serves as the initial set, A_0, for a sequence of sets converging to the other. A more common aspect of recurrent IFSs, especially in computer graphics, is the inclusion of zero elements in the transition matrix and the use of transitions between different metric spaces.

[Figure 4: RIA on a recurrent IFS.]

The central result for computer graphics applications of IFSs is the Collage Theorem (Barnsley 1993). It has several versions, for the various types of IFSs. This theorem shows that an IFS can be selected whose attractor approximates a specific image. Specifically, for H(X):

Collage Theorem. Let (X, d) be a complete metric space. Let L ∈ H(X) be given, and let ε ≥ 0 be given. Choose an IFS {X, W} with contractivity factor 0 ≤ s < 1, so that h(L, W(L)) ≤ ε. Then

    h(L, A) ≤ ε / (1 − s),

where A is the attractor of the IFS.
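An RIA on the recurrent IFS above can be sketched as follows. The three triangle maps are those of the Sierpinski example; since the paper does not list the four "rectangle" maps, the square's corner points used here are illustrative stand-ins.

```python
import random

# Fixed points: three Sierpinski maps, then four maps filling a square
# (the square's corners are an assumption for illustration).
FIXED = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0),
         (1.5, 0.0), (2.5, 0.0), (1.5, 1.0), (2.5, 1.0)]

# Transition matrix {p_ij} from the text: each row sums to 1, and each
# mode (maps 0-2 vs. maps 3-6) retains 90% of the probability mass.
P = [[0.3] * 3 + [0.025] * 4] * 3 + [[0.03, 0.03, 0.04] + [0.225] * 4] * 4

def recurrent_orbit(n, seed=3):
    """RIA where the map index evolves as a Markov chain with matrix P."""
    rng = random.Random(seed)
    i, x, mode_seq = 0, (0.5, 0.5), []
    for _ in range(n):
        i = rng.choices(range(7), weights=P[i])[0]       # Markov-chain selection
        x = tuple(0.5 * c + 0.5 * v for c, v in zip(x, FIXED[i]))
        mode_seq.append(0 if i < 3 else 1)               # 0: triangle, 1: square
    return mode_seq

modes = recurrent_orbit(50_000)
switch_rate = sum(a != b for a, b in zip(modes, modes[1:])) / (len(modes) - 1)
# switch_rate should be near 0.10, the low mode-change probability in the text.
```

The orbit spends long runs in each mode, so the resulting picture overlays both attractors, each "seeded" by excursions from the other, as described above.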
This result means that if a finite set of mappings can be discovered that approximately covers an image with "warped" copies of itself (w_i(L)), then the attractor of the IFS formed from those mappings will be close to the original image.

The goal of this effort is the generation and analysis of RIA-like behavior in autonomous agents. A key result for the RIA (and our application) is a corollary of Elton's theorem (Barnsley 1993; Elton 1987). The sequence, {x_n}, is generated by an RIA on an IFS with probabilities. The invariant measure for the IFS is µ.

Corollary to Elton's Theorem (Elton 1987). Let B be a Borel subset of X and let µ(boundary of B) = 0. Let N(B, n) = number of points in {x_0, x_1, ..., x_n} ∩ B, for n = 1, 2, .... Then, with probability one,

    µ(B) = lim_{n→∞} [ N(B, n) / (n + 1) ]

for all starting points x_0. That is, the "mass" of B is the proportion of iteration steps which produce points in B when the Random Iteration Algorithm is run.

In other words, the limiting behavior of the RIA will "almost always" approximate µ.

Autonomous agents will, in many cases, be real, physical systems such as mobile robots. An approach to the qualitative analysis of their behavior must be able to account for modeling errors, as no model of a physical system is perfect. A first step toward this is the following lemma regarding the behavior of an RIA in H(X).

Approximate Mapping Lemma. Let W and W′ be sets of N mappings, where the IFS defined by (X, W) has contractivity s, and the individual mappings in W′ are approximations to those in W. Specifically, let d(w_i(x), w′_i(x)) ≤ ε for all x ∈ X, i = 1, ..., N. Let {x_i} and {x′_i} be the orbits induced by a sequence of selections, σ_1 σ_2 σ_3 ..., beginning from the same point, x_0 = x′_0 ∈ X. Then

    d(x_k, x′_k) ≤ ε / (1 − s)

for all k and x_0 ∈ X.

The proof follows that of the Collage Theorem in H(X). Let e_k = d(x_k, x′_k).
Then we have a strictly increasing sequence of error bounds, {e_k}, where

    e_0 = 0
    e_1 = ε ≥ d(x_1, x′_1)

The contractivity of W and the assumed bound on mapping errors combine in the triangle inequality to propagate the error bound:

    d(x_{k+1}, x′_{k+1}) = d(w(x_k), w′(x′_k))
                         ≤ d(w(x_k), w(x′_k)) + d(w(x′_k), w′(x′_k))
                         ≤ s d(x_k, x′_k) + ε

Replacing d(x_k, x′_k) by e_k gives a bound on d(x_{k+1}, x′_{k+1}):

    e_{k+1} = s e_k + ε = ε Σ_{i=0}^{k} s^i

and, as k → ∞, e_∞ = ε / (1 − s). This result provides a bound on the orbit of a system whose actions at each step are close to those of a model system. Note that the mappings in W′ are not required to be strictly contractive. Instead, the assumptions of the lemma only require that d(w′_i(x), w′_i(y)) ≤ s d(x, y) + 2ε for all x, y ∈ X.

4 EMERGENT BEHAVIOR AND THE RANDOM SELECTION RULE

The predictable long-term behavior of the RIA, and Elton's results in particular, suggests a definition of emergent behavior — the characteristics of a system's response that are not sensitive to initial conditions.

Definition (tentative): Emergent Behavior. An emergent behavior is defined as a repeatable or characteristic response by the elements of a system of interacting autonomous agents.

To state that a system exhibits an emergent behavior is equivalent to asserting the existence of some attribute of the system that arises, or evolves, predictably when agents interact with each other and their environment. We will call the behavior of the system emergent with respect to Q for some testable property, Q(χ(t)), if that property holds for almost all trajectories, χ(t). For example, Q(χ(t)) might be that the distribution of agent positions approaches a specific measure in the limit, as in the corollary to Elton's theorem.

This definition treats emergence as a generalization of stability. Asymptotic stability in the usual sense is a trivial example of emergence under this definition.
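Returning briefly to the Approximate Mapping Lemma: its bound is easy to check numerically. Here the Sierpinski maps (s = 0.5 in the sup-norm) are perturbed by a per-step error of at most ε in each coordinate, so the lemma predicts d(x_k, x′_k) ≤ ε/(1 − 0.5) = 2ε for orbits driven by a shared selection sequence. The perturbation scheme is our own choice of W′.

```python
import random

ETAS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
EPS = 0.01   # per-step mapping error bound (illustrative)
S = 0.5      # contractivity of the Sierpinski maps in the sup-norm

def w(i, x):
    return tuple(0.5 * (c - e) + e for c, e in zip(x, ETAS[i]))

def run_pair(n, seed=4):
    """Exact and perturbed orbits driven by the same selection sequence."""
    sel, noise = random.Random(seed), random.Random(seed + 1)
    x = x_approx = (0.2, 0.2)
    worst = 0.0
    for _ in range(n):
        i = sel.randrange(3)
        x = w(i, x)
        # w'_i: the exact map plus a disturbance of at most EPS per coordinate
        x_approx = tuple(c + noise.uniform(-EPS, EPS) for c in w(i, x_approx))
        worst = max(worst, max(abs(a - b) for a, b in zip(x, x_approx)))
    return worst

worst_error = run_pair(10_000)
bound = EPS / (1 - S)   # the lemma's bound, 0.02 here
```

The observed worst-case divergence stays below the bound, consistent with the error-propagation argument e_{k+1} = s e_k + ε above.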
Emergence becomes non-trivial if the attractor is a more complex property of the trajectory, such as its limiting distribution in the state space.

The RIA also suggests an action selection strategy for autonomous agents. This strategy is called the Random Selection Rule (RSR). Where an RIA generates an abstract sequence of points in a metric space, state trajectories for a mobile robot must be at least piecewise continuous (including control states). This is accomplished in the RSR by augmenting the basic RIA structure with an additional set of functions to define the duration of each behavior. Where the RIA generates a point in a code space, a = σ_1 σ_2 σ_3 ..., the RSR generates points in an augmented code space: α = (σ_1, δ_1)(σ_2, δ_2)(σ_3, δ_3) .... It is conjectured that the results obtained for Iterated Function Systems can be extended to this augmented space, and used with the above definition to characterize the emergent behavior of interacting, situated autonomous agents.

Specifically, an RSR has three components:

1. Finite Behavior Suite: Each agent has a finite set of possible agent behaviors (e.g. closed-loop dynamics), B = {b_i}, i = 1, ..., N. During the interval (t_k, t_{k+1}], the state of the agent is given by x(t) = b_{σ_k}(t, t_k, x(t_k)).

2. Random Behavior Selection: A selection probability function, p(x, σ_{k−1}, m), must be defined for each behavior. Dependence on the agent state, x, implicitly incorporates reactive, sensor-driven behavior, since sensor inputs necessarily depend on the state of the agent — its position and orientation. Dependence of the behavior selection probability on the identity of the current behavior, σ_{k−1}, echoes the notion of a recurrent IFS, and provides a method of incorporating multiple modes of behavior. Finally, the action selection probability may depend on messages, m, transmitted between agents.
This type of dependence is not explicitly covered under standard IFS theory. Its significance, and a proposed mechanism for applying IFS theory to the behavior of the collection, are discussed below.

3. Distinct Action Times: Agents exhibit only one behavior at any given time. The length of time that a behavior is applied before the next random selection is made is defined by a duration function, δ(t_{k−1}, x) > 0. This function may be as simple as a fixed interval, or as complex as an explicit dependence on t_{k−1} and x, so that the new action selection occurs when the state trajectory encounters a switching surface. We require only that the duration be non-zero. In the present work, the duration is an interval drawn from an exponential distribution, so that the sequence of action selection times, {t_k}, is a sequence of Poisson points.

Dependence of the behavior selection probabilities on messages from other agents presents both a problem and an opportunity. On the one hand, the remainder of the population may be viewed as simply part of a dynamically changing environment, so that the messages are simply another sensor input. On the other hand, we may treat a collection of agents as a single "swarm agent" or a "meta-agent." First, a collection of M agents has a well defined state. It is simply the set of M states, χ(t) = {x^(1), ..., x^(M)}. Second, it has a well defined, finite set of distinct behaviors, built from the n(i) behaviors available to the i-th agent. Third, there is a behavior selection probability function associated with each possible behavior. This probability may depend on the previous behavior state and on the current state, χ(t), of the collection of agents. Messages between agents are internal to these selection functions. Fourth, there is a well defined sequence of action times, {t_k}, for the swarm agent. It is simply the union of the sequences for the component agents.
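The three components above can be sketched directly. This toy RSR agent uses the Sierpinski contractions as its behavior suite, a uniform selection probability, and exponentially distributed durations, so its action times form Poisson points. All names and parameter values are illustrative, not the paper's.

```python
import random

def rsr_trajectory(behaviors, select_weights, t_end, x0, rate=1.0, seed=8):
    """Sample-and-hold RSR trajectory as a point in the augmented code space.

    Returns the event list [(t_k, sigma_k, x(t_k)), ...]: the behavior index
    sigma_k is drawn at random (weights may depend on the state and on the
    previous behavior), and each duration delta_k > 0 is exponential, so the
    action times {t_k} are Poisson points.
    """
    rng = random.Random(seed)
    t, x, prev, events = 0.0, x0, None, []
    while t < t_end:
        sigma = rng.choices(range(len(behaviors)),
                            weights=select_weights(x, prev))[0]
        x = behaviors[sigma](x)       # the behavior acts; the state then holds
        events.append((t, sigma, x))
        t += rng.expovariate(rate)    # duration delta_k, positive almost surely
        prev = sigma
    return events

etas = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
maps = [lambda x, e=e: tuple(0.5 * (c - v) + v for c, v in zip(x, e)) for e in etas]
traj = rsr_trajectory(maps, lambda x, prev: [1, 1, 1], t_end=100.0, x0=(0.5, 0.5))
```

A swarm's event sequence is then simply the merged union of such per-agent event lists, as described in the fourth point above.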
The behavior duration function is simply δ_k = t_{k+1} − t_k. In sum, a swarm of communicating agents implementing an RSR is itself an "agent," with a well defined state, operating under an RSR. This raises the possibility of the recursive construction of complex swarm behaviors from simple elemental behaviors. Moreover, such complex behaviors may be subject to analysis, or even analytic design, if the mathematics of IFSs and related results can be extended to cover the behavior of the Random Selection Rule for the elemental behaviors.

The simulation results presented below are for a swarm of computational RSR agents, whose behaviors are simple contractive mappings followed by an exponentially distributed wait at the resulting position. This generates a piecewise continuous trajectory that is a series of "sample and holds." Action selection probabilities depend on sensing (of a cost function), previous behavior (to allow multiple modes) and communication, to facilitate an indirect competition for a favored mode.

5 SIMULATION RESULTS

The RSR approach was used to define a set of behaviors for a swarm of 200 simulated agents. The "sensor inputs" to these agents consist of evaluation of a cost function in a 30-dimensional search space, and a limited amount [...]

References:

[...] Princeton University Press.
Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2.
Colorni, A., M. Dorigo, and V. Maniezzo (1992). Distributed optimization by ant colonies. In Varela, F. J. and P. Bourgine, editors, Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life. MIT Press.
Elton, J. F. (1987). An ergodic [...]
(affine maps), and the success that has been achieved in the inverse problem, image compression, lead us to believe that this challenge is not an impossible one. In particular, Elton's theorem (Elton 1987) provides an important starting point, because it allows behavioral probabilities to be functions of x. This means that the behavior of a model of a situated agent can be analyzed using this approach. The [...]

[...] to solid mode if the queue is empty. A solid mode agent that cannot remain in solid mode transitions to condense mode. Gas and condense mode agents that do not transition to solid mode either remain in their present mode or transition to the other mode. In these experiments the transition probabilities from gas to condense and from condense to solid were both 25 percent per event. Agents communicated using [...]

Hodgins, J. K. and D. C. Brogan (1994). Robot herds: Group behaviors for systems with significant dynamics. In Brooks, R. A. and P. Maes, editors, Artificial Life IV: Proceedings of the Fourth International Workshop on the Synthesis and Simulation of Living Systems, pages 319–324. MIT Press.
Hutchinson, J. (1981). Fractals and self-similarity. Indiana University Mathematics Journal, 30:713–747.
Kandel, E. R. and J. H. [...]

[...] scale systems with defined, predictable emergent behaviors that solve practical problems. The mathematics of Iterated Function Systems may provide tools for the qualitative analysis of such systems. This goal, along with the behavior of our biological examples, motivates the development of the Random Selection Rule (RSR) approach to the action selection problem. The mathematics of IFSs, their ability to construct [...]
[...] process. The mode transition behavior of the swarm is illustrated in figure 5. Agents transition to solid mode by indirect competition. In the setBehaviorMode method, the agent compares its own sensed cost function value with the worst value in its message queue. If its own cost function value is less than or equal to this value, the agent transitions to solid mode. The agent also transitions to solid mode if [...]

[...] "schema") is constantly changing as the population of solid mode agents changes. In this way, it adapts to information about the search space provided by the positions of the most persistent solid mode agents.

7 SUMMARY

This paper has presented a novel approach to the action selection problem for autonomous agents. This approach, the Random Selection Rule, is based on selecting behaviors at random from a finite [...]

[...] propose to specify energy functions and behavioral rules such that the limiting distribution of robots minimizes the specified energy functions. If it can be shown that the RSR does behave as an RIA, with a single limiting distribution, then we may design the selection probability functions so that the limiting distribution minimizes the specified energy function. The same approach may also be applied to the [...]

Barnsley, M. F. and L. P. Hurd (1993). Fractal Image Compression. A K Peters, Ltd.
Beckers, R., O. E. Holland, and J. L. Deneubourg (1994). From local actions to global tasks: Stigmergy and collective robotics. In Brooks, R. A. and P. Maes, editors, Artificial Life IV: Proceedings of the Fourth International Workshop on the Synthesis and Simulation of Living Systems, pages 181–189. MIT Press.
Berg, H. C. (1993). Random Walks [...]
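The competition rule described for setBehaviorMode can be reconstructed as follows. The paper does not give the method's code, so the signature, the queue representation, and the fall-back rules here are assumptions based only on the surrounding text.

```python
import random

GAS, CONDENSE, SOLID = "gas", "condense", "solid"

def set_behavior_mode(mode, own_cost, message_queue, rng,
                      p_gas_to_condense=0.25, p_condense_to_solid=0.25):
    """One mode-transition event for a three-mode RSR agent (a reconstruction).

    message_queue holds cost values reported by other agents. An empty queue,
    or a cost no worse than the worst reported value, wins residence in solid
    mode; the 25%-per-event spontaneous transitions follow the text.
    """
    # Indirect competition for solid mode.
    if not message_queue or own_cost <= max(message_queue):
        return SOLID
    if mode == SOLID:
        return CONDENSE   # a solid agent that loses the competition falls back
    # Spontaneous transitions for agents that did not enter solid mode.
    if mode == GAS and rng.random() < p_gas_to_condense:
        return CONDENSE
    if mode == CONDENSE and rng.random() < p_condense_to_solid:
        return SOLID
    return mode
```

For example, an agent with an empty queue, or one whose own cost beats the worst queued value, always enters solid mode, while a displaced solid-mode agent drops to condense mode.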
[...] and then applying a contraction mapping toward the selected corner.

[Figure 5: Mode transitions for the three-phase RSR agents. Agents can exist in one of three modes: gas, condense, and solid. Transition between gas and condense mode is a random process. Agents compete indirectly, probabilistically, for residence in the solid mode.]
