Steven H. Isabelle and Gregory W. Wornell, "Nonlinear Maps," CRC Press LLC, 2000.

Nonlinear Maps

Steven H. Isabelle, Massachusetts Institute of Technology
Gregory W. Wornell, Massachusetts Institute of Technology

72.1 Introduction
72.2 Eventually Expanding Maps and Markov Maps
72.3 Signals From Eventually Expanding Maps
72.4 Estimating Chaotic Signals in Noise
72.5 Probabilistic Properties of Chaotic Maps
72.6 Statistics of Markov Maps
72.7 Power Spectra of Markov Maps
72.8 Modeling Eventually Expanding Maps with Markov Maps
References

72.1 Introduction

One-dimensional nonlinear systems, although simple in form, are applicable in a surprisingly wide variety of engineering contexts. As models for engineering systems, their richly complex behavior has provided insight into the operation of, for example, analog-to-digital converters [1], nonlinear oscillators [2], and power converters [3]. As realizable systems, they have been proposed as random number generators [4] and as signal generators for communication systems [5, 6]. As analytic tools, they have served as mirrors for the behavior of more complex, higher dimensional systems [7, 8, 9]. Although one-dimensional nonlinear systems are, in general, hard to analyze, certain useful classes of them are relatively well understood. These systems are described by the recursion

    x[n] = f(x[n-1])    (72.1a)
    y[n] = g(x[n]),     (72.1b)

initialized by a scalar initial condition x[0], where f(·) and g(·) are real-valued functions that describe the evolution of a nonlinear system and the observation of its state, respectively. The dependence of the sequence x[n] on its initial condition is emphasized by writing x[n] = f^n(x[0]), where f^n(·) represents the n-fold composition of f(·) with itself.

Without further restrictions on the form of f(·) and g(·), this class of systems is too large to easily explore. However, systems and signals corresponding to certain "well-behaved" maps f(·) and observation functions g(·) can be rigorously analyzed. Maps of this type often generate chaotic signals—loosely speaking, bounded signals that are neither periodic nor transient—under easily verifiable conditions. These chaotic signals, although completely deterministic, are in many ways analogous to stochastic processes. In fact, one-dimensional chaotic maps illustrate in a relatively simple setting that the distinction between deterministic and stochastic signals is sometimes artificial and can be profitably emphasized or deemphasized according to the needs of an application. For instance, problems of signal recovery from noisy observations are often best approached with a deterministic emphasis, while certain signal generation problems [10] benefit most from a stochastic treatment.
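To make the recursion (72.1) concrete, the following short sketch iterates a one-dimensional map from a scalar initial condition. Python is used purely for illustration; the particular choices of f(·) (an asymmetric tent map) and g(·) (the identity) are assumptions made for the example and are not part of the chapter's development.

```python
# Minimal sketch of the recursion (72.1): x[n] = f(x[n-1]), y[n] = g(x[n]).
# The choices of f (an asymmetric tent map) and g (the identity) are
# illustrative assumptions only.

def f(x, a=0.4):
    """An example map of the unit interval to itself."""
    return x / a if x <= a else (1.0 - x) / (1.0 - a)

def g(x):
    """Observation function; here simply the identity."""
    return x

def observe(x0, n):
    """Return y[0], ..., y[n-1] obtained by iterating f from x0."""
    x, ys = x0, []
    for _ in range(n):
        ys.append(g(x))
        x = f(x)
    return ys

print(observe(0.3141, 10))
```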
72.2 Eventually Expanding Maps and Markov Maps

Although signal models of the form (72.1) have simple, one-dimensional state spaces, they can behave in a variety of complex ways that model a wide range of phenomena. This flexibility comes at a cost, however; without some restrictions on its form, this class of models is too large to be analytically tractable. Two tractable classes of models that appear quite often in applications are eventually expanding maps and Markov maps.

72.2.1 Eventually Expanding Maps

Eventually expanding maps—which have been used to model sigma-delta modulators [11], switching power converters [3], other switched flow systems [12], and signal generators [6, 13]—have three defining features: they are piecewise smooth, they map the unit interval to itself, and they have some iterate with slope that is everywhere greater than unity. Maps with these features generate time series that are chaotic, but on average well behaved. For reference, the formal definition is as follows, where the restriction to the unit interval is convenient but not necessary.

DEFINITION 72.1 A nonsingular map f: [0, 1] → [0, 1] is called eventually expanding if:

1. There is a set of partition points 0 = a_0 < a_1 < ... < a_N = 1 such that, restricted to each of the intervals V_i = [a_{i-1}, a_i), called partition elements, the map f(·) is monotonic, continuous, and differentiable.

2. The function 1/|f'(x)| is of bounded variation [14]. (In some definitions, this smoothness condition on the reciprocal of the derivative is replaced with a more restrictive bounded slope condition, i.e., there exists a constant B such that |f'(x)| < B for all x.)

3. There exist a real λ > 1 and an integer m such that

    \left| \frac{d}{dx} f^m(x) \right| ≥ λ

wherever the derivative exists. This is the eventually expanding condition.

Every eventually expanding map can be expressed in the form

    f(x) = \sum_{i=1}^{N} f_i(x) χ_i(x)    (72.2)

where each f_i(·) is continuous, monotonic, and differentiable on the interior of the ith partition element, and the indicator function χ_i(x) is defined by

    χ_i(x) = \begin{cases} 1, & x ∈ V_i \\ 0, & x ∉ V_i \end{cases}    (72.3)

This class is broad enough to include, for example, discontinuous maps and maps with discontinuous or unbounded slope. Eventually expanding maps also include a class that is particularly amenable to analysis—the Markov maps.

Markov maps are analytically tractable and broadly applicable to problems of signal estimation, signal generation, and signal approximation. They are defined as eventually expanding maps that are piecewise-linear and have some extra structure.

DEFINITION 72.2 A map f: [0, 1] → [0, 1] is an eventually expanding, piecewise-linear, Markov map if f is an eventually expanding map with the following additional properties:

1. The map is piecewise-linear, i.e., there is a set of partition points 0 = a_0 < a_1 < ... < a_N = 1 such that, restricted to each of the intervals V_i = [a_{i-1}, a_i), called partition elements, the map f(·) is affine, i.e., the functions f_i(·) on the right side of (72.2) are of the form f_i(x) = s_i x + b_i.

2. The map has the Markov property that partition points map to partition points, i.e., for each i, f(a_i) = a_j for some j.

Every Markov map can be expressed in the form

    f(x) = \sum_{i=1}^{N} (s_i x + b_i) χ_i(x)    (72.4)

where s_i ≠ 0 for all i. Fig. 72.1 shows the Markov map

    f(x) = \begin{cases} (1-a)x/a + a, & 0 ≤ x ≤ a \\ (1-x)/(1-a), & a < x ≤ 1 \end{cases}    (72.5)

which has partition points {0, a, 1} and partition elements V_1 = [0, a) and V_2 = [a, 1).

FIGURE 72.1: An example of a piecewise-linear Markov map with two partition elements.

Markov maps generate signals with two useful properties: they are, when suitably quantized, indistinguishable from signals generated by Markov chains; and they are close, in a sense, to signals generated by more general eventually expanding maps [15]. These two properties lead to applications of Markov maps for generating random numbers and approximating other signals. The analysis underlying these types of applications depends on signal representations that provide insight into the structure of chaotic signals.
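The Markov property of Definition 72.2 is easy to check numerically. The sketch below verifies that each partition point of the map (72.5) lands on another partition point, as the definition requires; the value of a is an arbitrary illustrative choice.

```python
# Sketch: check the Markov property (Definition 72.2) for the map of
# Eq. (72.5): every partition point must map to a partition point.

A = 0.6                      # illustrative parameter, 0 < a < 1
PARTITION = [0.0, A, 1.0]    # partition points {0, a, 1}

def f(x, a=A):
    """The piecewise-linear Markov map of Eq. (72.5)."""
    return (1.0 - a) * x / a + a if x <= a else (1.0 - x) / (1.0 - a)

for p in PARTITION:
    image = f(p)
    is_partition_point = any(abs(image - q) < 1e-12 for q in PARTITION)
    print(f"f({p}) = {image:.6f} -> partition point: {is_partition_point}")
```

Here f(0) = a, f(a) = 1, and f(1) = 0, so all three checks succeed, as expected for a Markov map.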
72.3 Signals From Eventually Expanding Maps

There are several general representations for signals generated by eventually expanding maps. Each provides different insights into the structure of these signals and proves useful in different applications. First, and most obviously, a sequence generated by a particular map is completely determined by (and is thus represented by) its initial condition x[0]. This representation allows certain signal estimation problems to be recast as problems of estimating the scalar initial condition. Second, and less obviously, the quantized signal y[n] = g(x[n]), for n ≥ 0, generated by (72.1) with g(·) defined by

    g(x) = i,  x ∈ V_i    (72.6)

uniquely specifies the initial condition x[0] and hence the entire state sequence x[n]. Such quantized sequences y[n] are called the symbolic dynamics associated with f(·) [7]. Certain properties of a map, such as the collection of initial conditions leading to periodic points, are most easily described in terms of its symbolic dynamics. Finally, a hybrid representation of x[n] combining the initial condition and symbolic representations,

    H[N] = {g(x[0]), ..., g(x[N]), x[N]},

is often useful.
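The quantizer (72.6) makes the symbolic dynamics directly computable. A brief sketch follows, reusing the map (72.5); the parameter and initial condition are illustrative assumptions.

```python
# Sketch of the symbolic dynamics of Section 72.3: the quantizer of
# Eq. (72.6) reports the index of the partition element containing x.

import bisect

A = 0.6
PARTITION = [0.0, A, 1.0]

def f(x, a=A):
    """The Markov map of Eq. (72.5)."""
    return (1.0 - a) * x / a + a if x <= a else (1.0 - x) / (1.0 - a)

def g(x):
    """Eq. (72.6): g(x) = i for x in V_i = [a_{i-1}, a_i); 1-based index."""
    return min(bisect.bisect_right(PARTITION, x), len(PARTITION) - 1)

x, symbols = 0.2718, []
for _ in range(12):
    symbols.append(g(x))
    x = f(x)
print(symbols)    # the symbolic sequence y[0], ..., y[11]
```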
72.4 Estimating Chaotic Signals in Noise

The hybrid signal representation described in the previous section can be applied to a classical signal processing problem—estimating a signal in white Gaussian noise. For example, suppose the problem is to estimate a chaotic sequence x[n], n = 0, ..., N-1, from the noisy observations

    r[n] = x[n] + w[n],  n = 0, ..., N-1    (72.7)

where w[n] is a stationary, zero-mean white Gaussian noise sequence with variance σ_w^2, and x[n] is generated by iterating (72.1) from an unknown initial condition. Because w[n] is white and Gaussian, the maximum likelihood estimation problem is equivalent to the constrained minimum distance problem

    \min_{\{x[n] : x[i] = f(x[i-1])\}} ε[N] = \sum_{k=0}^{N-1} (r[k] - x[k])^2    (72.8)

and to the scalar problem

    \min_{x[0] ∈ [0,1]} ε[N] = \sum_{k=0}^{N-1} \left( r[k] - f^k(x[0]) \right)^2    (72.9)

Thus, the maximum-likelihood problem can, in principle, be solved by first estimating the initial condition, then iterating (72.1) to generate the remaining estimates. However, the initial condition is often difficult to estimate directly because the likelihood function (72.9), which is highly irregular with fractal characteristics, is unsuitable for gradient-descent type optimization [16]. Another solution divides the domain of f(·) into subintervals and then solves a dynamic programming problem [17]; however, this solution is, in general, suboptimal and computationally expensive.

Although the maximum likelihood problem described above need not, in general, have a computationally efficient recursive solution, it does have one when, for example, the map f(·) is a symmetric tent map of the form

    f(x) = β - 1 - β|x|,  x ∈ [-1, 1]    (72.10)

with parameter 1 < β ≤ 2 [5]. This algorithm solves for the hybrid representation of the initial condition, from which an estimate of the entire signal can be determined. The hybrid representation is of the form H[N] = {y[0], ..., y[N], x[N]}, where each y[i] takes one of two values which, for convenience, we define as y[i] = sgn(x[i]). Since each y[n] can independently take one of two values, there are 2^N feasible solutions to this problem, and a direct search for the optimal solution is thus impractical even for moderate values of N. The resulting algorithm has computational complexity that is linear in the length of the observation, N. This efficiency is the result of a special separation property possessed by the map [10]: given y[0], ..., y[i-1] and y[i+1], ..., y[N], the estimate of the parameter y[i] is independent of ŷ[i+1], ..., ŷ[N].

The algorithm is as follows. Denoting by φ̂[n|m] the ML estimate of any sequence φ[n] given r[k] for 0 ≤ k ≤ m, the ML solution is of the form

    \hat{x}[n|n] = \frac{(β^2 - 1) β^{2n} r[n] + (β^{2n} - 1) \hat{x}[n|n-1]}{β^{2(n+1)} - 1}    (72.11)

    \hat{y}[n|N] = \mathrm{sgn}(\hat{x}[n|n])    (72.12)

    \hat{x}_{ML}[n|n] = L_β(\hat{x}[n|n])    (72.13)

where x̂[n|n-1] = f(x̂[n-1|n-1]), the initialization is x̂[0|0] = r[0], and the function L_β(·), defined by

    L_β(x) = \begin{cases} x, & x ∈ (-1, β-1) \\ -1, & x ≤ -1 \\ β-1, & x ≥ β-1 \end{cases}    (72.14)

serves to restrict the ML estimates to the interval x ∈ (-1, β-1). (Note that (72.11) reduces to x̂[0|0] = r[0] at n = 0, consistent with this initialization.) The smoothed estimates x̂_ML[n|N] are obtained by converting the hybrid representation to the initial condition and then iterating the estimated initial condition forward.
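The recursion (72.11) through (72.14) translates directly into code. The sketch below applies it to synthetic noisy data; the equation forms follow the reconstruction above, the parameter values are illustrative, and the final smoothing step (converting the hybrid representation back to an initial condition and re-iterating) is omitted for brevity.

```python
# Sketch of the recursive ML estimator of Eqs. (72.11)-(72.14) for the
# symmetric tent map f(x) = beta - 1 - beta*|x| on synthetic data.
# Parameter choices are illustrative; the smoothing step is omitted.

import random

BETA = 2.0                     # 1 < beta <= 2

def f(x, beta=BETA):
    return beta - 1.0 - beta * abs(x)

def L(x, beta=BETA):
    """The limiter L_beta of Eq. (72.14)."""
    return max(-1.0, min(x, beta - 1.0))

# Chaotic signal plus white Gaussian noise, Eq. (72.7).
N, sigma = 40, 0.05
x = [random.uniform(-1.0, BETA - 1.0)]
for _ in range(N - 1):
    x.append(f(x[-1]))
r = [xn + random.gauss(0.0, sigma) for xn in x]

# Filtered estimates x_hat[n|n] via Eq. (72.11), initialized x_hat[0|0] = r[0].
x_hat, y_hat = r[0], [1 if r[0] >= 0 else -1]
for n in range(1, N):
    pred = f(x_hat)                        # x_hat[n|n-1]; the clipped value is
    b = BETA ** (2 * n)                    # propagated (an implementation choice)
    x_hat = ((BETA**2 - 1) * b * r[n] + (b - 1) * pred) / (BETA ** (2 * (n + 1)) - 1)
    y_hat.append(1 if x_hat >= 0 else -1)  # Eq. (72.12): symbol estimate
    x_hat = L(x_hat)                       # Eq. (72.13): confine to (-1, beta-1)

print("estimated symbols:", y_hat)
print("true symbols:     ", [1 if xn >= 0 else -1 for xn in x])
```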
72.5 Probabilistic Properties of Chaotic Maps

Almost all waveforms generated by a particular eventually expanding map have the same average behavior [18], in the sense that the time average

    \bar{h}(x[0]) = \lim_{n→∞} \frac{1}{n} \sum_{k=0}^{n-1} h(x[k]) = \lim_{n→∞} \frac{1}{n} \sum_{k=0}^{n-1} h\left( f^k(x[0]) \right)    (72.15)

exists and is essentially independent of the initial condition x[0] for sufficiently well-behaved functions h(·). This result, which is reminiscent of results from the theory of stationary stochastic processes [19], forms the basis for a probabilistic interpretation of chaotic signals, which in turn leads to analytic methods for characterizing their time-average behavior.

To explore the link between chaotic and stochastic signals, first consider the stochastic process generated by iterating (72.1) from a random initial condition x[0] with probability density function p_0(·). Denote by p_n(·) the density of the nth iterate x[n]. Although, in general, the members of the sequence p_n(·) will differ, there can exist densities, called invariant densities, that are time-invariant, i.e.,

    p_0(·) = p_1(·) = ... = p_n(·) = p(·)    (72.16)

When the initial condition x[0] is chosen randomly according to an invariant density, the resulting stochastic process is stationary [19] and its ensemble averages depend on the invariant density. Even when the initial condition is not random, invariant densities play an important role in describing the time-average behavior of chaotic signals. This role depends on, among other things, the number of invariant densities that a map possesses.

A general one-dimensional nonlinear map may possess many invariant densities. For example, eventually expanding maps with N partition elements have at least one and at most N invariant densities [20]. However, maps can often be decomposed into collections of maps, each with only one invariant density [19], and little generality is lost by concentrating on maps with only one invariant density. In this special case, the results that relate the invariant density to the average behavior of chaotic signals are more intuitive.

The invariant density, although introduced through the device of a random initial condition, can also be used to study the behavior of individual signals. Individual signals are connected to ensembles of signals, which correspond to random initial conditions, through a classical result due to Birkhoff, which asserts that the time average \bar{h}(x[0]) defined by Eq. (72.15) exists whenever f(·) has an invariant density. When f(·) has only one invariant density, the time average is independent of the initial condition for almost all (with respect to the invariant density p(·)) initial conditions and equals

    \lim_{n→∞} \frac{1}{n} \sum_{k=0}^{n-1} h(x[k]) = \lim_{n→∞} \frac{1}{n} \sum_{k=0}^{n-1} h\left( f^k(x[0]) \right) = \int h(x) p(x) \, dx    (72.17)

where the integral is performed over the domain of f(·) and where h(·) is measurable.

Birkhoff's theorem leads to a relative frequency interpretation of time averages of chaotic signals. To see this, consider the time average of the indicator function χ̃_{[s-ε,s+ε]}(x), which is zero everywhere but in the interval [s-ε, s+ε], where it is equal to unity. Using Birkhoff's theorem with Eq. (72.17) yields

    \lim_{n→∞} \frac{1}{n} \sum_{k=0}^{n-1} χ̃_{[s-ε,s+ε]}(x[k]) = \int χ̃_{[s-ε,s+ε]}(x) p(x) \, dx    (72.18)
        = \int_{s-ε}^{s+ε} p(x) \, dx    (72.19)
        ≈ 2ε \, p(s)    (72.20)

where Eq. (72.20) follows from Eq. (72.19) when ε is small and p(·) is sufficiently smooth. The time average (72.18) is exactly the fraction of time that the sequence x[n] takes values in the interval [s-ε, s+ε]. Thus, from (72.20), the value of the invariant density at any point s is approximately proportional to the relative frequency with which x[n] takes values in a small neighborhood of the point.

Motivated by this relative frequency interpretation, the probability that an arbitrary function h(x[n]) falls into an arbitrary set A can be defined by

    Pr\{h(x) ∈ A\} = \lim_{n→∞} \frac{1}{n} \sum_{k=0}^{n-1} χ̃_A(h(x[k]))    (72.21)

Using this definition of probability, it can be shown that for any Markov map, the symbol sequence y[n] defined in Section 72.3 is indistinguishable from a Markov chain in the sense that

    Pr\{y[n] \,|\, y[n-1], ..., y[0]\} = Pr\{y[n] \,|\, y[n-1]\}

holds for all n [21]. The first-order transition probabilities can be shown to be of the form

    Pr(y[n] \,|\, y[n-1]) = \frac{|V_{y[n]}|}{|s_{y[n-1]}| \, |V_{y[n-1]}|}    (72.22)

where the s_i are the slopes of the map f(·) as in Eq. (72.4) and |V_{y[n]}| denotes the length of the interval V_{y[n]}.

As an example, consider the asymmetric tent map

    f(x) = \begin{cases} x/a, & 0 ≤ x ≤ a \\ (1-x)/(1-a), & a < x ≤ 1 \end{cases}

with parameter in the range 0 < a < 1 and a quantizer g(·) of the form (72.6). The previous results establish that y[n] = g(x[n]) is equivalent to a sample sequence from the Markov chain with transition probability matrix

    P = \begin{bmatrix} a & a \\ 1-a & 1-a \end{bmatrix}

where [P]_{ij} = Pr{y[n] = i | y[n-1] = j}. Thus, the symbolic sequence appears to have been generated by independent flips of a biased coin with the probability of heads, say, equal to a. When the parameter takes the value a = 1/2, this corresponds to a sequence of independent, equally likely bits. Thus, a sequence of Bernoulli random variables can be constructed from a deterministic sequence x[n]. Based on this remarkable result, a circuit that generates statistically independent bits for cryptographic applications has been designed [4].

Some of the deeper probabilistic properties of chaotic signals depend on the integral (72.17), which in turn depends on the invariant density. For some maps, invariant densities can be determined explicitly. For example, the tent map (72.10) with β = 2 has invariant density

    p(x) = \begin{cases} 1/2, & -1 ≤ x ≤ 1 \\ 0, & \text{otherwise} \end{cases}

as can be readily verified using elementary results from the theory of derived distributions of functions of random variables [22]. More generally, all Markov maps have invariant densities that are piecewise-constant functions of the form

    p(x) = \sum_{i=1}^{N} c_i χ_i(x)    (72.23)

where the c_i are real constants that can be determined from the map's parameters [23]. This makes Markov maps especially amenable to analysis.
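The coin-flip behavior quoted above can be checked empirically. The sketch below iterates the asymmetric tent map, quantizes with (72.6), and tallies first-order transition frequencies; both conditional probabilities of the symbol 1 should approach a, consistent with (72.22). The parameter and initial condition are illustrative choices.

```python
# Sketch: empirical check that the symbol stream of the asymmetric tent map
# behaves like i.i.d. coin flips with P(y = 1) = a, per Eq. (72.22).

a = 0.7                                  # illustrative parameter, 0 < a < 1

def f(x):
    return x / a if x <= a else (1.0 - x) / (1.0 - a)

def g(x):                                # quantizer (72.6): V1 = [0, a), V2 = [a, 1)
    return 1 if x < a else 2

counts = {(i, j): 0 for i in (1, 2) for j in (1, 2)}
x, prev = 0.123456789, None
for _ in range(500_000):
    y = g(x)
    if prev is not None:
        counts[(prev, y)] += 1
    prev, x = y, f(x)

for j in (1, 2):
    total = counts[(j, 1)] + counts[(j, 2)]
    print(f"P(y[n]=1 | y[n-1]={j}) ~= {counts[(j, 1)] / total:.3f} (theory: {a})")
```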
72.6 Statistics of Markov Maps

The transition probabilities computed above may be viewed as statistics of the sequence x[n]. These statistics, which are important in a variety of applications, have the attractive property that they are defined by integrals having, for Markov maps, readily computable, closed-form solutions. This property holds more generally—Markov maps generate sequences for which a large class of statistics can be determined in closed form. These analytic solutions have two primary advantages over empirical solutions computed by time averaging: they circumvent some of the numerical problems that arise when simulating the long sequences of chaotic data that are necessary to generate reliable averages; and they often provide insight into aspects of chaotic signals, such as dependence on a parameter, that could not be easily determined by empirical averaging.

Statistics that can be readily computed include correlations of the form

    R_{f;h_0,h_1,...,h_r}[k_1, ..., k_r] = \lim_{L→∞} \frac{1}{L} \sum_{n=0}^{L-1} h_0(x[n]) h_1(x[n+k_1]) \cdots h_r(x[n+k_r])    (72.24)
        = \int h_0(x) h_1(f^{k_1}(x)) \cdots h_r(f^{k_r}(x)) p(x) \, dx    (72.25)

where the h_i(·) are suitably well-behaved but otherwise arbitrary functions, the k_i are nonnegative integers, the sequence x[n] is generated by Eq. (72.1), and p(·) is the invariant density. This class of statistics includes as important special cases the autocorrelation function and all higher-order moments of the time series.

Of primary importance in determining these statistics is a linear transformation called the Frobenius-Perron (FP) operator, which enters into the computation of these correlations in two ways. First, it suggests a method for determining an invariant density. Second, it provides a "change of variables" within the integral that leads to simple expressions for correlation statistics.

The definition of the FP operator can be motivated by using the device of a random initial condition x[0] with density p_0(x), as in Section 72.5. The FP operator describes the time evolution of this initial probability density. More precisely, it relates the initial density to the densities p_n(·) of the random variables x[n] = f^n(x[0]) through the equation

    p_n(x) = P_f^n p_0(x)    (72.26)

where P_f^n denotes the n-fold self-composition of P_f. This definition of the FP operator, although phrased in terms of its action on probability densities, can be extended to all integrable functions. This extended operator, which is also called the FP operator, is linear and continuous. Its properties are closely related to the statistical structure of signals generated by chaotic maps (see [9] for a thorough discussion of these issues). For example, the evolution equation (72.26) implies that an invariant density of a map is a fixed point of its FP operator, that is, it satisfies

    p(x) = P_f p(x)    (72.27)

This relation can be used to determine explicitly the invariant densities of Markov maps [23], which may in turn be used to compute more general statistics. Using the change of variables property of the FP operator, the correlation statistic (72.25) can be expressed as the ensemble average

    R_{f;h_0,h_1,...,h_r}[k_1, ..., k_r] = \int h_r(x) P_f^{k_r - k_{r-1}} \left\{ h_{r-1}(x) \cdots P_f^{k_2 - k_1} \left\{ h_1(x) P_f^{k_1} \{ h_0(x) p(x) \} \right\} \cdots \right\} dx    (72.29)

Although such integrals are, for general one-dimensional nonlinear maps, difficult to evaluate, closed-form solutions exist when f(·) is a Markov map—a development that depends on an explicit expression for the FP operator.

The FP operator of a Markov map has a simple, finite-dimensional matrix representation when it operates on certain piecewise polynomial functions. Any function of the form

    h(x) = \sum_{i=0}^{K} \sum_{j=1}^{N} a_{ij} x^i χ_j(x)

can be represented by an N(K+1)-dimensional coordinate vector with respect to the basis

    \{θ_1(x), θ_2(x), ..., θ_{N(K+1)}(x)\} = \{χ_1(x), ..., χ_N(x), xχ_1(x), ..., xχ_N(x), ..., x^K χ_1(x), ..., x^K χ_N(x)\}    (72.30)
The action of the FP operator on any such function can be expressed as a matrix-vector product: when the coordinate vector of h(x) is h, the coordinate vector of q(x) = P_f h(x) is

    q = P_K h

where P_K is the square N(K+1)-dimensional, block upper-triangular matrix

    P_K = \begin{bmatrix} P_{00} & P_{01} & \cdots & P_{0K} \\ 0 & P_{11} & \cdots & P_{1K} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & P_{KK} \end{bmatrix}    (72.31)

and where each nonzero N × N block is of the form

    P_{ij} = \binom{j}{i} P_0 B^{j-i} S^j,  \quad j ≥ i    (72.32)

The N × N matrices B and S are diagonal with elements B_{ii} = -b_i and S_{ii} = 1/s_i, respectively, while P_0 = P_{00} is the N × N matrix with elements

    [P_0]_{ij} = \begin{cases} 1/|s_j|, & i ∈ I_j \\ 0, & \text{otherwise} \end{cases}    (72.33)

where I_j denotes the set of indices i for which V_i is contained in f(V_j).

The invariant density of a Markov map, which is needed to compute the correlation statistic (72.25), can be determined as the solution of an eigenvector problem. It can be shown that such invariant densities are piecewise-constant functions, so that the fixed point equation (72.27) reduces to the matrix expression

    P_0 p = p

Due to the properties of the matrix P_0, this equation always has a solution that can be chosen to have nonnegative components. It follows that the correlation statistic (72.29) can always be expressed as

    R_{f;h_0,h_1,...,h_r}[k_1, ..., k_r] = g_1^T M g_2    (72.34)

where M is a basis correlation matrix with elements

    [M]_{ij} = \int θ_i(x) θ_j(x) \, dx    (72.35)

and the g_i are the coordinate vectors of the functions

    g_1(x) = h_r(x)    (72.36)
    g_2(x) = P_f^{k_r - k_{r-1}} \left\{ h_{r-1}(x) \cdots P_f^{k_2 - k_1} \left\{ h_1(x) P_f^{k_1} \{ h_0(x) p(x) \} \right\} \cdots \right\}    (72.37)

By the previous discussion, the coordinate vectors g_1 and g_2 can be determined using straightforward matrix-vector operations. Thus, expression (72.34) provides a practical way of exactly computing the integral (72.29), and reveals some important statistical structure of signals generated by Markov maps.
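These matrix formulas are straightforward to realize numerically. The following sketch builds P_0 of Eq. (72.33) for the map (72.5) and solves the fixed-point equation P_0 p = p as an eigenvector problem. Deriving the index sets I_j by direct interval arithmetic is an implementation choice for the sketch, not something prescribed by the chapter.

```python
# Sketch: the matrix P0 of Eq. (72.33) for the Markov map of Eq. (72.5),
# and the invariant density from the fixed-point equation P0 p = p.

import numpy as np

a = 8.0 / 9.0
pts = [0.0, a, 1.0]                         # partition points
slopes = [(1.0 - a) / a, -1.0 / (1.0 - a)]  # branch slopes s_i from Eq. (72.4)
images = [(a, 1.0), (0.0, 1.0)]             # the intervals f(V_1), f(V_2)

N = len(slopes)
P0 = np.zeros((N, N))
for j in range(N):
    lo, hi = images[j]
    for i in range(N):
        if lo <= pts[i] and pts[i + 1] <= hi:   # V_i contained in f(V_j)
            P0[i, j] = 1.0 / abs(slopes[j])

# Invariant density: eigenvector at eigenvalue 1, normalized to integrate to 1.
w, V = np.linalg.eig(P0)
c = np.real(V[:, np.argmax(np.real(w))])
c /= sum(ci * (pts[i + 1] - pts[i]) for i, ci in enumerate(c))
print("computed:", c)
print("theory:  ", [1 / (1 + a), 1 / (1 - a**2)])
```

The printed values match the piecewise-constant invariant density quoted for this map in the next section.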
72.7 Power Spectra of Markov Maps

An important statistic in the context of many engineering applications is the power spectrum. The power spectrum associated with a Markov map is defined as the Fourier transform of its autocorrelation sequence

    R_{xx}[k] = \int x \, f^{|k|}(x) \, p(x) \, dx    (72.38)

which, using Eq. (72.34), can be rewritten in the form

    R_{xx}[k] = g_1^T M_1 P_1^{|k|} \tilde{g}    (72.39)

where P_1 is the matrix representation of the FP operator restricted to the space of piecewise-linear functions, g_1 is the coordinate vector associated with the function x, and g̃ is the coordinate vector associated with g̃(x) = x p(x). The power spectrum is obtained from the Fourier transform of Eq. (72.39), yielding

    S_{xx}(e^{jω}) = g_1^T M_1 \left( \sum_{k=-∞}^{+∞} P_1^{|k|} e^{-jωk} \right) \tilde{g}    (72.40)

This sum can be simplified by examining the eigenvalues of the FP matrix P_1. In general, P_1 has eigenvalues whose magnitude is strictly less than unity, and others with unit magnitude [9]. Using this fact, Eq. (72.40) can be expressed in the form

    S_{xx}(e^{jω}) = g_1^T M_1 \left( I - Γ_2 e^{-jω} \right)^{-1} \left( I - Γ_2^2 \right) \left( I - Γ_2 e^{jω} \right)^{-1} \tilde{g} + \sum_{i=1}^{m} C_i δ(ω - ω_i)    (72.41)

where Γ_2 has eigenvalues that are strictly less than one in magnitude, and the C_i and ω_i depend on the unit-magnitude eigenvalues of P_1. As Eq. (72.41) reflects, the spectrum of a Markov map is a linear combination of an impulsive component and a rational function. This implies that there are classes of rational spectra that can be generated not only by the usual method of driving white noise through a linear time-invariant filter with a rational system function, but also by iterating deterministic nonlinear dynamics. For this reason it is natural to view chaotic signals corresponding to Markov maps as "chaotic ARMA (autoregressive moving-average) processes". Special cases correspond to the "chaotic white noise" described in [5] and the first-order autoregressive processes described in [24].

Consider now a simple example involving the Markov map defined in Eq. (72.5) and shown in Figure 72.1. Using the techniques described above, the invariant density is determined to be the piecewise-constant function

    p(x) = \begin{cases} 1/(1+a), & 0 ≤ x ≤ a \\ 1/(1-a^2), & a < x ≤ 1 \end{cases}

Using Eq. (72.41) and a parameter value a = 8/9, the rational part of the power spectrum associated with f(·) is determined to be

    S_{xx}(z) = -\frac{42632}{459} \cdot \frac{36z^{-1} - 145 + 36z}{(9 + 8z)(9 + 8z^{-1})(64z^2 + z + 81)(64z^{-2} + z^{-1} + 81)}    (72.42)

The power spectrum corresponding to evaluating Eq. (72.42) on the unit circle z = e^{jω} is plotted in Figure 72.2, along with an empirical spectrum computed by periodogram averaging with a window length of 128 on a time series of length 50,000. The solid line corresponds to the analytically obtained expression (72.42), while the circles represent the spectral samples estimated by periodogram averaging.

FIGURE 72.2: Comparison of analytically computed power spectrum to empirical power spectrum for the map of Figure 72.1. The solid line indicates the analytically computed spectrum, while the circles indicate the samples of the spectrum estimated by applying periodogram averaging to a time series of length 50,000.
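The empirical half of this comparison is easy to reproduce. The sketch below regenerates a length-50,000 series from the map (72.5) with a = 8/9 and forms an averaged periodogram over windows of length 128, as in Figure 72.2; the initial condition and transient handling are illustrative choices.

```python
# Sketch: empirical power spectrum of the map of Eq. (72.5) with a = 8/9 via
# periodogram averaging (window length 128, series length 50,000), echoing
# the empirical curve of Figure 72.2.

import numpy as np

a = 8.0 / 9.0

def f(x):
    return (1.0 - a) * x / a + a if x <= a else (1.0 - x) / (1.0 - a)

n_total, L = 50_000, 128
x = 0.123456789
for _ in range(1000):              # discard a transient (illustrative choice)
    x = f(x)
xs = np.empty(n_total)
for n in range(n_total):
    xs[n] = x
    x = f(x)

xs -= xs.mean()                    # remove the mean (the impulse at omega = 0)
frames = xs[: (n_total // L) * L].reshape(-1, L)
Sxx = np.mean(np.abs(np.fft.rfft(frames, axis=1)) ** 2, axis=0) / L
omega = 2 * np.pi * np.fft.rfftfreq(L)
print(np.c_[omega[:6], Sxx[:6]])   # low-frequency spectral samples
```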
72.8 Modeling Eventually Expanding Maps with Markov Maps

One approach to studying the statistics of more general eventually expanding maps involves approximation by Markov maps—the statistics of any eventually expanding map can be approximated to arbitrary accuracy by those of some Markov map. This approximation strategy provides a powerful method for analyzing chaotic time series from eventually expanding maps: first approximate the map by a Markov map, then use the previously described techniques to determine its statistics. In order for this approach to be useful, an appropriate notion of approximation quality and a constructive procedure for generating an approximating map are required.

A sequence of piecewise-linear Markov maps f̂_i(·) with statistics that converge to those of a given eventually expanding map f(·) is said to statistically converge to f(·). More formally:

DEFINITION 72.3 Let f(·) be an eventually expanding map with a unique invariant density p(·). A sequence of maps {f̂_i(·)} statistically converges to f(·) if each f̂_i(·) has a unique invariant density p_i(·) and

    R_{f̂_i;h_0,h_1,...,h_r}[k_1, ..., k_r] → R_{f;h_0,h_1,...,h_r}[k_1, ..., k_r]  \quad \text{as } i → ∞

for any continuous h_j(·) and all finite k_j and finite r.

Any eventually expanding map f(·) is the limit of a sequence of Markov maps that statistically converges to it, and such a sequence can be constructed in a straightforward manner. The idea is to define each Markov map on an increasingly fine set of partition points that includes the original partition points of f(·). Denote by Q_0 the set of partition points of f(·), and by Q_i the set of partition points of the ith map in the sequence of Markov map approximations. The sets of partition points for the increasingly fine approximations are defined recursively via

    Q_i = Q_{i-1} ∪ f^{-1}(Q_{i-1})    (72.43)

In turn, each approximating map f̂_i(·) is defined by specifying its value at the partition points Q_i by a procedure that ensures that the Markov property holds [15]. At all other points, the map f̂_i(·) is defined by linear interpolation.

Conveniently, if f(·) is an eventually expanding map in the sense of Definition 72.1, then the sequence of piecewise-linear Markov approximations f̂_i(·) obtained by the above procedure statistically converges to f(·), i.e., converges in the sense of Definition 72.3. This means that, for sufficiently large i, the statistics of f̂_i(·) are close to those of f(·). As a practical consequence, the correlation statistics of the eventually expanding map f(·) can be approximated by first determining a Markov map f̂_k(·) that is a good approximation to f(·), and then finding the statistics of the Markov map using the techniques described in Section 72.6.
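The refinement (72.43) can be carried out numerically whenever the branch preimages of f(·) are computable. The sketch below does so for a tent map of height 0.9 (slope 1.8), which is eventually expanding but not Markov, since the peak maps to 0.9, which is not a partition point; the example map and iteration depth are illustrative choices, not from the chapter.

```python
# Sketch of the partition refinement of Eq. (72.43),
#   Q_i = Q_{i-1} U f^{-1}(Q_{i-1}),
# for f(x) = 1.8 * min(x, 1 - x): eventually expanding but not Markov,
# since f(0.5) = 0.9 is not a partition point. Illustrative example only.

S = 1.8

def preimages(y, s=S):
    """Branch-by-branch preimages of y under f(x) = s * min(x, 1 - x)."""
    return [y / s, 1.0 - y / s] if 0.0 <= y <= s / 2.0 else []

Q = {0.0, 0.5, 1.0}                      # Q_0: the partition points of f
for i in range(1, 5):
    Q = Q | {p for q in Q for p in preimages(q)}
    print(f"|Q_{i}| = {len(Q)} partition points")

# A Markov approximation f_hat_i is then pinned down by assigning values on
# sorted(Q) so that partition points map to partition points [15], with
# linear interpolation in between.
```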
References

[1] Feely, O. and Chua, L.O., Nonlinear dynamics of a class of analog-to-digital converters, Intl. J. Bifurcation and Chaos in Appl. Sci. Eng., 325, June 1992.
[2] Tang, Y.S., Mees, A.I. and Chua, L.O., Synchronization and chaos, IEEE Trans. Circuits and Systems, CAS-30(9), 620-626, 1983.
[3] Deane, J.H.B. and Hamill, D.C., Chaotic behavior in a current-mode controlled DC-DC converter, Electron. Lett., 27, 1172-1173, 1991.
[4] Espejo, S., Martin, J.D. and Rodriguez-Vazquez, A., Design of an analog/digital truly random number generator, in 1990 IEEE International Symposium on Circuits and Systems, 1368-1371, 1990.
[5] Papadopoulos, H.C. and Wornell, G.W., Maximum likelihood estimation of a class of chaotic signals, IEEE Trans. Inform. Theory, 41, 312-317, Jan. 1995.
[6] Chen, B. and Wornell, G.W., Efficient channel coding for analog sources using chaotic systems, in Proc. IEEE GLOBECOM, Nov. 1996.
[7] Devaney, R., An Introduction to Chaotic Dynamical Systems, Addison-Wesley, Reading, MA, 1989.
[8] Collet, P. and Eckmann, J.P., Iterated Maps on the Interval as Dynamical Systems, Birkhauser, Boston, MA, 1980.
[9] Lasota, A. and Mackey, M., Probabilistic Properties of Deterministic Systems, Cambridge University Press, Cambridge, 1985.
[10] Richard, M.D., Estimation and Detection with Chaotic Systems, Ph.D. thesis, M.I.T., Cambridge, MA, Feb. 1994. Also RLE Tech. Rep. No. 581, Feb. 1994.
[11] Risbo, L., On the design of tone-free sigma-delta modulators, IEEE Trans. Circuits and Systems II, 42(1), 52-55, 1995.
[12] Chase, C., Serrano, J. and Ramadge, P.J., Periodicity and chaos from switched flow systems: Contrasting examples of discretely controlled continuous systems, IEEE Trans. Automat. Contr., 38, 71-83, 1993.
[13] Chua, L.O., Yao, Y. and Yang, Q., Generating randomness from chaos and constructing chaos with desired randomness, Intl. J. Circuit Theory and Applications, 18, 215-240, 1990.
[14] Natanson, I.P., Theory of Functions of a Real Variable, Frederick Ungar Publishing, New York, 1961.
[15] Isabelle, S.H., A Signal Processing Framework for the Analysis and Application of Chaos, Ph.D. thesis, M.I.T., Cambridge, MA, Feb. 1995. Also RLE Tech. Rep. No. 593, Feb. 1995.
[16] Myers, C., Kay, S. and Richard, M., Signal separation for nonlinear dynamical systems, in Proc. Intl. Conf. Acoust., Speech, Signal Processing, 1992.
[17] Kay, S. and Nagesha, V., Methods for chaotic signal estimation, IEEE Trans. Signal Processing, 43(8), 2013, 1995.
[18] Hofbauer, F. and Keller, G., Ergodic properties of invariant measures for piecewise monotonic transformations, Math. Z., 180, 119-140, 1982.
[19] Peterson, K., Ergodic Theory, Cambridge University Press, Cambridge, 1983.
[20] Lasota, A. and Yorke, J.A., On the existence of invariant measures for piecewise monotonic transformations, Trans. Am. Math. Soc., 186, 481-488, Dec. 1973.
[21] Kalman, R., Nonlinear aspects of sampled-data control systems, in Proc. Symp. Nonlinear Circuit Analysis, 273-313, Apr. 1956.
[22] Drake, A.W., Fundamentals of Applied Probability Theory, McGraw-Hill, New York, 1967.
[23] Boyarsky, A. and Scarowsky, M., On a class of transformations which have unique absolutely continuous invariant measures, Trans. Am. Math. Soc., 255, 243-262, 1979.
[24] Sakai, H. and Tokumaru, H., Autocorrelations of a certain chaos, IEEE Trans. Acoust., Speech, Signal Processing, 28(5), 588-590, 1980.
