Hindawi Publishing Corporation
Advances in Difference Equations
Volume 2007, Article ID 78160, 18 pages
doi:10.1155/2007/78160

Research Article
Exponential Stability for Impulsive BAM Neural Networks with Time-Varying Delays and Reaction-Diffusion Terms

Qiankun Song and Jinde Cao

Received 9 March 2007; Accepted 16 May 2007
Recommended by Ulrich Krause

An impulsive bidirectional associative memory neural network model with time-varying delays and reaction-diffusion terms is considered. Several sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium point of the addressed neural network are derived by M-matrix theory, analytic methods, and inequality techniques. Moreover, the exponential convergence rate index is estimated, and it depends on the system parameters. The results obtained in this paper are less restrictive than previously known criteria. Two examples are given to show the effectiveness of the obtained results.

Copyright © 2007 Q. Song and J. Cao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

The bidirectional associative memory (BAM) neural network model was first introduced by Kosko [1]. This class of neural networks has been successfully applied to pattern recognition, signal and image processing, and artificial intelligence, owing to its generalization of the single-layer auto-associative Hebbian correlation to two-layer pattern-matched heteroassociative circuits. Some of these applications require that the designed network have a unique and stable equilibrium point.

In hardware implementation, time delays occur because of the finite switching speed of the amplifiers and the communication time [2]. Time delays affect the stability of designed neural networks and may lead to complex dynamic behaviors such as periodic oscillation, bifurcation, or chaos [3]. Therefore, studying neural dynamics with the delays taken into account is essential for manufacturing high-quality neural networks. Some results concerning the dynamical behavior of BAM neural networks with delays have been reported; see, for example, [2–12] and the references therein. The circuit diagram and connection pattern implementing the delayed BAM neural networks can be found in [8].

Most widely studied and used neural networks are either continuous or discrete. Recently, a somewhat new category of neural networks has appeared that is neither purely continuous-time nor purely discrete-time; these are called impulsive neural networks. This third category displays a combination of characteristics of both continuous-time and discrete systems [13]. Impulses can make unstable systems stable, so they have been widely used in many fields such as physics, chemistry, biology, population dynamics, and industrial robotics. Some results for impulsive neural networks have been given; see, for example, [13–22] and the references therein.

It is well known that the diffusion effect cannot be avoided in neural networks when electrons move in asymmetric electromagnetic fields [23], so the activations must be regarded as varying in space as well as in time.
There have been some works devoted to the investigation of the stability of neural networks with reaction-diffusion terms, which are expressed by partial differential equations; see, for example, [23–26] and the references therein. To the best of our knowledge, few authors have studied the stability of impulsive BAM neural network models with both time-varying delays and reaction-diffusion terms.

Motivated by the above discussion, the objective of this paper is to give some sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium point for impulsive BAM neural networks with time-varying delays and reaction-diffusion terms, without assuming boundedness, monotonicity, or differentiability of the activation functions. Our methods, which do not make use of a Lyapunov functional, are simple and valid for the stability analysis of impulsive BAM neural networks with time-varying or constant delays.

2. Model description and preliminaries

In this paper, we consider the following model:
\[
\begin{aligned}
\frac{\partial u_i(t,x)}{\partial t} &= \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(D_{ik}\frac{\partial u_i(t,x)}{\partial x_k}\Bigr)-a_i u_i(t,x)+\sum_{j=1}^{m}c_{ij}f_j\bigl(v_j(t-\tau_{ij}(t),x)\bigr)+\alpha_i, && t\neq t_k,\ i=1,\dots,n,\\
\Delta u_i(t_k,x) &= I_k\bigl(u_i(t_k,x)\bigr), && i=1,\dots,n,\ k=1,2,\dots,\\
\frac{\partial v_j(t,x)}{\partial t} &= \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(D^{*}_{jk}\frac{\partial v_j(t,x)}{\partial x_k}\Bigr)-b_j v_j(t,x)+\sum_{i=1}^{n}d_{ji}g_i\bigl(u_i(t-\sigma_{ji}(t),x)\bigr)+\beta_j, && t\neq t_k,\ j=1,\dots,m,\\
\Delta v_j(t_k,x) &= J_k\bigl(v_j(t_k,x)\bigr), && j=1,\dots,m,\ k=1,2,\dots,
\end{aligned}
\tag{2.1}
\]
for $t>0$, where $x=(x_1,x_2,\dots,x_l)^T\in\Omega\subset\mathbb{R}^l$, $\Omega$ is a bounded compact set with smooth boundary $\partial\Omega$ and $\operatorname{mes}\Omega>0$ in space $\mathbb{R}^l$; $u=(u_1,u_2,\dots,u_n)^T\in\mathbb{R}^n$ and $v=(v_1,v_2,\dots,v_m)^T\in\mathbb{R}^m$; $u_i(t,x)$ and $v_j(t,x)$ are the states of the $i$th neuron from the neural field $F_U$ and the $j$th neuron from the neural field $F_V$ at time $t$ and in space $x$, respectively; $f_j$ and $g_i$ denote the activation functions of the $j$th neuron from $F_V$ and the $i$th neuron from $F_U$, respectively; $\alpha_i$ and $\beta_j$ are constants denoting the external inputs on the $i$th neuron from $F_U$ and the $j$th neuron from $F_V$, respectively; $\tau_{ij}(t)$ and $\sigma_{ji}(t)$ correspond to the transmission delays and satisfy $0\le\tau_{ij}(t)\le\tau_{ij}$ and $0\le\sigma_{ji}(t)\le\sigma_{ji}$ ($\tau_{ij}$ and $\sigma_{ji}$ are constants); $a_i$ and $b_j$ are positive constants denoting the rates with which the $i$th neuron from $F_U$ and the $j$th neuron from $F_V$ reset their potentials to the resting state in isolation when disconnected from the network and the external inputs, respectively; $c_{ij}$ and $d_{ji}$ are constants denoting the connection strengths; the smooth functions $D_{ik}=D_{ik}(t,x)\ge 0$ and $D^{*}_{jk}=D^{*}_{jk}(t,x)\ge 0$ correspond to the transmission diffusion operators along the $i$th neuron from $F_U$ and the $j$th neuron from $F_V$, respectively. $\Delta u_i(t_k,x)=u_i(t_k^{+},x)-u_i(t_k^{-},x)$ and $\Delta v_j(t_k,x)=v_j(t_k^{+},x)-v_j(t_k^{-},x)$ are the impulses at moments $t_k$ and in space $x$, and $t_1<t_2<\cdots$ is a strictly increasing sequence such that $\lim_{k\to\infty}t_k=+\infty$.

The boundary conditions and initial conditions are given by
\[
\frac{\partial u_i}{\partial n}:=\Bigl(\frac{\partial u_i}{\partial x_1},\frac{\partial u_i}{\partial x_2},\dots,\frac{\partial u_i}{\partial x_l}\Bigr)^T=0,\quad i=1,2,\dots,n,\qquad
\frac{\partial v_j}{\partial n}:=\Bigl(\frac{\partial v_j}{\partial x_1},\frac{\partial v_j}{\partial x_2},\dots,\frac{\partial v_j}{\partial x_l}\Bigr)^T=0,\quad j=1,2,\dots,m,
\tag{2.2}
\]
\[
\begin{aligned}
u_i(s,x)&=\phi_{u_i}(s,x), & s&\in[-\sigma,0], & \sigma&=\max_{1\le i\le n,\,1\le j\le m}\{\sigma_{ji}\}, & i&=1,2,\dots,n,\\
v_j(s,x)&=\phi_{v_j}(s,x), & s&\in[-\tau,0], & \tau&=\max_{1\le i\le n,\,1\le j\le m}\{\tau_{ij}\}, & j&=1,2,\dots,m,
\end{aligned}
\tag{2.3}
\]
where $\phi_{u_i}(s,x)$ and $\phi_{v_j}(s,x)$ ($i=1,2,\dots,n$, $j=1,2,\dots,m$) denote real-valued continuous functions defined on $[-\sigma,0]\times\Omega$ and $[-\tau,0]\times\Omega$, respectively.
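To make the model concrete, the following Python sketch (our own illustration; every parameter value, the impulse schedule, and the function names are assumptions, not taken from the paper) integrates a two-neuron instance of (2.1) by the forward Euler method for spatially homogeneous solutions, so the diffusion terms vanish. The delays are constant, the external inputs are zero, and the activations are tanh, so the origin is an equilibrium; the impulses use the linear form $\Delta u_i(t_k)=-\gamma\bigl(u_i(t_k)-u_i^{*}\bigr)$ that appears later in Theorem 4.1.

```python
import numpy as np

def simulate_bam(T=20.0, h=1e-3):
    """Forward Euler integration of a spatially homogeneous two-neuron
    instance of model (2.1) with constant delays and linear impulses.
    All numbers below are illustrative choices, not the paper's examples."""
    a = np.array([3.0, 3.0]); b = np.array([3.0, 3.0])   # decay rates a_i, b_j
    C = np.array([[0.5, -0.5], [0.5, 0.5]])              # connection strengths c_ij
    D = np.array([[0.5, 0.5], [-0.5, 0.5]])              # connection strengths d_ji
    tau, sigma = 1.0, 1.0                                # constant delays
    f = g = np.tanh                                      # activations, F_j = G_i = 1
    gamma = 0.5                                          # impulse gain, 0 < gamma < 2
    impulse_times = np.arange(2.0, T, 2.0)               # impulse moments t_k

    steps = int(T / h)
    d_tau, d_sigma = int(tau / h), int(sigma / h)
    imp_steps = {int(round(t / h)) for t in impulse_times}

    u = np.zeros((steps + 1, 2)); v = np.zeros((steps + 1, 2))
    u[0] = [1.5, -0.8]; v[0] = [-1.0, 0.6]               # constant initial history

    for k in range(steps):
        v_delayed = v[max(k - d_tau, 0)]                 # v_j(t - tau_ij(t))
        u_delayed = u[max(k - d_sigma, 0)]               # u_i(t - sigma_ji(t))
        u[k + 1] = u[k] + h * (-a * u[k] + C @ f(v_delayed))
        v[k + 1] = v[k] + h * (-b * v[k] + D @ g(u_delayed))
        if k + 1 in imp_steps:
            # impulse Delta u_i(t_k) = -gamma (u_i(t_k) - u_i*), with u* = v* = 0 here
            u[k + 1] *= (1.0 - gamma)
            v[k + 1] *= (1.0 - gamma)
    return u, v

if __name__ == "__main__":
    u, v = simulate_bam()
    print(np.abs(u[-1]).max(), np.abs(v[-1]).max())      # both decay toward 0
```

With these values the test matrix $W$ of Theorem 3.1 (taking $r=2$ and all exponents equal to 1) is strictly diagonally dominant and hence an M-matrix, so the theory predicts exponential convergence, and the simulated trajectories indeed decay toward the origin.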
Since the solution $(u_1(t,x),\dots,u_n(t,x),v_1(t,x),\dots,v_m(t,x))^T$ of model (2.1) is discontinuous at the points $t_k$, by the theory of impulsive differential equations we assume that
\[
\bigl(u_1(t_k,x),\dots,u_n(t_k,x),v_1(t_k,x),\dots,v_m(t_k,x)\bigr)^T\equiv\bigl(u_1(t_k-0,x),\dots,u_n(t_k-0,x),v_1(t_k-0,x),\dots,v_m(t_k-0,x)\bigr)^T.
\]
It is clear that, in general, the partial derivatives $\partial u_i(t_k,x)/\partial t$ and $\partial v_j(t_k,x)/\partial t$ do not exist. On the other hand, according to the first and third equations of model (2.1), the one-sided limits $\partial u_i(t_k\mp 0,x)/\partial t$ and $\partial v_j(t_k\mp 0,x)/\partial t$ exist. Following the above convention, we set $\partial u_i(t_k,x)/\partial t=\partial u_i(t_k-0,x)/\partial t$ and $\partial v_j(t_k,x)/\partial t=\partial v_j(t_k-0,x)/\partial t$.

Throughout this paper, we make the following assumption.

(H) There exist two positive diagonal matrices $G=\operatorname{diag}(G_1,G_2,\dots,G_n)$ and $F=\operatorname{diag}(F_1,F_2,\dots,F_m)$ such that
\[
\bigl|g_i(u_1)-g_i(u_2)\bigr|\le G_i\bigl|u_1-u_2\bigr|,\qquad \bigl|f_j(v_1)-f_j(v_2)\bigr|\le F_j\bigl|v_1-v_2\bigr|
\tag{2.4}
\]
for all $u_1,u_2,v_1,v_2\in\mathbb{R}$, $i=1,2,\dots,n$, $j=1,2,\dots,m$.

For convenience, we introduce two notations. For any $u(t,x)=(u_1(t,x),u_2(t,x),\dots,u_k(t,x))^T\in\mathbb{R}^k$, define
\[
\bigl\|u_i(t,x)\bigr\|_2=\Bigl(\int_\Omega\bigl|u_i(t,x)\bigr|^2\,dx\Bigr)^{1/2},\quad i=1,2,\dots,k.
\tag{2.5}
\]
For any $u(t)=(u_1(t),u_2(t),\dots,u_k(t))^T\in\mathbb{R}^k$, define $\|u(t)\|=\bigl[\sum_{i=1}^{k}|u_i(t)|^r\bigr]^{1/r}$, $r>1$.

Definition 2.1. A constant vector $(u_1^{*},\dots,u_n^{*},v_1^{*},\dots,v_m^{*})^T$ is said to be an equilibrium point of model (2.1) if
\[
\begin{aligned}
-a_i u_i^{*}+\sum_{j=1}^{m}c_{ij}f_j\bigl(v_j^{*}\bigr)+\alpha_i&=0,\quad i=1,2,\dots,n, &\qquad I_k\bigl(u_i^{*}\bigr)&=0,\quad i=1,2,\dots,n,\ k\in\mathbb{Z}_+,\\
-b_j v_j^{*}+\sum_{i=1}^{n}d_{ji}g_i\bigl(u_i^{*}\bigr)+\beta_j&=0,\quad j=1,2,\dots,m, &\qquad J_k\bigl(v_j^{*}\bigr)&=0,\quad j=1,2,\dots,m,\ k\in\mathbb{Z}_+,
\end{aligned}
\tag{2.6}
\]
where $\mathbb{Z}_+$ denotes the set of all positive integers.

Definition 2.2 (see [3]). A real matrix $A=(a_{ij})_{n\times n}$ is said to be an M-matrix if $a_{ij}\le 0$ ($i,j=1,2,\dots,n$, $i\neq j$) and all successive principal minors of $A$ are positive.

Definition 2.3 (see [27]). A map $H:\mathbb{R}^n\to\mathbb{R}^n$ is a homeomorphism of $\mathbb{R}^n$ onto itself if $H\in C^0$, $H$ is one-to-one, $H$ is onto, and the inverse map $H^{-1}\in C^0$.

To prove our results, the following four lemmas are necessary.

Lemma 2.4 (see [3]). Let $Q$ be an $n\times n$ matrix with nonpositive off-diagonal elements. Then $Q$ is an M-matrix if and only if one of the following conditions holds:
(i) there exists a vector $\xi>0$ such that $Q\xi>0$;
(ii) there exists a vector $\xi>0$ such that $\xi^T Q>0$.

Lemma 2.5 (see [27]). If $H(x)\in C^0$ satisfies the following conditions:
(i) $H(x)$ is injective on $\mathbb{R}^n$,
(ii) $\|H(x)\|\to+\infty$ as $\|x\|\to+\infty$,
then $H(x)$ is a homeomorphism of $\mathbb{R}^n$.

Lemma 2.6 (see [28]). Let $a,b\ge 0$ and $p>1$. Then
\[
a^{p-1}b\le\frac{p-1}{p}a^{p}+\frac{1}{p}b^{p}.
\tag{2.7}
\]

Lemma 2.7 (see [29]) ($C_p$ inequality). Let $a\ge 0$, $b\ge 0$, $p>1$. Then
\[
(a+b)^{1/p}\le a^{1/p}+b^{1/p}.
\tag{2.8}
\]
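Definition 2.2 and Lemma 2.4 are straightforward to check numerically for a concrete matrix. The sketch below (our own illustration, not part of the paper; the function names are ours) tests the sign pattern and the leading principal minors, and also produces a positive vector $\xi$ with $Q\xi>0$ as in Lemma 2.4(i), using the standard fact that a nonsingular M-matrix has a nonnegative inverse.

```python
import numpy as np

def is_m_matrix(Q, tol=1e-12):
    """Definition 2.2: nonpositive off-diagonal entries and
    positive successive (leading) principal minors."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    off_diag = Q - np.diag(np.diag(Q))
    if np.any(off_diag > tol):                      # off-diagonal entries must be <= 0
        return False
    return all(np.linalg.det(Q[:k, :k]) > tol for k in range(1, n + 1))

def m_matrix_certificate(Q, tol=1e-12):
    """Lemma 2.4(i): return xi > 0 with Q xi > 0, or None.
    For a nonsingular M-matrix, Q^{-1} >= 0, so xi = Q^{-1} 1 works."""
    xi = np.linalg.solve(Q, np.ones(Q.shape[0]))
    ok = np.all(xi > tol) and np.all(Q @ xi > tol)
    return xi if ok else None

if __name__ == "__main__":
    Q = np.array([[ 3.0, -1.0],
                  [-2.0,  2.0]])
    print(is_m_matrix(Q))                           # True
    print(m_matrix_certificate(Q))                  # e.g. [0.75 1.25]
```

Both tests are reused below: Theorem 3.1 reduces the existence and uniqueness question to checking that a specific matrix $W$ built from the network data is an M-matrix.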
3. Existence and uniqueness of equilibria

Theorem 3.1. Under assumption (H), if there exist real constants $\alpha_{ij}$, $\beta_{ij}$, $\alpha^{*}_{ji}$, $\beta^{*}_{ji}$ ($i=1,2,\dots,n$, $j=1,2,\dots,m$) and $r>1$ such that
\[
W=\begin{pmatrix} A-\widehat{C} & -C^{*}\\ -D^{*} & B-\widehat{D}\end{pmatrix}
\tag{3.1}
\]
is an M-matrix, and
\[
I_k\bigl(u_i^{*}\bigr)=0,\quad i=1,2,\dots,n,\ k\in\mathbb{Z}_+,\qquad
J_k\bigl(v_j^{*}\bigr)=0,\quad j=1,2,\dots,m,\ k\in\mathbb{Z}_+,
\tag{3.2}
\]
then model (2.1) has a unique equilibrium point $(u_1^{*},\dots,u_n^{*},v_1^{*},\dots,v_m^{*})^T$, where
\[
A=\operatorname{diag}\bigl(a_1,a_2,\dots,a_n\bigr),\qquad
\widehat{C}=\operatorname{diag}\bigl(\widehat{c}_1,\dots,\widehat{c}_n\bigr)\ \text{with}\ \widehat{c}_i=\sum_{j=1}^{m}\frac{r-1}{r}\bigl|c_{ij}\bigr|^{(r-\alpha_{ij})/(r-1)}F_j^{(r-\beta_{ij})/(r-1)},
\]
\[
B=\operatorname{diag}\bigl(b_1,b_2,\dots,b_m\bigr),\qquad
\widehat{D}=\operatorname{diag}\bigl(\widehat{d}_1,\dots,\widehat{d}_m\bigr)\ \text{with}\ \widehat{d}_j=\sum_{i=1}^{n}\frac{r-1}{r}\bigl|d_{ji}\bigr|^{(r-\alpha^{*}_{ji})/(r-1)}G_i^{(r-\beta^{*}_{ji})/(r-1)},
\]
\[
C^{*}=\bigl(c^{*}_{ij}\bigr)_{n\times m}\ \text{with}\ c^{*}_{ij}=\frac{1}{r}\bigl|c_{ij}\bigr|^{\alpha_{ij}}F_j^{\beta_{ij}},\qquad
D^{*}=\bigl(d^{*}_{ji}\bigr)_{m\times n}\ \text{with}\ d^{*}_{ji}=\frac{1}{r}\bigl|d_{ji}\bigr|^{\alpha^{*}_{ji}}G_i^{\beta^{*}_{ji}}.
\tag{3.3}
\]
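Before the proof, the following sketch (ours, not the authors') shows how the test matrix $W$ of (3.1)–(3.3) can be assembled from the network data. The defaults take the common choice $r=2$ with all exponents $\alpha_{ij}=\beta_{ij}=\alpha^{*}_{ji}=\beta^{*}_{ji}=1$, under which $\widehat{c}_i=\tfrac{1}{2}\sum_{j}|c_{ij}|F_j$ and $c^{*}_{ij}=\tfrac{1}{2}|c_{ij}|F_j$ (and similarly for the $d$-terms); the resulting $W$ can be fed to the `is_m_matrix` test sketched after Lemma 2.7.

```python
import numpy as np

def build_W(a, b, C, D, F, G, r=2.0,
            alpha=None, beta=None, alpha_star=None, beta_star=None):
    """Assemble the (n+m)x(n+m) matrix W of Theorem 3.1.
    a, b: decay rates; C = (c_ij) is n x m, D = (d_ji) is m x n;
    F, G: Lipschitz constants from assumption (H)."""
    n, m = len(a), len(b)
    alpha = np.ones((n, m)) if alpha is None else np.asarray(alpha)
    beta = np.ones((n, m)) if beta is None else np.asarray(beta)
    alpha_star = np.ones((m, n)) if alpha_star is None else np.asarray(alpha_star)
    beta_star = np.ones((m, n)) if beta_star is None else np.asarray(beta_star)
    absC, absD = np.abs(C), np.abs(D)

    # c_hat_i (diagonal of C_hat) and C* = (c*_ij), as in (3.3)
    c_hat = np.array([sum((r - 1) / r
                          * absC[i, j] ** ((r - alpha[i, j]) / (r - 1))
                          * F[j] ** ((r - beta[i, j]) / (r - 1))
                          for j in range(m)) for i in range(n)])
    C_star = np.array([[absC[i, j] ** alpha[i, j] * F[j] ** beta[i, j] / r
                        for j in range(m)] for i in range(n)])

    # d_hat_j (diagonal of D_hat) and D* = (d*_ji), as in (3.3)
    d_hat = np.array([sum((r - 1) / r
                          * absD[j, i] ** ((r - alpha_star[j, i]) / (r - 1))
                          * G[i] ** ((r - beta_star[j, i]) / (r - 1))
                          for i in range(n)) for j in range(m)])
    D_star = np.array([[absD[j, i] ** alpha_star[j, i] * G[i] ** beta_star[j, i] / r
                        for i in range(n)] for j in range(m)])

    return np.block([[np.diag(a) - np.diag(c_hat), -C_star],
                     [-D_star, np.diag(b) - np.diag(d_hat)]])
```

Because the theorem lets the exponents vary entry by entry, choices other than the symmetric default may verify the M-matrix condition in cases where the default fails.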
Proof. Define the following map associated with model (2.1):
\[
H(x,y)=\begin{pmatrix}-A & 0\\ 0 & -B\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}
+\begin{pmatrix}0 & C\\ D & 0\end{pmatrix}\begin{pmatrix}g(x)\\ f(y)\end{pmatrix}
+\begin{pmatrix}\alpha\\ \beta\end{pmatrix},
\tag{3.4}
\]
where
\[
C=\bigl(c_{ij}\bigr)_{n\times m},\quad D=\bigl(d_{ji}\bigr)_{m\times n},\quad
g(x)=\bigl(g_1(x_1),g_2(x_2),\dots,g_n(x_n)\bigr)^T,\quad
f(y)=\bigl(f_1(y_1),f_2(y_2),\dots,f_m(y_m)\bigr)^T,\quad
\alpha=\bigl(\alpha_1,\alpha_2,\dots,\alpha_n\bigr)^T,\quad
\beta=\bigl(\beta_1,\beta_2,\dots,\beta_m\bigr)^T.
\tag{3.5}
\]
In the following, we prove that $H(x,y)$ is a homeomorphism of $\mathbb{R}^{n+m}$.

First, we prove that $H(x,y)$ is an injective map on $\mathbb{R}^{n+m}$. In fact, if there exist $(x,y)^T,(\bar{x},\bar{y})^T\in\mathbb{R}^{n+m}$ with $(x,y)^T\neq(\bar{x},\bar{y})^T$ such that $H(x,y)=H(\bar{x},\bar{y})$, then
\[
a_i\bigl(x_i-\bar{x}_i\bigr)=\sum_{j=1}^{m}c_{ij}\bigl(f_j(y_j)-f_j(\bar{y}_j)\bigr),\quad i=1,2,\dots,n,
\tag{3.6}
\]
\[
b_j\bigl(y_j-\bar{y}_j\bigr)=\sum_{i=1}^{n}d_{ji}\bigl(g_i(x_i)-g_i(\bar{x}_i)\bigr),\quad j=1,2,\dots,m.
\tag{3.7}
\]
Multiplying both sides of (3.6) by $|x_i-\bar{x}_i|^{r-1}$, it follows from assumption (H) and Lemma 2.6 that
\[
\begin{aligned}
a_i\bigl|x_i-\bar{x}_i\bigr|^{r}
&\le\sum_{j=1}^{m}\bigl|c_{ij}\bigr|F_j\bigl|x_i-\bar{x}_i\bigr|^{r-1}\bigl|y_j-\bar{y}_j\bigr|\\
&\le\sum_{j=1}^{m}\frac{r-1}{r}\bigl|c_{ij}\bigr|^{(r-\alpha_{ij})/(r-1)}F_j^{(r-\beta_{ij})/(r-1)}\bigl|x_i-\bar{x}_i\bigr|^{r}
+\frac{1}{r}\sum_{j=1}^{m}\bigl|c_{ij}\bigr|^{\alpha_{ij}}F_j^{\beta_{ij}}\bigl|y_j-\bar{y}_j\bigr|^{r}.
\end{aligned}
\tag{3.8}
\]
Similarly, we have
\[
b_j\bigl|y_j-\bar{y}_j\bigr|^{r}
\le\sum_{i=1}^{n}\frac{r-1}{r}\bigl|d_{ji}\bigr|^{(r-\alpha^{*}_{ji})/(r-1)}G_i^{(r-\beta^{*}_{ji})/(r-1)}\bigl|y_j-\bar{y}_j\bigr|^{r}
+\frac{1}{r}\sum_{i=1}^{n}\bigl|d_{ji}\bigr|^{\alpha^{*}_{ji}}G_i^{\beta^{*}_{ji}}\bigl|x_i-\bar{x}_i\bigr|^{r}.
\tag{3.9}
\]
From (3.8) and (3.9) we get
\[
W\bigl(\bigl|x_1-\bar{x}_1\bigr|^{r},\dots,\bigl|x_n-\bar{x}_n\bigr|^{r},\bigl|y_1-\bar{y}_1\bigr|^{r},\dots,\bigl|y_m-\bar{y}_m\bigr|^{r}\bigr)^T\le 0.
\tag{3.10}
\]
Since $W$ is an M-matrix, we get $x_i=\bar{x}_i$, $y_j=\bar{y}_j$, $i=1,2,\dots,n$, $j=1,2,\dots,m$, which is a contradiction. So $H(x,y)$ is an injective map on $\mathbb{R}^{n+m}$.

Second, we prove that $\|H(x,y)\|\to+\infty$ as $\|(x,y)^T\|\to+\infty$. Since $W$ is an M-matrix, from Lemma 2.4 we know that there exists a vector $\gamma=(\lambda_1,\dots,\lambda_n,\lambda_{n+1},\dots,\lambda_{n+m})^T>0$ such that $\gamma^T W>0$, that is,
\[
\lambda_i\bigl(a_i-\widehat{c}_i\bigr)-\sum_{j=1}^{m}\lambda_{n+j}d^{*}_{ji}>0,\quad i=1,2,\dots,n,\qquad
\lambda_{n+j}\bigl(b_j-\widehat{d}_j\bigr)-\sum_{i=1}^{n}\lambda_i c^{*}_{ij}>0,\quad j=1,2,\dots,m.
\tag{3.11}
\]
We can choose a small number $\delta>0$ such that
\[
\lambda_i\bigl(a_i-\widehat{c}_i\bigr)-\sum_{j=1}^{m}\lambda_{n+j}d^{*}_{ji}\ge\delta>0,\quad i=1,2,\dots,n,\qquad
\lambda_{n+j}\bigl(b_j-\widehat{d}_j\bigr)-\sum_{i=1}^{n}\lambda_i c^{*}_{ij}\ge\delta>0,\quad j=1,2,\dots,m.
\tag{3.12}
\]
Let $\widetilde{H}(x,y)=H(x,y)-H(0,0)$, and let $\operatorname{sgn}(\theta)$ be the signum function, equal to $1$ if $\theta>0$, $0$ if $\theta=0$, and $-1$ if $\theta<0$.

From assumption (H), Lemma 2.6, and (3.12) we have
\[
\begin{aligned}
&\sum_{i=1}^{n}\lambda_i\bigl|x_i\bigr|^{r-1}\operatorname{sgn}\bigl(x_i\bigr)\widetilde{H}_i(x,y)
+\sum_{j=1}^{m}\lambda_{n+j}\bigl|y_j\bigr|^{r-1}\operatorname{sgn}\bigl(y_j\bigr)\widetilde{H}_{n+j}(x,y)\\
&\quad\le-\sum_{i=1}^{n}\lambda_i a_i\bigl|x_i\bigr|^{r}
+\sum_{i=1}^{n}\lambda_i\sum_{j=1}^{m}\bigl|c_{ij}\bigr|F_j\bigl|y_j\bigr|\,\bigl|x_i\bigr|^{r-1}
-\sum_{j=1}^{m}\lambda_{n+j}b_j\bigl|y_j\bigr|^{r}
+\sum_{j=1}^{m}\lambda_{n+j}\sum_{i=1}^{n}\bigl|d_{ji}\bigr|G_i\bigl|x_i\bigr|\,\bigl|y_j\bigr|^{r-1}\\
&\quad\le\sum_{i=1}^{n}\lambda_i\Bigl[\Bigl(-a_i+\sum_{j=1}^{m}\frac{r-1}{r}\bigl|c_{ij}\bigr|^{(r-\alpha_{ij})/(r-1)}F_j^{(r-\beta_{ij})/(r-1)}\Bigr)\bigl|x_i\bigr|^{r}
+\sum_{j=1}^{m}\frac{1}{r}\bigl|c_{ij}\bigr|^{\alpha_{ij}}F_j^{\beta_{ij}}\bigl|y_j\bigr|^{r}\Bigr]\\
&\qquad+\sum_{j=1}^{m}\lambda_{n+j}\Bigl[\Bigl(-b_j+\sum_{i=1}^{n}\frac{r-1}{r}\bigl|d_{ji}\bigr|^{(r-\alpha^{*}_{ji})/(r-1)}G_i^{(r-\beta^{*}_{ji})/(r-1)}\Bigr)\bigl|y_j\bigr|^{r}
+\sum_{i=1}^{n}\frac{1}{r}\bigl|d_{ji}\bigr|^{\alpha^{*}_{ji}}G_i^{\beta^{*}_{ji}}\bigl|x_i\bigr|^{r}\Bigr]\\
&\quad=-\sum_{i=1}^{n}\Bigl[\lambda_i\bigl(a_i-\widehat{c}_i\bigr)-\sum_{j=1}^{m}\lambda_{n+j}d^{*}_{ji}\Bigr]\bigl|x_i\bigr|^{r}
-\sum_{j=1}^{m}\Bigl[\lambda_{n+j}\bigl(b_j-\widehat{d}_j\bigr)-\sum_{i=1}^{n}\lambda_i c^{*}_{ij}\Bigr]\bigl|y_j\bigr|^{r}\\
&\quad\le-\delta\bigl\|(x,y)^T\bigr\|^{r}.
\end{aligned}
\tag{3.13}
\]
From (3.13) we have
\[
\delta\bigl\|(x,y)^T\bigr\|^{r}
\le-\Bigl(\sum_{i=1}^{n}\lambda_i\bigl|x_i\bigr|^{r-1}\operatorname{sgn}\bigl(x_i\bigr)\widetilde{H}_i(x,y)
+\sum_{j=1}^{m}\lambda_{n+j}\bigl|y_j\bigr|^{r-1}\operatorname{sgn}\bigl(y_j\bigr)\widetilde{H}_{n+j}(x,y)\Bigr)
\le\max_{1\le i\le n+m}\bigl\{\lambda_i\bigr\}\Bigl(\sum_{i=1}^{n}\bigl|x_i\bigr|^{r-1}\bigl|\widetilde{H}_i(x,y)\bigr|
+\sum_{j=1}^{m}\bigl|y_j\bigr|^{r-1}\bigl|\widetilde{H}_{n+j}(x,y)\bigr|\Bigr).
\tag{3.14}
\]
By using the Hölder inequality we get
\[
\bigl\|(x,y)^T\bigr\|^{r}
\le\frac{\max_{1\le i\le n+m}\{\lambda_i\}}{\delta}
\Bigl(\sum_{i=1}^{n}\bigl|x_i\bigr|^{r}+\sum_{j=1}^{m}\bigl|y_j\bigr|^{r}\Bigr)^{(r-1)/r}
\Bigl(\sum_{i=1}^{n}\bigl|\widetilde{H}_i(x,y)\bigr|^{r}+\sum_{j=1}^{m}\bigl|\widetilde{H}_{n+j}(x,y)\bigr|^{r}\Bigr)^{1/r},
\tag{3.15}
\]
that is,
\[
\bigl\|(x,y)^T\bigr\|\le\frac{\max_{1\le i\le n+m}\{\lambda_i\}}{\delta}\bigl\|\widetilde{H}(x,y)\bigr\|.
\tag{3.16}
\]
Therefore, $\|\widetilde{H}(x,y)\|\to+\infty$ as $\|(x,y)^T\|\to+\infty$, which directly implies that $\|H(x,y)\|\to+\infty$ as $\|(x,y)^T\|\to+\infty$. From Lemma 2.5 we know that $H(x,y)$ is a homeomorphism on $\mathbb{R}^{n+m}$. Thus, the system of equations
\[
-a_i u_i+\sum_{j=1}^{m}c_{ij}f_j\bigl(v_j\bigr)+\alpha_i=0,\quad i=1,2,\dots,n,\qquad
-b_j v_j+\sum_{i=1}^{n}d_{ji}g_i\bigl(u_i\bigr)+\beta_j=0,\quad j=1,2,\dots,m
\tag{3.17}
\]
has a unique solution $(u_1^{*},\dots,u_n^{*},v_1^{*},\dots,v_m^{*})^T$, which is the unique equilibrium point of model (2.1). The proof is completed.

4. Global exponential stability

Theorem 4.1. Under assumption (H), if $W$ in Theorem 3.1 is an M-matrix, and $I_k(u_i(t_k,x))$ and $J_k(v_j(t_k,x))$ satisfy
\[
\begin{aligned}
I_k\bigl(u_i(t_k,x)\bigr)&=-\gamma_{ik}\bigl(u_i(t_k,x)-u_i^{*}\bigr), &\quad 0<\gamma_{ik}<2,\ i&=1,2,\dots,n,\ k\in\mathbb{Z}_+,\\
J_k\bigl(v_j(t_k,x)\bigr)&=-\delta_{jk}\bigl(v_j(t_k,x)-v_j^{*}\bigr), &\quad 0<\delta_{jk}<2,\ j&=1,2,\dots,m,\ k\in\mathbb{Z}_+,
\end{aligned}
\tag{4.1}
\]
then model (2.1) has a unique equilibrium point $(u_1^{*},\dots,u_n^{*},v_1^{*},\dots,v_m^{*})^T$, which is globally exponentially stable.

Proof. From (4.1) we know that $I_k(u_i^{*})=0$ and $J_k(v_j^{*})=0$ ($i=1,2,\dots,n$, $j=1,2,\dots,m$, $k\in\mathbb{Z}_+$), so the existence and uniqueness of the equilibrium point of (2.1) follow from Theorem 3.1.

Let $(u_1(t,x),\dots,u_n(t,x),v_1(t,x),\dots,v_m(t,x))^T$ be any solution of model (2.1). Then
\[
\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial t}
=\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(D_{ik}\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial x_k}\Bigr)
-a_i\bigl(u_i(t,x)-u_i^{*}\bigr)
+\sum_{j=1}^{m}c_{ij}\Bigl(f_j\bigl(v_j(t-\tau_{ij}(t),x)\bigr)-f_j\bigl(v_j^{*}\bigr)\Bigr),
\quad t>0,\ t\neq t_k,\ i=1,\dots,n,\ k\in\mathbb{Z}_+,
\tag{4.2}
\]
\[
\frac{\partial\bigl(v_j(t,x)-v_j^{*}\bigr)}{\partial t}
=\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(D^{*}_{jk}\frac{\partial\bigl(v_j(t,x)-v_j^{*}\bigr)}{\partial x_k}\Bigr)
-b_j\bigl(v_j(t,x)-v_j^{*}\bigr)
+\sum_{i=1}^{n}d_{ji}\Bigl(g_i\bigl(u_i(t-\sigma_{ji}(t),x)\bigr)-g_i\bigl(u_i^{*}\bigr)\Bigr),
\quad t>0,\ t\neq t_k,\ j=1,\dots,m,\ k\in\mathbb{Z}_+.
\tag{4.3}
\]
Multiplying both sides of (4.2) by $u_i(t,x)-u_i^{*}$ and integrating over $\Omega$, we have
\[
\frac{1}{2}\frac{d}{dt}\int_\Omega\bigl(u_i(t,x)-u_i^{*}\bigr)^{2}dx
=\sum_{k=1}^{l}\int_\Omega\bigl(u_i(t,x)-u_i^{*}\bigr)\frac{\partial}{\partial x_k}\Bigl(D_{ik}\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial x_k}\Bigr)dx
-a_i\int_\Omega\bigl(u_i(t,x)-u_i^{*}\bigr)^{2}dx
+\sum_{j=1}^{m}c_{ij}\int_\Omega\bigl(u_i(t,x)-u_i^{*}\bigr)\Bigl(f_j\bigl(v_j(t-\tau_{ij}(t),x)\bigr)-f_j\bigl(v_j^{*}\bigr)\Bigr)dx.
\tag{4.4}
\]
From the boundary condition (2.2) and the proof of [22, Theorem 1] we get
\[
\sum_{k=1}^{l}\int_\Omega\bigl(u_i(t,x)-u_i^{*}\bigr)\frac{\partial}{\partial x_k}\Bigl(D_{ik}\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial x_k}\Bigr)dx
=-\sum_{k=1}^{l}\int_\Omega D_{ik}\Bigl(\frac{\partial\bigl(u_i(t,x)-u_i^{*}\bigr)}{\partial x_k}\Bigr)^{2}dx.
\tag{4.5}
\]
From (4.4), (4.5), assumption (H), and the Cauchy–Schwarz inequality we have
\[
\frac{d\bigl\|u_i(t,x)-u_i^{*}\bigr\|_2^{2}}{dt}
\le-2a_i\bigl\|u_i(t,x)-u_i^{*}\bigr\|_2^{2}
+2\sum_{j=1}^{m}\bigl|c_{ij}\bigr|F_j\bigl\|u_i(t,x)-u_i^{*}\bigr\|_2\bigl\|v_j\bigl(t-\tau_{ij}(t),x\bigr)-v_j^{*}\bigr\|_2.
\tag{4.6}
\]
Thus
\[
D^{+}\bigl\|u_i(t,x)-u_i^{*}\bigr\|_2
\le-a_i\bigl\|u_i(t,x)-u_i^{*}\bigr\|_2
+\sum_{j=1}^{m}\bigl|c_{ij}\bigr|F_j\bigl\|v_j\bigl(t-\tau_{ij}(t),x\bigr)-v_j^{*}\bigr\|_2
\tag{4.7}
\]
for $t>0$, $t\neq t_k$, $i=1,\dots,n$, $k\in\mathbb{Z}_+$. Multiplying both sides of (4.3) by $v_j(t,x)-v_j^{*}$, we can similarly get
\[
D^{+}\bigl\|v_j(t,x)-v_j^{*}\bigr\|_2
\le-b_j\bigl\|v_j(t,x)-v_j^{*}\bigr\|_2
+\sum_{i=1}^{n}\bigl|d_{ji}\bigr|G_i\bigl\|u_i\bigl(t-\sigma_{ji}(t),x\bigr)-u_i^{*}\bigr\|_2
\tag{4.8}
\]
for $t>0$, $t\neq t_k$, $j=1,\dots,m$, $k\in\mathbb{Z}_+$. It follows from (4.1) that
\[
\bigl\|u_i\bigl(t_k+0,x\bigr)-u_i^{*}\bigr\|_2=\bigl|1-\gamma_{ik}\bigr|\,\bigl\|u_i\bigl(t_k,x\bigr)-u_i^{*}\bigr\|_2,\quad i=1,\dots,n,\ k\in\mathbb{Z}_+,
\]
\[
\bigl\|v_j\bigl(t_k+0,x\bigr)-v_j^{*}\bigr\|_2=\bigl|1-\delta_{jk}\bigr|\,\bigl\|v_j\bigl(t_k,x\bigr)-v_j^{*}\bigr\|_2,\quad j=1,\dots,m,\ k\in\mathbb{Z}_+.
\tag{4.9}
\]
Let us consider the functions
\[
\rho_i(\theta)=\lambda_i\Bigl(\frac{\theta}{r}-a_i+\widehat{c}_i\Bigr)+\sum_{j=1}^{m}\lambda_{n+j}c^{*}_{ij}e^{\tau\theta},\quad i=1,2,\dots,n,
\qquad
\chi_j(\theta)=\lambda_{n+j}\Bigl(\frac{\theta}{r}-b_j+\widehat{d}_j\Bigr)+\sum_{i=1}^{n}\lambda_i d^{*}_{ji}e^{\sigma\theta},\quad j=1,2,\dots,m.
\tag{4.10}
\]
Since $W$ is an M-matrix, from Lemma 2.4 we know that there exists a vector $\gamma=(\lambda_1,\dots,\lambda_n,\lambda_{n+1},\dots,\lambda_{n+m})^T>0$ such that $W\gamma>0$, that is,
\[
\lambda_i\bigl(a_i-\widehat{c}_i\bigr)-\sum_{j=1}^{m}\lambda_{n+j}c^{*}_{ij}>0,\quad i=1,2,\dots,n,
\qquad
\lambda_{n+j}\bigl(b_j-\widehat{d}_j\bigr)-\sum_{i=1}^{n}\lambda_i d^{*}_{ji}>0,\quad j=1,2,\dots,m.
\tag{4.11}
\]
From (4.11) and (4.10) we know that $\rho_i(0)<0$, $\chi_j(0)<0$, and that $\rho_i(\theta)$ and $\chi_j(\theta)$ are continuous for $\theta\in[0,+\infty)$. Moreover, $\rho_i(\theta),\chi_j(\theta)\to+\infty$ as $\theta\to+\infty$. Since $d\rho_i(\theta)/d\theta>0$ and $d\chi_j(\theta)/d\theta>0$, the functions $\rho_i(\theta)$ and $\chi_j(\theta)$ are strictly monotone increasing on $[0,+\infty)$. Thus, there exist constants $z_i^{*},\widetilde{z}_j^{*}\in(0,+\infty)$ such that
\[
\rho_i\bigl(z_i^{*}\bigr)=\lambda_i\Bigl(\frac{z_i^{*}}{r}-a_i+\widehat{c}_i\Bigr)+\sum_{j=1}^{m}\lambda_{n+j}c^{*}_{ij}e^{z_i^{*}\tau}=0,\quad i=1,2,\dots,n,
\qquad
\chi_j\bigl(\widetilde{z}_j^{*}\bigr)=\lambda_{n+j}\Bigl(\frac{\widetilde{z}_j^{*}}{r}-b_j+\widehat{d}_j\Bigr)+\sum_{i=1}^{n}\lambda_i d^{*}_{ji}e^{\widetilde{z}_j^{*}\sigma}=0,\quad j=1,2,\dots,m.
\tag{4.12}
\]
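As a numerical illustration of (4.10)–(4.12) (our own sketch, with made-up data matching the earlier two-neuron example, not the paper's examples), the roots $z_i^{*}$ and $\widetilde{z}_j^{*}$ can be found by bisection, since each function is negative at zero by (4.11) and strictly increasing; a positive vector $\lambda$ with $W\lambda>0$ can be obtained, for instance, from the `m_matrix_certificate` sketch given after Lemma 2.7.

```python
import numpy as np

def smallest_rate_root(lam, a, b, c_hat, d_hat, C_star, D_star, tau, sigma, r=2.0):
    """Solve rho_i(z) = 0 and chi_j(z) = 0 from (4.10)-(4.12) by bisection
    and return the smallest root. Each function is < 0 at z = 0 (by (4.11))
    and strictly increasing, so its positive root is unique."""
    lam = np.asarray(lam, dtype=float)
    n, m = len(a), len(b)
    lam_u, lam_v = lam[:n], lam[n:]

    def root(fun):
        lo, hi = 0.0, 1.0
        while fun(hi) < 0.0:          # bracket the root
            hi *= 2.0
        for _ in range(100):          # bisection
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if fun(mid) < 0.0 else (lo, mid)
        return 0.5 * (lo + hi)

    roots = []
    for i in range(n):
        s = float(C_star[i, :] @ lam_v)
        roots.append(root(lambda z, i=i, s=s:
                          lam_u[i] * (z / r - a[i] + c_hat[i]) + s * np.exp(tau * z)))
    for j in range(m):
        s = float(D_star[j, :] @ lam_u)
        roots.append(root(lambda z, j=j, s=s:
                          lam_v[j] * (z / r - b[j] + d_hat[j]) + s * np.exp(sigma * z)))
    return min(roots)

if __name__ == "__main__":
    # two-neuron illustration: a_i = b_j = 3, |c_ij| F_j = |d_ji| G_i = 0.5,
    # r = 2 and all exponents 1, so c_hat_i = d_hat_j = 0.5 and c*_ij = d*_ji = 0.25
    a = b = np.array([3.0, 3.0])
    c_hat = d_hat = np.array([0.5, 0.5])
    C_star = D_star = 0.25 * np.ones((2, 2))
    lam = np.ones(4)                  # W lam > 0 holds for this W (Lemma 2.4(i))
    print(smallest_rate_root(lam, a, b, c_hat, d_hat, C_star, D_star, tau=1.0, sigma=1.0))
```

These roots feed the exponential decay estimate developed in the remainder of the proof, which is not included in this preview.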
the impulsive BAM neural networks with distributed delays, several sufficient criteria checking the globally exponential stability were obtained by constructing a suitable Lyapunov functional It should be noted that our methods, which do not make use of Lyapunov functional, are simple and valid for the Q Song and J Cao 15 stability analysis of impulsive BAM neural networks with constant delays, time-varying. .. associative memory neural networks involving transmission delays and dead zones,” Neural Networks, vol 12, no 3, pp 455–465, 1999 [7] S Mohamad, “Global exponential stability in continuous-time and discrete-time delayed bidirectional neural networks, ” Physica D, vol 159, no 3-4, pp 233–251, 2001 [8] J Cao and L Wang, Exponential stability and periodic oscillatory solution in BAM networks with delays, ” IEEE... delays, ” IEEE Transactions on Neural Networks, vol 13, no 2, pp 457–463, 2002 [9] J Cao, J Liang, and J Lam, Exponential stability of high-order bidirectional associative memory neural networks with time delays, ” Physica D, vol 199, no 3-4, pp 425–436, 2004 [10] S Xu and J Lam, “A new approach to exponential stability analysis of neural networks with time-varying delays, ” Neural Networks, vol 19, no 1,... with distributed delays and reaction-diffusion terms,” Physics Letters A, vol 335, no 2-3, pp 213–225, 2005 [26] J Qiu, Exponential stability of impulsive neural networks with time-varying delays and reaction-diffusion terms,” Neurocomputing, vol 70, no 4–6, pp 1102–1108, 2007 [27] J Cao and J Wang, “Global exponential stability and periodicity of recurrent neural networks with time delays, ” IEEE Transactions... Dai, and Z Zhang, “Global existence of periodic solutions of BAM neural networks with variable coefficients,” Physics Letters A, vol 317, no 1-2, pp 97–106, 2003 [12] J H Park, “A novel criterion for global asymptotic stability of BAM neural networks with time delays, ” Chaos, Solitons and Fractals, vol 29, no 2, pp 446–453, 2006 [13] Z.-H Guan and G Chen, “On delayed impulsive Hopfield neural networks, ” Neural. .. Circuits and Systems I, vol 52, no 5, pp 920–931, 2005 [28] Q Zhang, X Wei, and J Xu, “New stability conditions for neural networks with constant and variable delays, ” Chaos, Solitons and Fractals, vol 26, no 5, pp 1391–1398, 2005 [29] Q Song and J Cao, Stability analysis of Cohen-Grossberg neural network with both timevarying and continuously distributed delays, ” Journal of Computational and Applied... Covacheva, and E Al-Zahrani, “Continuous-time additive ¸ Hopfield-type neural networks with impulses,” Journal of Mathematical Analysis and Applications, vol 290, no 2, pp 436–451, 2004 [17] D Xu and Z Yang, Impulsive delay differential inequality and stability of neural networks, ” Journal of Mathematical Analysis and Applications, vol 305, no 1, pp 107–120, 2005 [18] Y Li, “Global exponential stability of BAM . Article ID 78160, 18 pages doi:10.1155/2007/78160 Research Article Exponential Stability for Impulsive BAM Neural Networks with Time-Varying Delays and Reaction-Diffusion Terms Qiankun Song and. existence, uniqueness, and global exponential stability of equilib- rium point for impulsive BAM neural networks with time-varying delays and reaction- diffusion terms, without assuming the boundedness,. exponential stability of BAM neural networks with delays and impulses,” Chaos, Solitons and Fractals, vol. 24, no. 1, pp. 279–285, 2005. [19] Y. Zhang and J. Sun, Stability of impulsive neural networks
