Frontiers in Robotics, Automation and Control, Part 13

Fault Detection with Bayesian Network

\frac{p \ln(c)}{1 - \frac{1}{c}} = CL  (30)

Or, equivalently:

1 - c + \frac{p\,c}{CL} \ln(c) = 0  (31)

Equation (31) admits two solutions: c = 1 (not acceptable) and a second solution (numerically computable) which depends on p and α. With the coefficient c correctly computed, we obtain the equivalence between the Bayesian network and the multivariate control charts. We note that, as univariate charts are simply a particular case of multivariate control charts, the proof given above also holds for univariate control charts. In order to demonstrate the proposed approach, we illustrate it on a simple system with two variables.

4.2 Detection with Bayesian network

We study a T² control chart and a MEWMA control chart (with λ = 0.1), both modeled by Bayesian networks. We choose a false alarm rate α = 1%. When the system is in control, it follows a multivariate Gaussian distribution with parameters μ and Σ such that:

\mu = (5 \;\; 10)  (32)

\Sigma = \begin{pmatrix} 1 & 1.2 \\ 1.2 & 2 \end{pmatrix}  (33)

In order to monitor this process, we apply the proposed method of detection with a Bayesian network. For the T² control chart, we obtain the Bayesian network of Figure 6, together with the conditional probability table of each node, where c is equal to 95.28 (the solution of equation (31) for α = 1% and p = 2).

Node C (class): P(C = IC) = 1 - α, P(C = OC) = α
Node X: X | C = IC ~ N(μ, Σ); X | C = OC ~ N(μ, c × Σ)
Fig. 6. Bayesian network equivalent to the T² control chart

In the same way, we can also monitor the process with a MEWMA control chart modeled by the Bayesian network of Figure 7, where c is equal to 90.29 (the solution of equation (31) for α = 1% and p = 2 in the MEWMA case).

Node C (class): P(C = IC) = 1 - α, P(C = OC) = α
Node Y: Y | C = IC ~ N(μ, (λ/(2-λ)) Σ); Y | C = OC ~ N(μ, c × (λ/(2-λ)) Σ)
Fig. 7. Bayesian network equivalent to the MEWMA control chart

We have simulated this system over 30 observations, with a fault introduced from observation 6 to observation 30. The fault is a mean step of magnitude 0.5 on the first variable. Figure 8 shows the decision taken at each instant for the T² chart (left graphs) and for the MEWMA chart (right graphs). The upper graphs show the statistical distance associated with the control chart (T² or T²_t), and the lower graphs give the a posteriori probability of being in control. The control limit is drawn on each graph, so the limit for the Bayesian network, fixed at 1 - α = 99%, can also be seen.

Fig. 8. Results of the T² and MEWMA charts, and their equivalence with the Bayesian networks

Figure 8 shows that, at each instant, the decision of each control chart and the decision of its Bayesian network model are identical. We have thus demonstrated that faults in multivariate processes can be detected with Bayesian networks, and that multivariate control charts can easily be modeled with them.
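As a concrete illustration, the sketch below (Python with NumPy/SciPy; it is our own illustration, not code from the chapter) solves equation (31) numerically for c and evaluates the posterior probability of the "in control" class for the network of Fig. 6, using the example values of p, α, μ and Σ given above. The lower end of the root bracket excludes the trivial root c = 1.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, multivariate_normal

p, alpha = 2, 0.01
CL = chi2.ppf(1 - alpha, df=p)            # T^2 control limit (about 9.21 for p = 2, alpha = 1%)

# Equation (31): 1 - c + (p*c/CL)*ln(c) = 0; c = 1 is the trivial root,
# so we bracket the second root on (1, +inf).
f = lambda c: 1.0 - c + (p * c / CL) * np.log(c)
c = brentq(f, 1.0 + 1e-6, 1e6)
print(f"CL = {CL:.2f}, c = {c:.2f}")      # roughly 95.3, consistent with the value quoted above

# Posterior probability of being in control for the Bayesian network of Fig. 6.
mu = np.array([5.0, 10.0])
Sigma = np.array([[1.0, 1.2], [1.2, 2.0]])

def p_in_control(x):
    lik_ic = multivariate_normal.pdf(x, mean=mu, cov=Sigma)      # X | C = IC ~ N(mu, Sigma)
    lik_oc = multivariate_normal.pdf(x, mean=mu, cov=c * Sigma)  # X | C = OC ~ N(mu, c * Sigma)
    return (1 - alpha) * lik_ic / ((1 - alpha) * lik_ic + alpha * lik_oc)
```

The network signals a fault whenever P(C = IC | x) falls below 1 - α, which reproduces the T² > CL decision of the control chart; for the MEWMA version, the same computation would use the MEWMA control limit and the covariance (λ/(2-λ))Σ.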
5. Conclusions and outlooks

In this chapter, we have shown that a Bayesian network can be an efficient way to diagnose a fault in multivariate processes. We selected two statistical fault detection techniques (the T² chart and the MEWMA chart) and demonstrated that these charts can be viewed as a discriminant analysis and so can be implemented in a simple Bayesian network. As the efficiency of Bayesian networks for the diagnosis of systems has already been demonstrated (Verron et al., 2006; Verron et al., 2007), the evident outlook of this work is the full study of the use of Bayesian networks to monitor and control a multivariate process (detection and diagnosis in the same network).

6. References

Bakshi, B. R. (1998). Multiscale PCA with application to multivariate statistical process monitoring. AIChE Journal, Vol. 44, No. 7, pp. 1596-1610.
Bishop, C. M. (1995). Neural Networks for Pattern Recognition, Oxford University Press.
Cover, T. & Hart, P. (1967). Nearest neighbor pattern classification. IEEE Transactions on Information Theory, Vol. 13, pp. 21-27.
Chiang, L. H.; Russell, E. L. & Braatz, R. D. (2001). Fault Detection and Diagnosis in Industrial Systems, Springer-Verlag, New York.
Chow, C. & Liu, C. (1968). Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, Vol. 14, pp. 462-467.
Domingos, P. & Pazzani, M. J. (1996). Beyond independence: Conditions for the optimality of the simple Bayesian classifier, In Proceedings of the Thirteenth International Conference on Machine Learning, pp. 105-112.
Duda, R. O.; Hart, P. E. & Stork, D. G. (2001). Pattern Classification, 2nd edition, Wiley.
Friedman, N.; Geiger, D. & Goldszmidt, M. (1997). Bayesian network classifiers. Machine Learning, Vol. 29, No. 2, pp. 131-163.
Geiger, D. & Heckerman, D. (1996). Knowledge representation and inference in similarity networks and Bayesian multinets. Artificial Intelligence, Vol. 82, pp. 45-74.
Hawkins, D. M. (1991). Multivariate quality control based on regression-adjusted variables. Technometrics, Vol. 33, pp. 61-75.
Hotelling, H. (1947). Multivariate quality control. In Techniques of Statistical Analysis, C. Eisenhart, M. W. Hastay & W. A. Wallis (Eds.), pp. 111-184, McGraw-Hill, New York.
Inza, I.; Larranaga, P.; Sierra, B.; Etxeberria, R.; Lozano, J. & Pena, J. (1999). Representing the behaviour of supervised classification learning algorithms by Bayesian networks. Pattern Recognition Letters, Vol. 20, pp. 1201-1209.
Jackson, E. J. (1985). Multivariate quality control. Communications in Statistics - Theory and Methods, Vol. 14, No. 2, pp. 657-688.
Jensen, F. V. (1996). An Introduction to Bayesian Networks, Taylor and Francis, London.
Kano, M.; Nagao, K.; Hasebe, S.; Hashimoto, I.; Ohno, H.; Strauss, R. & Bakshi, B. (2002). Comparison of multivariate statistical process monitoring methods with applications to the Eastman challenge problem. Computers and Chemical Engineering, Vol. 26, No. 2, pp. 161-174.
Kononenko, I. (1991). Semi-naive Bayesian classifier, In Proceedings of the Sixth European Working Session on Learning, pp. 206-219, Porto, Portugal, Springer-Verlag.
Kourti, T. & MacGregor, J. F. (1996). Multivariate SPC methods for process and product monitoring. Journal of Quality Technology, Vol. 28, No. 4, pp. 409-428.
Langley, P.; Iba, W. & Thompson, K. (1992). An analysis of Bayesian classifiers, In Proceedings of the Tenth National Conference on Artificial Intelligence, pp. 223-228, San Jose, CA: AAAI Press.
Lowry, C. A.; Woodall, W. H.; Champ, C. W. & Rigdon, S. E. (1992). A multivariate exponentially weighted moving average control chart. Technometrics, Vol. 34, No. 1, pp. 46-53.
MacGregor, J. & Kourti, T. (1995). Statistical process control of multivariate processes. Control Engineering Practice, Vol. 3, No. 3, pp. 403-414.
Montgomery, D. C. (1997). Introduction to Statistical Quality Control, Third Edition, John Wiley and Sons.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann Publishers.
Pignatiello, J. & Runger, G. (1990). Comparisons of multivariate CUSUM charts. Journal of Quality Technology, Vol. 22, No. 3, pp. 173-186.
Sahami, M. (1996). Learning limited dependence Bayesian classifiers, In Proceedings of the Second International Conference on Knowledge Discovery in Databases, pp. 335-338.
Shewhart, W. A. (1931). Economic Control of Quality of Manufactured Product, D. Van Nostrand Co., New York.
Tax, D. M. J. & Duin, R. P. W. (2001). Combining one-class classifiers. Lecture Notes in Computer Science, Vol. 2096, pp. 299-308.
Vapnik, V. N. (1995). The Nature of Statistical Learning Theory, Springer.
Verron, S.; Tiplica, T. & Kobi, A. (2006). A new procedure based on mutual information for fault diagnosis of industrial systems, In Workshop on Advanced Control and Diagnosis.
Verron, S.; Tiplica, T. & Kobi, A. (2007). Fault diagnosis of industrial systems with Bayesian networks and mutual information, In Proceedings of the Ninth International European Control Conference.

19
A Hierarchical Bayesian Hidden Markov Model for Multi-Dimensional Discrete Data

Shigeru Motoi¹, Yohei Nakada¹, Toshie Misu², Tomohiro Yazaki¹, Takashi Matsumoto¹ and Nobuyuki Yagi²
¹Faculty of Science and Engineering, Waseda University, Tokyo, Japan
²Science and Technical Research Laboratories, NHK (Japan Broadcasting Corporation), Tokyo, Japan

1. Introduction

1.1 Motivation

A fundamental problem encountered in many fields is to model the data o_t of a given discrete time-series data sequence y := (o_1, ..., o_T). This problem is found in diverse fields, such as control systems, robotics, event detection (Motoi et al., 2007), handwriting recognition (Yasuda et al., 2000; Funada et al., 2005), and protein structure prediction (Krogh et al., 2001; Tusnady & Simon, 1998; Kaburagi et al., 2007). The data o_t can often be a multi-dimensional variable exhibiting stochastic activity. A powerful tool for solving such problems is the multi-dimensional discrete Hidden Markov Model (HMM), and the effectiveness of this approach has been demonstrated in numerous studies (Motoi et al., 2007; Yasuda et al., 2000; Funada et al., 2005; Kaburagi et al., 2007). The hidden states of the HMM are treated as hidden factors for the emission of the observed data o_t. However, if redundant components having low dependencies on the hidden states are contained in the data o_t, these components often have a negative impact on HMM performance. Overcoming this problem requires a method of quantifying the redundancy (state independence) of these components and/or reducing their influence.

In this chapter, we describe an extension of the HMM for these kinds of data sequences within the framework of a hierarchical Bayesian scheme. In this extended model, we introduce commonality hyperparameters to describe the degree of commonality of the emission probabilities among different hidden states (that is, hidden factors of the data o_t). Additionally, there is a one-to-one relationship between each hyperparameter and a component of the data o_t. This allows us to identify low-dependency components and to minimize their negative impact. Like other Bayesian HMMs, the extended model requires complicated integrations in the learning and prediction processes, usually involving a posterior distribution.
Analytic solutions of these integrations are often intractable or non-trivial due to their inherent complexity. In this chapter, therefore, we also describe an implementation based on a Markov Chain Monte Carlo (MCMC) method (Scott, 2002).

1.2 Related work

In one detailed study, several feature selection methods were considered, such as discriminant feature analysis, principal component analysis, and the sequential search method (Nouza, 1996). In addition, that study also described a fast feature selection algorithm. Our approach described in this chapter may be regarded as a Bayesian feature selection scheme based on the dependencies on the hidden states.

There have been a number of studies examining Bayesian HMMs and their implementations, such as (Funada et al., 2005; Motoi et al., 2007; Huo et al., 1995; MacKay, 1997; Scott, 2002). Reference (Huo et al., 1995) describes a Maximum A Posteriori (MAP) estimation for Bayesian HMMs, and reference (MacKay, 1997) describes a Variational Bayesian method (so-called ensemble learning). In addition, references (Funada et al., 2005; Motoi et al., 2007; Scott, 2002) discuss Bayesian HMMs using MCMC. The model that we describe here is an extension of such Bayesian HMMs for discrete multi-dimensional data containing redundant components.

There is a well-known successful method to determine redundant components of multi-dimensional (input) data in the field of Bayesian Neural Networks (BNNs), called Automatic Relevance Determination (ARD) (MacKay, 1992; Neal, 1996; Qi et al., 2004; Tipping, 2000; Matsumoto et al., 2001; Nakada et al., 2005). ARD was first described in (MacKay, 1992); that method used a Laplace approximation. Reference (Neal, 1996) described another ARD using MCMC, and reference (Qi et al., 2004) discusses a variant based on Expectation Propagation. Several studies have also described extensions of the BNN using the ARD method, including, for example, the Relevance Vector Machine (Tipping, 2000) and BNNs for nonlinear time-series data (Matsumoto et al., 2001; Nakada et al., 2005). The structure of the extended HMM is completely different from that of such BNNs; nevertheless, the fundamental hierarchical Bayesian concepts show a number of underlying similarities.

2. Model specification

In this section, we describe the extended Bayesian HMM. The setting of the hyperparameters is the principal difference between our extended model and conventional Bayesian HMMs (see Sec. 2.5).

2.1 HMM Topology

The HMM structure depends on the particular topology employed and the number of states N. Topologies commonly employed include "ergodic" and "left-to-right". Here we describe only the ergodic topology, since we employed that topology in our experiments, described later.

2.2 Data and hidden variables

In the HMM framework, we must consider the time-series data sequence (observation data sequence) y := (o_1, ..., o_T) and the hidden variable sequence z := (q_1, ..., q_T). The terms o_t and q_t represent the time-series data and the hidden variable at time t, and T is the sequence length. The hidden variable q_t is a one-dimensional variable that takes finite values among the available N states (that is, q_t ∈ {1, ..., N}), whereas the data o_t is a multi-dimensional discrete variable defined by o_t := (o_{t,1}, ..., o_{t,D}). Here, D represents the dimension of the data o_t, o_{t,k} denotes the k-th component of o_t, and M_k is the number of symbols for o_{t,k} (in other words, o_{t,k} ∈ {1, ..., M_k}).
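For concreteness, the short sketch below (Python with NumPy; the sizes are arbitrary illustration values, not taken from the chapter) shows one way such a multi-dimensional discrete sequence and its hidden path can be represented:

```python
import numpy as np

rng = np.random.default_rng(0)

N, D, T = 5, 3, 100            # number of states, data dimension, sequence length
M = [2, 4, 10]                 # M_k: number of symbols of each component o_{t,k}

z = rng.integers(1, N + 1, size=T)                  # hidden path q_1, ..., q_T, each in {1, ..., N}
y = np.stack([rng.integers(1, M[k] + 1, size=T)     # component o_{t,k} in {1, ..., M_k}
              for k in range(D)], axis=1)           # observation sequence, shape (T, D)
```

Here the symbols are drawn uniformly at random purely to show the shapes; in the model described next they are emitted according to state-dependent probabilities.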
2.3 Observation model

Consider the complete parameter set θ of an HMM. The probability of the data y is

P(y | \theta) := \sum_{z} P(y | z, b)\, P(z | a, c), \qquad \theta := (a, b, c).  (1)

Here,

P(y | z, b) := \prod_{t=1}^{T} P(o_t | q_t, b),  (2)

P(z | a, c) := P(q_1 | c) \prod_{t=2}^{T} P(q_t | q_{t-1}, a).  (3)

The emission probability of the data o_t in (2) is

P(o_t | q_t, b) := \prod_{k=1}^{D} P(o_{t,k} | q_t, b_k),  (4)

where b := (b_1, ..., b_D). The probability P(o_{t,k} | q_t, b_k) in Eqn. (4) represents the emission probability of the k-th component o_{t,k}. It is defined as

P(o_{t,k} = j \,|\, q_t = i, b_k) := b_{k,i,j},  (5)

where b_k := (b_{k,1}, ..., b_{k,N}), b_{k,i} := (b_{k,i,1}, ..., b_{k,i,M_k}), \sum_{j=1}^{M_k} b_{k,i,j} = 1, and 0 ≤ b_{k,i,j} ≤ 1. The hidden variable transition probability and the initial hidden variable probability in Eqn. (3) are

P(q_t = j \,|\, q_{t-1} = i, a) := a_{i,j}, \quad t > 1,  (6)

P(q_1 = i \,|\, c) := c_i.  (7)

Here, a := (a_1, ..., a_N), a_i := (a_{i,1}, ..., a_{i,N}), \sum_{j=1}^{N} a_{i,j} = 1, 0 ≤ a_{i,j} ≤ 1, c := (c_1, ..., c_N), \sum_{i=1}^{N} c_i = 1, and 0 ≤ c_i ≤ 1.

2.4 Prior distribution for parameters

Within a Bayesian framework, both the observation model (the likelihood function) and the prior distribution of the parameter set are defined. For the sake of simplicity, many Bayesian HMMs assume parameter independence in the prior distribution. That is to say:

P(\theta | \phi) = P(a | \alpha)\, P(b | \beta)\, P(c | \gamma),  (8)

P(a | \alpha) := \prod_{i=1}^{N} P(a_i | \alpha_i),  (9)

P(b | \beta) = \prod_{k=1}^{D} \prod_{i=1}^{N} P(b_{k,i} | \beta_{k,i}),  (10)

where \phi := (\alpha, \beta, \gamma), \alpha := (\alpha_1, ..., \alpha_N), \beta := (\beta_1, ..., \beta_D), and \beta_k := (\beta_{k,1}, ..., \beta_{k,N}). The prior distributions of a_i, b_{k,i} and c in Eqns. (8)-(10) are also defined using the "natural conjugate" Dirichlet prior distribution:

P(a_i | \alpha_i) := \mathcal{D}(a_i; \alpha_i),  (11)

P(b_{k,i} | \beta_{k,i}) := \mathcal{D}(b_{k,i}; \beta_{k,i}),  (12)

P(c | \gamma) := \mathcal{D}(c; \gamma),  (13)

where \mathcal{D}(\cdot; \chi) is the Dirichlet distribution with parameter vector \chi, and \alpha_i := (\alpha_{i,1}, ..., \alpha_{i,N}) with \alpha_{i,j} > 0, \beta_{k,i} := (\beta_{k,i,1}, ..., \beta_{k,i,M_k}) with \beta_{k,i,j} > 0, and \gamma := (\gamma_1, ..., \gamma_N) with \gamma_i > 0.

2.5 Settings for hyperparameter set

As in a number of conventional Bayesian HMMs, for example (Funada et al., 2005; Huo et al., 1995), all components of the hyperparameter vectors are fixed at 1.0, except for β_{k,i}.¹ With our approach, on the other hand, we consider a reparameterization of the hyperparameter vectors {β_{k,i}}_{i=1}^{N}, and a prior distribution of the reparameterized hyperparameters, in order to identify components having low dependency on the states (redundant components).

¹ This basic setting of the Dirichlet prior distribution makes it equivalent to a non-informative uniform prior distribution.

A. Reparameterization of β_{k,i}

We define the hyperparameter vector β_{k,i} as:

\beta_{k,i} := \lambda_k \eta_k, \quad i = 1, ..., N,  (14)

where \lambda_k \in \mathbb{R}, \lambda_k > 0, \eta_k := (\eta_{k,1}, ..., \eta_{k,M_k}), 0 < \eta_{k,i} < 1, and \sum_{i=1}^{M_k} \eta_{k,i} = 1. Here, λ_k is the commonality hyperparameter describing the degree of commonality of the emission probabilities of {o_{t,k}}_{t=1}^{T}, that is P(o_{t,k} | q_t, b_k), among the different hidden states.² The hyperparameter η_k is a common shape hyperparameter that describes the average shape of the emission probabilities P(o_{t,k} | q_t, b_k) over the different hidden states.

² The diversity of P(o_{t,k} | q_t, b_k) among the states corresponds to that of {b_{k,i}}_{i=1}^{N}, because the emission probability of o_{t,k}, P(o_{t,k} | q_t, b_k), is defined using {b_{k,i}}_{i=1}^{N}, as shown in equation (5).

Here, we examine the effect of the commonality hyperparameter λ_k on the emission probability b_{k,i}. The shapes of the prior distribution (10) for various values of λ_k are shown in Figure 1. Fig. 1(c) shows a case where λ_k is large. Here, the parameter vectors {b_{k,i}}_{i=1}^{N} exhibit only small differences, i.e., b_{k,1} ≈ b_{k,2} ≈ ... ≈ b_{k,N} ≈ η_k, meaning that there is low dependency of {o_{t,k}}_{t=1}^{T} on the states. For smaller λ_k, on the other hand (Fig. 1(a) or (b)), the diversity of {b_{k,i}}_{i=1}^{N} among the states is greater; in other words, the dependency of {o_{t,k}}_{t=1}^{T} on the states is not low.

[Figure 1: three plots of the Dirichlet density D(b_{k,i}; λ_k η_k) over (b_{k,i,1}, b_{k,i,2}).]
(a) λ_k = 2. (b) λ_k = 6. (c) λ_k = 18.
Fig. 1. Dirichlet prior distribution for b_{k,i}, for various values of the commonality hyperparameter λ_k. The parameters {b_{k,i}}_{i=1}^{N} are 3D variables b_{k,i} = (b_{k,i,1}, b_{k,i,2}, b_{k,i,3}), and the common shape hyperparameter η_k is constant, η_k = (0.3, 0.3, 0.4). The component b_{k,i,3} is omitted because it can be determined from b_{k,i,3} = 1 - b_{k,i,1} - b_{k,i,2}. This figure clearly shows that, for larger λ_k, the parameters {b_{k,i}}_{i=1}^{N} concentrate more around the average η_k.
B. Prior distribution for λ_k and η_k

Here we describe the prior distributions of the hyperparameters λ_k and η_k used for learning these hyperparameters in the Bayesian learning method described later. The commonality hyperparameter λ_k has no well-known "natural conjugate" prior distribution. Therefore, the prior distribution for λ_k is defined using only the information that it lies in the range λ_k ∈ (0, ∞). Although there are a number of alternative prior distributions for a positive continuous variable (for example, the log-normal prior distribution), the prior distribution of λ_k is given by the following gamma prior distribution:

P(\lambda_k) := \mathcal{G}(\lambda_k; \kappa, \omega),  (15)

where \mathcal{G}(\cdot; \kappa, \omega) is the gamma distribution having shape parameter κ and scale parameter ω.³ These hyper-hyperparameters are set to κ = 1.0 and ω = 100 in the experiments described in Sec. 4, which allows λ_k to be widely distributed within its available range.

There is also no known "natural conjugate" prior distribution for η_k. However, there are a limited number of options for the prior distribution because of the constraints on η_k, namely \sum_{i=1}^{M_k} \eta_{k,i} = 1 and 0 < η_{k,i} < 1. Therefore, we use the Dirichlet distribution as the prior distribution for η_k:

P(\eta_k) := \mathcal{D}(\eta_k; \eta_0),  (16)

where η_0 denotes the hyper-hyperparameter vector. By considering a non-informative [...]

³ The gamma distribution is defined as \mathcal{G}(x; \kappa, \omega) := \frac{x^{\kappa-1} \exp(-x/\omega)}{\Gamma(\kappa)\, \omega^{\kappa}}, where Γ(·) is the gamma function.
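The excerpt breaks off in the middle of the sentence describing the choice of η_0. Purely as an illustration, the sketch below (Python with NumPy; the non-informative setting η_0 = (1, ..., 1) is our assumption and is not stated in the visible text) draws one sample from the hierarchical prior of Eqns. (14)-(16) for a single component k:

```python
import numpy as np

rng = np.random.default_rng(1)

N, M_k = 5, 3                    # number of states, number of symbols of component k
kappa, omega = 1.0, 100.0        # hyper-hyperparameters of Eqn. (15), as given in the text
eta_0 = np.ones(M_k)             # assumed non-informative hyper-hyperparameter vector for Eqn. (16)

lam_k = rng.gamma(shape=kappa, scale=omega)     # commonality hyperparameter, Eqn. (15)
eta_k = rng.dirichlet(eta_0)                    # common shape hyperparameter, Eqn. (16)
b_k = rng.dirichlet(lam_k * eta_k, size=N)      # emission vectors b_{k,i}, Eqns. (12) and (14)
```

Repeating this draw for k = 1, ..., D gives a complete prior sample of the emission parameters; the Bayesian learning procedure referred to above updates λ_k, η_k and b_{k,i} jointly from the training data.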
[...] sequences, and l is the index of the sequence. The goal of Bayesian learning is, given the training dataset Y and the above model, to evaluate the (joint) posterior distribution for θ and φ:

P(\theta, \phi \mid Y) = \sum_{Z} P(\theta, \phi, Z \mid Y),  (17)

where

P(\theta, \phi, Z \mid Y) = \frac{P(Y \mid Z, \theta)\, P(Z \mid \theta)\, P(\theta \mid \phi)\, P(\phi)}{\sum_{Z} \int\!\int P(Y \mid Z, \theta)\, P(Z \mid \theta)\, P(\theta \mid \phi)\, P(\phi)\, d\theta\, d\phi},  (18)

and Z is the [...]

[...] The column and row numbers of the matrix representing the parameter a* are the next and current values of the hidden variable. The column number in the matrix representing the parameter π* is equivalent to the index of the initial hidden variable. We explain the emission probabilities of the target HMM in detail in the following.

B. State-dependent and state-independent components

In this experiment, [...]

[...] 40 associated components using standard information-based criteria showing the degree of the association with each event. [...] of the predicted results with our extended model and the conventional one in (Motoi et al., 2007) are shown in Fig. 7 (plots of the predictive probability and the predictive event indicator). Actual events are indicated in gray. These results show that [...]

[...]
Scott, S. L. (2002). Bayesian methods for hidden Markov models: recursive computing in the 21st century. J. Am. Stat. Assoc., Vol. 97, No. 457, pp. 337-351, Mar. 2002.
Tipping, M. E. (2000). The relevance vector machine. Adv. Neural Inf. Process. Syst., Vol. 12, pp. 652-658, Jun. 2000.
Tusnady, G. & Simon, I. (1998). Principles governing amino acid composition of integral membrane proteins: application to topology prediction [...]

[...] limitations; the limitation is determined by the attack angle, the radius of the sprockets, and the length of the crawler. In order to improve its mobility, it is required to adjust the attack angle against the obstacles, enlarge the radius of its sprockets, and lengthen its crawler tracks. And the mobility in areas like stairs is inferior to that of legs (S. Hirose, 2000).
Therefore, it is necessary to consider not only the moment in Phase 2 but both Phase 1 and Phase 2. The maximum climbable step height is determined by the changes of posture. That is to say, the problem of deriving the maximum climbable step height is the optimization problem of each joint motion. If each joint cannot realize a suitable motion for the environment, [...]

[...] step height by integrating ODE and GA. The proposed calculation system is shown in Fig. 3.

[Fig. 3 diagram: the GA (evaluation, crossover, mutation, evolution) passes the parameters α, β, T to the ODE simulation; θ_n(t): joint angles, h: step height.]
Fig. 3. Proposed simulation system

The GA gives joint angles and a step height, and the ODE calculates the dynamics. After that, the ODE determines whether the robot could climb or not, and returns the [...]
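The excerpt gives only the outline of this GA/ODE loop, so the following is a rough, purely illustrative sketch (Python with NumPy). The trajectory parameterization by (α, β, T), the parameter bounds, the GA settings, and the surrogate fitness function are all assumptions made here for illustration; in the chapter the evaluation is performed by the ODE physics simulation, not by a formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_climbable_height(params):
    # Toy stand-in for the ODE simulation: score a parameter set (alpha, beta, T) by the
    # largest step height it can climb. A real implementation would run the physics
    # simulation of the crawler robot and return the measured height.
    alpha, beta, T = params
    return float(np.clip(0.3 * np.sin(alpha) + 0.2 * np.cos(beta) - 0.05 * abs(T - 2.0), 0.0, None))

def evolve(pop_size=30, generations=100, mut_sigma=0.1):
    low = np.array([0.0, 0.0, 0.5])
    high = np.array([np.pi, np.pi, 4.0])
    pop = rng.uniform(low, high, size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([max_climbable_height(p) for p in pop])       # evaluation
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]              # selection of the best half
        pairs = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, 3)) < 0.5                           # uniform crossover
        pop = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
        pop = np.clip(pop + rng.normal(0.0, mut_sigma, pop.shape), low, high)   # mutation
    fitness = np.array([max_climbable_height(p) for p in pop])
    best = int(np.argmax(fitness))
    return pop[best], float(fitness[best])

best_params, best_height = evolve()
print(best_params, best_height)
```

The structure mirrors Fig. 3: each individual encodes the motion parameters, the simulation scores it by the step height it can climb, and evaluation, selection, crossover and mutation produce the next generation.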
