Applied Structural and Mechanical Vibrations 2009 Part 6 docx

Now, in order to determine completely the first-order perturbation of the eigenvector, only the coefficient c_kk is left; imposing the normalization condition and retaining only the first power of γ fixes this coefficient, and by virtue of the expansion (6.79) we are finally led to eq (6.83). We can now write explicitly the result of the first-order perturbation calculation for the ith eigenvalue and the ith eigenvector as

λ_i ≈ λ_i^(0) + [p_i^(0)]^T (ΔK - λ_i^(0) ΔM) p_i^(0)    (6.84)

p_i ≈ p_i^(0) - (1/2) ( [p_i^(0)]^T ΔM p_i^(0) ) p_i^(0) + Σ_{k≠i} { [p_k^(0)]^T (ΔK - λ_i^(0) ΔM) p_i^(0) / (λ_i^(0) - λ_k^(0)) } p_k^(0)    (6.85)

From the expressions above, it may be noted that only the ith unperturbed parameters enter into the calculation of the perturbed eigenvalue, while the complete unperturbed eigensolution is required to obtain the perturbed eigenvector. Roughly speaking, we could say that the perturbation has the effect of 'mixing' the ith unperturbed eigenvector with all the other eigenvectors for which the term in brackets of eq (6.85) is different from zero. Furthermore, a quick glance at the same equation suggests that the eigenvectors whose eigenvalues lie closer to λ_i^(0) give a greater contribution because, for these vectors, the denominator λ_i^(0) - λ_k^(0) is smaller. It is evident that, with only minor modifications, the same results apply if we use the z eigenvectors; we only have to take into account the appropriate normalization in this case. Also, the perturbed p vectors are orthonormal with respect to the new mass matrix, not with respect to M_0.

Example 6.3. Let us go back to the system of Fig. 6.1, whose eigensolution was considered in Example 6.1, and make the following modifications: we increase the first mass by 0.25m and decrease the second mass by 0.1m. The total mass of the system then changes from 4.0m to 4.15m, an increase of 3.75% with respect to the original situation. Also, let us increase the stiffness of the first spring by 0.1k, so that the corresponding stiffness term changes from 5.0k to 5.1k, i.e. an increase of 2.0% with respect to the original situation. These modifications can be considered small, and we expect accurate results from our perturbative calculation.
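As a quick numerical sanity check of the first-order formulas, one can compare the perturbed eigenvalues of eq (6.84) with an exact recomputation. The matrices below are illustrative assumptions, not the data of the book's Example 6.1; the modifications are kept small so that the first-order result is accurate.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF system (illustrative values only; NOT the book's
# Example 6.1 matrices, which are not reproduced here).
M0 = np.diag([1.0, 3.0])
K0 = np.array([[5.0, -2.0],
               [-2.0, 3.0]])

# Small mass and stiffness modifications, in the spirit of Example 6.3
dM = np.diag([0.05, -0.02])
dK = np.array([[0.02, 0.0],
               [0.0, 0.0]])

# Unperturbed eigensolution; eigh returns mass-orthonormal eigenvectors
lam0, P = eigh(K0, M0)

# First-order perturbed eigenvalues, eq (6.84):
#   lam_i ~ lam0_i + p_i^T (dK - lam0_i * dM) p_i
lam1 = np.array([lam0[i] + P[:, i] @ (dK - lam0[i] * dM) @ P[:, i]
                 for i in range(len(lam0))])

# Exact eigenvalues of the modified system, for comparison
lam_exact = eigh(K0 + dK, M0 + dM, eigvals_only=True)
print(lam1, lam_exact)
```

For modifications of this size the first-order values agree with the exact ones to a fraction of a percent, mirroring the small errors quoted in Example 6.3.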
The perturbation terms follow immediately from these modifications.

Copyright © 2003 Taylor & Francis Group LLC

Recalling from Example 6.1 the unperturbed eigenvalues and eigenvectors, the first-order perturbative terms for the eigenvalues can be obtained from eq (6.84), and hence we arrive at eq (6.86). For the first eigenvector, the expansion coefficients are given by eq (6.85), from which eq (6.87a) follows; the same procedure for the second eigenvector leads to eq (6.87b).

Because of the simplicity of this example, these results can be compared with the exact calculation for the modified system, which can be performed with little effort. The exact eigenvalues correspond to a relative error of 0.07% on the first frequency and of 0.46% on the second; the exact eigenvectors must be compared, respectively, with eqs (6.87a) and (6.87b).

Some complications appear in the case of degenerate eigenvalues. We will not deal with this subject in detail, but a few qualitative considerations can be made. Suppose, for example, that two independent eigenvectors correspond to the same unperturbed eigenvalue (twofold degeneracy). In general, the perturbation will split this eigenvalue into two different values, say λ_i1 and λ_i2; as the perturbation tends to zero, the eigenvectors will tend to two unperturbed eigenvectors which are, in general, two linear combinations of the original degenerate pair. The additional problem is, as a matter of fact, the determination of these combinations: this particular pair, out of the infinite number of possible combinations, will depend on the perturbation itself.

For instance, let the ith eigenvalue be m-fold degenerate and choose a possible set of m mass-orthonormal eigenvectors (i.e. a basis of the subspace relative to the ith eigenvalue). We can then write the expansions (6.88) and (6.89), substitute them into the first-order problem and project the resulting equation successively on each of these basis eigenvectors (this is done by premultiplying by their transposes).
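The projection procedure just outlined can be sketched numerically for the simplest case of a twofold degeneracy. All matrices below are hypothetical illustrative values: two identical, uncoupled mass-spring units share one eigenvalue, and projecting a small coupling perturbation onto the degenerate subspace yields an m × m matrix whose eigenvalues are the first-order corrections.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical system with a twofold-degenerate eigenvalue: two identical,
# uncoupled unit masses on springs of stiffness 4 (illustrative values).
M0 = np.eye(2)
K0 = 4.0 * np.eye(2)              # lambda = 4.0, twice

lam0, P = eigh(K0, M0)            # P: one possible basis of the subspace

# A small stiffness perturbation that couples the two units
dK = np.array([[0.0, 0.02],
               [0.02, 0.0]])

# Project the perturbation onto the degenerate subspace; the eigenvalues of
# the resulting m x m matrix are the first-order corrections, and its
# eigenvectors give the 'correct' zeroth-order linear combinations.
W = P.T @ dK @ P
dlam, c0 = np.linalg.eigh(W)
lam_split = lam0[0] + dlam        # the perturbation removes the degeneracy

lam_exact = eigh(K0 + dK, M0, eigvals_only=True)
print(lam_split, lam_exact)
```

Here the split values 4 ± 0.02 coincide with the exact ones because the perturbation enters linearly in this particularly simple case.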
We obtain, after some manipulation, a system of m homogeneous equations, which admits nontrivial solutions if the determinant of the coefficients is equal to zero. This condition results in an algebraic equation of degree m, and its m solutions represent the first-order corrections to the degenerate eigenvalue. Substitution of each one of these values into the homogeneous system allows the calculation of the zeroth-order coefficients for the relevant eigenvector. We have thus obtained the desired m linear combinations of the unperturbed eigenvectors; once these are known, the remaining coefficients can be obtained by projecting the first-order equation on the other eigenvectors.

It is interesting to note that, in many cases, the effect of the perturbation is to completely or partially 'remove the degeneracy' by splitting the degenerate eigenvalue into a number of distinct frequencies that were indistinguishable in the original system. This circumstance can be useful in some practical applications, and it is worth pointing out that similar procedures apply, with only minor modifications, in the case of distinct but closely spaced eigenvalues.

The subject of sensitivity analysis is much broader than our discussion suggests; in general, we can say that some linear systems are extremely sensitive to small changes in the system matrices, and others are not. Sensitive systems are often said to be 'ill-conditioned', whereas insensitive systems are said to be 'well-conditioned'. We will see that the generalized eigenvalue problem of eq (6.24) (or (6.29), which is the same) can be transformed into a standard eigenvalue problem (eq (6.26a)), where A is an appropriate matrix whose form and entries depend on how the transformation is carried out. The key point is that the eigenvalues are continuous functions of the entries of A, so we have reason to believe that a small perturbation matrix will correspond to a small change of the eigenvalues.
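One classical way to make 'small change in A implies small change in the eigenvalues' quantitative, for a diagonalizable A, is the Bauer-Fike bound (discussed, for example, in Horn and Johnson [1]): every eigenvalue of A + E lies within κ(V)·||E||₂ of some eigenvalue of A, where V is the eigenvector matrix. The sketch below checks this on a deliberately non-normal matrix; the numbers are illustrative.

```python
import numpy as np

# A non-normal matrix whose eigenvalues are sensitive to perturbations
A = np.array([[2.0, 100.0],
              [0.0, 3.0]])
lam, V = np.linalg.eig(A)
kappa = np.linalg.cond(V)          # condition number of the eigenvector matrix

rng = np.random.default_rng(0)
E = 1e-6 * rng.standard_normal((2, 2))   # a 'small' perturbation
mu = np.linalg.eigvals(A + E)

# Bauer-Fike: each perturbed eigenvalue lies within kappa * ||E||_2 of
# some unperturbed eigenvalue (valid for diagonalizable A).
bound = kappa * np.linalg.norm(E, 2)
worst = max(min(abs(m - l) for l in lam) for m in mu)
print(worst, bound)
```

For a normal matrix κ(V) = 1 and the bound is tight; here κ(V) is large, which is exactly the ill-conditioning the text refers to, and the bound is correspondingly conservative.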
But one often needs precise bounds to know how small is 'small' in each case. We will not pursue this subject further here for two reasons: first, a detailed discussion is beyond the scope of this book and, second, it would lead us too far away from the main topic of this chapter. For the moment, it suffices to say that if A is diagonalizable (see Appendix A on matrix analysis), it is possible to define a 'condition number' that represents a quantitative measure of the ill-conditioning of the system and provides an upper bound on the perturbation of the eigenvalues due to a unit-norm change in the system matrix. Furthermore, it may be of interest to note that normal matrices are well-conditioned with respect to eigenvalue computations, that the condition number is generally conservative, and that a better bound can be obtained if both A and the perturbing matrix are Hermitian. (The interested reader is referred to Horn and Johnson [1] and Junkins and Kim [2].)

6.4.1 Light damping

The free vibration of a damped system is governed by eq (3.101), i.e.

M d²u/dt² + C du/dt + K u = 0    (6.90)

As in the undamped case, there are 2n independent solutions which can be superposed to meet 2n initial conditions. Assuming a trial solution of the form

u(t) = z e^{λt}    (6.91)

leads to

(λ² M + λ C + K) z = 0    (6.92)

which admits a nontrivial solution if the matrix in parentheses on the left-hand side is singular. Equation (6.92) represents what is commonly called a complex (or quadratic) eigenvalue problem, because the eigenvalue λ and the elements of the eigenvector z are, in general, complex numbers; if λ and z satisfy eq (6.92), then so also do λ* and z*, where the asterisk denotes complex conjugation. In general, the complex eigenvalue problem is much more difficult than its undamped counterpart, and much less attention has been given to efficient numerical procedures for its solution, but we will return to these aspects later.
For the moment, let us make the following assumptions: the solution of the undamped problem is known and the system is lightly damped. The damping term can then be considered a small perturbation of the original undamped system, and we are in a position to investigate its effect on the eigensolution of the conservative system. Let λ_j and p_j (j = 1, 2, …, n) be the eigenvalues and the mass-orthonormal eigenvectors of the conservative system (i.e. when C = 0 in eq (6.92)); under the assumption of light damping, we can write the perturbed quantities as in (6.93). Substituting these expressions in eq (6.92) and retaining only the first-order terms (note that terms such as (Δλ_j)² and CΔz are neglected because they are second order for light damping), after some manipulation we arrive at (6.94). Now, as we did in eq (6.79), we expand Δp on the basis of the unperturbed eigenvectors, i.e. eq (6.95). Substituting eq (6.95) in (6.94) and premultiplying the resulting expression by the transpose of an unperturbed eigenvector gives (6.96). For k = j we get the first-order perturbation of the jth eigenvalue,

Δλ_j = -(1/2) p_j^T C p_j    (6.97)

Note that a term M_jj appears in the denominator of the right-hand side of eq (6.97) if we do not use mass-orthonormal vectors in the calculation. From eq (6.97) two observations can be made:

• Each correction to the unperturbed eigenvalues takes the form of a negative real part (matrix C is generally positive definite), which transforms the solution into a damped oscillatory motion and accounts for the fact that the free vibration of real systems dies out with time, because there is always some loss of energy.

• The first-order correction involves only the diagonal terms of the matrix P^T C P which is, in general, nondiagonal unless some assumptions are made on the damping matrix (remember that both M and K become diagonal under the transformations P^T M P and P^T K P). Off-diagonal terms have only a second-order effect on the unperturbed eigenvalues.
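A numerical check of eq (6.97) is straightforward. The 2-DOF matrices below are illustrative assumptions, with a small non-proportional damping matrix; the 'exact' complex eigenvalues are obtained from the 2n × 2n first-order (state-space) form, which is one common way, not necessarily the book's, of linearizing the quadratic problem.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative lightly damped 2-DOF system (all values are assumptions)
M = np.diag([1.0, 3.0])
K = np.array([[5.0, -2.0],
              [-2.0, 3.0]])
C = np.array([[0.03, -0.01],
              [-0.01, 0.02]])      # small, non-proportional damping

lam0, P = eigh(K, M)               # undamped: lam0 = omega^2, P mass-orthonormal
omega = np.sqrt(lam0)

# Eq (6.97): first-order correction is the real part -p_j^T C p_j / 2
re_approx = np.array([-0.5 * P[:, j] @ C @ P[:, j] for j in range(2)])

# 'Exact' complex eigenvalues from the first-order state-space form
n = 2
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(A)
upper = lam[lam.imag > 0]
upper = upper[np.argsort(upper.imag)]   # order by frequency, matching lam0
print(upper.real, re_approx)
```

The exact real parts match -p_j^T C p_j / 2 to within second-order terms, and the imaginary parts remain essentially at the undamped frequencies, as eq (6.98b) below anticipates.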
When k ≠ j, eq (6.96) gives the expansion coefficients of eq (6.98a). Again, a term M_kk appears in the denominator on the right-hand side if the calculation is made with eigenvectors that are not mass-orthonormal; note also that a minus sign appears on the right-hand side if we start from the complex conjugate eigenvalue. The perturbed eigenvector is then given by eq (6.98b), showing that the perturbation splits the original real eigenvector into a pair of complex vectors having the same real part as the undamped mode (remember that, in vibration terminology, the term mode is analogous to eigenvector; more precisely, a mode is a particular pattern of motion which is mathematically represented by an eigenvector) but having small conjugate imaginary parts.

On physical grounds, unless damping has some desirable characteristics which will be considered in a later section, this occurrence translates into the fact that, in a particular mode, each coordinate has a relative amplitude and a relative phase with respect to any other coordinate. In other words, the free vibration of a generally damped system oscillating in a particular mode is no longer a synchronous motion of the whole system: the individual degrees of freedom no longer move in phase or antiphase, and they no longer reach their extremes of motion together. For obvious reasons, this pattern of motion is usually called a 'complex mode', as opposed to the 'real mode' of the undamped system, where each coordinate does have an amplitude but a phase angle which is either 0° or 180°, so that real numbers suffice for a complete description.

6.5 Structure and properties of matrices M, K and C: a few considerations

A fundamental part of the analysis of MDOF systems, and of any physical phenomenon in general, is the solution of the appropriate equations of motion.
However, as we stated in Chapter 1, the first step in any investigation is the formulation of the problem; this step involves the selection of a mathematical model which has to be both effective and reliable, meaning that we expect our model to reproduce the behaviour of the real physical system within an acceptable degree of accuracy and, possibly, at the least cost. We must always keep in mind that, once the mathematical model has been chosen, we solve that particular model, and the solution can never give more information than that implicitly contained in the model itself. These observations become more important when we consider that:

• Numerical procedures implemented on digital computers play a central role (think, for example, of the finite-element method) in the analysis of systems with more than three or four degrees of freedom.
• Matrix algebra is the 'natural language' of these procedures.
• The effectiveness and reliability of numerical techniques depend on the structure and properties of the input matrices.
• Continuous systems (i.e. systems with an infinite number of degrees of freedom) are very often modelled as MDOF systems.

As in the case of an SDOF system, the principal forces acting on an MDOF system are (1) the inertia forces, (2) the elastic forces, (3) the damping forces and (4) the externally applied forces. We will not consider, for the moment, the forces of type (4). Under the assumption of small-amplitude vibrations, we have seen in Chapter 3 that the matrices M, K and C are symmetrical. Symmetry is a desirable property and results in significant computational advantages. In essence, the symmetry of M and K depends on the form of the kinetic and potential energy functions, and the symmetry of C depends on the existence of the Rayleigh dissipation function. Unfortunately, for most systems the damping properties are very difficult, if not impossible, to define.
For this reason, the most common choices for the treatment of damping are to (1) neglect it altogether (often a better assumption than it sounds), (2) assume 'proportional damping' (Section 6.7) or (3) use available experimental information on the damping characteristics of a typical similar structure, or of the structure itself.

We know from Chapter 3 that both kinetic and potential energies can be written as quadratic forms, and we know from basic physics that they are essentially positive quantities. If none of the degrees of freedom has zero mass, the kinetic energy (eq (3.95)) is never zero unless the velocity vector is a zero vector, and hence M, besides being symmetrical, is also positive definite; if some of the degrees of freedom have zero mass, then M is a positive semidefinite matrix. Similar considerations apply to the stiffness matrix: unless the system is unrestrained and capable of rigid-body modes, K is a positive definite matrix. When this is not the case, i.e. when rigid-body modes are possible (Section 6.6), the stiffness matrix is positive semidefinite. It is worth pointing out that if a matrix A is symmetrical and positive definite, then A^(-1) always exists (i.e. A is nonsingular) and is itself a symmetrical positive definite matrix. The fact that either M, or K, or both, are nonsingular is useful when we want to transform the generalized eigenvalue problem (eq (6.29)) into a standard eigenvalue problem (eq (6.26a)), which is the form required by some numerical eigensolvers (Section 6.8).

6.5.1 Mass properties

The simplest procedure for defining the mass properties of a structure is to concentrate, or lump, its mass at the points where the displacements are defined. This is certainly not a problem for a simple system such as the one in Fig. 6.1, where the mass is, as a matter of fact, localized, but a certain degree of arbitrariness is inevitable for more complex systems.
In any case, whatever method we use to concentrate the masses of a given structure, if we choose the coordinates as the absolute displacements of the masses we obtain a diagonal mass matrix. In fact, the off-diagonal terms are zero because an acceleration at one point produces an inertia force at that point only; this is not strange if we consider that m_ij is the force that must be applied at point i to equilibrate the inertia forces produced by a unit acceleration at point j, so that m_ii = m_i and m_ij = 0 for i ≠ j. A diagonal matrix is certainly desirable for computational purposes, but a serious disadvantage of this approach is that the mass associated with rotational degrees of freedom is zero, because a point has no rotational inertia. This means that, when rotational degrees of freedom must be considered in a specific problem, the mass matrix is singular. In principle, the problem could be overcome by assigning some rotational inertia to the masses associated with rotational degrees of freedom (in which case the diagonal mass coefficient would be the rotational inertia of the mass), but this is easier said than done. The general conclusion is that the lumped mass matrix is a diagonal matrix with nonzero elements for each translational degree of freedom and zero diagonal elements for each rotational degree of freedom.

A different approach is based on the assumed-modes method, a far-reaching technique developed along the line of reasoning of Section 5.5 (see also Chapter 9). In that section, a distributed-parameter system was modelled as an SDOF system by an appropriate choice of a shape, or trial, function under the assumption that only one vibration pattern is developed during the motion. This basic idea can be improved by superposing n shape functions so that eq (6.99) holds, where the z_i(t) constitute a set of n generalized time-dependent coordinates.
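As an illustration of where an expansion like eq (6.99) leads, the mass coefficients of the resulting n-DOF model are integrals of the mass distribution weighted by products of trial functions; for a uniform axial bar they can be evaluated numerically. The uniform bar and the power-law trial functions below are assumptions chosen only for illustration, not the book's choice.

```python
import numpy as np

# Consistent mass coefficients m_ij = integral_0^L mu * phi_i(x) * phi_j(x) dx
# for a uniform axial bar, with hypothetical trial functions phi_i = (x/L)^i
# (chosen only because they vanish at the fixed end x = 0).
L_bar, mu, n = 1.0, 1.0, 2

def phi(i, x):
    return (x / L_bar) ** i

# Midpoint-rule quadrature (adequate for these smooth polynomial integrands)
N = 20000
xm = (np.arange(N) + 0.5) * (L_bar / N)
Mmat = np.array([[np.sum(mu * phi(i, xm) * phi(j, xm)) * (L_bar / N)
                  for j in range(1, n + 1)] for i in range(1, n + 1)])
print(Mmat)   # analytically mu*L/(i+j+1): [[1/3, 1/4], [1/4, 1/5]]
```

Note that the matrix is full rather than diagonal: unlike the lumped approach, the assumed-modes mass matrix couples the generalized coordinates, but it is still symmetrical.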
(Note that we considered the trial functions in eq (6.99) to depend on one spatial coordinate only, thus implying a one-dimensional problem (for example, an Euler-Bernoulli beam); this is only for our present convenience, and the extension to two or three spatial coordinates is straightforward.) In essence, eq (6.99) represents an n-DOF model of a continuous system, and since the kinetic energy of a continuous system is an integral expression depending on the partial derivative of the displacement with respect to time, we can substitute eq (6.99) into this expression to arrive at the familiar form of eq (6.100), where the coefficients m_ij now depend on the mass distribution of the system and on the trial functions. Consider, for example, the axial vibration of an elastic bar of length L and given mass per unit length; the kinetic energy is given by eq (6.101), and inserting eq (6.99) into eq (6.101) leads to eq (6.102).

[...] postmultiply by z_j. We get (6.157). Subtracting eq (6.156) from eq (6.157), provided the two eigenvalues differ, leads to the orthogonality condition (6.158). A second orthogonality condition can be obtained if we multiply eq (6.156) by one eigenvalue and eq (6.157) by the other and subtract one of the resulting equations from the other; we get (6.159). Equations (6.158) and (6.159) are not as simple as their real-mode counterparts but, as in that [...]

[...] definite). Equation (6.165), known as the Cholesky factorization, expresses an important theorem of matrix algebra. Substitution of eq (6.165) into eq (6.164) and successive premultiplication by L^T leads to an equation which, defining the new set of coordinates x = L^T z, turns into the standard form (6.166), where the system matrix is symmetrical (because K^(-1) is symmetrical). Solving the eigenproblem (6.166) leads to the [...]

[...] get, by virtue of (6.42b) and (6.44), eq (6.116). Premultiplication of the eigenvalue problem yields (6.117), where the result of eq (6.116) has been taken into account. The process can be repeated to give (6.118), which can be rewritten in the equivalent form (6.119) just by premultiplying the term in parentheses on the left-hand side of eq (6.118) by MM^(-1). The cases b = 0 and b = 1 in eq (6.119) correspond, [...]

[...] decomposition as (6.168), where R is an orthogonal matrix (RR^T = I) and D^2 is the diagonal matrix of the (positive, which is why we write D^2) eigenvalues of M. Substitution of eq (6.168) in the generalized eigenproblem leads to (6.169), where a new matrix has been defined. Now, premultiply eq (6.169) by N^(-1), insert the identity matrix between K and z on the left-hand side and define the vector [...]

[...] matrix A = M^(-1)K. Alternatively, we can premultiply the generalized problem by K^(-1) (provided that K is nonsingular) and arrive at (6.164), where the dynamic matrix is now defined differently. Numerical procedures known as iteration methods can be used both for (6.163) and for (6.164); however, the form (6.164) is preferred because it can be shown that the iteration converges to the largest value of γ, i.e. to the fundamental [...]

[...] in the form of eq (6.27c), can then be written as the eigenvector expansion (6.129), where the 2n constants are determined by the initial conditions. By using once again the orthogonality conditions, we arrive at the explicit expression (6.130a) or, equivalently, (6.130b). Equations (6.130a) and (6.130b) are the counterpart of eqs (6.51) and (6.53) when the system [...]

[...] energy, which reads (6.108) and is valid for a structure which is initially stress free and not subjected to temperature changes. Furthermore, Maxwell's reciprocity theorem holds, and it can be invoked to prove, in matrix form, eq (6.109), i.e. that the flexibility matrix and the stiffness matrix are symmetrical. The structure of eqs (6.106) and (6.107) suggests that the two [...]
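The Cholesky reduction quoted in the excerpts above (eqs (6.165) and (6.166)) can be sketched numerically. The 2-DOF matrices are illustrative assumptions, and the reduction below factorizes the mass matrix directly, a common variant equivalent to the K^(-1)-based route of the excerpt.

```python
import numpy as np

# Reduce K z = lambda M z to the standard symmetric problem A x = lambda x
# using the Cholesky factor of the positive definite mass matrix.
M = np.diag([1.0, 3.0])
K = np.array([[5.0, -2.0],
              [-2.0, 3.0]])

L = np.linalg.cholesky(M)            # M = L L^T (possible because M > 0)
Linv = np.linalg.inv(L)
A = Linv @ K @ Linv.T                # symmetric, so eigh applies

lam, X = np.linalg.eigh(A)           # standard symmetric eigenproblem
P = Linv.T @ X                       # back-transform: z = L^(-T) x

# P is mass-orthonormal and diagonalizes K:
print(P.T @ M @ P)                   # ~ identity
print(P.T @ K @ P)                   # ~ diag(lam)
```

Working with the symmetric standard form preserves the well-conditioned behaviour of symmetric eigensolvers, which is the practical motivation for this transformation.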
[...] terms of their real and imaginary parts. These latter relationships are useful in the numerical solution of the complex eigenproblem.

Example 6.6. The reader is invited to consider again Example 6.1 (Fig. 6.1); assume the numerical values m = 1 kg, k = 2 N/m and add two viscous dampers of constants c1 = 0.02 and c2 = 0.01 N s/m in parallel with the springs k1 and k2. The equations of motion (6.64) now also have a [...]

[...] two modes, say the kth and the mth, to get (6.145a), so that a and b can be obtained and substituted in eq (6.142) to determine the jth damping ratio when j is different from k and m. Equation (6.145a) in matrix form reads (6.145b). Note that Rayleigh damping results in an approximately constant damping ratio for the middle-frequency modes and an increasing damping ratio for the low- and high-frequency modes [...]

[...] eigenvalue as (6.160) (the formal analogy with the SDOF case is evident: Chapter 4); it is then not difficult to obtain the relationships (6.161), where the kth damping ratio ζ_k and the kth frequency ω_k can be determined from the kth eigenvector and the damping, mass and stiffness matrices.

6.8 Generalized and complex eigenvalue problems: reduction to standard form

[...] or, equivalently, (6.130b). Equations (6.130a) and (6.130b) are the counterpart of eqs (6.51) and (6.53) when the system admits m rigid-body modes; furthermore, by virtue of eq (6.126), it is not difficult [...]

[...] continuous system as a 2-DOF system and express the displacement by means of two shape functions; from eq (6.103) we get (6.105a) and hence the mass matrix (6.105b).

6.5.2 Elastic properties

The [...]
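The Rayleigh-damping relation quoted above (eq (6.145a)) prescribes the damping ratios at two modal frequencies and solves a 2 × 2 system for the coefficients a and b of C = aM + bK; each modal ratio then follows as ζ_j = a/(2ω_j) + bω_j/2. The target frequencies and ratios below are illustrative assumptions.

```python
import numpy as np

# For Rayleigh damping C = a*M + b*K, the modal damping ratios satisfy
#   zeta_j = a / (2*w_j) + b * w_j / 2,
# so prescribing the ratios at two frequencies fixes a and b (cf. eq (6.145a)).
wk, wm = 2 * np.pi * 1.0, 2 * np.pi * 10.0   # rad/s, illustrative choices
zk, zm = 0.02, 0.05                          # target damping ratios

A = 0.5 * np.array([[1.0 / wk, wk],
                    [1.0 / wm, wm]])
a, b = np.linalg.solve(A, np.array([zk, zm]))

def zeta(w):
    """Resulting damping ratio at any other modal frequency w (rad/s)."""
    return a / (2 * w) + b * w / 2

print(a, b, zeta(2 * np.pi * 5.0))
```

Evaluating zeta outside the anchor frequencies reproduces the behaviour noted in the excerpt: the mass term a/(2ω) raises the ratio for low-frequency modes and the stiffness term bω/2 raises it for high-frequency modes.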

Posted: 10/08/2014, 20:20