On the First Eigenvalue of Bipartite Graphs

Amitava Bhattacharya
School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005, INDIA
amitava@math.tifr.res.in

Shmuel Friedland*
Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Chicago, Illinois 60607-7045, USA
friedlan@uic.edu

Uri N. Peled
Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Chicago, Illinois 60607-7045, USA
uripeled@uic.edu

* Visiting Professor, Fall 2007 - Winter 2008, Berlin Mathematical School, Berlin, Germany

Submitted: Sep 19, 2008; Accepted: Nov 18, 2008; Published: Nov 30, 2008
Mathematics Subject Classification: 05C07, 05C35, 05C50, 15A18

Abstract

In this paper we study the maximum value of the largest eigenvalue for simple bipartite graphs, where the number of edges is given and the number of vertices on each side of the bipartition is given. We state a conjectured solution, which is an analog of the Brualdi-Hoffman conjecture for general graphs, and prove the conjecture in some special cases.

Key words. Bipartite graph, maximal eigenvalue, Brualdi-Hoffman conjecture, degree sequences, chain graphs.

1 Introduction

The purpose of this paper is to study the maximum value of the maximum eigenvalue of certain classes of bipartite graphs. These problems are analogous to the problems considered in the literature for general graphs and 0-1 matrices [1, 2, 3, 5, 8]. We describe briefly the main problems and results obtained in this paper.

We consider only finite simple undirected bipartite graphs $G$. Let $G = (V \cup W, E)$, where $V = \{v_1, \ldots, v_m\}$ and $W = \{w_1, \ldots, w_n\}$ are the two sets of vertices of $G$. We view the undirected edges $E$ of $G$ as a subset of $V \times W$. Denote by $\deg v_i$, $\deg w_j$ the degrees of the vertices $v_i, w_j$ respectively. Let $D(G) = \{d_1(G) \ge d_2(G) \ge \cdots \ge d_m(G)\}$ be the rearranged set of the degrees $\deg v_1, \ldots, \deg v_m$. Note that $e(G) = \sum_{i=1}^m \deg v_i$ is the number of edges in $G$. Denote by $\lambda_{\max}(G)$ the maximal eigenvalue of $G$. Denote by $G_{ni}$ the induced subgraph of $G$ consisting of the nonisolated vertices of $G$. Note that $e(G) = e(G_{ni})$ and $\lambda_{\max}(G) = \lambda_{\max}(G_{ni})$. It is straightforward to show, see Proposition 2.1, that

$\lambda_{\max}(G) \le \sqrt{e(G)}$.    (1.1)

Furthermore, equality holds if and only if $G_{ni}$ is a complete bipartite graph. In what follows we assume that $G = G_{ni}$, unless stated otherwise. The majority of this paper is devoted to refinements of (1.1) for noncomplete bipartite graphs.

We now state the basic problem that this paper deals with. Denote by $K_{p,q} = (V \cup W, E)$ the complete bipartite graph where $\#V = p$, $\#W = q$, $E = V \times W$. We assume here the normalization $1 \le p \le q$. Let $e$ be a positive integer satisfying $e \le pq$. Denote by $\mathcal{K}(p, q, e)$ the family of subgraphs of $K_{p,q}$ with $e$ edges, with no isolated vertices, and which are not complete bipartite graphs.

Problem 1.1. Let $2 \le p \le q$ and $1 < e < pq$ be integers. Characterize the graphs which solve the maximal problem

$\max_{G \in \mathcal{K}(p,q,e)} \lambda_{\max}(G)$.    (1.2)

We conjecture below an analog of the Brualdi-Hoffman conjecture for nonbipartite graphs [1], which was proved by Rowlinson [5]. See [3, 8] for proofs of partial cases of this conjecture.
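As an illustrative aside, the objects above are easy to experiment with numerically. The following sketch (assuming Python with NumPy; the names `lambda_max` and `members_of_K` are ours, not the paper's) computes $\lambda_{\max}$ from the representation matrix introduced in §2, brute-forces the maximum (1.2) over $\mathcal{K}(p,q,e)$ for a tiny instance, and compares it with the bound (1.1).

```python
import itertools
import numpy as np

def lambda_max(A):
    """lambda_max of the bipartite graph with representation matrix A,
    i.e. the largest singular value sigma_1(A)."""
    return np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)[0]

def members_of_K(p, q, e):
    """Representation matrices of the graphs in K(p, q, e): 0-1 p x q matrices
    with e ones, no zero row or column, and not complete bipartite."""
    cells = list(itertools.product(range(p), range(q)))
    for ones in itertools.combinations(cells, e):
        A = np.zeros((p, q), dtype=int)
        for i, j in ones:
            A[i, j] = 1
        if A.sum(axis=1).min() == 0 or A.sum(axis=0).min() == 0:
            continue                          # an isolated vertex
        if np.linalg.matrix_rank(A) == 1:
            continue                          # complete bipartite (rank-one 0-1 matrix)
        yield A

p, q, e = 3, 3, 7                             # a tiny instance of Problem 1.1
best = max(members_of_K(p, q, e), key=lambda_max)
print("max of (1.2) over K(3,3,7):", lambda_max(best))
print("bound sqrt(e) from (1.1):  ", np.sqrt(e))
print(best)                                   # an extremal representation matrix
```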
Conjecture 1.2. Under the assumptions of Problem 1.1, an extremal graph that solves the maximal problem (1.2) is obtained from a complete bipartite graph by adding one vertex and a corresponding number of edges.

Our first result toward the solution of Problem 1.1 is of interest by itself. Let $D = \{d_1 \ge d_2 \ge \cdots \ge d_m\}$ be a set of positive integers, and let $\mathcal{B}_D$ be the class of bipartite graphs $G$ with no isolated vertices such that $D(G) = D$. We show that $\max_{G \in \mathcal{B}_D} \lambda_{\max}(G)$ is achieved for a unique graph, up to isomorphism, which is the chain graph [9], or the difference graph [7], corresponding to $D$. (See §2.) It follows that an extremal graph solving Problem 1.1 is a chain graph.

Our main result, Theorem 8.1, shows that Conjecture 1.2 holds in the following cases. Fix $r \ge 2$ and assume that $e \equiv r-1 \pmod{r}$. Assume that $l = \lfloor e/r \rfloor \ge r$. Let $p \in [r, l+1]$ and $q \in [l+1,\, l+1+\frac{l}{r-1}]$. So $K_{p,q}$ has more than $e$ edges. Then the maximum (1.2) is achieved if and only if $G$ is isomorphic to the following chain graph $G_{r,l+1}$. The graph $G_{r,l+1}$ is obtained from $K_{r-1,l+1} = (V \cup W, E)$ by adding an additional vertex $v_r$ to the set $V$ and connecting $v_r$ to the vertices $w_1, \ldots, w_l$ in $W$.

We now list briefly the contents of the paper. §2 is a preliminary section in which we recall some known results on bipartite graphs and related results on nonnegative matrices. In §3 we show that the maximum eigenvalue of a bipartite graph increases if we replace it by the corresponding chain graph. §4 gives upper estimates on the maximum eigenvalue of chain graphs. In §5 we discuss a minimal problem related to the sharp estimate of chain graphs with two different degrees. §6 discusses a special case of the above minimal problem over the integers. In §7 we introduce C-matrices, which can be viewed as continuous analogs of the square of the adjacency matrix of chain graphs. In §8 we prove Theorem 8.1.

2 Preliminaries

Figure 1: The chain graph $G_D$ for $D = \{5, 2, 2, 1\}$, with vertex sets $\{v_1, v_2, v_3, v_4\}$ and $\{w_1, \ldots, w_5\}$.

We now set up some notation and review basic results. Denote by $\mathbb{R}^{m \times n}$ the set of $m \times n$ matrices with real entries. We view $A \in \mathbb{R}^{m \times n}$ as $A = (A_{i,j})_{i,j=1}^{m,n}$. Let $G = (V \cup W, E)$ be a bipartite graph with $V = \{v_1, \ldots, v_m\}$, $W = \{w_1, \ldots, w_n\}$, possibly with isolated vertices. We arrange the vertices $V \cup W$ in the order $v_1, \ldots, v_m, w_1, \ldots, w_n$. Then the adjacency matrix $B$ of $G$ is of the form

$B = \begin{pmatrix} 0 & A \\ A^\top & 0 \end{pmatrix}$,    (2.1)

where $A$ is an $m \times n$ matrix of 0's and 1's. We call $A$ the representation matrix of the bipartite graph $G$. Note that the $i$-th row sum of $A$ is $\deg v_i$ and the $j$-th column sum of $A$ is $\deg w_j$. The graph $G$ can be specified by specifying the matrix $A$. Then $G$ does not have isolated vertices if and only if $A$ does not have zero rows and columns.

Given $D = \{d_1 \ge d_2 \ge \cdots \ge d_m\}$, a set of positive integers, we construct from $D$ the following graph $G_D \in \mathcal{B}_D$, well known as a chain graph [9] or a difference graph [7]. The vertices of $G_D$ are partitioned into $\{v_1, \ldots, v_m\}$ and $\{w_1, \ldots, w_n\}$, $n = d_1$, and the neighbors of $v_i$ are $w_1, w_2, \ldots, w_{d_i}$. This is illustrated in Figure 1.
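A chain graph is simply a left-justified 0-1 matrix (a Ferrers diagram), so both $G_D$ and the conjectured extremal graph $G_{r,l+1}$ are easy to build. A minimal sketch, again assuming NumPy; `chain_graph_matrix` and `G_rl` are our names:

```python
import numpy as np

def lambda_max(A):
    return np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)[0]

def chain_graph_matrix(D):
    """Representation matrix of the chain graph G_D: row i carries D[i] ones,
    left-justified, and the number of columns is n = d_1."""
    D = sorted(D, reverse=True)               # d_1 >= d_2 >= ... >= d_m
    A = np.zeros((len(D), D[0]), dtype=int)
    for i, d in enumerate(D):
        A[i, :d] = 1
    return A

print(chain_graph_matrix([5, 2, 2, 1]))       # the graph of Figure 1

def G_rl(r, l):
    """G_{r,l+1}: K_{r-1,l+1} plus one extra vertex v_r joined to w_1, ..., w_l;
    its V-side degree sequence is (l+1, ..., l+1, l), so it is itself a chain graph."""
    return chain_graph_matrix([l + 1] * (r - 1) + [l])

r, l = 3, 4
A = G_rl(r, l)                                # e = r*l + r - 1 = 14, e % r == r - 1, l = floor(e/r)
print("e =", A.sum(), " lambda_max =", lambda_max(A))
```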
We now recall the well-known spectral properties of the symmetric matrix $B \in \mathbb{R}^{(m+n) \times (m+n)}$ of the form (2.1), where $A \in \mathbb{R}^{m \times n}_+$, i.e. $A$ is an $m \times n$ matrix with nonnegative entries. The spectrum of $B$ is real (by the symmetry of $B$) and symmetric around the origin (because if $(x, y)$ is an eigenvector for $\lambda$, then $(x, -y)$ is an eigenvector for $-\lambda$). Every real matrix possesses a singular value decomposition (SVD). Specifically, if $A$ is $m \times n$ of rank $r$, then there exist positive numbers $\sigma_i = \sigma_i(A)$, $i = 1, \ldots, r$ (the singular values of $A$) and orthogonal matrices $U, V$ of orders $m, n$ such that $A = U \Sigma V^\top$, where $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r, 0, \ldots)$ is an $m \times n$ matrix having the $\sigma_i$ along the main diagonal and otherwise zeros. It is possible, and usually done, to have the $\sigma_i$ in non-increasing order. For symmetric matrices the singular values are the absolute values of the eigenvalues. The matrix $B$ from (2.1) satisfies

$B^2 = \begin{pmatrix} AA^\top & 0 \\ 0 & A^\top A \end{pmatrix}$,

and so the eigenvalues of $B^2$ are those of $AA^\top$ together with those of $A^\top A$. Using the SVD of $A$ we see that $AA^\top$ has the $m$ eigenvalues $\sigma_1^2, \ldots, \sigma_r^2, 0, 0, \ldots$ and $A^\top A$ has the $n$ eigenvalues $\sigma_1^2, \ldots, \sigma_r^2, 0, 0, \ldots$. The eigenvalues of $B$ are therefore square roots of these numbers, and by the symmetry of the spectrum of $B$, the eigenvalues of $B$ are the $m+n$ numbers $\sigma_1, \ldots, \sigma_r, 0, \ldots, 0, -\sigma_r, \ldots, -\sigma_1$. In particular, the largest eigenvalue of $B$ is $\sigma_1(A)$. We denote this eigenvalue by $\lambda_{\max}(B) = \sigma_1(A)$. If $B$ is the adjacency matrix of $G$ then $\lambda_{\max}(G) = \lambda_{\max}(B) = \sigma_1(A)$.

For $x = (x_1, \ldots, x_n)^\top \in \mathbb{R}^n$ we denote by $\|x\| = \sqrt{\sum_{j=1}^n x_j^2}$ the Euclidean norm of $x$. For $A \in \mathbb{R}^{m \times n}$ the operator norm of $A$ is given by $\sigma_1(A) = \sqrt{\lambda_{\max}(AA^\top)} = \sqrt{\lambda_{\max}(A^\top A)}$. We can find $\sigma_1(A)$ by the following maximum principle:

$\sigma_1(A) = \max_{x \in \mathbb{R}^m, \|x\|=1,\; y \in \mathbb{R}^n, \|y\|=1} x^\top A y = \max_{y \in \mathbb{R}^n, \|y\|=1} \|Ay\|$.    (2.2)

To see this, consider the SVD $A = U\Sigma V^\top$. Every $x \in \mathbb{R}^m$ with $\|x\| = 1$ can be written as $x = Ua$, $a = (a_1, \ldots, a_m)^\top$, with $\|a\| = 1$, and every $y \in \mathbb{R}^n$ with $\|y\| = 1$ can be written as $y = Vb$, $b = (b_1, \ldots, b_n)^\top$, with $\|b\| = 1$. Then

$x^\top A y = a^\top U^\top A V b = a^\top \Sigma b = \sum_{i=1}^r a_i b_i \sigma_i \le \sigma_1 \sum_{i=1}^r |a_i b_i| \le \sigma_1 \Big( \sum_{i=1}^r a_i^2 \sum_{i=1}^r b_i^2 \Big)^{1/2} \le \sigma_1 \Big( \sum_{i=1}^m a_i^2 \sum_{i=1}^n b_i^2 \Big)^{1/2} = \sigma_1 \|a\| \|b\| = \sigma_1$.

Equality is achieved when $x$ is the first column of $U$ and $y$ is the first column of $V$, and this proves the first equality of (2.2). The second equality is obtained by observing that for a given $y$, the maximizing $x$ is parallel to $Ay$.

Another useful fact that can be derived from the SVD $A = U\Sigma V^\top$ is the following: if $(x, y)$ is an eigenvector of $\begin{pmatrix} 0 & A \\ A^\top & 0 \end{pmatrix}$ belonging to $\sigma_1 > 0$, then $\|x\| = \|y\|$. To see this, observe that $Ay = \sigma_1 x$ and $A^\top x = \sigma_1 y$. Define vectors $a = U^\top x$ and $b = V^\top y$. Then

$\Sigma b = \Sigma V^\top y = U^\top A y = \sigma_1 U^\top x = \sigma_1 a$,
$\Sigma^\top a = \Sigma^\top U^\top x = V^\top A^\top x = \sigma_1 V^\top y = \sigma_1 b$.

It follows that for all $i$ we have $\sigma_i b_i = \sigma_1 a_i$ and $\sigma_i a_i = \sigma_1 b_i$. Thus $(\sigma_1 + \sigma_i)(b_i - a_i) = 0$. Since $\sigma_1 + \sigma_i > 0$, it follows that $a_i = b_i$ for all $i$, and so $\|a\| = \|b\|$, i.e., $\|U^\top x\| = \|V^\top y\|$. The orthogonal matrices $U^\top$ and $V^\top$ preserve norms, and therefore $\|x\| = \|y\|$.

Recall the Rayleigh quotient characterization of the largest eigenvalue of a symmetric matrix $M \in \mathbb{R}^{m \times m}$: $\lambda_{\max}(M) = \max_{\|x\|=1} x^\top M x$. Every $x$ achieving the maximum is an eigenvector of $M$ belonging to $\lambda_{\max}(M)$. If the entries of $M$ are non-negative ($M \ge 0$), the maximization can be restricted to vectors $x$ with non-negative entries ($x \ge 0$), because $x^\top M x \le |x|^\top M |x|$ and $\||x|\| = \|x\|$, where $|x| = (|x_1|, \ldots, |x_m|)^\top$.
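The two facts just recalled, that the spectrum of $B$ consists of $\pm\sigma_i(A)$ padded with zeros and that $\sigma_1(A)$ is given by the maximum principle (2.2), can be checked numerically. A small sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(3, 5)).astype(float)    # a random 0-1 representation matrix
m, n = A.shape
B = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])

sigma = np.linalg.svd(A, compute_uv=False)
eig_B = np.sort(np.linalg.eigvalsh(B))
expected = np.sort(np.concatenate([sigma, -sigma, np.zeros(m + n - 2 * len(sigma))]))
print(np.allclose(eig_B, expected))                   # True: the spectrum of B is {+/-sigma_i} plus zeros

# Maximum principle (2.2): sigma_1(A) is attained at the first right singular vector.
U, S, Vt = np.linalg.svd(A)
print(np.isclose(np.linalg.norm(A @ Vt[0]), S[0]))    # True, since Vt[0] is a unit vector
```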
Recall that a square non-negative matrix $C$ is said to be irreducible when some power of $I + C$ is positive (has positive entries). Equivalently, the digraph induced by $C$ is strongly connected. Thus a symmetric non-negative matrix $B$ is irreducible when the graph induced by $B$ is connected. For a rectangular non-negative matrix $A$, $AA^\top$ is irreducible if and only if the bipartite graph with adjacency matrix $B$ given by (2.1) is connected.

If a symmetric non-negative matrix $B$ is irreducible, then the Perron-Frobenius theorem implies that the spectral radius of $B$ is a simple root of the characteristic polynomial of $B$ and the corresponding eigenvector can be chosen to be positive.

The following result is well known; we give its proof for completeness.

Proposition 2.1. Let $A = (A_{i,j})_{i,j=1}^{m,n} \in \mathbb{R}^{m \times n}_+$ and assume that $B$ is of the form (2.1). Then

$\lambda_{\max}(B) \le \sqrt{\sum_{i,j=1}^{m,n} A_{i,j}^2}$.    (2.3)

Equality holds if and only if either $A = 0$ or $A$ is a rank-one matrix. In particular, if $G$ is a bipartite graph with $e(G) \ge 1$ edges then

$\lambda_{\max}(G) \le \sqrt{e(G)}$,    (2.4)

and equality holds if and only if $G_{ni}$ is $K_{p,q}$, where $pq = e(G)$.

Proof. Let $r$ be the rank of $A$. Recall that the positive eigenvalues of $AA^\top$ are $\sigma_1(A)^2, \ldots, \sigma_r(A)^2$. Hence $\mathrm{trace}\, AA^\top = \sum_{i,j}^{m,n} A_{i,j}^2 = \sum_{k=1}^r \sigma_k(A)^2 \ge \sigma_1(A)^2$. Combine this with the equality $\lambda_{\max}(B) = \sigma_1(A)$ to deduce (2.3). Clearly, equality holds if and only if either $r = 0$, i.e. $A = 0$, or $r = 1$.

Assume now that $G$ is a bipartite graph. Let $A$ be the representation matrix of $G$. Then $\mathrm{trace}\, AA^\top = e(G)$. Hence (2.3) implies (2.4). Assume that $G = K_{p,q}$. Then the entries of the representation matrix $A$ are all 1. So the rank of $A$ is one and $e(K_{p,q}) = pq$, i.e. equality holds in (2.4). Conversely, suppose that $\lambda_{\max}(G) = \sqrt{e(G)}$. Hence $\lambda_{\max}(G_{ni}) = \sqrt{e(G_{ni})}$. Let $C \in \mathbb{R}^{p \times q}$ be the representation matrix of $G_{ni}$. Since $G_{ni}$ satisfies equality in (2.3), we deduce that $C$ is a rank-one matrix. But $C$ is a 0-1 matrix that does not have a zero row or column. Hence all the rows and columns of $C$ must be identical. Hence all the entries of $C$ are 1, i.e. $G_{ni}$ is a complete bipartite graph with $e(G)$ edges.
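A quick numerical check of the equality case of Proposition 2.1 (a sketch, assuming NumPy): the complete bipartite graph attains (2.4) with equality, and deleting a single edge makes the inequality strict.

```python
import numpy as np

def lambda_max(A):
    return np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)[0]

K23 = np.ones((2, 3), dtype=int)            # representation matrix of K_{2,3}
print(lambda_max(K23), np.sqrt(K23.sum()))  # both sqrt(6): the rank-one equality case

G = K23.copy()
G[1, 2] = 0                                 # delete one edge; the rank becomes 2
print(lambda_max(G) < np.sqrt(G.sum()))     # True: strict inequality in (2.4)
```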
3 The Optimal Graphs

The aim of this section is to prove the following theorem.

Theorem 3.1. Let $D = \{d_1 \ge d_2 \ge \cdots \ge d_m\}$ be a set of positive integers. Then the chain graph $G_D$ is the unique graph in $\mathcal{B}_D$, up to isomorphism, which solves the maximum problem $\max_{G \in \mathcal{B}_D} \lambda_{\max}(G)$.

Let us call a graph $G \in \mathcal{B}_D$ optimal if it solves the maximum problem of the above theorem. Our first goal is to prove that every optimal graph is connected. For that purpose we partially order the finite sets of positive integers as follows.

Definition 3.2. Let $D = \{d_1 \ge d_2 \ge \cdots \ge d_m\}$ and $D' = \{d'_1 \ge d'_2 \ge \cdots \ge d'_{m'}\}$ be sets of $m$ and $m'$ positive integers. Then $D > D'$ means that $m \ge m'$, $d_1 \ge d'_1, d_2 \ge d'_2, \ldots, d_{m'} \ge d'_{m'}$, and $D \ne D'$.

Theorem 3.3. If $D > D'$, then $\lambda_{\max}(G_D) > \lambda_{\max}(G_{D'})$.

Proof. Let $A$ be the $m \times d_1$ matrix of 0's and 1's with row sums $d_1 \ge d_2 \ge \cdots \ge d_m$, and columns ordered so that each row is left-justified (1's first, then 0's). Then $B = \begin{pmatrix} 0 & A \\ A^\top & 0 \end{pmatrix}$ of order $m + d_1$ is the adjacency matrix of $G_D$. Let $A'$ and $B'$ be defined similarly for $G_{D'}$. Let $M = AA^\top$ and $M' = A'A'^\top$. Then $M$ and $M'$ are symmetric non-negative irreducible matrices of orders $m$ and $m'$, and $\lambda_{\max}(G_D)^2 = \lambda_{\max}(M)$, $\lambda_{\max}(G_{D'})^2 = \lambda_{\max}(M')$.

Case 1: $m = m'$. In this case, by Definition 3.2, at least one of the inequalities $d_1 \ge d'_1, d_2 \ge d'_2, \ldots, d_m \ge d'_m$ holds with strict inequality. It follows that $M \ge M'$ (i.e., $M - M'$ is a non-negative matrix), and some integer $i \in [1, m]$ satisfies $M_{i,i} > M'_{i,i}$. Therefore every positive vector $y$ satisfies $y^\top M y > y^\top M' y$. Let $y = x'$ be the positive Perron-Frobenius eigenvector of the irreducible matrix $M'$, with $\|x'\| = 1$. Then by the Rayleigh quotient we have $\lambda_{\max}(M) \ge x'^\top M x' > x'^\top M' x' = \lambda_{\max}(M')$, as required.

Case 2: $m > m'$. In this case, let $L$ be the principal submatrix of $M$ consisting of its first $m'$ rows and columns. By Definition 3.2 we have $d_1 \ge d'_1, d_2 \ge d'_2, \ldots, d_{m'} \ge d'_{m'}$. Therefore $L \ge M'$, and hence $\lambda_{\max}(L) \ge \lambda_{\max}(M')$. Since $L$ is symmetric and non-negative, there exists a vector $y \in \mathbb{R}^{m'}$ with $y \ge 0$, $\|y\| = 1$ satisfying $\lambda_{\max}(L) = y^\top L y$. Extend $y$ with zeros to a vector $x \in \mathbb{R}^m$. Then $x \ge 0$, $\|x\| = 1$ and $y^\top L y = x^\top M x \le \lambda_{\max}(M)$. Equality cannot occur here, for if it did, then $x$ would be the unique Perron-Frobenius eigenvector of the irreducible matrix $M$ and $x$ would be positive, whereas $x_i = 0$ for $i > m'$. Thus $\lambda_{\max}(M) > \lambda_{\max}(L) \ge \lambda_{\max}(M')$, as required.

Lemma 3.4. If $G \in \mathcal{B}_D$ is connected, then $\lambda_{\max}(G) \le \lambda_{\max}(G_D)$.

Proof. Let $D = \{d_1 \ge \cdots \ge d_m\}$ and $n \ge d_1$. Let $B = \begin{pmatrix} 0 & A \\ A^\top & 0 \end{pmatrix}$ be the adjacency matrix of $G$, where $A$ is $m \times n$ with row sums given by $D$. Since $G$ is connected, $B$ is irreducible. Let $(x, y)$ be the positive Perron-Frobenius eigenvector of $B$ belonging to $\lambda_{\max}(G) = \sigma_1(A)$, with $x = (x_1, \ldots, x_m)$, $y = (y_1, \ldots, y_n)$:

$\sigma_1(A) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 & A \\ A^\top & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$,    (3.1)

and so $Ay = \sigma_1(A) x$. As we observed in Section 2, we have $\|x\| = \|y\|$, and so we may choose a normalization such that $\|x\| = \|y\| = 1$. We reorder the columns of $A$ so that $y_1 \ge y_2 \ge \cdots \ge y_n > 0$. The rows are still in their original order, and so the row sums are $d_1 \ge \cdots \ge d_m$ in this order. Let $\overleftarrow{A}$ be the matrix obtained from $A$ by left-justifying each row, i.e., moving all the 1's of the row to the beginning of the row. Then $\begin{pmatrix} 0 & \overleftarrow{A} \\ \overleftarrow{A}^\top & 0 \end{pmatrix}$ is the adjacency matrix of $G_D$ with $n - d_1$ zero rows and columns appended at the end, and therefore $\lambda_{\max}(G_D) = \sigma_1(\overleftarrow{A})$. Since $y_1 \ge y_2 \ge \cdots \ge y_n \ge 0$ and since $\overleftarrow{A}$ is obtained from $A$ by left-justifying each row, we have $\overleftarrow{A} y \ge Ay$. Since $x \ge 0$, we have $x^\top \overleftarrow{A} y \ge x^\top A y$. Now (2.2) yields

$\lambda_{\max}(G_D) = \sigma_1(\overleftarrow{A}) = \max_{u \in \mathbb{R}^m, \|u\|=1,\; v \in \mathbb{R}^n, \|v\|=1} u^\top \overleftarrow{A} v \ge x^\top \overleftarrow{A} y \ge x^\top A y = x^\top \sigma_1(A) x = \sigma_1(A) = \lambda_{\max}(G)$.    (3.2)

Lemma 3.5. An optimal graph must be connected.

Proof. Let $G \in \mathcal{B}_D$ be an optimal graph. The graph $G$ is bipartite, and one side of the bipartition (call it the first side) has degrees given by $D$. Let $G_1, \ldots, G_k$ be the connected components of $G$. Then $\lambda_{\max}(G) = \lambda_{\max}(G_i)$ for some $i$. Like $G$, the component $G_i$ is also bipartite with the bipartition inherited from that of $G$. Let $D_i$ be the set of degrees of $G_i$ on the first side of the bipartition. If $G$ is disconnected, then $D > D_i$, and therefore

$\lambda_{\max}(G) \ge \lambda_{\max}(G_D) > \lambda_{\max}(G_{D_i}) \ge \lambda_{\max}(G_i)$,

where the first inequality is by the optimality of $G$, the second by Theorem 3.3, and the third by Lemma 3.4 and the connectivity of $G_i$. This contradicts the equality above and proves that $G$ must be connected.
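The left-justification step behind Lemma 3.4 and Theorem 3.1 is easy to test by brute force: for random 0-1 matrices, left-justifying every row should never decrease the largest singular value. A sketch, assuming NumPy; `left_justify` and `sigma1` are our names:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma1(A):
    return np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)[0]

def left_justify(A):
    """Push the 1's of every row to the left: the matrix of the chain graph G_D,
    where D is the multiset of row sums of A (zero rows stay zero)."""
    L = np.zeros_like(A)
    for i, d in enumerate(A.sum(axis=1)):
        L[i, :d] = 1
    return L

ok = all(sigma1(left_justify(A)) >= sigma1(A) - 1e-9
         for A in (rng.integers(0, 2, size=(4, 5)) for _ in range(500)))
print(ok)                                   # True in every trial
```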
We are now ready to prove our main theorem.

Proof (of Theorem 3.1). Let $G \in \mathcal{B}_D$ be optimal with adjacency matrix $B = \begin{pmatrix} 0 & A \\ A^\top & 0 \end{pmatrix}$. By Lemma 3.5, $G$ is connected. We begin as in the proof of Lemma 3.4. We let $(x, y)$ be the positive Perron-Frobenius eigenvector of $B$ belonging to $\lambda_{\max}(G) = \sigma_1(A)$, with $x = (x_1, \ldots, x_m)^\top$, $y = (y_1, \ldots, y_n)^\top$, $\|x\| = \|y\| = 1$. In other words, (3.1) holds, or equivalently

$Ay = \sigma_1(A) x$,    (3.3)
$A^\top x = \sigma_1(A) y$.    (3.4)

However, this time we reorder both the rows and the columns of $A$ so that $x_1 \ge x_2 \ge \cdots \ge x_m > 0$ and $y_1 \ge y_2 \ge \cdots \ge y_n > 0$, so now the row sums of $A$, which we still denote by $d_1, d_2, \ldots, d_m$, are not necessarily non-increasing. As before, we let $\overleftarrow{A}$ be the matrix obtained from $A$ by left-justifying each row. The graph with adjacency matrix $\begin{pmatrix} 0 & \overleftarrow{A} \\ \overleftarrow{A}^\top & 0 \end{pmatrix}$ is still isomorphic to $G_D$ plus $n - d_1$ isolated vertices, and therefore $\lambda_{\max}(G_D) = \sigma_1(\overleftarrow{A})$. For the same reasons as before we have $\overleftarrow{A} y \ge Ay$, and therefore (3.2) holds. Moreover, by the optimality of $G$ we have equality throughout (3.2). In particular $G_D$ is optimal and $\lambda_{\max}(G_D) = \sigma_1(A)$, so from now on we abbreviate $\sigma_1(A) = \sigma_1(\overleftarrow{A}) = \sigma_1$. Now $\overleftarrow{A} y \ge Ay$, $x^\top \overleftarrow{A} y = x^\top A y$ and $x > 0$ give

$\overleftarrow{A} y = Ay = \sigma_1 x$.    (3.5)

The first two rows of (3.5) and $x_1 \ge x_2$ now give $y_1 + \cdots + y_{d_1} = \sigma_1 x_1 \ge \sigma_1 x_2 = y_1 + \cdots + y_{d_2}$, and since $y > 0$ we must have $d_1 \ge d_2$. The same argument with rows 2 and 3 shows $d_2 \ge d_3$, and so on. We have established that the row sums of $A$ are non-increasing, i.e.,

$d_1 \ge d_2 \ge \cdots \ge d_m$.    (3.6)

Note that by (3.6), the columns of $\overleftarrow{A}$ are top-justified, i.e., the 1's are above the 0's. For this reason and $x \ge 0$ we have $\overleftarrow{A}^\top x \ge A^\top x$, and hence $y^\top \overleftarrow{A}^\top x \ge y^\top A^\top x$ by $y \ge 0$. The analog of (3.2) for $\overleftarrow{A}^\top$ now holds with equality throughout and we obtain

$\overleftarrow{A}^\top x = A^\top x = \sigma_1 y$.    (3.7)

Our remaining task is to show that $d_1 = n$ and $A = \overleftarrow{A}$, and therefore $G$ is isomorphic to $G_D$. For that purpose we need notation for rows of $\overleftarrow{A}$ with equal sums, and similarly for columns. We introduce the following notation for the row sums of $\overleftarrow{A}$:

$r_1 = d_1 = \cdots = d_{m_1} > r_2 = d_{m_1+1} = \cdots = d_{m_1+m_2} > \cdots > r_h = d_{m_1+\cdots+m_{h-1}+1} = \cdots = d_{m_1+\cdots+m_h}$,    (3.8)

where $m_1 + \cdots + m_h = m$. This is illustrated in Figure 2.

Figure 2: The notation for the row sums of $\overleftarrow{A}$: a Ferrers diagram with $m_k$ rows of length $r_k$, $k = 1, \ldots, h$, where $r_1 > r_2 > \cdots > r_h$.

From (3.5) we have $\sigma_1 x_i = (\overleftarrow{A} y)_i = y_1 + \cdots + y_{d_i}$. Therefore by (3.8) and $y > 0$ we obtain

$x_1 = \cdots = x_{m_1} > x_{m_1+1} = \cdots = x_{m_1+m_2} > \cdots > x_{m_1+\cdots+m_{h-1}+1} = \cdots = x_{m_1+\cdots+m_h} > 0$.    (3.9)

Analogously, using (3.7), (3.8) and $x > 0$ we obtain

$y_1 = \cdots = y_{r_h} > y_{r_h+1} = \cdots = y_{r_{h-1}} > \cdots > y_{r_2+1} = \cdots = y_{r_1} > 0 = y_{r_1+1} = y_{r_1+2} = \cdots = y_n$.    (3.10)

From (3.10) and $y > 0$, we conclude that $d_1 = r_1 = n$.
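The grouped pattern (3.9)-(3.10) of the Perron-Frobenius vector can be observed directly on the chain graph of Figure 1. A sketch, assuming NumPy:

```python
import numpy as np

D = [5, 2, 2, 1]                 # r_1 = 5, r_2 = 2, r_3 = 1 with multiplicities 1, 2, 1
A = np.zeros((len(D), D[0]))
for i, d in enumerate(D):
    A[i, :d] = 1.0               # left-justified chain-graph matrix

U, S, Vt = np.linalg.svd(A)
x, y = np.abs(U[:, 0]), np.abs(Vt[0])   # Perron-Frobenius vector (x, y) of B, unit norm on each side
print(np.round(x, 4))                   # x_1 > x_2 = x_3 > x_4 > 0, the pattern (3.9)
print(np.round(y, 4))                   # y_1 > y_2 > y_3 = y_4 = y_5 > 0, the pattern (3.10)
```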
We are now ready to show that $A = \overleftarrow{A}$. Since $d_1 = r_1 = n$, the first $m_1$ rows of $A$ are all-1, and so are the first $m_1$ rows of $\overleftarrow{A}$. Now let $m_1 + 1 \le i \le m_1 + m_2$ be an index of one of the next $m_2$ rows. Both $A$ and $\overleftarrow{A}$ have $d_i = r_2$ 1's in row $i$. Let the 1's in row $i$ of $A$ lie in columns $k_1, \ldots, k_{r_2}$. Then by (3.5) we have

$\sum_{j=1}^{r_2} y_{k_j} = (Ay)_i = (\overleftarrow{A} y)_i = \sum_{j=1}^{r_2} y_j$.    (3.11)

However, by (3.10) the last $r_1 - r_2$ components of $y$ are smaller than all other components. Therefore if any $k_j$ lay in the range $\{r_2 + 1, \ldots, r_1\}$, it would follow that $\sum_{j=1}^{r_2} y_{k_j} < \sum_{j=1}^{r_2} y_j$, contradicting (3.11). Therefore $k_j = j$ for $j = 1, \ldots, r_2$; in other words, rows $i$ of $A$ and $\overleftarrow{A}$ are the same. An analogous argument can be applied to the next $m_3$ rows, and so on, and it follows that $\overleftarrow{A} = A$.

The arguments of the proof of the above theorem yield:

Corollary 3.6. Let the assumptions of Problem 1.1 hold. Then any $H \in \mathcal{K}(p, q, e)$ satisfying $\max_{G \in \mathcal{K}(p,q,e)} \lambda_{\max}(G) = \lambda_{\max}(H)$ is isomorphic to $G_D$ for some $D = \{d_1 \ge d_2 \ge \cdots \ge d_m\}$, where $m \le p$ and $d_1 \le q$.

4 Estimations of the Largest Eigenvalue

In this section we give lower and upper bounds for $\lambda_{\max}(G)$, where $G$ is an optimal graph with a given adjacency matrix $\begin{pmatrix} 0 & A \\ A^\top & 0 \end{pmatrix}$. (Our upper bound improves the upper bound (2.4).)

Recall the concept of the second compound matrix $\Lambda^2 A$ of an $m \times n$ matrix $A = (A_{i,j})$ [6]: $\Lambda^2 A$ is a $\binom{m}{2} \times \binom{n}{2}$ matrix with rows indexed by $(i_1, i_2)$, $1 \le i_1 < i_2 \le m$, and columns indexed by $(j_1, j_2)$, $1 \le j_1 < j_2 \le n$. The entry in row $(i_1, i_2)$ and column $(j_1, j_2)$ of $\Lambda^2 A$ is given by

$(\Lambda^2 A)_{(i_1,i_2)(j_1,j_2)} = \det \begin{pmatrix} A_{i_1,j_1} & A_{i_1,j_2} \\ A_{i_2,j_1} & A_{i_2,j_2} \end{pmatrix}$.    (4.1)

Note that $(\Lambda^2 A)^\top = \Lambda^2 A^\top$. It follows from the Cauchy-Binet theorem that for matrices $A, B$ of compatible dimensions one has $\Lambda^2(AB) = (\Lambda^2 A)(\Lambda^2 B)$. One also has $\Lambda^2 I = I$, and therefore $\Lambda^2 A^{-1} = (\Lambda^2 A)^{-1}$ for nonsingular $A$. In particular, the second compound matrix of an orthogonal matrix is orthogonal, and therefore the SVD carries over to the second compound: if the SVD of $A$ is $A = U\Sigma V^\top$, then the SVD of $\Lambda^2 A$ is $\Lambda^2 A = (\Lambda^2 U)(\Lambda^2 \Sigma)(\Lambda^2 V)^\top$. It follows that if the singular values of $A$ are $\sigma_1 \ge \sigma_2 \ge \cdots$, then the singular values of $\Lambda^2 A$ are $\sigma_i \sigma_j$, $i < j$. In particular, when the rank of $A$ is larger than 1, equivalently $\Lambda^2 A \ne 0$, we have

$\sigma_1 \sigma_2 = \sigma_1(\Lambda^2 A) = \max_{w \ne 0} \frac{\|(\Lambda^2 A) w\|}{\|w\|}$,    (4.2)

where the second equality follows by applying (2.2) to $\Lambda^2 A$.

We now specialize to $A$ given by (2.1), the representation matrix of an optimal graph. Thus $A$ is a matrix of 0's and 1's whose rows are left-justified and whose columns are top-justified. We use the notation (3.8) for the row sums of $A$. For such $A$ the entries of $\Lambda^2 A$ can only be 0 or $-1$. Indeed, if in (4.1) $A_{i_2,j_2} = 1$, then $A_{i_1,j_2} = A_{i_2,j_1} = A_{i_1,j_1} = 1$ and the determinant vanishes. If $A_{i_2,j_2} = 0$ and the determinant does not vanish, then again $A_{i_1,j_2} = A_{i_2,j_1} = A_{i_1,j_1} = 1$ and the determinant equals $-1$. In the latter case we say that $(i_1, i_2)$ and $(j_1, j_2)$ are in a $\Gamma$-configuration.
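The 0/$-1$ structure of $\Lambda^2 A$ and the identity $\sigma_1(\Lambda^2 A) = \sigma_1\sigma_2$ can be verified on a small example. A sketch, assuming NumPy; `second_compound` is our helper name:

```python
import itertools
import numpy as np

def second_compound(A):
    """Lambda^2 A: the 2x2 minors of A, rows indexed by pairs i1 < i2 and
    columns by pairs j1 < j2, as in (4.1)."""
    m, n = A.shape
    rows = list(itertools.combinations(range(m), 2))
    cols = list(itertools.combinations(range(n), 2))
    C = np.empty((len(rows), len(cols)))
    for a, (i1, i2) in enumerate(rows):
        for b, (j1, j2) in enumerate(cols):
            C[a, b] = A[i1, j1] * A[i2, j2] - A[i1, j2] * A[i2, j1]
    return C

A = np.array([[1, 1, 1, 1, 1],          # the chain graph of Figure 1 again:
              [1, 1, 0, 0, 0],          # rows left-justified, columns top-justified
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 0]], dtype=float)

C = second_compound(A)
print(np.unique(C))                                                    # [-1.  0.]: only 0's and -1's
s = np.linalg.svd(A, compute_uv=False)
print(np.isclose(np.linalg.svd(C, compute_uv=False)[0], s[0] * s[1]))  # True: sigma_1(Lambda^2 A) = sigma_1 * sigma_2
```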
To estimate $\sigma_1\sigma_2$ from below, we take a particular column vector $w$ in (4.2): the $(j_1, j_2)$, $j_1 < j_2$ entry of $w$ is 1 if column $(j_1, j_2)$ of $\Lambda^2 A$ is nonzero; otherwise this entry of $w$ is zero. (The assumption $\Lambda^2 A \ne 0$ implies that $w \ne 0$.) By (4.2) we have

$\sigma_1 \sigma_2 \ge \frac{\|(\Lambda^2 A) w\|}{\|w\|}$.    (4.3)

Since $w$ is a vector of 0's and 1's, $\|w\|^2$ is the number of nonzero entries of $w$, that is to say, the number of nonzero columns of $\Lambda^2 A$. We count the nonzero columns $(j_1, j_2)$, $j_1 < j_2$ of $\Lambda^2 A$ as follows. Fix $j_2$. There is a unique $k = 1, \ldots, h-1$ such that $r_{k+1} + 1 \le j_2 \le r_k$. If $j_1$ is chosen among $1, \ldots, r_{k+1}$, then there exist $(i_1, i_2)$ such that $(i_1, i_2)$ and $(j_1, j_2)$ are in a $\Gamma$-configuration, and otherwise not. It follows that for our fixed $j_2$, there are $r_{k+1}$ values of $j_1$ such that column $(j_1, j_2)$ of $\Lambda^2 A$ is nonzero. We can vary $j_2$ without changing [...]

[...] $a = e$. Furthermore, the definitions of $\omega, \omega'$ yield the equalities (4.10). To complete the proof, we show that equality holds in (4.3), and therefore also in (4.6), i.e., $b = \omega$. Since $\Lambda^2 A$ has rank 1 and its elements are only 0 and $-1$, all its nonzero rows are equal. Say it has $c$ nonzero rows, each with $d$ elements equal to $-1$. The trace of $(\Lambda^2 A)^\top (\Lambda^2 A)$ is the sum of squares of the singular values of $\Lambda^2 A$, [...] equals the sum of squares of the elements of $\Lambda^2 A$, namely $cd$. On the other hand, our chosen vector $w$ satisfies $\|w\|^2 = d$ and $\|(\Lambda^2 A) w\|^2 = cd^2$ (because each of the $c$ nonzero rows of $\Lambda^2 A$ multiplied by $w$ gives $-d$). Hence $\|(\Lambda^2 A) w\| / \|w\| = \sqrt{cd}$. Thus both sides of (4.3) are equal to $\sqrt{cd}$. We suspect that under the conditions of Theorem 4.1 for $h \ge 3$ one has strict inequality in (4.9).

5 A Minimization Problem

The first [...]

[...] vertices. Proof. Assume that $G_D$ has the Ferrers diagram given in Figure 2 with $h = 2$. Let $n_1 = r_2$ and $n_2 = r_1 - r_2$. Assume first that $r = 2$ and $e \ge 3$ is odd. Then $m_1 \ge 1$, $m_2 \ge 1$, $n_1 \ge 1$ and $n_2 \ge 1$, so $m_1 + m_2 \ge 2$ and $n_1 + n_2 \ge 2$. Then for $e \ge 5$ the theorem follows from Theorem 5.4 for $r = 2$ and Theorem 4.1. For $e = 3$ the theorem is trivial. For $r \ge 3$ the theorem follows from Proposition 5.3, Theorem 5.4 and Theorem 4.1. [...]

[...] $c = (c_1, \ldots, c_p) \in \mathbb{R}^p_{+\searrow}$. Then all the minors of $M(c)$ are nonnegative. In particular, $M(c)$ is a nonnegative definite matrix. If $c_1 > \cdots > c_p > 0$, then all the principal minors of $M(c)$ are positive, i.e., $M(c)$ is positive definite.

Corollary 7.2. Let $c = (c_1, \ldots, c_p) \in \mathbb{R}^p_{+\searrow}$. Then the rank of $M(c)$ is equal to the number of distinct positive elements in $\{c_1, \ldots, c_p\}$. Proof. Let $\{c_{i_1}, \ldots, c_{i_k}\}$ be the set of all distinct [...]

[...] is the right-hand side of (7.6). Proof. The equalities (7.3), (7.4) are straightforward. The equality (7.5) follows from them and the identity [...]
