Ebook Number theory - An introduction to mathematics (2/E): Part 2


Document information

Part 2 of the ebook "Number Theory: An Introduction to Mathematics" covers the following contents: the arithmetic of quadratic forms, the geometry of numbers, the number of prime numbers, a character study, uniform distribution and ergodic theory, elliptic functions, and connections with number theory.

VII The Arithmetic of Quadratic Forms

We have already determined the integers which can be represented as a sum of two squares. Similarly, one may ask which integers can be represented in the form x² + 2y² or, more generally, in the form ax² + 2bxy + cy², where a, b, c are given integers. The arithmetic theory of binary quadratic forms, which had its origins in the work of Fermat, was extensively developed during the 18th century by Euler, Lagrange, Legendre and Gauss. The extension to quadratic forms in more than two variables, which was begun by them and is exemplified by Lagrange's theorem that every positive integer is a sum of four squares, was continued during the 19th century by Dirichlet, Hermite, H.J.S. Smith, Minkowski and others. In the 20th century Hasse and Siegel made notable contributions. With Hasse's work especially it became apparent that the theory is more perspicuous if one allows the variables to be rational numbers, rather than integers. This opened the way to the study of quadratic forms over arbitrary fields, with pioneering contributions by Witt (1937) and Pfister (1965–67).

From this vast theory we focus attention on one central result, the Hasse–Minkowski theorem. However, we first study quadratic forms over an arbitrary field in the geometric formulation of Witt. Then, following an interesting approach due to Fröhlich (1967), we study quadratic forms over a Hilbert field.

Quadratic Spaces

The theory of quadratic spaces is simply another name for the theory of quadratic forms. The advantage of the change in terminology lies in its appeal to geometric intuition. It has in fact led to new results even at quite an elementary level. The new approach had its debut in a paper by Witt (1937) on the arithmetic theory of quadratic forms, but it is appropriate also if one is interested in quadratic forms over the real field or any other field.

For the remainder of this chapter we will restrict attention to fields for which 1 + 1 ≠ 0. Thus the phrase ‘an arbitrary field’ will mean ‘an arbitrary field of characteristic ≠ 2’. The proofs of many results make essential use of this restriction on the characteristic.
[W.A. Coppel, Number Theory: An Introduction to Mathematics, Universitext, DOI: 10.1007/978-0-387-89486-7_7, © Springer Science + Business Media, LLC 2009]

For any field F, we will denote by F× the multiplicative group of all nonzero elements of F. The squares in F× form a subgroup F×² and any coset of this subgroup is called a square class.

Let V be a finite-dimensional vector space over such a field F. We say that V is a quadratic space if with each ordered pair u, v of elements of V there is associated an element (u, v) of F such that

(i) (u₁ + u₂, v) = (u₁, v) + (u₂, v) for all u₁, u₂, v ∈ V;
(ii) (αu, v) = α(u, v) for every α ∈ F and all u, v ∈ V;
(iii) (u, v) = (v, u) for all u, v ∈ V.

It follows that

(i)′ (u, v₁ + v₂) = (u, v₁) + (u, v₂) for all u, v₁, v₂ ∈ V;
(ii)′ (u, αv) = α(u, v) for every α ∈ F and all u, v ∈ V.

Let e₁, …, eₙ be a basis for the vector space V. Then any u, v ∈ V can be uniquely expressed in the form

u = ξ₁e₁ + ⋯ + ξₙeₙ,  v = η₁e₁ + ⋯ + ηₙeₙ,

where ξⱼ, ηⱼ ∈ F (j = 1, …, n), and

(u, v) = Σ_{j,k=1..n} αⱼₖξⱼηₖ,

where αⱼₖ = (eⱼ, eₖ) = αₖⱼ. Thus

(u, u) = Σ_{j,k=1..n} αⱼₖξⱼξₖ

is a quadratic form with coefficients in F. The quadratic space is completely determined by the quadratic form, since

(u, v) = {(u + v, u + v) − (u, u) − (v, v)}/2.   (1)

Conversely, for a given basis e₁, …, eₙ of V, any n × n symmetric matrix A = (αⱼₖ) with elements from F, or the associated quadratic form f(x) = xᵗAx, may be used in this way to give V the structure of a quadratic space.
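Identity (1) can be checked mechanically in coordinates. The following sketch (in Python, with exact rational arithmetic; the 2 × 2 Gram matrix is an arbitrary illustrative choice, not one taken from the text) recovers the bilinear form from the associated quadratic form:

```python
from fractions import Fraction

# An arbitrary symmetric Gram matrix A = (a_jk) over Q, chosen for illustration.
A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(-3)]]

def bilinear(u, v):
    """(u, v) = sum over j, k of a_jk * xi_j * eta_k, as in the text."""
    n = len(A)
    return sum(A[j][k] * u[j] * v[k] for j in range(n) for k in range(n))

def quadratic(u):
    """(u, u), the associated quadratic form."""
    return bilinear(u, u)

def polarize(u, v):
    """Recover (u, v) from the quadratic form via identity (1);
    the division by 2 is why characteristic 2 must be excluded."""
    upv = [x + y for x, y in zip(u, v)]
    return (quadratic(upv) - quadratic(u) - quadratic(v)) / 2

u = [Fraction(1), Fraction(2)]
v = [Fraction(3), Fraction(-1)]
assert polarize(u, v) == bilinear(u, v)
```

So the quadratic form (u, u) and the symmetric bilinear form (u, v) carry exactly the same information, as the text asserts.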
Let e₁′, …, eₙ′ be any other basis for V. Then

eᵢ = τ₁ᵢe₁′ + ⋯ + τₙᵢeₙ′,

where T = (τᵢⱼ) is an invertible n × n matrix with elements from F. Conversely, any such matrix T defines in this way a new basis e₁′, …, eₙ′. Since

(eᵢ, eₖ) = Σ_{j,h=1..n} τⱼᵢβⱼₕτₕₖ,

where βⱼₕ = (eⱼ′, eₕ′), the matrix B = (βⱼₕ) is symmetric and

A = TᵗBT.   (2)

Two symmetric matrices A, B with elements from F are said to be congruent if (2) holds for some invertible matrix T with elements from F. Thus congruence of symmetric matrices corresponds to a change of basis in the quadratic space. Evidently congruence is an equivalence relation, i.e. it is reflexive, symmetric and transitive. Two quadratic forms are said to be equivalent over F if their coefficient matrices are congruent. Equivalence over F of the quadratic forms f and g will be denoted by f ∼_F g or simply f ∼ g. It follows from (2) that

det A = (det T)² det B.

Thus, although det A is not uniquely determined by the quadratic space, if it is nonzero its square class is uniquely determined. By abuse of language, we will call any representative of this square class the determinant of the quadratic space V and denote it by det V.
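The following hypothetical 2 × 2 example (matrix entries and helper names are ours) illustrates why the square class of det A survives a change of basis: under A = TᵗBT, det A differs from det B by the square (det T)².

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def det2(X):
    # determinant of a 2x2 matrix, enough for this illustration
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

# B: a symmetric coefficient matrix; T: an invertible change-of-basis matrix
# (both chosen arbitrarily for illustration).
B = [[Fraction(1), Fraction(2)], [Fraction(2), Fraction(5)]]
T = [[Fraction(1), Fraction(1)], [Fraction(0), Fraction(1)]]

A = mat_mul(mat_mul(transpose(T), B), T)   # A = T^t B T, congruent to B

assert A == transpose(A)                   # congruence preserves symmetry
assert det2(A) == det2(T) ** 2 * det2(B)   # det changes only by a square
```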
Although quadratic spaces are better adapted for proving theorems, quadratic forms and symmetric matrices are useful for computational purposes. Thus a familiarity with both languages is desirable. However, we do not feel obliged to give two versions of each definition or result, and a version in one language may later be used in the other without explicit comment.

A vector v is said to be orthogonal to a vector u if (u, v) = 0. Then also u is orthogonal to v. The orthogonal complement U⊥ of a subspace U of V is defined to be the set of all v ∈ V such that (u, v) = 0 for every u ∈ U. Evidently U⊥ is again a subspace. A subspace U will be said to be non-singular if U ∩ U⊥ = {0}.

The whole space V is itself non-singular if and only if V⊥ = {0}. Thus V is non-singular if and only if some, and hence every, symmetric matrix describing it is non-singular, i.e. if and only if det V ≠ 0.

We say that a quadratic space V is the orthogonal sum of two subspaces V₁ and V₂, and we write V = V₁ ⊥ V₂, if V = V₁ + V₂, V₁ ∩ V₂ = {0} and (v₁, v₂) = 0 for all v₁ ∈ V₁, v₂ ∈ V₂. If A₁ is a coefficient matrix for V₁ and A₂ a coefficient matrix for V₂, then

A = ( A₁  0  )
    ( 0   A₂ )

is a coefficient matrix for V = V₁ ⊥ V₂. Thus det V = (det V₁)(det V₂). Evidently V is non-singular if and only if both V₁ and V₂ are non-singular.

If W is any subspace supplementary to the orthogonal complement V⊥ of the whole space V, then V = V⊥ ⊥ W and W is non-singular. Many problems for arbitrary quadratic spaces may be reduced in this way to non-singular quadratic spaces.

Proposition 1 If a quadratic space V contains a vector u such that (u, u) ≠ 0, then

V = U ⊥ U⊥,

where U = ⟨u⟩ is the one-dimensional subspace spanned by u.

Proof For any vector v ∈ V, put v′ = v − αu, where α = (v, u)/(u, u). Then (v′, u) = 0 and hence v′ ∈ U⊥. Since U ∩ U⊥ = {0}, the result follows. ✷

A vector space basis u₁, …, uₙ of a quadratic space V is said to be an orthogonal basis if (uⱼ, uₖ) = 0 whenever j ≠ k.

Proposition 2 Any quadratic space V has an orthogonal basis.

Proof If V has dimension 1, there is nothing to prove. Suppose V has dimension n > 1 and the result holds for quadratic spaces of lower dimension. If (v, v) = 0 for all v ∈ V, then any basis is an orthogonal basis, by (1). Hence we may assume that V contains a vector u₁ such that (u₁, u₁) ≠ 0. If U₁ is the 1-dimensional subspace spanned by u₁ then, by Proposition 1, V = U₁ ⊥ U₁⊥. By the induction hypothesis U₁⊥ has an orthogonal basis u₂, …, uₙ, and u₁, u₂, …, uₙ is then an orthogonal basis for V. ✷

Proposition 2 says that any symmetric matrix A is congruent to a diagonal matrix, or that the corresponding quadratic form f is equivalent over F to a diagonal form

δ₁ξ₁² + ⋯ + δₙξₙ².

Evidently det f = δ₁ ⋯ δₙ and f is non-singular if and only if δⱼ ≠ 0 (1 ≤ j ≤ n). If A ≠ 0 then, by Propositions 1 and 2, we can take δ₁ to be any element of F× which is represented by f. Here γ ∈ F× is said to be represented by a quadratic space V over the field F if there exists a vector v ∈ V such that (v, v) = γ.
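The induction in the proof of Proposition 2 is effectively an algorithm for congruence-diagonalization. A minimal sketch over Q follows (exact Fraction arithmetic; the function and variable names are ours, and no attempt is made at efficiency):

```python
from fractions import Fraction

def diagonalize(A):
    """Congruence-diagonalize a symmetric matrix over Q: return (D, T) with
    D = T^t A T diagonal, mirroring the induction in the proof of
    Proposition 2. A minimal sketch, not an optimized routine."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    T = [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]

    def elem(i, j, c):
        # basis change e_j := e_j + c*e_i, i.e. M := E^t M E, T := T E
        for r in range(n):
            M[r][j] += c * M[r][i]
        for r in range(n):
            M[j][r] += c * M[i][r]
        for r in range(n):
            T[r][j] += c * T[r][i]

    def swap(i, j):
        # basis change e_i <-> e_j
        for r in range(n):
            M[r][i], M[r][j] = M[r][j], M[r][i]
        for r in range(n):
            M[i][r], M[j][r] = M[j][r], M[i][r]
        for r in range(n):
            T[r][i], T[r][j] = T[r][j], T[r][i]

    for i in range(n):
        if M[i][i] == 0:
            j = next((j for j in range(i + 1, n) if M[j][j] != 0), None)
            if j is not None:
                swap(i, j)
            else:
                j = next((j for j in range(i + 1, n) if M[i][j] != 0), None)
                if j is not None:
                    # all (e_k, e_k) vanish here, so e_i := e_i + e_j gives
                    # (e_i, e_i) = 2(e_i, e_j) != 0 -- this needs char != 2
                    elem(j, i, Fraction(1))
        if M[i][i] != 0:
            for j in range(i + 1, n):
                elem(i, j, -M[i][j] / M[i][i])
    return M, T
```

For the Gram matrix [[0, 1], [1, 0]] this yields the diagonal form diag(2, −1/2); note how the characteristic ≠ 2 restriction enters exactly where a diagonal entry has to be manufactured from an off-diagonal one.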
As an application of Proposition 2 we prove

Proposition 3 If U is a non-singular subspace of the quadratic space V, then V = U ⊥ U⊥.

Proof Let u₁, …, uₘ be an orthogonal basis for U. Then (uⱼ, uⱼ) ≠ 0 (1 ≤ j ≤ m), since U is non-singular. For any vector v ∈ V, let u = α₁u₁ + ⋯ + αₘuₘ, where αⱼ = (v, uⱼ)/(uⱼ, uⱼ) for each j. Then u ∈ U and (u, uⱼ) = (v, uⱼ) (1 ≤ j ≤ m). Hence v − u ∈ U⊥. Since U ∩ U⊥ = {0}, the result follows. ✷

It may be noted that if U is a non-singular subspace and V = U ⊥ W for some subspace W, then necessarily W = U⊥. For it is obvious that W ⊆ U⊥ and dim W = dim V − dim U = dim U⊥, by Proposition 4.

Proposition 4 Let V be a non-singular quadratic space. If v₁, …, vₘ are linearly independent vectors in V then, for any η₁, …, ηₘ ∈ F, there exists a vector v ∈ V such that (vⱼ, v) = ηⱼ (1 ≤ j ≤ m). Moreover, if U is any subspace of V, then

(i) dim U + dim U⊥ = dim V;
(ii) U⊥⊥ = U;
(iii) U⊥ is non-singular if and only if U is non-singular.

Proof There exist vectors vₘ₊₁, …, vₙ ∈ V such that v₁, …, vₙ form a basis for V. If we put αⱼₖ = (vⱼ, vₖ) then, since V is non-singular, the n × n symmetric matrix A = (αⱼₖ) is non-singular. Hence, for any η₁, …, ηₙ ∈ F, there exist unique ξ₁, …, ξₙ ∈ F such that v = ξ₁v₁ + ⋯ + ξₙvₙ satisfies

(v₁, v) = η₁, …, (vₙ, v) = ηₙ.

This proves the first part of the proposition.

By taking U = ⟨v₁, …, vₘ⟩ and η₁ = ⋯ = ηₘ = 0, we see that dim U⊥ = n − m. Replacing U by U⊥, we obtain dim U⊥⊥ = dim U. Since it is obvious that U ⊆ U⊥⊥, this implies U = U⊥⊥. Since U non-singular means U ∩ U⊥ = {0}, (iii) follows at once from (ii). ✷

We now introduce some further definitions. A vector u is said to be isotropic if u ≠ 0 and (u, u) = 0. A subspace U of V is said to be isotropic if it contains an isotropic vector, and anisotropic otherwise. A subspace U of V is said to be totally isotropic if every nonzero vector in U is isotropic, i.e. if U ⊆ U⊥. According to these definitions, the trivial subspace {0} is both anisotropic and totally isotropic.
A quadratic space V over a field F is said to be universal if it represents every γ ∈ F×, i.e. if for each γ ∈ F× there is a vector v ∈ V such that (v, v) = γ.

Proposition 5 If a non-singular quadratic space V is isotropic, then it is universal.

Proof Since V is isotropic, it contains a vector u ≠ 0 such that (u, u) = 0. Since V is non-singular, it contains a vector w such that (u, w) ≠ 0. Then w is linearly independent of u and, by replacing w by a scalar multiple, we may assume (u, w) = 1. If v = αu + w, then (v, v) = γ for α = {γ − (w, w)}/2. ✷

On the other hand, a non-singular universal quadratic space need not be isotropic. As an example, take F to be the finite field with three elements and V the 2-dimensional quadratic space corresponding to the quadratic form ξ₁² + ξ₂².

Proposition 6 A non-singular quadratic form f(ξ₁, …, ξₙ) with coefficients from a field F represents γ ∈ F× if and only if the quadratic form

g(ξ₀, ξ₁, …, ξₙ) = −γξ₀² + f(ξ₁, …, ξₙ)

is isotropic.

Proof Obviously if f(x₁, …, xₙ) = γ and x₀ = 1, then g(x₀, x₁, …, xₙ) = 0. Suppose on the other hand that g(x₀, x₁, …, xₙ) = 0 for some xⱼ ∈ F, not all zero. If x₀ ≠ 0, then f certainly represents γ. If x₀ = 0, then f is isotropic and hence, by Proposition 5, it still represents γ. ✷

Proposition 7 Let V be a non-singular isotropic quadratic space. If V = U ⊥ W, then there exists γ ∈ F× such that, for some u ∈ U and w ∈ W,

(u, u) = γ,  (w, w) = −γ.

Proof Since V is non-singular, so also are U and W, and since V contains an isotropic vector v₁ = u₁ + w₁, there exist u₁ ∈ U, w₁ ∈ W, not both zero, such that (u₁, u₁) = −(w₁, w₁). If this common value is nonzero, we are finished. Otherwise either U or W is isotropic. Without loss of generality, suppose U is isotropic. Since W is non-singular, it contains a vector w such that (w, w) ≠ 0, and U contains a vector u such that (u, u) = −(w, w), by Proposition 5. ✷

We now show that the totally isotropic subspaces of a quadratic space are important for an understanding of its structure, even though they are themselves trivial as quadratic spaces.
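The example after Proposition 5 — that over the field of three elements the form ξ₁² + ξ₂² is universal but anisotropic — is small enough to verify by brute force:

```python
# Brute-force check over F_3 (arithmetic mod 3): the form x^2 + y^2
# represents every nonzero element, yet has no nonzero isotropic vector.
q = 3
nonzero_vectors = [(x, y) for x in range(q) for y in range(q) if (x, y) != (0, 0)]
values = {(x * x + y * y) % q for x, y in nonzero_vectors}

# universal: both nonzero elements of F_3 are represented
assert {1, 2} <= values
# anisotropic: no nonzero vector has (v, v) = 0
assert all((x * x + y * y) % q != 0 for x, y in nonzero_vectors)
```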
Proposition 8 All maximal totally isotropic subspaces of a quadratic space have the same dimension.

Proof Let U₁ be a maximal totally isotropic subspace of the quadratic space V. Then U₁ ⊆ U₁⊥ and U₁⊥\U₁ contains no isotropic vector. Since V⊥ ⊆ U₁⊥, it follows that V⊥ ⊆ U₁. If V′ is a subspace of V supplementary to V⊥, then V′ is non-singular and U₁ = V⊥ + U₁′, where U₁′ ⊆ V′. Since U₁′ is a maximal totally isotropic subspace of V′, this shows that it is sufficient to establish the result when V itself is non-singular.

Let U₂ be another maximal totally isotropic subspace of V. Put W = U₁ ∩ U₂ and let W₁, W₂ be subspaces supplementary to W in U₁, U₂ respectively. We are going to show that W₂ ∩ W₁⊥ = {0}.

Let v ∈ W₂ ∩ W₁⊥. Since W₂ ⊆ U₂, v is isotropic and v ∈ U₂⊥ ⊆ W⊥. Hence v ∈ U₁⊥ and actually v ∈ U₁, since v is isotropic. Since W₂ ⊆ U₂ this implies v ∈ W, and since W ∩ W₂ = {0} this implies v = 0.

It follows that dim W₂ + dim W₁⊥ ≤ dim V. But, since V is now assumed non-singular, dim W₁ = dim V − dim W₁⊥, by Proposition 4. Hence dim W₂ ≤ dim W₁ and, for the same reason, dim W₁ ≤ dim W₂. Thus dim W₂ = dim W₁, and hence dim U₂ = dim U₁. ✷

We define the index, ind V, of a quadratic space V to be the dimension of any maximal totally isotropic subspace. Thus V is anisotropic if and only if ind V = 0.

A field F is said to be ordered if it contains a subset P of positive elements, which is closed under addition and multiplication, such that F is the disjoint union of the sets {0}, P and −P = {−x : x ∈ P}. The rational field Q and the real field R are ordered fields, with the usual interpretation of ‘positive’. For quadratic spaces over an ordered field there are other useful notions of index.

A subspace U of a quadratic space V over an ordered field F is said to be positive definite if (u, u) > 0 for all nonzero u ∈ U and negative definite if (u, u) < 0 for all nonzero u ∈ U. Evidently positive definite and negative definite subspaces are anisotropic.
Proposition 9 All maximal positive definite subspaces of a quadratic space V over an ordered field F have the same dimension.

Proof Let U₊ be a maximal positive definite subspace of the quadratic space V. Since U₊ is certainly non-singular, we have V = U₊ ⊥ W, where W = U₊⊥, and since U₊ is maximal, (w, w) ≤ 0 for all w ∈ W. Since V = U₊ ⊥ W, we have V⊥ ⊆ W. If U₋ is a maximal negative definite subspace of W, then in the same way W = U₋ ⊥ U₀, where U₀ = U₋⊥ ∩ W. Evidently U₀ is totally isotropic and U₀ ⊆ V⊥. In fact U₀ = V⊥, since U₋ ∩ V⊥ = {0}. Since (v, v) ≥ 0 for all v ∈ U₊ ⊥ V⊥, it follows that U₋ is a maximal negative definite subspace of V.

If U₊′ is another maximal positive definite subspace of V, then U₊′ ∩ W = {0} and hence

dim U₊′ + dim W = dim(U₊′ + W) ≤ dim V.

Thus dim U₊′ ≤ dim V − dim W = dim U₊. But U₊ and U₊′ can be interchanged. ✷

If V is a quadratic space over an ordered field F, we define the positive index ind₊ V to be the dimension of any maximal positive definite subspace. Similarly, all maximal negative definite subspaces have the same dimension, which we will call the negative index of V and denote by ind₋ V. The proof of Proposition 9 shows that

ind₊ V + ind₋ V + dim V⊥ = dim V.

Proposition 10 Let F denote the real field R or, more generally, an ordered field in which every positive element is a square. Then any non-singular quadratic form f in n variables with coefficients from F is equivalent over F to a quadratic form

g = ξ₁² + ⋯ + ξₚ² − ξₚ₊₁² − ⋯ − ξₙ²,

where p ∈ {0, 1, …, n} is uniquely determined by f. In fact

ind₊ f = p,  ind₋ f = n − p,  ind f = min(p, n − p).

Proof By Proposition 2, f is equivalent over F to a diagonal form δ₁η₁² + ⋯ + δₙηₙ², where δⱼ ≠ 0 (1 ≤ j ≤ n). We may choose the notation so that δⱼ > 0 for j ≤ p and δⱼ < 0 for j > p. The change of variables ξⱼ = δⱼ^(1/2)ηⱼ (j ≤ p), ξⱼ = (−δⱼ)^(1/2)ηⱼ (j > p) now brings f to the form g. Since the corresponding quadratic space has a p-dimensional maximal positive definite subspace, p = ind₊ f is uniquely determined. Similarly n − p = ind₋ f, and the formula for ind f follows readily. ✷
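A concrete instance of Proposition 10 over Q (the ternary form below is an arbitrary illustrative choice): completing the square diagonalizes the form, and counting signs of the diagonal coefficients gives the indices.

```python
# f = x^2 + 4xy + y^2 + 2z^2 over Q: completing the square gives
# f = (x + 2y)^2 - 3y^2 + 2z^2, a diagonal form with coefficients (1, -3, 2),
# so p = ind+ f = 2 and n - p = ind- f = 1.
def f(x, y, z):
    return x * x + 4 * x * y + y * y + 2 * z * z

def g(u, v, w):
    # diagonal form after the substitution u = x + 2y, v = y, w = z
    return u * u - 3 * v * v + 2 * w * w

# the substitution is invertible over Q, and the two forms agree everywhere
for x in range(-3, 4):
    for y in range(-3, 4):
        for z in range(-3, 4):
            assert f(x, y, z) == g(x + 2 * y, y, z)

deltas = (1, -3, 2)
p = sum(1 for d in deltas if d > 0)
assert (p, len(deltas) - p) == (2, 1)
```

By Sylvester's law of inertia, any other diagonalization of this form must produce the same sign counts (2, 1).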
It follows that, for quadratic spaces over a field of the type considered in Proposition 10, a subspace is anisotropic if and only if it is either positive definite or negative definite.

Proposition 10 completely solves the problem of equivalence for real quadratic forms. (The uniqueness of p is known as Sylvester's law of inertia.) It will now be shown that the problem of equivalence for quadratic forms over a finite field can also be completely solved.

Lemma 11 If V is a non-singular 2-dimensional quadratic space over a finite field Fq, of (odd) cardinality q, then V is universal.

Proof By choosing an orthogonal basis for V we are reduced to showing that if α, β, γ ∈ Fq×, then there exist ξ, η ∈ Fq such that αξ² + βη² = γ. As ξ runs through Fq, αξ² takes (q + 1)/2 = 1 + (q − 1)/2 distinct values. Similarly, as η runs through Fq, γ − βη² takes (q + 1)/2 distinct values. Since (q + 1)/2 + (q + 1)/2 > q, there exist ξ, η ∈ Fq for which αξ² and γ − βη² take the same value. ✷
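The counting argument in the proof of Lemma 11 can be checked directly for small prime fields (prime moduli only, so that arithmetic mod q realizes Fq; the range of moduli tested is an illustrative choice):

```python
# For each odd prime q and each alpha, beta, gamma in Fq*, the set of values
# of alpha*xi^2 and the set of values of gamma - beta*eta^2 each have
# (q + 1)/2 elements, so they must intersect: alpha*xi^2 + beta*eta^2 = gamma
# is always solvable.
for q in (3, 5, 7, 11):
    for alpha in range(1, q):
        for beta in range(1, q):
            for gamma in range(1, q):
                left = {(alpha * x * x) % q for x in range(q)}
                right = {(gamma - beta * y * y) % q for y in range(q)}
                assert len(left) == (q + 1) // 2
                assert len(right) == (q + 1) // 2
                assert left & right   # the two value sets meet
```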
Proposition 12 Any non-singular quadratic form f in n variables over a finite field Fq is equivalent over Fq to the quadratic form

ξ₁² + ⋯ + ξₙ₋₁² + δξₙ²,

where δ = det f is the determinant of f. There are exactly two equivalence classes of non-singular quadratic forms in n variables over Fq, one consisting of those forms f whose determinant det f is a square in Fq×, and the other of those for which det f is not a square in Fq×.

Proof Since the first statement of the proposition is trivial for n = 1, we assume that n > 1 and it holds for all smaller values of n. It follows from Lemma 11 that f represents 1 and hence, by the remark after the proof of Proposition 2, f is equivalent over Fq to a quadratic form ξ₁² + g(ξ₂, …, ξₙ). Since f and g have the same determinant, the first statement of the proposition now follows from the induction hypothesis.

Since Fq× contains (q − 1)/2 distinct squares, every element of Fq× is either a square or a square times a fixed non-square. The second statement of the proposition now follows from the first. ✷

We now return to quadratic spaces over an arbitrary field. A 2-dimensional quadratic space is said to be a hyperbolic plane if it is non-singular and isotropic.

Proposition 13 For a 2-dimensional quadratic space V, the following statements are equivalent:

(i) V is a hyperbolic plane;
(ii) V has a basis u₁, u₂ such that (u₁, u₁) = (u₂, u₂) = 0, (u₁, u₂) = 1;
(iii) V has a basis v₁, v₂ such that (v₁, v₁) = 1, (v₂, v₂) = −1, (v₁, v₂) = 0;
(iv) −det V is a square in F×.

Proof Suppose first that V is a hyperbolic plane and let u₁ be any isotropic vector in V. If v is any linearly independent vector, then (u₁, v) ≠ 0, since V is non-singular. By replacing v by a scalar multiple we may assume that (u₁, v) = 1. If we put u₂ = v + αu₁, where α = −(v, v)/2, then

(u₂, u₂) = (v, v) + 2α = 0,  (u₁, u₂) = (u₁, v) = 1,

and u₁, u₂ is a basis for V. If u₁, u₂ are isotropic vectors in V such that (u₁, u₂) = 1, then the vectors v₁ = u₁ + u₂/2 and v₂ = u₁ − u₂/2 satisfy (iii), and if v₁, v₂ satisfy (iii) then det V = −1. Finally, if (iv) holds then V is certainly non-singular. Let w₁, w₂ be an orthogonal basis for V and put δⱼ = (wⱼ, wⱼ) (j = 1, 2). By hypothesis, δ₁δ₂ = −γ², where γ ∈ F×. Since γw₁ + δ₁w₂ is an isotropic vector, this proves that (iv) implies (i). ✷
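The passage from (ii) to (iii) in the proof of Proposition 13 is a small computation that can be confirmed in coordinates (Python with exact rationals; the coordinate set-up is ours):

```python
from fractions import Fraction

# If u1, u2 satisfy (u1,u1) = (u2,u2) = 0 and (u1,u2) = 1, then
# v1 = u1 + u2/2 and v2 = u1 - u2/2 satisfy
# (v1,v1) = 1, (v2,v2) = -1, (v1,v2) = 0.
G = [[Fraction(0), Fraction(1)],
     [Fraction(1), Fraction(0)]]   # Gram matrix of the basis u1, u2

def form(a, b):
    # bilinear form of vectors given in coordinates w.r.t. u1, u2
    return sum(G[j][k] * a[j] * b[k] for j in range(2) for k in range(2))

v1 = [Fraction(1), Fraction(1, 2)]    # u1 + u2/2
v2 = [Fraction(1), Fraction(-1, 2)]   # u1 - u2/2
assert form(v1, v1) == 1
assert form(v2, v2) == -1
assert form(v1, v2) == 0
```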
Proposition 14 Let V be a non-singular quadratic space. If U is a totally isotropic subspace with basis u₁, …, uₘ, then there exists a totally isotropic subspace U′ with basis u₁′, …, uₘ′ such that

(uⱼ, uₖ′) = 1 or 0 according as j = k or j ≠ k.

Hence U ∩ U′ = {0} and

U + U′ = H₁ ⊥ ⋯ ⊥ Hₘ,

where Hⱼ is the hyperbolic plane with basis uⱼ, uⱼ′ (1 ≤ j ≤ m).

Proof Suppose first that m = 1. Since V is non-singular, there exists a vector v ∈ V such that (u₁, v) ≠ 0. The subspace H₁ spanned by u₁, v is a hyperbolic plane and hence, by Proposition 13, it contains a vector u₁′ such that (u₁′, u₁′) = 0, (u₁, u₁′) = 1. This proves the proposition for m = 1.

Suppose now that m > 1 and the result holds for all smaller values of m. Let W be the totally isotropic subspace with basis u₂, …, uₘ. By Proposition 4, there exists a vector v ∈ W⊥ such that (u₁, v) ≠ 0. The subspace H₁ spanned by u₁, v is a hyperbolic plane and hence it contains a vector u₁′ such that (u₁′, u₁′) = 0, (u₁, u₁′) = 1. Since H₁ is non-singular, H₁⊥ is also non-singular and V = H₁ ⊥ H₁⊥. Since W ⊆ H₁⊥, the result now follows by applying the induction hypothesis to the subspace W of the quadratic space H₁⊥. ✷

Proposition 15 Any quadratic space V can be represented as an orthogonal sum

V = V⊥ ⊥ H₁ ⊥ ⋯ ⊥ Hₘ ⊥ V₀,

where H₁, …, Hₘ are hyperbolic planes and the subspace V₀ is anisotropic.

Proof Let V₁ be any subspace supplementary to V⊥. Then V₁ is non-singular, by the definition of V⊥. If V₁ is anisotropic, we can take m = 0 and V₀ = V₁. Otherwise V₁ contains an isotropic vector and hence also a hyperbolic plane H₁, by Proposition 14. By Proposition 3, V₁ = H₁ ⊥ V₂, where V₂ = H₁⊥ ∩ V₁ is non-singular. If V₂ is anisotropic, we can take V₀ = V₂. Otherwise we repeat the process. After finitely many steps we must obtain a representation of the required form, possibly with V₀ = {0}. ✷

Let V and V′ be quadratic spaces over the same field F. The quadratic spaces V, V′ are said to be isometric if there exists a linear map ϕ : V → V′ which is an isometry, i.e. it is bijective and

(ϕv, ϕv) = (v, v) for all v ∈ V.

By (1), this implies

(ϕu, ϕv) = (u, v) for all u, v ∈ V.

The concept of isometry is only another way of looking at equivalence. For if ϕ : V → V′ is an isometry, then V and V′ have the same dimension. If u₁, …, uₙ is a basis for V and u₁′, …, uₙ′ a basis for V′ then, since (uⱼ, uₖ) = (ϕuⱼ, ϕuₖ), the isometry is completely determined by the change of basis in V′ from ϕu₁, …, ϕuₙ to u₁′, …, uₙ′.
A particularly simple type of isometry is defined in the following way. Let V be a quadratic space and w a vector such that (w, w) ≠ 0. The map τ : V → V defined by

τv = v − {2(v, w)/(w, w)}w

is obviously linear. If W is the non-singular one-dimensional subspace spanned by w, then V = W ⊥ W⊥. Since τv = v if v ∈ W⊥ and τv = −v if v ∈ W, it follows that τ is bijective. Writing α = −2(v, w)/(w, w), we have

(τv, τv) = (v, v) + 2α(v, w) + α²(w, w) = (v, v).

Thus τ is an isometry. Geometrically, τ is a reflection in the hyperplane orthogonal to w. We will refer to τ = τ_w as the reflection corresponding to the non-isotropic vector w.

Proposition 16 If u, u′ are vectors of a quadratic space V such that (u, u) = (u′, u′) ≠ 0, then there exists an isometry ϕ : V → V such that ϕu = u′.

Proof Since

(u + u′, u + u′) + (u − u′, u − u′) = 2(u, u) + 2(u′, u′) = 4(u, u),

at least one of the vectors u + u′, u − u′ is not isotropic. If u − u′ is not isotropic, the reflection τ corresponding to w = u − u′ has the property τu = u′, since (u − u′, u − u′) = 2(u, u − u′). If u + u′ is not isotropic, the reflection τ corresponding to w = u + u′ has the property τu = −u′. Since u′ is not isotropic, the corresponding reflection σ maps u′ onto −u′, and hence the isometry στ maps u onto u′. ✷

The proof of Proposition 16 has the following interesting consequence:

Proposition 17 Any isometry ϕ : V → V of a non-singular quadratic space V is a product of reflections.

Proof Let u₁, …, uₙ be an orthogonal basis for V. By Proposition 16 and its proof, there exists an isometry ψ, which is either a reflection or a product of two reflections, such that ψu₁ = ϕu₁. If U is the subspace with basis u₁ and W the subspace with basis u₂, …, uₙ, then V = U ⊥ W and W = U⊥ is non-singular. Since the isometry ϕ₁ = ψ⁻¹ϕ fixes u₁, we have also ϕ₁W = W. But if σ : W → W is a reflection, the extension τ : V → V defined by τu = u if u ∈ U, τw = σw if w ∈ W, is also a reflection. By using induction on the dimension n, it follows that ϕ₁ is a product of reflections, and hence so also is ϕ = ψϕ₁. ✷
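The reflection τ_w can be written out in coordinates and its stated properties checked directly. The Gram matrix and vectors below are arbitrary illustrative choices:

```python
from fractions import Fraction

# A quadratic space over Q with diagonal Gram matrix G;
# tau_w(v) = v - (2(v, w)/(w, w)) w, as defined in the text.
G = [[Fraction(2), Fraction(0)],
     [Fraction(0), Fraction(-3)]]

def form(a, b):
    return sum(G[j][k] * a[j] * b[k] for j in range(2) for k in range(2))

def reflect(w, v):
    # requires (w, w) != 0, i.e. w non-isotropic
    c = 2 * form(v, w) / form(w, w)
    return [vi - c * wi for vi, wi in zip(v, w)]

w = [Fraction(1), Fraction(1)]   # (w, w) = -1 != 0, so tau_w is defined
v = [Fraction(1), Fraction(2)]

t = reflect(w, v)
assert form(t, t) == form(v, v)               # tau_w is an isometry
assert reflect(w, w) == [-x for x in w]       # tau_w(w) = -w
assert reflect(w, t) == v                     # tau_w is an involution
```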
By a more elaborate argument, E. Cartan (1938) showed that any isometry of an n-dimensional non-singular quadratic space is a product of at most n reflections.

Date posted: 22/01/2020, 17:30


Table of contents

  • Number Theory: An Introduction to Mathematics, Second Edition

    • Contents

    • Preface to the Second Edition

    • The Expanding Universe of Numbers

      • 0 Sets, Relations and Mappings

      • 1 Natural Numbers

      • 2 Integers and Rational Numbers

      • 3 Real Numbers

      • 4 Metric Spaces

      • 5 Complex Numbers

      • 6 Quaternions and Octonions

      • 7 Groups

      • 8 Rings and Fields

      • 9 Vector Spaces and Associative Algebras

      • 10 Inner Product Spaces

      • 11 Further Remarks

      • 12 Selected References

      • Additional References

      • Divisibility

        • 1 Greatest Common Divisors

        • 2 The Bézout Identity

        • 3 Polynomials

        • 4 Euclidean Domains
