Theory algebra (English-language document)

Document information

Posted: 26/11/2018, 10:00

Contents

1 Rings
  1.1 Definition and Examples
  1.2 Domains, Fields, and Division Rings
  1.3 Homomorphisms and Subrings
  1.4 Products of Rings
  1.5 Algebras
2 Ideals
  2.1 Ideals and Quotient Rings
  2.2 Generating Sets
  2.3 The Unit Ideal and Maximal Ideals
  2.4 Prime Ideals in a Commutative Ring
  2.5 The Chinese Remainder Theorem
3 Modules
  3.1 Definition and Examples
  3.2 The Usual Constructions
  3.3 Products, Sums, and Free Modules
  3.4 Mixing Ideals and Submodules
4 Categories
  4.1 Definitions and Examples
  4.2 Universal Properties
  4.3 Change of Rings
5 Some Tools
  5.1 Localization
  5.2 Determinants
  5.3 Exact Sequences
  5.4 Projective and Injective Modules
  5.5 Noetherian Rings and Modules
  5.6 Graded Rings
6 Principal Ideal Domains
  6.1 Euclidean Domains
  6.2 Unique Factorization
  6.3 Modules over a Euclidean Domain
  6.4 Modules over a Principal Ideal Domain
  6.5 Canonical Forms for Matrices
A A Brief Introduction to Metric Spaces
  A.1 The Basics
  A.2 Tietze's Extension Theorem
B Rank of Free Modules, Infinite Case
C Unique Factorization of Polynomials
  C.1 Groups of Divisibility
  C.2 Divisibility in Polynomial Rings
D The Intersection of Two Plane Curves
  D.1 Associated Primes and Primary Decompositions
  D.2 Euler Characteristics and Hilbert Functions
  D.3 Monomial Bases

Chapter 1: Rings

§1.1 Definition and Examples

We'll begin with what you should already know:

Definition: A group is a set G with a binary operation (which we will express as multiplication) that satisfies the following three properties:
(i) For every x, y, z ∈ G, (xy)z = x(yz).
(ii) There is an e ∈ G such that ex = x = xe for every x ∈ G.
(iii) For every x ∈ G there is a y ∈ G such that xy = e = yx.

If the operation is also commutative, then we say that G is an abelian group.

From the abstract point of view, a ring is simply a group with an extra operation. Formally:

Definition: A ring is a set R
with two binary operations (which we will express as addition and multiplication) that satisfy the following properties:
(i) The addition makes R an abelian group (and we write the identity element as 0).
(ii) For any x, y, z ∈ R we have x(y + z) = xy + xz and (y + z)x = yx + zx.
(iii) For any x, y, z ∈ R we have x(yz) = (xy)z.
(iv) There is an element 1 ∈ R such that 1x = x = x1 for all x ∈ R.

If the multiplication is also commutative, then we say that R is a commutative ring.

For the sake of completeness, I should mention that while there is no debate whatsoever about the definition of group, there are a number of people who use a weaker definition of ring, dropping property (iii) or (iv). They would then call my rings "associative rings with identity". However, for this course we will always assume that rings satisfy properties (iii) and (iv).

Example 1.1.1: Z, Q, R, and C are all commutative rings.

Example 1.1.2: Let R be any ring (usually taken to be commutative). The polynomial ring with coefficients in R and variable x is the set

R[x] = { ∑_{k=0}^∞ a_k x^k | each a_k ∈ R and a_k = 0 for all but finitely many k }

with addition

∑_{k=0}^∞ a_k x^k + ∑_{k=0}^∞ b_k x^k = ∑_{k=0}^∞ (a_k + b_k) x^k

and multiplication

(∑_{k=0}^∞ a_k x^k)(∑_{k=0}^∞ b_k x^k) = ∑_{k=0}^∞ c_k x^k, where c_k = ∑_{i=0}^k a_i b_{k−i}

— this multiplication comes from the distributive axiom and the idea that (a_i x^i)(b_j x^j) should be a_i b_j x^{i+j}. Note in particular that this means that the variable x commutes with every element of the coefficient ring R; this will be important later on.

You should keep in mind that the "x" here is just a symbol — the elements of R[x] are basically just sequences of elements of R, and we only use the powers of x as placeholders to say where we are in the sequence (and to motivate the definition of the multiplication). In particular, two elements of the polynomial ring are equal if and only if all of their coefficients are equal.

Of course, given any f(x) = ∑_{k=0}^∞ a_k x^k ∈ R[x] we will get a function R → R which takes an r ∈ R to f(r) = ∑_{k=0}^∞ a_k
r^k. If R is one of the rings Z, Q, R, or C, then two polynomials f and g are the same as elements of R[x] if and only if they give the same function R → R — for these coefficient rings, a non-zero polynomial can have only finitely many roots, but if f and g define the same function then f − g will have infinitely many roots. However, this is not true for all choices of coefficient ring. For example, suppose R = {0, 1} with addition and multiplication given by the tables:

  + | 0 1        × | 0 1
  --+----        --+----
  0 | 0 1        0 | 0 0
  1 | 1 0        1 | 0 1

Then r^2 = r for all r ∈ R, so x^2 and x give the same function R → R, but they are not the same as elements of R[x].

Example 1.1.3: We can modify the previous example to give polynomial rings in more than one variable. At first glance, it would make sense to do this by letting R[x_1, …, x_n] be the set of all formal sums

∑ a_{k_1,…,k_n} x_1^{k_1} ⋯ x_n^{k_n}

with the k_j being non-negative integers and all but finitely many of the a_{k_1,…,k_n} being zero. However, this makes a formal definition of multiplication quite complicated — with a bit of fussing, you could come up with one, but is it really something you want to use to prove theorems?
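The example above with R = {0, 1} can be sketched in a few lines. This is my own illustration, not from the text: polynomials are modeled as tuples of coefficients, with the entry at index k standing for the coefficient of x^k.

```python
# Sketch: polynomials over the two-element ring R = {0, 1}, modeled as tuples
# of coefficients (index k = coefficient of x^k). Shows that x and x^2 are
# distinct polynomials yet define the same function R -> R.

def poly_eval(coeffs, r):
    """Evaluate a polynomial with Z/2 coefficients at r in {0, 1}."""
    return sum(c * r**k for k, c in enumerate(coeffs)) % 2

x = (0, 1)             # the polynomial x
x_squared = (0, 0, 1)  # the polynomial x^2

# As elements of R[x] they are different: the coefficient sequences differ.
assert x != x_squared

# But since r^2 = r for both elements of R, they give the same function R -> R.
assert all(poly_eval(x, r) == poly_eval(x_squared, r) for r in (0, 1))
```

The point of the tuple representation is exactly the remark in the text: a polynomial is its coefficient sequence, not the function it induces.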
Instead, we will define R[x_1, …, x_n] inductively, letting R[x_1, …, x_n] = R[x_1, …, x_{n−1}][x_n] — that is, R[x_1, …, x_n] is a polynomial ring in one variable with the coefficients being polynomials in n − 1 variables.

This inductive definition of polynomial rings makes it relatively easy to prove theorems of the type "If R is nice then so is R[x_1, …, x_n]" — all you really have to do is prove that R[x] is nice and then use induction on the number of variables. For example, if you want to call my bluff and check all of the ring axioms, you only have to check them for R[x]; you then get R[x_1, …, x_n] for free.

To see how this fancy definition fits with the more straightforward one, consider the two variable case: An element of R[x, y] = R[x][y] looks like g = ∑_{k=0}^∞ f_k y^k, where the coefficient f_k is an element of R[x]. If we write this f_k as ∑_{j=0}^∞ a_{k,j} x^j, then our element g of R[x][y] becomes g = ∑_{k≥0, j≥0} a_{k,j} x^j y^k, which looks more like what we think an element of R[x, y] should be. Conversely, given a formal sum ∑_{k≥0, j≥0} b_{k,j} x^j y^k, we can make it look like an element of R[x][y] just by grouping together terms that involve the same power of y. To multiply two elements in R[x][y] we use the distributive axiom together with the formula:

((a_{i,j} x^i) y^j)((b_{k,l} x^k) y^l) = ((a_{i,j} x^i)(b_{k,l} x^k)) y^{j+l} = (a_{i,j} b_{k,l} x^{i+k}) y^{j+l}

which hopefully matches what you think the product of two polynomials in two variables should be.

Example 1.1.4: Let R be a ring (again usually taken to be commutative) and let G be a group. The group ring of G with coefficients in R is the set

R[G] = { ∑_{g∈G} a_g g | each a_g ∈ R and a_g = 0 for all but finitely many g }

with addition

∑_{g∈G} a_g g + ∑_{g∈G} b_g g = ∑_{g∈G} (a_g + b_g) g

and multiplication

(∑_{g∈G} a_g g)(∑_{g∈G} b_g g) = ∑_{g∈G} c_g g, where c_g = ∑_{h∈G} a_h b_{h⁻¹g}

This multiplication formula should look a lot like the one for a polynomial ring, with "h⁻¹g" taking the place of "k − i". In fact, we're doing basically the same thing, except that instead of adding powers
of some fixed variable, we're multiplying group elements — that is, when we multiply two single terms a_h h and b_k k we get (a_h b_k)(hk). Again, note that this means that we are forcing the coefficients to commute with the group elements.

If G is written additively instead of multiplicatively, then this formula would become (a_h h)(b_k k) = (a_h b_k)(h + k), which looks a bit ridiculous. To make it look a bit more natural we instead write elements of R[G] like polynomials with the powers taken from G — that is, an element of R[G] has the form ∑_{g∈G} a_g x^g, where x is just a symbol. Then the multiplication formula

(∑_{g∈G} a_g x^g)(∑_{g∈G} b_g x^g) = ∑_{g∈G} (∑_{h∈G} a_h b_{−h+g}) x^g

amounts to saying x^h x^k = x^{h+k}.

Example 1.1.5: Let A be an abelian group. Recall that an endomorphism of A is a group homomorphism from A to itself. The set End A of endomorphisms of A is a ring where the addition is defined point-wise and the multiplication is given by composition: Given f, g ∈ End A, the sum f + g is the function defined by (f + g)(a) = f(a) + g(a), and the product fg is the function defined by (fg)(a) = f(g(a)). Similarly, if V is a real vector space then the set End_R V of linear transformations V → V is a ring with point-wise addition and composition for multiplication.

Example 1.1.6: Let R be a ring (usually taken to be commutative) and let n be a positive integer. Let M_n(R) be the set of n × n matrices whose entries are elements of R. Given two such matrices A and B, we can define the sum A + B and product AB just as for real matrices, and these operations make the set M_n(R) into a ring.

You can think of this example as a variation on the previous example: If V is a finite dimensional real vector space (say with dimension dim V = n) and you have a specific basis for V in mind, then you learned in linear algebra that any endomorphism of V is given by an element of M_n(R); also, the sum and product operations on matrices correspond to the sum and product on endomorphisms. Similarly, any
endomorphism of the free abelian group Z^n = Z ⊕ ⋯ ⊕ Z is given by an element of M_n(Z). (If you haven't seen this before, start with the case n = 1 and show that any group homomorphism f : Z → Z is given by multiplication by f(1). For the general case, given a group homomorphism g : Z^n → Z^n, and two integers i and j between 1 and n, let g_{i,j} be the composition π_i ∘ g ∘ ι_j of g with the j-th inclusion Z → Z^n and the i-th projection Z^n → Z. Letting B ∈ M_n(Z) be the matrix whose ij-th entry is g_{i,j}(1), you should be able to prove that g(a) = Ba for every a ∈ Z^n — viewing a as a column vector, of course.)

Example 1.1.7: Let R be a ring (usually taken to be commutative) and let X be any set. The set F(X; R) of functions from X to R is a ring where both addition and multiplication are defined pointwise.

Traditionally, groups are used mostly to describe symmetries, in some sense of the word. On the other hand, the examples in this section (and variations that will appear later in this course) show rings in three different contexts:

• Like groups, they can be used to describe "symmetries" of some object, provided that this object itself has an algebraic structure: For example, the endomorphism ring of an abelian group A can be thought of as describing the symmetries of A. Also, we will see later that the group ring Z[G] gives information about how the group G can act on abelian groups (see Example 3.1.6); likewise, R[G] tells us how G can act on real vector spaces.

• Rings also appear in number theory, giving us a context in which to talk about divisibility and prime factorizations; in fact parallels between factoring integers and factoring polynomials — that is, parallels between the rings Z and R[x] — were among the first reasons for creating the concept of an abstract ring. (See Section 6.2, for example.)
• Finally, rings show up in geometry, where certain subrings of F(X; R) can tell us about the geometry of the set X (as in Example 1.4.3). Likewise, viewing elements of the ring R[x_1, …, x_n] as functions from R^n to R lets us use the polynomial ring to describe the geometry of R^n; we will come back to this theme often.

You should also keep in mind that the "symmetry" rings (R[G], End A, and M_n(R)) are usually noncommutative, while the "arithmetic" rings (Z and R[x]) and "geometry" rings (R[x_1, …, x_n] and F(X; R)) are commutative (as long as the "coefficient" ring R is commutative).

§1.2 Domains, Fields, and Division Rings

It is easy to prove that if R is a ring and x is any element of R, then x0 = 0 = 0x (just use distributivity and the equation 0 + 0 = 0). However, it is not always true that xy = 0 implies x = 0 or y = 0:

Example 1.2.1: Let R be the ring of functions F({0, 1}; Z). Let f, g ∈ R be defined by f(0) = 1, f(1) = 0, g(0) = 0, and g(1) = 1. Then (fg)(0) = 1 · 0 = 0 and (fg)(1) = 0 · 1 = 0, so fg = 0, but neither f nor g is zero.

Definitions: Let R be a ring and x ∈ R.
(i) We say that x is a left zerodivisor if there is a y ∈ R, y ≠ 0, such that xy = 0. Similarly, x is a right zerodivisor if zx = 0 for some z ≠ 0.
(ii) A left inverse for x is a y ∈ R with yx = 1. Similarly, a right inverse for x is a z ∈ R with xz = 1.
(iii) We say that x is a unit if it has both a left inverse and a right inverse.

If R is commutative, then the distinctions "left" and "right" are meaningless: x is a left zerodivisor if and only if it is a right zerodivisor, and similarly for inverses; so, if we are in a situation where we are assuming R is commutative we will just talk about zerodivisors and inverses without any reference to sidedness.

Lemma 1.2.1. Let R be a ring and x ∈ R.
(i) If x has a left (resp. right) inverse then x is not a left (resp. right) zerodivisor.
(ii) If x is a unit then any left inverse of x is also a right inverse and vice versa. Furthermore, this inverse is unique.

Proof: (i) Suppose y is a left
inverse of x. If xz = 0 then yxz = 0 as well, but yx = 1, so this says z = 0. Thus x is not a left zerodivisor. The proof of the "right" case is similar.

(ii) Since x is a unit, we have a left inverse y and a right inverse z. Using the fact that yx = 1 we get yxz = z, and using the fact that xz = 1 we get yxz = y. Therefore y = z, which shows that our left inverse y is also a right inverse for x and our right inverse z is also a left inverse. It further shows that any two inverses for x have to be the same.

Part (ii) of this lemma lets us speak of "the" inverse of a unit u; we write this inverse as u⁻¹.

The existence of a left inverse does not keep a ring element from being a right zerodivisor:

Example 1.2.2: Let A be a countably generated free abelian group — A = Z ⊕ Z ⊕ ⋯ — and let R = End A. Let f be the "shift right" endomorphism of A, g the "shift left", and h the projection on the first component. That is:

f(a_1, a_2, …) = (0, a_1, a_2, …)
g(a_1, a_2, …) = (a_2, a_3, …)
h(a_1, a_2, …) = (a_1, 0, 0, …)

Then gf = 1, so f has a left inverse, but hf = 0, so f is a right zerodivisor. This also shows that in part (ii) of the lemma we definitely had to assume that our x had inverses on both sides before we could prove that a left inverse was also a right inverse.

If a is a unit of a ring R, then for any b ∈ R, the equation ax = b will have a unique solution — namely x = a⁻¹b. More generally, if a has a right inverse then there will always be at least one solution, and if a has a left inverse there will be at most one solution. In fact, to get solutions to be unique, one only needs to know that a is not a left zerodivisor. (Make sure you can prove all these statements and formulate analogs for equations of the form xa = b — start by working with the case where a is a unit and see what you use to prove that a⁻¹b is a solution and what you use to prove that it is the only solution.)
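The shift operators of Example 1.2.2 can be played with concretely. This is my own sketch, not from the text: finite tuples stand in for the eventually-zero sequences in A = Z ⊕ Z ⊕ ⋯.

```python
# Sketch: the shift endomorphisms of A = Z + Z + ..., modeled on finite tuples
# (a finite tuple stands for a sequence that is zero from some point on).

def f(a):
    """Shift right: (a1, a2, ...) -> (0, a1, a2, ...)."""
    return (0,) + a

def g(a):
    """Shift left: (a1, a2, ...) -> (a2, a3, ...)."""
    return a[1:]

def h(a):
    """Projection on the first component: (a1, a2, ...) -> (a1, 0, 0, ...)."""
    return (a[0],) + (0,) * (len(a) - 1)

sample = (5, -2, 7)

# gf = 1: shifting right and then left recovers the original element.
assert g(f(sample)) == sample

# hf = 0: shifting right puts 0 in the first slot, so projecting kills everything.
assert all(c == 0 for c in h(f(sample)))
```

So g is a left inverse of f while the non-zero map h composes with f to give 0, matching the text's point that a one-sided inverse does not rule out being a zerodivisor on the other side.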
Definitions: Let R be a ring in which 1 ≠ 0.
(i) R is a division ring if every non-zero element of R is a unit.
(ii) R is a field if it is both commutative and a division ring.
(iii) R is an integral domain (or sometimes just domain) if it is commutative and no non-zero element of R is a zerodivisor.

In this definition we explicitly assumed that 1 ≠ 0. If 1 = 0 then R has only that one element, since for any x ∈ R we have x = 1x = 0x = 0. This is called the zero ring, and it is frequently a troublemaker — there are a lot of theorems which are true for all rings except the zero ring. So, to keep from having to worry about that case when we talk about division rings, fields, and domains (which are supposed to be nice rings), we simply declare that these classes of rings do not include the zero ring.

By Lemma 1.2.1, any field is also an integral domain. The converse, of course, is not true, since Z is a domain but not a field.

Proposition 1.2.2. If R is an integral domain, then the polynomial ring R[x] is also an integral domain.

Proof: If f and g are two non-zero elements of R[x] then we can write them as f = a_n x^n + ⋯ + a_0 and g = b_m x^m + ⋯ + b_0 with a_n and b_m both non-zero. Then the coefficient of x^{n+m} in the product fg is a_n b_m — all the other terms of f and g are too small to contribute to the x^{n+m} part of the product. Since R is an integral domain, a_n b_m ≠ 0, which makes fg ≠ 0.

Note that this proof also shows that if R is a domain and f and g are two non-zero elements of R[x], then deg fg = deg f + deg g. This should let you prove that R[x] will never be a field. You should be careful with polynomial rings R[x] where the coefficient ring R is not an integral domain — if a and b are non-zero …

… and so on — that is, for n > 0 we inductively pick a free module F_{n+1} and homomorphism d^F_{n+1} : F_{n+1} → F_n with im d^F_{n+1} = ker d^F_n. To round things out, we let F_n = 0 for n < 0 (and thus d^F_n = 0 for n ≤ 0). This gives us a complex F_•:

⋯ → F_2 —d^F_2→ F_1 —d^F_1→ F_0 → 0

(When we write a complex with a zero at one end and
nothing past that zero, that is taken to mean that all the other modules in that direction are zero as well.)

Obviously H_n(F_•) = 0 for n < 0, since all the modules involved are 0, and we chose our d_n's so that H_n(F_•) = 0 for n > 0. At n = 0, we have Z_0(F_•) = ker d^F_0 = F_0, because d^F_0 = 0; that makes

H_0(F_•) = F_0 / im d^F_1 = F_0 / ker ε ≅ im ε = M

A complex like this is called a free resolution of the module M.

Example D.2.2: Let R be a unique factorization domain and let a and b be two non-zero elements of R which do not have any common factors. We build a free resolution of R/(a, b) as follows: Our module is cyclic, so we can start with F_0 = R, the map ε : R → R/(a, b) being just the quotient map. Then ker ε is just the ideal generated by the two elements a and b, which suggests that we should make F_1 = R ⊕ R with d^F_1(r, s) = ra + sb. Now, if (r, s) ∈ ker d^F_1, then sb = −ra. In particular, that says that b divides ra. However, we are assuming that a and b do not have any common factors, so that forces b to divide r — say r = bx. Likewise, a must divide s — say s = ay. Cancelling an ab from the equation ayb = −bxa we get y = −x. Thus ker d^F_1 = {(bx, −ax) | x ∈ R}. (Actually, I only proved one inclusion, but the other is very easy.)
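The kernel computation in Example D.2.2 can be checked numerically. This is my own sketch, not from the text, using the sample values R = Z, a = 2, b = 3 (which have no common factors):

```python
# Numerical check of Example D.2.2 for R = Z, a = 2, b = 3:
# d1(r, s) = ra + sb and d2(x) = (bx, -ax), so d1(d2(x)) should be 0, and
# every element of ker d1 should have the form (bx, -ax).

a, b = 2, 3

def d1(r, s):
    return r * a + s * b

def d2(x):
    return (b * x, -a * x)

# im d2 is contained in ker d1: d1(d2(x)) = ba*x - ab*x = 0.
assert all(d1(*d2(x)) == 0 for x in range(-10, 11))

# Conversely, over a small search box, every (r, s) with d1(r, s) = 0 is
# (b*x, -a*x) for x = r // b (the division is exact since b divides r).
kernel = [(r, s) for r in range(-9, 10) for s in range(-9, 10) if d1(r, s) == 0]
assert all((r, s) == d2(r // b) for (r, s) in kernel)
```

The two assertions are exactly the two inclusions of the example: im d^F_2 ⊆ ker d^F_1 and ker d^F_1 ⊆ {(bx, −ax)}.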
So, we let F_2 = R and define d^F_2 : F_2 → F_1 by d^F_2(x) = (bx, −ax). You can then check that ker d^F_2 = (0 :_R (a, b)), which is zero since R is a domain. Therefore we can stop here — that is, we can let F_n = 0 for all n > 2. Thus our free resolution of R/(a, b) is the complex

0 → R → R ⊕ R → R → 0

where the first map is x ↦ (bx, −ax) and the second is (r, s) ↦ ra + sb.

In particular, our ring K[x, y] is a UFD, so as long as our polynomials f and g have no common factors, this will give us a free resolution of K[x, y]/(f, g). We would like to try to use this resolution to compute the dimension of K[x, y]/(f, g); unfortunately, that won't actually work, but trying will lead us to something that will work.

We will use a variation on the "rank and nullity" theorem from linear algebra; it will require complexes that are short:

Definitions: Let R be a ring and C_• a complex of R-modules.
• C_• is bounded below (or bounded on the right) if there is an N ∈ Z such that C_n = 0 for all n < N.
• C_• is bounded above (or bounded on the left) if there is an N ∈ Z such that C_n = 0 for all n > N.
• C_• is bounded if it is both bounded above and bounded below.

For example, all free resolutions are bounded below. One can dualize the idea of a free resolution to get an injective resolution — make repeated use of the fact that every module can be viewed as a submodule of an injective module, just as the construction of a free resolution used the fact that every module is a quotient of a free module — and these injective resolutions will always be bounded above. Our particular free resolution from Example D.2.2 is bounded on both sides, so it is a bounded complex.

Proposition D.2.1. Let K be a field and C_• a bounded complex of finite dimensional K-vector spaces. Then

∑_n (−1)^n dim C_n = ∑_n (−1)^n dim H_n(C_•)

Proof: First, note that while the two sums are in theory infinite, the fact that C_• is bounded makes all but finitely many of the summands 0, so the sums are effectively finite. We will induct on the number of non-zero C_n's. If all of them are zero, then both sides of our
equation are zero, so that case works. Now pick the largest k with C_k ≠ 0 and let D_• be the complex with

D_n = 0                      if n = k
D_n = C_{k−1} / im d^C_k     if n = k − 1
D_n = C_n                    otherwise

Since d^C_{k−1}(im d^C_k) = 0, d^C_{k−1} induces a map d^D_{k−1} : D_{k−1} → D_{k−2}, so we get a complex. Also, H_n(D_•) = H_n(C_•) for all n ≠ k — the only one that requires any work is the case n = k − 1, which follows from

H_{k−1}(D_•) = ker d^D_{k−1} = (ker d^C_{k−1}) / im d^C_k = H_{k−1}(C_•)

Now, this new complex D_• is non-zero in fewer places, so we may assume by induction that

∑_n (−1)^n dim D_n = ∑_n (−1)^n dim H_n(D_•)

Translating this into an equation in terms of the complex C_• we get

(−1)^{k−1} dim C_{k−1}/im d^C_k + ∑_{n≤k−2} (−1)^n dim C_n = ∑_{n≠k} (−1)^n dim H_n(C_•) …

… The dimension of the degree-n graded piece of K[x, y, z] is (n+2 choose 2) = (n+2)(n+1)/2 = ½n² + (3/2)n + 1, so

HM(n) = (n+2 choose 2) − (n−e_1+2 choose 2) − (n−e_2+2 choose 2) + (n−e_1−e_2+2 choose 2)

Rewriting our expression for HM(n) using this formula and then simplifying shows that HM(n) = e_1 e_2 for all n > e_1 + e_2. (In this computation, I would suggest first writing (n − e_1 − e_2)² − (n − e_1)² − (n − e_2)² + n² as a difference of differences of squares.)
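The simplification suggested above can also be confirmed by brute force. This is my own numerical sketch, not from the text: with P(m) = (m+2)(m+1)/2 for the dimension of the degree-m piece of K[x, y, z], the alternating sum of binomial coefficients collapses to e_1·e_2.

```python
# Check that P(n) - P(n - e1) - P(n - e2) + P(n - e1 - e2) equals e1*e2
# for n > e1 + e2, where P(m) = (m+2)(m+1)/2 is the number of monomials
# of degree m in three variables.

def P(m):
    return (m + 2) * (m + 1) // 2  # (m+2)(m+1) is always even, so // is exact

for e1 in range(1, 6):
    for e2 in range(1, 6):
        for n in range(e1 + e2 + 1, e1 + e2 + 10):
            assert P(n) - P(n - e1) - P(n - e2) + P(n - e1 - e2) == e1 * e2
```

Expanding the quadratics shows why: the n² and n terms cancel in the alternating sum, leaving exactly ½·((e_1 + e_2)² − e_1² − e_2²) = e_1 e_2.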
So, what we have now is that for large enough n the dimension of the n-th degree piece of K[x, y, z]/(f̃, g̃) is exactly the product of the degrees of f and g. Unfortunately, the graded pieces of K[x, y, z]/(f̃, g̃) aren't really what we are interested in, but at least we have something involving the product (deg f)(deg g). Now all we have to do is prove that the dimension of K[x, y]/(f, g) is bounded by the dimension of some graded piece of K[x, y, z]/(f̃, g̃); that is the topic of the next section.

§D.3 Monomial Bases

In the case of a single variable, we know that dim K[x]/(f) = deg f because the monomials x^i, i < deg f, form a basis for K[x]/(f). To adapt this directly to the case of two variables, we would need to know what it means for a pair of indices (i, j) to be less than an integer. Rather than worry about that, we start with an idea of what it means for one monomial to be less than (or bigger than) another one.

For the duration of this section, the word "monomial" will refer only to a product of variables — that is, we assume that the coefficient is 1. If I want to include the possibility that the coefficient is not 1, then I will use the word "term".

Definition: Let m_1 = x_1^{i_1} ⋯ x_r^{i_r} and m_2 = x_1^{j_1} ⋯ x_r^{j_r} be two distinct monomials in the ring K[x_1, …, x_r]. We say that m_1 > m_2 if one of these two conditions holds:
• deg m_1 > deg m_2
• deg m_1 = deg m_2 and, letting l be the largest index with i_l ≠ j_l, we have i_l < j_l.

This says that a large monomial will be one whose total degree is large but whose degree in the late variables is small.

Example D.3.1: In K[x, y, z] we have x^3 < x^2 z^2 < x y^2 z < x^2 y z. x^3 is the smallest because its total degree is smallest. x^2 z^2 is smaller than x y^2 z because, while they have the same total degree, x^2 z^2 has more z than x y^2 z has. The fact that x y^2 z has more y than x^2 z^2 has doesn't enter the picture, since that's not the last spot where the two monomials disagree. However, the larger power of y does force x y^2 z to be smaller than x^2 y z. You should verify the
following facts: Lemma D.3.1 (i) The relation “