Pinter Algebra Chapter 3 Summary
This is part of a series of summaries I am writing for Pinter’s A Book of Abstract Algebra.1 Chapter 3 is about groups, about which I have already written an earlier post. From that post we will take the definition of binary operations (simply called operations here), what makes them associative and commutative, and the notions of an identity element of a set and of inverses of its elements with respect to an operation; together these make up the definition of a group.2 Some examples of groups include
- \((\Z, +)\), called the additive group of integers. Also simply denoted as \(\Z\).
- \((\Q, +)\), called the additive group of rational numbers. We often denote it simply as \(\Q\).3
- \((\Q^*, \cdot)\), called the multiplicative group of nonzero rational numbers, where \(\Q^* = \set{x \in \Q : x \neq 0}\). Sometimes simply denoted as \(\Q^*\).4
- \((\Q^+, \cdot)\), called the multiplicative group of positive rational numbers. It is sometimes simply denoted as \(\Q^+\).
- \((\R, +)\), called the additive group of real numbers and often denoted as \(\R\).
- \((\R^*, \cdot)\), called the multiplicative group of nonzero real numbers, where \(\R^* = \set{x \in \R : x \neq 0}\). Sometimes simply denoted as \(\R^*\).
- \((\R^+, \cdot)\), called the multiplicative group of positive real numbers. It is sometimes simply denoted as \(\R^+\).
For good practice, we should note the difference and specify whether we are referring to the group (with its operation) or the set.
Groups also tend to appear in nature. Unlike the infinite groups mentioned above, these tend to be finite groups (groups with a finite number of elements), and such groups have a surprising number of scientific applications. The first example is the group of integers modulo \(n\), denoted \(\Z/n\Z\) or sometimes \(\Z_n\) for short. Its operation is the conventional addition performed modulo \(n\), and its elements are the equivalence classes modulo \(n\), each consisting of infinitely many integers. Writing each class by its representative,
\[\Z/n\Z = \set{0, 1, 2, \ldots, n-1}\]
This group is cyclic because it can be generated by a single element (more on this later). One can think of its elements as rotations of the unit circle, where each element \(k \in \Z/n\Z\) corresponds to a clockwise rotation by \(2\pi k/n\), and the sum of two elements corresponds to performing the two rotations in succession. It is then clear that addition is both associative and commutative in \(\Z/n\Z\) (the order in which the rotations are applied does not change the result) and that 0 is the neutral element/identity of \(\Z/n\Z\). Lastly, for each element \(k \in \Z/n\Z\), the element \(n-k\) is also in \(\Z/n\Z\) and \(k+(n-k) = n = 0\) (a full rotation returns to the starting point).
Note that the group axioms do not require all elements of a group \(G\) to commute (that is, the operation need not be commutative on \(G\)). A group like \(\Z/n\Z\), in which all elements do commute, is called an Abelian group.
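As a quick sanity check (not from the book), here is a minimal Python sketch that verifies the group axioms for \(\Z/n\Z\) by brute force for a small \(n\); the function name is my own.

```python
from itertools import product

def check_Zn(n):
    """Brute-force check that addition modulo n makes {0, ..., n-1} an Abelian group."""
    elements = range(n)

    def add(a, b):
        return (a + b) % n

    # Associativity and commutativity
    assert all(add(a, add(b, c)) == add(add(a, b), c)
               for a, b, c in product(elements, repeat=3))
    assert all(add(a, b) == add(b, a) for a, b in product(elements, repeat=2))

    # 0 is the identity, and (n - k) mod n is the inverse of k
    assert all(add(0, a) == a for a in elements)
    assert all(add(a, (n - a) % n) == 0 for a in elements)

check_Zn(12)  # no assertion fails, so (Z/12Z, +) is an Abelian group
```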
Groups on \(\R \times \R\)
Let’s consider a few groups and their operations on subsets of \(\R \times \R\).
- Let \((a, b)*(c, d) = (ad + bc, bd)\) on the set \(G = \set{(x, y) \in \R \times \R : y \neq 0}\). Notice that this operation corresponds to addition of fractions.5 \[\frac{a}{b} + \frac{c}{d} = \frac{ad + bc}{bd}.\] Hence, this operation is associative and commutative; the verification is left as an exercise (see also the numerical sanity check after this list). Similar to how \(0 = \frac{0}{1}\) is a neutral element of \(\R\), the element \((0, 1)\) is an identity of \(G\). \[(0, 1)*(a, b) = (0b + 1a, 1b) = (a, b)\] Now, for each element \((a, b) \in G\), note that \(b \neq 0\). We observe that \[(a, b)*(-a, b) = (ab - ba, b^2) = (0, b^2).\] Dividing both components of \((-a, b)\) by \(b^2\) then gives \((a, b)*\left(-\frac{a}{b^2}, \frac{1}{b}\right) = (0, 1)\), so \(\left(-\frac{a}{b^2}, \frac{1}{b}\right)\) is the inverse of \((a, b)\) in \(G\). Hence, \(G\) is an Abelian group. Note that the definition of \(G\) is important, for if we did not guarantee that \(b \neq 0\) we would not be able to divide by \(b^2\).
- Let \((a, b)*(c, d) = (ac, bc + d)\) on the set \(G = \set{(x, y) \in \R \times \R : x \neq 0}\). Then \((c, d)*(a, b) = (ca, da + b)\), but \(bc + d = da + b\) does not hold in general, so this operation is not commutative. However, it is indeed associative. \[ (a, b)*\left[(c, d)*(e, f)\right] = (a, b)*(ce, de + f) = (ace, bce + de + f), \\ \left[(a, b)*(c, d)\right]*(e, f) = (ac, bc + d)*(e, f) = (ace, bce + de + f). \] Then we notice that for all \((a, b) \in G\), \[ (a, b)*(1, 0) = (1a, 1b + 0) = (a, b), \\ (1, 0)*(a, b) = (1a, 0a + b) = (a, b). \] Thus, \((1, 0) \in G\) is the neutral element of \(G\). Next, we wish to find the inverse element \((c, d)\) of each element \((a, b) \in G\), noting that \(a \neq 0\). First, this inverse element has to satisfy \((a, b)*(c, d) = (1, 0)\), which implies \(ac = 1\) and \(bc + d = 0\). Since \(a \neq 0\), we see that \(c = \frac{1}{a}\), and \(d = -bc = -\frac{b}{a}\). Let’s verify that \((\frac{1}{a}, -\frac{b}{a})\) also works for the other case. \[\left(\frac{1}{a}, -\frac{b}{a}\right)*(a, b) = \left(\frac{a}{a}, \frac{-ba}{a} + b\right) = (1, 0).\] Hence, we have shown that \(G\) is a (non-Abelian) group. However, if we extend \(G\) to \(\R \times \R\), an element like \((0, 1)\) has no inverse, since no multiple of \(0\) yields \(1\). Thus, the operation \(*\) no longer forms a group on \(\R \times \R\).
- Let \((a, b)*(c, d) = (ac - bd, ad + bc)\) on the set \(G = (\R \times \R)\setminus\set{(0, 0)}\). Notice that this operation corresponds to the multiplication of complex numbers in the standard form \(a + bi\). \[(a+bi)(c+di) = (ac - bd) + i(ad + bc).\] Hence, it is clear that this operation is associative and commutative, and the verification is again left as an exercise. Like the element \(1 = 1 + 0i \in \C\), \((1, 0)\) is the neutral element of \(G\). To demonstrate this fact, consider the following. \[(1, 0)*(a, b) = (1a - 0b, 1b + 0a) = (a, b).\] Next, for each element \((a, b) \in G\), we wish to find an element \((c, d)\) such that \((a, b)*(c, d) = (1, 0)\) (keeping commutativity in mind). Note that since the origin was deleted from \(\R \times \R\) to form \(G\), either \(a \neq 0\) or \(b \neq 0\), hence \(a^2 + b^2 \gt 0\). The equation \((a, b)*(c, d) = (1, 0)\) means \(ac - bd = 1\) and \(ad + bc = 0\). Assuming for the moment that \(a \neq 0\), the latter gives \(d = -\frac{bc}{a}\), which when substituted into the former yields \[ ac + b\frac{bc}{a} = \frac{a^2c + b^2c}{a} = c\,\frac{a^2 + b^2}{a} = 1,\] and solving for \(c\) we obtain \(c = \frac{a}{a^2 + b^2}\), and \[d = -\frac{b(a)}{a(a^2 + b^2)} = -\frac{b}{a^2 + b^2}.\] (If \(a = 0\), then \(b \neq 0\) and a symmetric computation yields the same formulas, which only require \(a^2 + b^2 \neq 0\).) It then follows from commutativity that \(\left(\frac{a}{a^2 + b^2}, -\frac{b}{a^2 + b^2}\right)\) is the inverse of \((a, b)\) in \(G\). However, if we extend \(G\) again to \(\R \times \R\), including the origin, we notice that for any \((a, b) \in \R \times \R\), \[(0, 0)*(a, b) = (0a - 0b, 0b + 0a) = (0, 0) \neq (1, 0).\] Therefore, the origin cannot have an inverse under \(*\), and the operation \(*\) no longer forms a group on \(\R \times \R\).
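The claims above about associativity and commutativity can also be sanity-checked numerically. Below is a minimal Python sketch (not from the book; the function names are my own) that tests the three operations on random points, comparing floating-point results up to a small tolerance.

```python
import random

def star_frac(p, q):
    """(a, b)*(c, d) = (ad + bc, bd) -- mirrors addition of fractions."""
    (a, b), (c, d) = p, q
    return (a * d + b * c, b * d)

def star_affine(p, q):
    """(a, b)*(c, d) = (ac, bc + d) -- associative but not commutative."""
    (a, b), (c, d) = p, q
    return (a * c, b * c + d)

def star_complex(p, q):
    """(a, b)*(c, d) = (ac - bd, ad + bc) -- mirrors complex multiplication."""
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def close(p, q, tol=1e-9):
    """Compare two pairs of floats up to a small tolerance."""
    return abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol

def rand_point():
    # Components bounded away from 0, so the point lies in each of the sets G above.
    return (random.uniform(1, 2), random.uniform(1, 2))

for _ in range(1000):
    x, y, z = rand_point(), rand_point(), rand_point()
    for op in (star_frac, star_affine, star_complex):
        assert close(op(x, op(y, z)), op(op(x, y), z))    # associativity
    assert close(star_frac(x, y), star_frac(y, x))        # commutativity
    assert close(star_complex(x, y), star_complex(y, x))  # commutativity

# A single counterexample shows that star_affine is not commutative.
assert star_affine((2, 1), (3, 4)) != star_affine((3, 4), (2, 1))
```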
Groups of Subsets of a Set
Define the symmetric difference of two sets.6
\[X \triangle Y = (X \setminus Y) \cup (Y \setminus X)\]
This operation is commutative, since the definition is symmetric in \(X\) and \(Y\); it is also associative, though that is a little harder to see. If one observes the diagram above, \(A \triangle B\) comprises the areas 1, 3, 4, 6 while \(B \triangle C\) comprises the areas 2, 3, 4, 7. Hence, both \(A \triangle (B \triangle C)\) and \((A \triangle B) \triangle C\) comprise the areas 1, 3, 5, and 7. For a more rigorous proof of these facts, see [this post]. We claim that the symmetric difference operation \(\triangle\) forms a group on the powerset of any nonempty set \(D\); in other words, for \(D \neq \varnothing\), \((\mcal{P}(D), \triangle)\) forms a group.7
First, we notice that \(\varnothing \subset D\) (the empty set is a subset of every set) and consequently \(\varnothing \in \mcal{P}(D)\) by the definition of the powerset. Then, for any set \(X \in \mcal{P}(D)\), \[ \varnothing \triangle X = X \triangle \varnothing = (X \setminus \varnothing) \cup (\varnothing \setminus X) = X \cup \varnothing = X. \] Hence, the empty set \(\varnothing\) is an identity of the group \(\mcal{P}(D)\). Finding the inverse \(Y \in \mcal{P}(D)\) of some element \(X \in \mcal{P}(D)\) is much more interesting. We require that the union of the two sets \(X \setminus Y\) and \(Y \setminus X\) be equal to the identity element, which is the empty set. It follows that both of these sets have to be empty too. Furthermore, it is easily seen that \(X \setminus Y = \varnothing\) iff \(X \subseteq Y\) and \(Y \setminus X = \varnothing\) iff \(Y \subseteq X\).8 Since we need both \(X \subseteq Y\) and \(Y \subseteq X\), it follows that \(X = Y\) and that the inverse of every set under \(\triangle\) is itself! We could also say that for all \(X \in \mcal{P}(D)\), \(X^2 = X \triangle X = \varnothing\), so that every nonempty \(X\) has order 2 (we will discuss this concept further in Chapter 10).
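Since Python’s built-in sets implement symmetric difference via the ^ operator, these claims are easy to check by brute force on a small \(D\); this is a quick illustration rather than a proof, and the helper name is my own.

```python
from itertools import chain, combinations

def powerset(d):
    """All subsets of d, as frozensets."""
    items = list(d)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

P = powerset({1, 2, 3})
empty = frozenset()

# Commutativity and associativity of symmetric difference (^)
assert all((x ^ y) == (y ^ x) for x in P for y in P)
assert all((x ^ (y ^ z)) == ((x ^ y) ^ z) for x in P for y in P for z in P)

# The empty set is the identity, and every subset is its own inverse.
assert all((x ^ empty) == x for x in P)
assert all((x ^ x) == empty for x in P)
```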
A Checkerboard Game
Picture a checkerboard with squares labelled 1 through 4, with 1 at the top left, 2 at the top right, 3 at the bottom left, and 4 at the bottom right. One checker sits on the board on one of the squares. There are four possible moves to make with the checker:
- \(I\): Stay put.
- \(H\): Move horizontally (i.e. \(1 \mapsto 2, 2 \mapsto 1, 3 \mapsto 4, 4 \mapsto 3\)).
- \(V\): Move vertically (i.e. \(1 \mapsto 3, 2 \mapsto 4, 3 \mapsto 1, 4 \mapsto 2\)).
- \(D\): Move diagonally (i.e. \(1 \mapsto 4, 2 \mapsto 3, 3 \mapsto 2, 4 \mapsto 1\)).
We now define the operation \(*\) on the set \(G = \set{I, H, V, D}\), where \(X*Y\) means applying \(Y\) first and then \(X\). If one views these moves as bijective maps from \(\set{1, 2, 3, 4}\) to itself, then \(*\) is simply function composition, and associativity follows. \(I\) is clearly the identity, since it does not change the position of the checker. Furthermore, each of the other elements yields the identity \(I\) when applied to itself (so it has order 2), meaning that every element is its own inverse. Hence, \((G, *)\) forms a group. Next, we write out the \(4 \times 4\) group multiplication table/matrix \([a_{ij}]\) of \(G\): letting \(g_1, g_2, g_3, g_4\) be \(I, H, V, D\) respectively, we set \(a_{ij} = g_ig_j\) (\(1 \leq i, j \leq 4\)). It also follows that if the matrix is symmetric under transposition, i.e. \(a_{ij} = a_{ji}\) for all \(1 \leq i, j \leq 4\), then \(g_ig_j = g_jg_i\), which makes the group \(G\) Abelian. The group table is written out as follows.
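\[
\begin{array}{c|cccc}
* & I & H & V & D \\
\hline
I & I & H & V & D \\
H & H & I & D & V \\
V & V & D & I & H \\
D & D & V & H & I
\end{array}
\]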
We see that the table is indeed symmetric under transposition, so we conclude that \((G, *)\) is an Abelian group.
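For completeness, here is a minimal Python sketch (not from the book; the helper names are my own) that represents each move as a permutation of \(\set{1, 2, 3, 4}\) and recomputes the table by function composition.

```python
# Each move as a dictionary mapping a square to the square the checker moves to.
I = {1: 1, 2: 2, 3: 3, 4: 4}
H = {1: 2, 2: 1, 3: 4, 4: 3}
V = {1: 3, 2: 4, 3: 1, 4: 2}
D = {1: 4, 2: 3, 3: 2, 4: 1}

moves = {"I": I, "H": H, "V": V, "D": D}

def compose(x, y):
    """X * Y: apply Y first, then X."""
    return {square: x[y[square]] for square in (1, 2, 3, 4)}

def name_of(move):
    """Recover the label of a move from its mapping."""
    return next(label for label, m in moves.items() if m == move)

# Print the table row by row; the output matches the table above and is symmetric.
for row_label, row_move in moves.items():
    print(row_label, [name_of(compose(row_move, col_move)) for col_move in moves.values()])
```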
Groups in Binary Coding
Strings of binary digits (0s and 1s) are called binary words, and the number of digits in a binary word is called its length. Information is sometimes received incorrectly when transmitted, and coding theory deals with detecting and correcting such errors of transmission. If a word \(\mbf{a} = a_1 a_2 \cdots a_n\) is sent and a word \(\mbf{b} = b_1 b_2 \cdots b_n\) is received (where \(\mbf{a}\) is not necessarily equal to \(\mbf{b}\)), then the error pattern is the word \(\mbf{e} = e_1 e_2 \cdots e_n\) defined by
\[ e_j = \begin{cases} 0 \quad & a_j = b_j \\ 1 & a_j \neq b_j \end{cases} \]
So \(\mbf{e}\) is essentially the bit-wise XOR of \(\mbf{a}\) and \(\mbf{b}\); hence we will denote this operation by \(\oplus\) and write \(\mbf{a} \oplus \mbf{b} = \mbf{e}\), and let \(\bB^n\) be the set of all binary words of length \(n\). Rewriting everything using tuple notation for convenience, we see that
\[(a_1, a_2, \ldots, a_n) \oplus (b_1, b_2, \ldots, b_n) = (a_1 \oplus b_1, a_2 \oplus b_2, \ldots, a_n \oplus b_n).\]
The properties of \(\oplus\) on binary words of length 1 (\(\bB^1\), which comprises just 0 and 1) generalize to \(\bB^n\) for each positive integer \(n\). Since \(1 \oplus 0 = 1 = 0 \oplus 1\), it is clear that \(\oplus\) is commutative on \(\bB^1\) and hence on \(\bB^n\). Similarly, since
\[ 0 \oplus (0 \oplus 0) = 0 \oplus 0 = 0 = 0 \oplus 0 = (0 \oplus 0) \oplus 0, \\ 0 \oplus (0 \oplus 1) = 0 \oplus 1 = 1 = 0 \oplus 1 = (0 \oplus 0) \oplus 1, \\ 0 \oplus (1 \oplus 0) = 0 \oplus 1 = 1 = 1 \oplus 0 = (0 \oplus 1) \oplus 0, \\ 0 \oplus (1 \oplus 1) = 0 \oplus 0 = 0 = 1 \oplus 1 = (0 \oplus 1) \oplus 1, \\ 1 \oplus (0 \oplus 0) = 1 \oplus 0 = 1 = 1 \oplus 0 = (1 \oplus 0) \oplus 0, \\ 1 \oplus (0 \oplus 1) = 1 \oplus 1 = 0 = 1 \oplus 1 = (1 \oplus 0) \oplus 1, \\ 1 \oplus (1 \oplus 0) = 1 \oplus 1 = 0 = 0 \oplus 0 = (1 \oplus 1) \oplus 0, \\ 1 \oplus (1 \oplus 1) = 1 \oplus 0 = 1 = 0 \oplus 1 = (1 \oplus 1) \oplus 1, \]
we know that \(\oplus\) is associative on \(\bB^1\) and thus on \(\bB^n\). Then, we notice that \(1 \oplus 0 = 0 \oplus 1 = 1\) and \(0 \oplus 0 = 0\), thus \(0 \oplus a = a \oplus 0 = a\). Hence, the element \(\mbf{0} = (0, \ldots, 0) \in \bB^n\) is the neutral element of each \(\bB^n\). Furthermore, given \(\mbf{a} = (a_1, a_2, \ldots, a_n) \in \bB^n\), we wish to find an inverse element \(\mbf{b} = (b_1, b_2, \ldots, b_n) \in \bB^n\) such that \(\mbf{a} \oplus \mbf{b} = \mbf{0}\), or equivalently, \(a_j \oplus b_j = 0\) for \(1 \leq j \leq n\), which implies that \(a_j = b_j\) and hence \(\mbf{b} = \mbf{a}\). Hence, the inverse \(-\mbf{a} := \mbf{b}\) is equal to \(\mbf{a}\) for all \(\mbf{a} \in \bB^n\). We then note that \(\mbf{a} \oplus \mbf{b} = \mbf{a} \oplus (-\mbf{b})\) since \(-\mbf{b} = \mbf{b}\), and that \(\mbf{a} \oplus \mbf{a} = \mbf{a} \oplus (-\mbf{a}) = \mbf{0}\) for all \(\mbf{a} \in \bB^n\). Another consequence is that if we know \(\mbf{a} \oplus \mbf{b} = \mbf{c}\), then
\[\mbf{a} \oplus \mbf{b} \oplus \mbf{b} = \mbf{a} \oplus \mbf{0} = \mbf{a} = \mbf{c} \oplus \mbf{b}.\]
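Binary words of length \(n\) can be modelled directly in Python as tuples of 0s and 1s, with \(\oplus\) applied component-wise; the minimal sketch below (not from the book; names are my own) illustrates the error pattern and the self-inverse property.

```python
def xor(a, b):
    """Component-wise XOR of two binary words of the same length, given as tuples."""
    assert len(a) == len(b)
    return tuple(x ^ y for x, y in zip(a, b))

a = (1, 0, 1, 1, 0)  # word sent
b = (1, 1, 1, 0, 0)  # word received
e = xor(a, b)        # error pattern: 1 exactly where a and b differ
print(e)             # (0, 1, 0, 1, 0)

zero = (0,) * 5
assert xor(a, zero) == a        # the all-zero word is the identity
assert xor(a, a) == zero        # every word is its own inverse
assert xor(xor(a, b), b) == a   # knowing b and e = a XOR b recovers a
```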
Maximum-Likelihood Decoding
A code is a subset of \(\bB^n\). For example, the code \[C_1 = \set{00000, 00111, 01001, 01110, 10011, 10100, 11010, 11101}\] is a subset of \(\bB^5\). The elements of \(C_1\) are called code words. During transmission, only code words are transmitted, so if a word that is not a code word is received, there must have been an error of transmission. Hence, in a well-designed code, it should be unlikely for errors to change one code word into another code word, which would let the errors slip through undetected. Furthermore, it should be easy to locate and correct such errors.
The weight of a binary word is the number of 1s in the word, and the distance between two binary words is the number of digits by which they differ. Equivalently, if \(\mbf{a}\) and \(\mbf{b}\) are binary words of the same length, then their distance is the weight of the binary word \(\mbf{a} \oplus \mbf{b}\). Let’s denote the distance between two words \(\mbf{a}\) and \(\mbf{b}\) by a distance function \(d(\mbf{a}, \mbf{b})\). The minimum distance of a code \(C\) is the smallest distance between any two distinct code words, or \[\min\set{d(\mbf{a}, \mbf{b}) : \mbf{a}, \mbf{b} \in C, \ \mbf{a} \neq \mbf{b}}.\]
In the example above (with \(C_1\)), the minimum distance is 2, i.e. at least two errors of transmission are needed to change one code word into another code word. In general, the larger the minimum distance of a code, the more transmission errors it can withstand.
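Here is a minimal Python sketch (not from the book; names are my own) that computes the minimum distance of \(C_1\) and performs maximum-likelihood decoding of a received word by picking the nearest code word.

```python
from itertools import combinations

C1 = ["00000", "00111", "01001", "01110", "10011", "10100", "11010", "11101"]

def distance(a, b):
    """Number of positions in which the words a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def min_distance(code):
    """Smallest distance between any two distinct code words."""
    return min(distance(a, b) for a, b in combinations(code, 2))

def decode(received, code):
    """Maximum-likelihood decoding: return the code word closest to the received word."""
    return min(code, key=lambda word: distance(word, received))

print(min_distance(C1))      # 2
print(decode("00101", C1))   # 00111 -- the nearest code word
```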
In practice, codes are constructed by designating certain positions as information positions (those that carry the encoded information) and the remaining positions as redundancy positions (those whose values are determined by parity-check equations, which ensure the validity of a code word).
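For instance, one can verify that in \(C_1\) above, the first three positions may be regarded as information positions and the last two as redundancy positions, since every code word satisfies the parity-check equations \[a_4 = a_1 \oplus a_3, \qquad a_5 = a_1 \oplus a_2 \oplus a_3.\]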
- Charles C. Pinter, A Book of Abstract Algebra, 2nd ed. (Mineola, NY: Dover Publications, 2013), 25-35. The diagrams displayed also come from this book. ↩
- When the operation on a group \((G, *)\) is clear, the group is simply denoted by its underlying set \(G\). However, in general, the operation needs to be specified. Furthermore, when we refer to “elements of a group” we mean the elements of the set that constitutes the group. And we will use the terms “identity” and “neutral element” of a group interchangeably. ↩
- Since \(0\in\Q\) has no multiplicative inverse, \(\Q\) would not be a group under conventional multiplication. Hence, it is usually unambiguous to assume that the group \(\Q\) has addition (\(+\)) as its operation. ↩
- Because \(0 \notin \Q^*\) and 0 is the (unique) additive identity under the conventional definition of addition, \(\Q^*\) cannot be an additive group, which again removes the ambiguity. A similar argument explains why \(\Q^+\) is not an additive group either. Furthermore, all of these points also apply to the real numbers \(\R\). ↩
- These are not necessarily rational numbers because the numbers \(a, b, c, d\) are not necessarily integers, so the fractions are not necessarily rational. ↩
- For this part, I will be using the set-theoretic notation that I am accustomed to, which is quite different from the way it is presented in the book; however, it does not change the content. ↩
- See the Powerset Axiom. Also, \(D = \varnothing\) would trivially form a group, but we do not consider that case. ↩
- I think it is okay to use “iff” as an abbreviation for the biconditional “if and only if.” ↩