Thursday, 11 June 2015

Informal combinatorial proofs

Pascal's Triangle


Each interior entry of the triangle is the sum of the two entries above it.
$\binom{n}{k}+\binom{n}{k+1}=\binom{n+1}{k+1}$



The sum of all elements in row $n$ is $2^n$.
$\sum\limits_{k=0}^n \binom{n}{k}=2^n$

Binomial theorem
$\sum\limits_{k=0}^n \binom{n}{k} x^k=(1+x)^n$
Put $x=1$, then $\sum\limits_{k=0}^n \binom{n}{k}=2^n$.
Put $x=2$, then $\sum\limits_{k=0}^n 2^k \binom{n}{k}=3^n$.
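These three identities are easy to sanity-check numerically. A minimal Python sketch (assuming Python 3.8+, which provides `math.comb`):

```python
# Numerical check of Pascal's rule and the two binomial-theorem evaluations.
from math import comb

n = 10
# Pascal's rule: C(n,k) + C(n,k+1) = C(n+1,k+1)
assert all(comb(n, k) + comb(n, k + 1) == comb(n + 1, k + 1) for k in range(n))
# Row sum: sum_k C(n,k) = 2^n  (binomial theorem at x = 1)
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
# Weighted sum: sum_k 2^k C(n,k) = 3^n  (binomial theorem at x = 2)
assert sum(2 ** k * comb(n, k) for k in range(n + 1)) == 3 ** n
```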

Bijections
Given a set $X$ and a non-negative integer $r$, an $r$-subset of $X$ is a subset $A \subset X$ of cardinality $r$. We denote the set of $r$-subsets of $X$ by $P_r(X)$, namely, $P_r(X)=\{A \subset X\:|\: |A|=r\}$. Define the binomial coefficient $\binom{n}{r}$ to be the cardinality of the set $P_r(X)$ when $|X|=n$.
There is a bijection $P_r(X) \to P_{n-r}(X)$ given by mapping each subset to its complement $A \mapsto A^c$. Hence $\binom{n}{r}=\binom{n}{n-r}$.
[...]
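Such finite bijection arguments are easy to test by brute force. A small Python sketch of the complement map (sample values $n=5$, $r=2$):

```python
# Check that A -> A^c is a bijection P_r(X) -> P_{n-r}(X).
from itertools import combinations
from math import comb

X = frozenset(range(1, 6))                          # n = |X| = 5
r = 2
P_r = [frozenset(A) for A in combinations(X, r)]
P_nr = {frozenset(B) for B in combinations(X, len(X) - r)}

assert {X - A for A in P_r} == P_nr                 # the map is onto P_{n-r}(X)
assert len({X - A for A in P_r}) == len(P_r)        # and injective
assert len(P_r) == len(P_nr) == comb(5, 2)          # hence C(n,r) = C(n,n-r)
```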

Combinatorial arguments
Key idea: count the same quantity in two different ways

$n^2=(n-1)^2+2(n-1)+1$
We count the number of pairs $(i,j)$ where $1 \leq i,j \leq n$, which is $n^2$. We subdivide the pairs according to how many coordinates equal $1$. There are $(n-1)^2$ pairs with no 1s; $2(n-1)$ pairs with exactly one 1; $1$ pair with two 1s. Hence the result. In the same vein, we have $n^3=(n-1)^3+3(n-1)^2+3(n-1)+1$.

Remark: We can use the same setup to prove another identity, $1+3+5+\cdots+(2n-1)=n^2$. We classify the pairs according to the value of $\max(i,j)$. There is one pair $(1,1)$ with $\max(i,j)=1$ and three pairs $(1,2), (2,1), (2,2)$ with $\max(i,j)=2$. Similarly, there are five pairs with $\max(i,j)=3$, seven pairs with max value $4$, and, in general, $2n-1$ pairs with max value $n$.
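Both classifications can be verified by enumeration; an illustrative Python sketch:

```python
# Count the pairs (i, j), 1 <= i, j <= n, in two different ways.
n = 7
pairs = [(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]
assert len(pairs) == n ** 2

# Classify by how many coordinates equal 1: (n-1)^2 + 2(n-1) + 1 = n^2
by_ones = {c: sum(1 for p in pairs if p.count(1) == c) for c in (0, 1, 2)}
assert (by_ones[0], by_ones[1], by_ones[2]) == ((n - 1) ** 2, 2 * (n - 1), 1)

# Classify by max(i, j): 1 + 3 + 5 + ... + (2n-1) = n^2
by_max = [sum(1 for (i, j) in pairs if max(i, j) == m) for m in range(1, n + 1)]
assert by_max == [2 * m - 1 for m in range(1, n + 1)]
```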

$\binom{m+n}{2}=\binom{m}{2}+\binom{n}{2}+mn$
We count the number of ways of selecting two numbers from a set $S$ with $m+n$ elements. We partition $S$ into two subsets $A$ and $B$ with $m$ and $n$ elements respectively. Then either both elements come from $A$, both come from $B$, or one comes from each. Hence the stated identity.
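Again, a brute-force check by classifying the 2-subsets directly (sample sizes $m=4$, $n=6$):

```python
# Verify C(m+n,2) = C(m,2) + C(n,2) + mn by partitioning the 2-subsets.
from itertools import combinations
from math import comb

m, n = 4, 6
A = set(range(m))                         # |A| = m
B = set(range(m, m + n))                  # |B| = n
both_A = both_B = mixed = 0
for pair in combinations(A | B, 2):
    s = set(pair)
    if s <= A:
        both_A += 1
    elif s <= B:
        both_B += 1
    else:
        mixed += 1
assert (both_A, both_B, mixed) == (comb(m, 2), comb(n, 2), m * n)
assert both_A + both_B + mixed == comb(m + n, 2)
```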

Stars and bars
The number of solutions to the equation $a_1+a_2+\cdots+a_n=k$ in non-negative integers $a_i$ is $\binom{k+n-1}{k}$.
Put differently, an n-set has $\binom{k+n-1}{k}$ multisets of size $k$.

The problem is the same as finding the number of ways of placing $k$ indistinguishable objects (denoted by $*$) in $n$ numbered boxes, where $a_i$ represents the number of objects in box $i$. We now put $k$ objects in a row.
$******$
We use vertical bars $|$ to mark the separations between two consecutive boxes; hence we need $n-1$ bars. For example, in the arrangement $**|***|*$, we have 3 boxes of which the first has 2 objects, the second has 3 objects and the last has 1 object.
In total there will be $k+n-1$ positions filled with stars or bars. Therefore, the number of solutions is $\binom{k+n-1}{k}$. [...]
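A brute-force sketch confirming the count (sample values $n=3$ boxes, $k=5$ objects; `combinations_with_replacement` enumerates the multisets directly):

```python
# Brute-force check of stars and bars.
from itertools import combinations_with_replacement, product
from math import comb

n, k = 3, 5
# Count solutions of a_1 + a_2 + a_3 = 5 with a_i >= 0 directly:
solutions = [a for a in product(range(k + 1), repeat=n) if sum(a) == k]
assert len(solutions) == comb(k + n - 1, k)          # C(7,5) = 21
# Equivalently, multisets of size k drawn from an n-set:
multisets = list(combinations_with_replacement(range(n), k))
assert len(multisets) == comb(k + n - 1, k)
```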

Useful:
http://www.maths.qmul.ac.uk/~pjc/notes/comb.pdf
http://www.millersville.edu/~rumble/Math.310/pascal3.pdf
http://www.erp.sophia.ac.jp/Projects/ese/notes/rmm/lecture4.pdf
https://math.dartmouth.edu/~m68f13/lectec.pdf

Friday, 5 June 2015

Four fundamental subspaces

Prerequisites

Orthogonal vectors & subspaces
Two vectors $v_1$ and $v_2$ are orthogonal if $v_1^T v_2=\left\lVert v_1 \right\rVert \left\lVert v_2 \right\rVert \cos\theta=0$.
Two subspaces $M_1$ and $M_2$ are orthogonal if $v_1 \bot v_2$ for all $v_1 \in M_1$ and $v_2 \in M_2$.

Orthogonal complement
The orthogonal complement to a subspace $M \subset R^n$ is defined by $M^\bot=\{w \in R^n: w^T v=0 \: \forall v \in M \}$.
$M^\bot$ is the largest (unique) subspace orthogonal to $M$.
The following holds:
$M^\bot$ is a subspace.
$R^n=M \oplus M^\bot$
If $\text{dim}\:M=k$, then $\text{dim}\:M^\bot=n-k$.
$(M^\bot)^\bot=M$



The four fundamental subspaces

Definitions
$A=\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}=\begin{pmatrix} \vec{a_1} & \cdots & \vec{a_n} \end{pmatrix}=\begin{pmatrix} \overline{a}_1^T \\  \overline{a}_2^T \\  \vdots \\ \overline{a}_m^T \end{pmatrix}$

Column space
$C(A)=\{Ax\:|\:x \in R^n \}=\{ \sum_{j=1}^n \vec{a_j}x_j\:|\:x_j \in R, j=1,2,\cdots,n\}$

Nullspace
$N(A)=\{x \in R^n\:|\:Ax=0\}=\{x \in R^n\:|\:\overline{a}_i^T x=0, i=1,2,\cdots,m\}$

Row space
$C(A^T)=\{A^Tu\:|\:u \in R^m \}=\{ \sum_{i=1}^m \overline{a}_i u_i\:|\:u_i \in R, i=1,2,\cdots,m\}$

Left nullspace
$N(A^T)=\{u \in R^m\:|\:A^T u=0\}=\{u \in R^m\:|\:\vec{a_j}^T u=0, j=1,2,\cdots,n \}=\{u \in R^m\:|\:u^T A=0\}$

The left nullspace is the space of all vectors $y$ such that $A^Ty=0$, or equivalently $y^TA=0$. In the latter form $y$ multiplies $A$ from the left, hence the term left nullspace.

Geometric picture [later]

Relation to RREF [later]

Summary

Let $A$ be an $m\times n$ matrix, and $r$ be its rank.

$\small \begin{array}{l|c|c|c} \text{subspace} & \text{notation} & \text{subspace of} & \text{dimension} \\ \hline \text{row space} & C(A^T) & R^n & r \\ \text{column space} & C(A) & R^m & r \\ \text{nullspace} & N(A) & R^n & n-r \\ \text{left nullspace} & N(A^T) & R^m & m-r \end{array}$

$\text{dim}\:C(A^T)+\text{dim}\:N(A)=\text{dim}\:R^n\\
\text{dim}\:C(A)+\text{dim}\:N(A^T)=\text{dim}\:R^m$
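A quick numerical illustration of this dimension count, using NumPy and SciPy on an arbitrary sample rank-2 matrix (a sketch, not part of the theory above):

```python
# dim N(A) = n - r and dim N(A^T) = m - r, checked numerically.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.],     # = 2 * row 1, so rank(A) = 2
              [1., 0., 1.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

assert null_space(A).shape[1] == n - r      # dim N(A)   = n - r
assert null_space(A.T).shape[1] == m - r    # dim N(A^T) = m - r
```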



Fundamental Theorem of Linear Algebra

Let $A$ be an $m \times n$ matrix.
1. $N(A)$ is the orthogonal complement of $C(A^T)$ in $R^n$.
2. $N(A^T)$ is the orthogonal complement of $C(A)$ in $R^m$.
3. $N(A) \oplus C(A^T)=R^n$
4. $N(A^T) \oplus C(A)=R^m$

The following orthogonality relations hold:
$C(A)^\bot=N(A^T)\\
N(A^T)^\bot=C(A)\\
C(A^T)^\bot=N(A)\\
N(A)^\bot=C(A^T)$

Explanations:

Orthogonal subspaces
We will examine an $m \times n$ matrix $A$. Then both the row space and the nullspace are subspaces of $R^n$. [$A^T$ is $n \times m$ and $u$ is $m \times 1$, so by matrix multiplication the vectors $A^Tu$ of the row space are $n \times 1$.] Suppose $x$ is a vector in the nullspace of $A$. Then $Ax=0$. From the definition of matrix multiplication, we have $$\begin{pmatrix} \text{row 1} \\ \text{row 2} \\ \vdots \\ \text{row m} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}=0.$$ As the row space consists of linear combinations of the rows, the dot product of $x$ with any vector in the row space must be $0$. So if $v\in C(A^T)$ and $w\in N(A)$, we must have $v^T w=0$. Therefore, the row space and the nullspace of $A$ are orthogonal.

Similarly, every vector in the left nullspace of $A$, $N(A^T)$, is perpendicular to every vector in the column space of $A$, $C(A)$. Hence, the column space of A and the left nullspace of A are orthogonal.

Orthogonal complements
We have established that the row space and the nullspace are orthogonal. It turns out the nullspace is in fact the orthogonal complement of the row space. The dimension of the nullspace is $n-r$, where $r$ is the rank (dimension of the row space) of $A$. If there were a vector orthogonal to the row space but not in the nullspace, then the dimension of $C(A^T)^\bot$ would be at least $n-r+1$. But this would be too large for both $C(A^T)$ and $C(A^T)^\bot$ to fit in $R^n$. Therefore, the nullspace is the largest subspace perpendicular to the row space, and we have $C(A^T)^\bot=N(A)$. By a similar argument, $N(A)^\bot=C(A^T)$.

What is so useful about complements is that we can use them to break down vectors into components. Namely, for an $m \times n$ matrix $A$, any vector $x \in R^n$ can be written as the sum of a component $x_r$ in the row space and a component $x_n$ in the nullspace: $x=x_r+x_n$.

Now, let's consider $Ax=b$. We claim that every output vector $b$ comes from one and only one vector in the row space. Write $x=x_r+x_n$. Then $Ax=Ax_r+Ax_n=Ax_r+0=Ax_r$. Thus, we know there is a vector $x_r$ in the row space such that $Ax_r=b$. Suppose $x_r'$ is another vector in the row space such that $Ax_r'=b$. Note that $x_r-x_r'$ is in the row space since it is the difference of two vectors in the row space. Now $A(x_r-x_r')=Ax_r-Ax_r'=b-b=0$. Then $x_r-x_r'$ is in the nullspace. As the row space and nullspace are orthogonal complements, the only vector in both of them is $0$. So, $x_r-x_r'=0$ and $x_r=x_r'.\:\Box$
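The decomposition $x=x_r+x_n$ can also be computed explicitly: the orthogonal projector onto the row space is $A^{+}A$, where $A^{+}$ is the Moore-Penrose pseudoinverse. A NumPy sketch, reusing the sample matrix from above with an arbitrary $x$:

```python
# Split x into a row-space component x_r and a nullspace component x_n.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
x = np.array([1., 1., 1.])

P_row = np.linalg.pinv(A) @ A      # orthogonal projector onto C(A^T)
x_r = P_row @ x                    # component in the row space
x_n = x - x_r                      # component in the nullspace
assert np.allclose(A @ x_n, 0)         # x_n really lies in N(A)
assert np.isclose(x_r @ x_n, 0)        # the two components are orthogonal
assert np.allclose(A @ x, A @ x_r)     # b = Ax comes from x_r alone
```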

Relation to domain and codomain
Consider an $m \times n$ matrix $A$. It maps from $R^n$ to $R^m$; we say that its domain is $R^n$ and its codomain is $R^m$. Since a vector in the domain maps either to zero or elsewhere, the domain splits into the nullspace of $A$ and the preimage of the range of $A$. Likewise, the codomain splits into the range of $A$ and the part that $A$ cannot reach. Now, the matrix $A^T$ maps from $R^m$ to $R^n$. The range of $A^T$ is the preimage of the range of $A$, and the nullspace of $A^T$ is the subspace that $A$ cannot reach.

To sum up, we have the four fundamental subspaces as
• an orthogonal decomposition of the domain of $A$:
nullspace of $A$ $\oplus$ range of $A^T$
• an orthogonal decomposition of the codomain of $A$:
range of $A$ $\oplus$ nullspace of $A^T$

Useful:
http://xoax.net/math/crs/linear_algebra_mit/lessons/Lecture10/

Reference:
https://rip94550.wordpress.com/2009/09/07/linear-algebra-the-four-fundamental-subspaces/

Thursday, 4 June 2015

Spanning and linear independence

A basis of a vector space $V$ is a subset $\{v_1,\cdots,v_n\} \subset V$ satisfying the two properties:
(i) $\{v_1,\cdots,v_n\}$ spans $V$;
(ii) $\{v_1,\cdots,v_n\}$ is linearly independent.

(i) To show spanning: Take an arbitrary vector $v \in V$, and show that $v$ can be written as a linear combination of $v_1, \cdots,v_n$, namely, $v=a_1v_1+\cdots+a_nv_n$.

(ii) To show linear independence: Prove that $a_1v_1+\cdots+a_nv_n=0$ implies $a_1=a_2=\cdots=a_n=0$.

For differentiable functions $u_1$ and $u_2$, let $au_1(x)+bu_2(x)=0$. Differentiating $\Rightarrow au_1'(x)+bu_2'(x)=0$.
If $W(u_1,u_2)=\begin{vmatrix} u_1 & u_2 \\ u_1' & u_2' \end{vmatrix}=u_1u_2'-u_2u_1' \neq 0$, then $a=b=0$ and $u_1$ and $u_2$ are linearly independent.
Note: $W(u_1, u_2)$ is known as the Wronskian determinant of functions $u_1$ and $u_2$.

Examples:
Spanning

$1$ and $i$ are linearly independent in $\mathbb{C}$ regarded as a vector space over $\mathbb{R}$ but they are linearly dependent if $\mathbb{C}$ is regarded as a complex vector space.



$u_1(x)=\sin x$ and $u_2(x)=\cos x$ are linearly independent.

Proof: Let $a\sin x +b\cos x=0$.
Differentiating $\Rightarrow a\cos x-b\sin x=0$.
So, wherever $\cos x \neq 0$, $a=b \sin x (\cos x)^{-1} \Rightarrow b(\sin^2 x (\cos x)^{-1}+\cos x)=0$
$\Rightarrow b (\cos x)^{-1}=0 \Rightarrow b=0$. Thus $a=b=0$.

Alternatively, $W(u_1,u_2)=u_1u_2'-u_2u_1'=-\sin^2 x-\cos^2 x=-1 \neq 0 \Rightarrow$ linear independence.



Show that the functions $x, xe^x, e^{-x}$ are linearly independent in the vector space $C^\infty(\mathbb{R})$.

Proof: Suppose $ax+bxe^x+ce^{-x}=0$ for all $x \in \mathbb{R}$, where $a,b,c$ are constants.

Method I – Differentiation:
$\begin{align}\: & ax+bxe^x+ce^{-x}=0 \; &(1)\\
\text{Differentiate both sides:}\: & a+be^x+bxe^x-ce^{-x}=0 \; &(2)\\
\text{Differentiate again:}\: & 2be^x+bxe^x+ce^{-x}=0 \; &(3)\\
\text{Differentiate yet again:}\: & 3be^x+bxe^x-ce^{-x}=0 \; &(4)\\
\text{And again:}\: & 4be^x+bxe^x+ce^{-x}=0 \; &(5) \end{align}$

$(5)-(3) \Rightarrow 2be^x=0 \Rightarrow b=0$
Substitute $b=0$ in $(3) \Rightarrow c=0$.
Substitute $b=c=0$ in $(2) \Rightarrow a=0$.

Method II – Limit:
For any $x \neq 0$ divide both sides by $xe^x$. Then, $ae^{-x}+b+cx^{-1}e^{-2x}=0$
L.H.S. approaches $b$ as $x \to +\infty \Rightarrow b=0$.
Now $ax+ce^{-x}=0$ for all $x \in \mathbb{R}$.
For any $x \neq 0$ divide both sides by x. Then $a+cx^{-1}e^{-x}=0$.
L.H.S. approaches $a$ as $x \to +\infty \Rightarrow a=0$.
Now $ce^{-x}=0 \Rightarrow c=0$.
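Both examples can also be checked mechanically with SymPy, which provides a `wronskian` helper (a sketch):

```python
# Wronskian tests for linear independence, done symbolically.
from sympy import symbols, sin, cos, exp, simplify, wronskian

x = symbols('x')
print(simplify(wronskian([sin(x), cos(x)], x)))
# -> -1, never zero, so sin and cos are independent
print(simplify(wronskian([x, x*exp(x), exp(-x)], x)))
# -> 2*x**2 + 3*x + 2, which has no real roots, so the three are independent
```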

Linear Independence in function spaces



Applications in ordinary differential equations [later]

Reference:
Linear independence

Tuesday, 2 June 2015

Direct sum

The sum of two subspaces is the subspace $U+V=\text{span}(U \cup V)=\{u+v \in W\:|\:u \in U \wedge v \in V\}$
If $U \cap V=\{\boldsymbol{0}\}$, then the sum is called the direct sum and is denoted by $U \oplus V$.

Put differently, $W=U\oplus V$ iff for every $w \in W$ there exist unique vectors $u \in U$ and $v \in V$ such that $w=u+v$.

If $U_i \leq W$ for $1 \leq i \leq k$, then $W=U_1\oplus \cdots \oplus U_k$ iff $W=U_1+U_2+\cdots+U_k$ and $U_r \cap \sum\limits_{i \neq r} U_i=\{0\}$ for $1 \leq r \leq k$.
Note: It is not sufficient that $U_i \cap U_j=\{0\}$ whenever $i \neq j$.

Examples:
The complex numbers are the direct sum of the real and purely imaginary numbers. $\mathbb{C}=\mathbb{R}\oplus i\mathbb{R}$

$\langle e_1 \rangle+\langle e_2,e_3 \rangle=\langle e_1 \rangle \oplus \langle e_2,e_3 \rangle$ is a direct sum but $\langle e_1,e_2 \rangle+\langle e_2,e_3 \rangle$ is not a direct sum since $\langle e_1,e_2 \rangle \cap \langle e_2,e_3 \rangle= \langle e_2 \rangle \neq \{\boldsymbol{0} \}$.

Any function is the sum of an even function and an odd function.
$E=\{f \in F(\mathbb{R})|f(t)=f(-t)\},O=\{f \in F(\mathbb{R})|f(t)=-f(-t)\}$
$E\oplus O=F(\mathbb{R})$
$f(t)=\frac{1}{2}[f(t)+f(-t)-f(-t)+f(t)]=\frac{1}{2}[f(t)+f(-t)]+\frac{1}{2}[f(t)-f(-t)]$
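This decomposition translates directly into code. A small Python sketch using $f=\exp$, whose even and odd parts are $\cosh$ and $\sinh$:

```python
# Split a function into its even and odd parts, following the formula above.
import math

def even_odd_parts(f):
    fe = lambda t: 0.5 * (f(t) + f(-t))   # even part: fe(-t) = fe(t)
    fo = lambda t: 0.5 * (f(t) - f(-t))   # odd part:  fo(-t) = -fo(t)
    return fe, fo

fe, fo = even_odd_parts(math.exp)
t = 1.3
assert math.isclose(fe(t) + fo(t), math.exp(t))    # f = fe + fo
assert math.isclose(fe(-t), fe(t)) and math.isclose(fo(-t), -fo(t))
# For exp these parts are exactly cosh and sinh:
assert math.isclose(fe(t), math.cosh(t)) and math.isclose(fo(t), math.sinh(t))
```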

Denote the vector space of real square matrices by $M_{n \times n}(\mathbb{R})$.
Denote by $M_{n \times n}(\mathbb{R})_+, M_{n \times n}(\mathbb{R})_-$ the vector subspaces of symmetric and skew-symmetric matrices.
$M_{n \times n}(\mathbb{R})=M_{n \times n}(\mathbb{R})_+\oplus M_{n \times n}(\mathbb{R})_-$
Let $X \in M_{n \times n}(\mathbb{R})$. Then $X_+=\frac{1}{2}(X+X^T)$ and $X_-=\frac{1}{2}(X-X^T)$.
$X=\frac{1}{2}(X+X^T)+\frac{1}{2}(X-X^T)$
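A quick NumPy check of this decomposition on a random matrix (illustrative sketch):

```python
# Symmetric / skew-symmetric split of an arbitrary square matrix.
import numpy as np

X = np.random.rand(4, 4)
X_plus = 0.5 * (X + X.T)       # symmetric part
X_minus = 0.5 * (X - X.T)      # skew-symmetric part
assert np.allclose(X_plus, X_plus.T)
assert np.allclose(X_minus, -X_minus.T)
assert np.allclose(X, X_plus + X_minus)
```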

The space of complex square matrices, regarded as a vector space over $\mathbb{R}$, is the direct sum of the Hermitian and skew-Hermitian matrices. (These are real subspaces only: if $A$ is Hermitian, then $iA$ is skew-Hermitian.)
Denote the vector space of complex square matrices by $M_{n \times n}(\mathbb{C})$.
Denote by $M_{n \times n}(\mathbb{C})_+, M_{n \times n}(\mathbb{C})_-$ the vector subspaces of Hermitian and skew-Hermitian matrices.
$M_{n \times n}(\mathbb{C})=M_{n \times n}(\mathbb{C})_+\oplus M_{n \times n}(\mathbb{C})_-$
Let $A \in M_{n \times n}(\mathbb{C})$. Then $A_+=\frac{1}{2}(A+A^*)$ and $A_-=\frac{1}{2}(A-A^*)$.
$A=\frac{1}{2}(A+A^*)+\frac{1}{2}(A-A^*)$

Important idea:
If $U \cap V=\{ \boldsymbol{0} \}$ and $U, V$ are subspaces of a vector space with bases $\{u_1, \cdots, u_k\}$ and $\{v_1, \cdots, v_l\}$ respectively, then $\{u_1, \cdots, u_k, v_1, \cdots, v_l \}$ is a basis for $U \oplus V$. Therefore, knowing that $X \oplus Y=X+Y$ allows us to find a basis of $X+Y$ easily: we just find the union of bases of $X$ and $Y$. Namely, if $V=U_1 \oplus U_2 \oplus \cdots \oplus U_k$ and $B_i$ is a basis of $U_i$ then $B_1 \cup B_2 \cup \cdots \cup B_k$ is a basis of V. In particular, $\text{dim}\:V=\sum\limits_{i=1}^k \text{dim}\: U_i$.

Direct sum of vectors
Any vector $\vec{v}\in V$ can be written as a linear combination of the basis vectors, $\vec{v}=v_1e_1+\cdots+v_ne_n$. We normally express this fact in the form of a column vector, $\vec{v}=(v_1,v_2,\cdots,v_n)^T$. Similarly, for any vector $\vec{w} \in W$, we can write $\vec{w}=(w_1,w_2,\cdots,w_m)^T$. Inside the direct sum space these become $\vec{v}=(v_1,v_2,\cdots,v_n,0,0,\cdots,0)^T$ and $\vec{w}=(0,0,\cdots,0,w_1,w_2,\cdots,w_m)^T$. Then the direct sum of two non-zero vectors is $\vec{v}\oplus \vec{w}=(v_1,v_2,\cdots,v_n,w_1,w_2,\cdots,w_m)^T$.

Block-diagonal form
A matrix is actually a linear map from a vector space to another vector space. Let's consider the endomorphism $A:V\to V, \vec{v}\mapsto A\vec{v}$ or $\begin{pmatrix} v_1\\v_2\\ \vdots\\ v_n\end{pmatrix}\mapsto \begin{pmatrix} a_{11}&a_{12}&\cdots&a_{1n}\\ a_{21}&a_{22}&\cdots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n1}&a_{n2}&\cdots&a_{nn}\end{pmatrix}\begin{pmatrix} v_1\\v_2\\ \vdots\\ v_n\end{pmatrix}.$ Similarly, $B:W\to W, \vec{w}\mapsto B\vec{w}$. On the direct sum space $V\oplus W$, the matrix $A\oplus B$ still acts on the vectors such that $\vec{v}\mapsto A\vec{v}$ and $\vec{w}\mapsto B\vec{w}$. This is achieved by lining up all matrix elements in a block-diagonal form, $$A\oplus B=\begin{pmatrix}A&0_{n\times m}\\ 0_{m\times n}&B\end{pmatrix}.$$
For example, if $n=2,m=3$, and $A=\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix},B=\begin{pmatrix}b_{11}&b_{12}&b_{13}\\b_{21}&b_{22}&b_{23}\\b_{31}&b_{32}&b_{33}\end{pmatrix}$, then $$A\oplus B=\begin{pmatrix}a_{11}&a_{12}&0&0&0\\a_{21}&a_{22}&0&0&0\\0&0&b_{11}&b_{12}&b_{13}\\0&0&b_{21}&b_{22}&b_{23}\\0&0&b_{31}&b_{32}&b_{33}\end{pmatrix}.$$
When it acts on $\vec{v}\oplus \vec{w}$, $$(A\oplus B)(\vec{v}\oplus \vec{w})=\begin{pmatrix}A&0_{n\times m}\\ 0_{m\times n}&B\end{pmatrix}\begin{pmatrix}\vec{v}\\ \vec{w}\end{pmatrix}=\begin{pmatrix}A\vec{v}\\B\vec{w}\end{pmatrix}=(A\vec{v})\oplus(B\vec{w}).$$ If you have two such matrices, multiplication is done on each vector space separately, $$(A_1 \oplus B_1)(A_2\oplus B_2)=(A_1A_2)\oplus (B_1B_2).$$
Note that not every matrix on $V\oplus W$ can be written as a direct sum of a matrix on $V$ and another on $W$. The space of matrices on $V\oplus W$ has $(n+m)^2$ independent entries, while matrices on $V$ and $W$ contribute only $n^2$ and $m^2$, respectively. The remaining $(n+m)^2-n^2-m^2=2mn$ dimensions correspond to matrices that cannot be written as a direct sum.

Other useful formulae are $\text{det}(A\oplus B)=\text{det}(A)\text{det}(B)$ and $\text{Tr}(A\oplus B)=\text{Tr}(A)+\text{Tr}(B)$.
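All of these facts are easy to verify numerically. A sketch using `scipy.linalg.block_diag`, which builds exactly this block-diagonal form:

```python
# Direct sum of matrices as a block-diagonal matrix.
import numpy as np
from scipy.linalg import block_diag

A = np.random.rand(2, 2)
B = np.random.rand(3, 3)
v = np.random.rand(2)
w = np.random.rand(3)

AB = block_diag(A, B)                # A (+) B, shape (5, 5)
vw = np.concatenate([v, w])          # v (+) w
assert np.allclose(AB @ vw, np.concatenate([A @ v, B @ w]))
assert np.isclose(np.linalg.det(AB), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.trace(AB), np.trace(A) + np.trace(B))
```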

More to learn:
geometric picture of sum of two subspaces and examples
http://algebra.math.ust.hk/vector_space/13_direct_sum/lecture2.shtml

more on direct sum
https://math.dartmouth.edu/archive/m22s02/public_html/VectorSpaces.pdf

examples
http://mathwiki.ucdavis.edu/Algebra/Linear_algebra/04._Vector_spaces/4.4_Sums_and_direct_sum

projection operators
https://people.maths.ox.ac.uk/flynn/genus2/alg0506/LALect02.pdf

problems on direct sum
http://en.wikibooks.org/wiki/Linear_Algebra/Combining_Subspaces

http://math.stackexchange.com/questions/1151335/what-is-direct-sum-decomposition
https://web.math.princeton.edu/~jonfick/2012SpMAT204/Direct_Sums.pdf
http://www.math.nyu.edu/faculty/hausner/quotientspaces.pdf
http://www.math.ttu.edu/~drager/Classes/05Summer/m5399/struct.pdf
http://www.mathsman.co.uk/LinearAlgebra-Ver1.4[1].pdf
http://planetmath.org/directsumofmatrices

Monday, 1 June 2015

Fields

A vector space is a non-empty set consisting of elements called vectors which can be added and multiplied by scalars.

The scalars that operate on vectors in a vector space form a field $\mathbb{F}$, where addition $+$ and multiplication $\cdot$ over $\mathbb{F}$ are performed between scalars.



Prerequisites
Groups (another algebraic structure)

A group, denoted by $(G, \cdot)$, is a non-empty set (finite or infinite) $G$ with a binary operator $\cdot$ such that the following four properties are satisfied:

Closure: if $a$ and $b$ belong to $G$, then $a\cdot b$ also belongs to $G$;
Associativity: $a\cdot (b\cdot c)=(a\cdot b)\cdot c$ for all $a,b,c \in G$;
Identity element: there is an element $e \in G$ such that $a\cdot e=e\cdot a=a$ for every element $a \in G$;
Inverse element: for every element $a$ in $G$, there's an element $a'$ such that $a\cdot a'=e$ where $e$ is the identity element.

In general, a group is not necessarily commutative, that is, it need not satisfy $a\cdot b=b\cdot a$ for all $a,b \in G$. When it does, the group is called an abelian group. If $G$ has finitely many elements, $G$ is said to be a finite group. The order of $G$ is the number of elements in $G$; it is denoted by $|G|$ or $\#G$.

Examples
Let's consider the set of integers, $\mathbb{Z}$ with addition and multiplication.

$\small \begin{array}{c|c|c} & \text{Addition} & \text{Multiplication} \\ \hline \text{Closure} & \text{a+b is an integer} & \text{a*b is an integer} \\ \text{Associativity} & a+(b+c)=(a+b)+c & a*(b*c)=(a*b)*c \\ \text{Existence of an identity element} & a+0=a & a*1=a \\ \text{Existence of inverse elements} & a+(-a)=0 & \color{blue}{\text{only for 1 and -1}: 1*1=1,\: -1*(-1)=1} \\ \text{Commutativity} & a+b=b+a & a*b=b*a \end{array}$

$(\mathbb{Z}, +)$ is an infinite abelian group but $(\mathbb{Z}, *)$ is not a group.
$(\mathbb{Z}, *)$ is not a group because most of the elements do not have inverses.


Now consider the set of remainders modulo a positive integer $n$, $\mathbb{Z}_n$ or $\mathbb{Z}/n\mathbb{Z}$, namely $\{0,1,2,\cdots,n-1\}$ with addition modulo $n$ and multiplication modulo $n$.

$\small \begin{array}{c|c|c} & \text{Addition mod n} & \text{Multiplication mod n} \\ \hline \text{Closure} & a+b \equiv c\:\text{mod n},\: 0\leq c\leq n-1 & a*b \equiv c\:\text{mod n},\: 0\leq c\leq n-1 \\ \text{Associativity} & a+(b+c)=(a+b)+c\:\text{mod n} & a*(b*c)=(a*b)*c\:\text{mod n} \\ \text{Existence of an identity element} & a+0=a\:\text{mod n} & a*1=a\:\text{mod n} \\ \text{Existence of inverse elements} & a+(n-a)=0\:\text{mod n} & \color{blue}{\text{only when a is coprime to n}} \\ \text{Commutativity} & a+b=b+a\:\text{mod n} & a*b=b*a\:\text{mod n}\end{array}$

$(\mathbb{Z}_n, +)$ is a finite abelian group of order $n$, but $(\mathbb{Z}_n, *)$ is not a group. Note that $0$ is an element of $\mathbb{Z}_n$ and $0*a \equiv 0 \not\equiv 1 \pmod n$ for every $a$, so $0$ has no inverse. Therefore $(\mathbb{Z}_n, *)$ is not a group.


Lastly, consider the set of remainders coprime to the modulus $n$. For example, when $n=10$, the set is $\{1,3,7,9\}$. In particular, when $n$ is a prime number, the set is $\{1,2,\cdots,n-1\}$. Let's call this set Coprime-n, with addition modulo $n$ and multiplication modulo $n$.

$\small \begin{array}{c|c|c} & \text{Addition mod n} & \text{Multiplication mod n} \\ \hline \text{Closure} & \color{blue}{a+b\: \text{may not be in Coprime-n.}} & a*b \equiv c\:\text{mod n},\: \text{c is in Coprime-n} \\ \text{Associativity} & a+(b+c)=(a+b)+c\:\text{mod n} & a*(b*c)=(a*b)*c\:\text{mod n} \\ \text{Existence of an identity element} & \color{blue}{\text{null}} & a*1=a\:\text{mod n} \\ \text{Existence of inverse elements} & \color{blue}{\text{null}} & \text{exists for every a in Coprime-n} \\ \text{Commutativity} & a+b=b+a\:\text{mod n} & a*b=b*a\:\text{mod n}\end{array}$

Now (Coprime-n, +) is not a group, but (Coprime-n, *) is a group; it is abelian and finite. When $n$ is a prime number, the order of (Coprime-n, *) is $n-1$. In general the order is Euler's totient $\varphi(n)$: for example, when $n=10$, the order of (Coprime-10, *) is $\varphi(10)=4$, not $9$.
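A small Python sketch of (Coprime-n, *) for $n=10$ (illustrative; `gcd` is from the standard library):

```python
# The multiplicative group of residues coprime to n.
from math import gcd

def coprime_group(n):
    return [a for a in range(1, n) if gcd(a, n) == 1]

G = coprime_group(10)
assert G == [1, 3, 7, 9]                              # order phi(10) = 4
# Closure under multiplication mod 10:
assert all((a * b) % 10 in G for a in G for b in G)
# Every element has a multiplicative inverse in G:
assert all(any((a * b) % 10 == 1 for b in G) for a in G)
```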



A field is a non-empty set $\mathbb{F}$ with two binary operators, usually denoted by $+$ and $*$ (or omitted), that satisfy the field axioms:

(1) Closure: If $a,b \in \mathbb{F}$, then $a+b \in \mathbb{F}$ and $ab \in \mathbb{F}$.
(2) Commutativity: If $a,b \in \mathbb{F}$, there hold $a+b=b+a$ and $ab=ba$.
(3) Associativity: If $a,b,c \in \mathbb{F}$, there hold $(a+b)+c=a+(b+c)$ and $a(bc)=(ab)c$.
(4) Distributivity: For $a,b,c \in \mathbb{F}$, there holds $a(b+c)=ab+ac$.
(5) Existence of zero: There is a scalar, called zero, denoted by 0, such that $a+0=a$ for any $a \in \mathbb{F}$.
(6) Existence of unity: There is a scalar different from zero, called one, denoted by $1$, such that $1a=a$ for any $a \in \mathbb{F}$.
(7) Existence of additive inverse: For any $a \in \mathbb{F}$, there is a scalar, denoted by $-a$, such that $a+(-a)=0$.
(8) Existence of multiplicative inverse: For any $a \in \mathbb{F}\setminus \{0 \}$, there is a scalar, denoted by $a^{-1}$, such that $aa^{-1}=1$.

If the set $F$ is finite, then the field is said to be a finite field. The order of a finite field is the number of elements in the finite field. Finite fields have practical applications in coding theory, cryptography, algebraic geometry and number theory.

Examples of fields: $\mathbb{Q}, \mathbb{R}, \mathbb{C}$

Non-example of a field: $(\mathbb{Z}, +, *)$ does not form a field because there are no multiplicative inverses for its non-unit elements. (For any $a \in \mathbb{Z}$ other than $\pm 1$, $a^{-1}$ is not in $\mathbb{Z}$.)

$(\mathbb{Z}_n, +, *)$ in general is not a finite field. For example, $\mathbb{Z}_8 \setminus \{0\}=\{1,2,3,4,5,6,7\}$ along with modulo $8$ multiplication does not form a group. However, when $n$ is a prime number, say $5$, $\mathbb{Z}_5 \setminus \{0\}=\{1,2,3,4\}$ along with modulo $5$ multiplication forms the abelian group $\mathbb{Z}_5^*$. Therefore, $(\mathbb{Z}_5, +, *)$ is a finite field. [...]
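A brute-force sketch contrasting $\mathbb{Z}_5$ with $\mathbb{Z}_8$ (illustrative only):

```python
# Z_p is a field when p is prime: every nonzero element has an inverse mod p.
p = 5
for a in range(1, p):
    inv = next(b for b in range(1, p) if (a * b) % p == 1)
    print(f"{a}^(-1) = {inv} (mod {p})")   # 1->1, 2->3, 3->2, 4->4
# Contrast with n = 8: 2 has no inverse mod 8, so Z_8 is not a field.
assert not any((2 * b) % 8 == 1 for b in range(8))
```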



A set $U$ is a vector space over a field $\mathbb{F}$ if $U$ is non-empty and there is addition between the elements of $U$, called vectors, and scalar multiplication between elements in $\mathbb{F}$, called scalars, and vectors, such that the following axioms hold.
(1) (Closure) For $u,v \in U$, we have $u+v \in U$. For $u \in U$ and $a \in \mathbb{F}$, we have $au \in U$.
(2) (Commutativity) For $u,v \in U$, we have $u+v=v+u$.
(3) (Associativity) For $u,v,w \in U$, we have $u+(v+w)=(u+v)+w$.
(4) (Existence of zero vector) There is a vector, called zero and denoted by $0$ such that $u+0=u$ for any $u \in U$.
(5) (Existence of additive inverse) For any $u \in U$, there is a vector, denoted by $(-u)$, such that $u+(-u)=0$.
(6) (Associativity of scalar multiplication) For any $a,b \in \mathbb{F}$ and $u \in U$, we have $a(bu)=(ab)u$.
(7) (Property of unit scalar) For any $u \in U$, we have $1u=u$.
(8) (Distributivity) For any $a,b \in \mathbb{F}$ and $u,v \in U$, we have $(a+b)u=au+bu$ and $a(u+v)=au+av$.

Reference:
http://www.doc.ic.ac.uk/~mrh/330tutor/index.html

More to explore:
http://www.math.uconn.edu/~kconrad/blurbs/galoistheory/finitefields.pdf
http://aix1.uottawa.ca/~jkhoury/coding.htm

Application of finite fields: coding theory
http://mathworld.wolfram.com/Error-CorrectingCode.html