Examples:
Extension by linearity
Let $V$ be a vector space over $\mathbb{F}$ with basis $B$. Then given any vector space $W$ over $\mathbb{F}$ and any function $f:B\to W$, there exists a unique linear transformation $\overline{f}:V\to W$ (called the linear extension of $f$) which extends $f$ in the sense that $\overline{f}(v)=f(v)$ for all $v\in B$, so that the diagram below commutes:
$B \xrightarrow{\quad f \quad} W\\ \; i \searrow \quad \;\; \nearrow \overline{f}\\ \qquad \; V$
[$\overline{f}(v)=f(v)$ for all $v\in B$ can be replaced by $\overline{f}|_B=f$.]
This theorem tells us that to define a linear map, we only need to specify where the basis elements are mapped to. Once this is done, we can always extend the mapping to the whole space by linearity. Also, there is no constraint on how such basis elements should be mapped. That is, a linear map is entirely determined by its action on a basis.
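The theorem above can be sketched numerically. In this illustrative example (all concrete numbers are assumptions, not from the text), we specify a map only on the standard basis of $\mathbb{R}^3$ and extend it to all of $\mathbb{R}^3$ by linearity:

```python
import numpy as np

# Specify f only on the standard basis e_1, e_2, e_3 of R^3
# (the particular values are illustrative).
f_on_basis = {
    0: np.array([1.0, 2.0]),   # f(e_1)
    1: np.array([0.0, -1.0]),  # f(e_2)
    2: np.array([3.0, 0.0]),   # f(e_3)
}

def f_bar(v):
    """Linear extension: f_bar(sum_i v_i e_i) = sum_i v_i f(e_i)."""
    return sum(v[i] * f_on_basis[i] for i in range(len(v)))

v = np.array([2.0, 1.0, -1.0])
# 2*f(e_1) + 1*f(e_2) - 1*f(e_3) = (2,4) + (0,-1) - (3,0) = (-1, 3)
print(f_bar(v))  # [-1.  3.]
```

Note that nothing constrains the three values $f(e_i)$; any choice extends to a (unique) linear map.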
Now suppose a linear map $f:X \to Y$ is given between finite-dimensional spaces $X,Y$ and that $\{e_1,\cdots,e_n\},\{v_1,\cdots,v_m\}$ are bases for $X,Y$ respectively. From what we have seen above, the mapping $f$ is uniquely determined by specifying $f(e_i),i=1,\cdots,n$. Since $\{v_1,\cdots,v_m\}$ is a basis for $Y$, each $f(e_i)$ can be expressed as a linear combination of $v_1,\cdots,v_m$, say $f(e_i)=\sum\limits_{j=1}^m a_{ji}v_j$.
In matrix notation, this means
$\begin{pmatrix} f(e_1) & \cdots & f(e_n)\end{pmatrix}=\begin{pmatrix} v_1 & \cdots & v_m \end{pmatrix} \begin{pmatrix} a_{11} & \cdots & a_{1n}\\ a_{21} & \cdots & a_{2n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}.$
Hence every element $x=\sum\limits_{i=1}^n x_ie_i$ is mapped, by the linearity of $f$, as
$f\left(\sum\limits_{i=1}^n x_ie_i\right)=\sum\limits_{i=1}^n x_i f(e_i)=\sum\limits_{i=1}^n \sum\limits_{j=1}^m a_{ji}x_iv_j=\sum\limits_{i=1}^m\left(\sum\limits_{j=1}^n a_{ij}x_j\right)v_i$, where the last equality simply relabels the dummy indices $i \leftrightarrow j$.
In other words, when we express $f(\sum\limits_{i=1}^n x_ie_i)$ as $\sum\limits_{i=1}^m y_iv_i$, the coefficients $y_1,\cdots,y_m$ are given by
$\begin{pmatrix}y_1\\y_2\\ \vdots \\y_m \end{pmatrix}=\begin{pmatrix} a_{11} & \cdots & a_{1n}\\ a_{21} & \cdots & a_{2n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix}x_1\\x_2\\ \vdots \\x_n\end{pmatrix}$.
This is the matrix representation of $f$ (with respect to the bases $\{e_1,\cdots,e_n\}$ and $\{v_1,\cdots,v_m\}$).
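As a concrete sketch of this construction (the operator and bases here are illustrative assumptions), take the derivative map $D:P_2\to P_1$ on polynomials with bases $\{1,x,x^2\}$ and $\{1,x\}$; column $i$ of the matrix holds the codomain coordinates of $D(e_i)$:

```python
import numpy as np

# D(1) = 0, D(x) = 1, D(x^2) = 2x, written in the basis {1, x}:
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

# p(x) = 3 + 4x + 5x^2 has coordinate vector (3, 4, 5);
# its derivative 4 + 10x should have coordinates (4, 10).
x = np.array([3.0, 4.0, 5.0])
y = A @ x
print(y)  # [ 4. 10.]
```

Applying the map is now just a matrix-vector product in coordinates.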
Change of coordinates
Let $V$ be an $n$-dimensional vector space. Let $\{v_1,v_2,\cdots,v_n\}$ be a basis for $V$ and $g_1:V\to \mathbb{F}^n$ be the coordinate mapping corresponding to this basis. Let $\{u_1,u_2,\cdots,u_n\}$ be another basis for $V$ and $g_2:V\to \mathbb{F}^n$ be the coordinate mapping corresponding to this basis.
$\qquad V\\ g_1 \swarrow \;\; \searrow g_2\\ \mathbb{F}^n \xrightarrow{\;g_2\circ g_1^{-1}\;} \mathbb{F}^n$
The composition $g_2\circ g_1^{-1}$ is a linear operator on $\mathbb{F}^n$. It has the form $x \mapsto Ax$, where $A$ is an $n\times n$ matrix. $A$ is the transition matrix from $\{v_1,v_2,\cdots,v_n\}$ to $\{u_1,u_2,\cdots,u_n\}$. Columns of $A$ are coordinates of the vectors $\{v_1,v_2,\cdots,v_n\}$ with respect to the basis $\{u_1,u_2,\cdots,u_n\}$.
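A small numerical check of this description (the two bases of $\mathbb{R}^2$ below are assumed for illustration): writing the basis vectors as columns of matrices, the transition matrix is $A = U_{\text{mat}}^{-1} V_{\text{mat}}$, and it carries $v$-coordinates to $u$-coordinates.

```python
import numpy as np

# Columns are the basis vectors in standard coordinates (illustrative).
V_mat = np.array([[1.0,  1.0],
                  [1.0, -1.0]])   # basis {v_1, v_2}
U_mat = np.array([[2.0, 0.0],
                  [0.0, 3.0]])    # basis {u_1, u_2}

# Transition matrix from the v-basis to the u-basis: its columns are
# the u-coordinates of v_1 and v_2.
A = np.linalg.inv(U_mat) @ V_mat

# Check the triangle: take v-coordinates x, recover the actual vector,
# and compare its u-coordinates with A @ x.
x = np.array([3.0, -2.0])
w = V_mat @ x                        # the vector itself, standard coords
u_coords = np.linalg.solve(U_mat, w)
print(np.allclose(u_coords, A @ x))  # True
```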
Change of coordinates for a linear operator
Let $T:V\to V$ be a linear operator on a vector space $V$.
Let $A$ be the matrix of $T$ relative to a basis $\{a_1,a_2,\cdots,a_n\}$ for $V$.
Let $B$ be the matrix of $T$ relative to a basis $\{b_1,b_2,\cdots,b_n\}$ for $V$.
Let $U$ be the transition matrix from the basis $\{a_1,a_2,\cdots,a_n\}$ to $\{b_1,b_2,\cdots,b_n\}$.
$\begin{array}[c]{ccc}\qquad \text{a-coordinates of v}&\xrightarrow{\quad A\quad}&\text{a-coordinates of T(v)}\\ \scriptstyle{U}\Bigg\downarrow& &\Bigg\downarrow \scriptstyle{U}\\ \qquad \text{b-coordinates of v} & \xrightarrow{\quad B\quad} & \text{b-coordinates of T(v)} \end{array}$
It follows that $UAx=BUx$ for all $x \in \mathbb{F}^n \Rightarrow UA=BU.$ Then $A=U^{-1}BU$ and $B=UAU^{-1}$.
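We can verify that the square commutes numerically. The matrices below are illustrative assumptions, not taken from the text:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])          # matrix of T in the a-basis
U = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # transition matrix from a- to b-basis
B = U @ A @ np.linalg.inv(U)        # matrix of T in the b-basis

x = np.array([2.0, -1.0])           # a-coordinates of some vector v
# Route 1: apply T in a-coordinates, then convert to b-coordinates.
# Route 2: convert to b-coordinates first, then apply T there.
print(np.allclose(U @ (A @ x), B @ (U @ x)))  # True: the diagram commutes
```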
Another similar commutative diagram [p1]
More similar diagrams [p10,14,15]
Isomorphism
A linear map $T:V\to W$ is an isomorphism if there exists a linear map $L:W\to V$ such that $TL=I_W$ and $LT=I_V$.
$\begin{array}{lll} V & \xrightarrow{\quad T\quad} & W \\
\uparrow {I_V} & & \uparrow{I_W} \\
V & \xleftarrow{\quad L\quad} & W \\
\end{array}$
Universal property of the quotient
Let $V$ be a vector space over $\mathbb{F}$ and $W$ be a subspace of $V$. Define $\pi:V\to V/W$ by $v \mapsto [v]$. If $W'$ is a vector space over $\mathbb{F}$ and $f:V\to W'$ is a linear map whose kernel contains $W$, then there exists a unique linear map $g:V/W\to W'$ such that $f=g\circ \pi$. This is summarized by the following commutative diagram:
$V \xrightarrow{\qquad f \qquad} W'\\ \; \pi \searrow \quad \qquad \nearrow g\\ \qquad\;\; V/W$
Proof:
Let $W'$ be a vector space over $\mathbb{F}$, $f:V\to W'$ be a linear map with $W \subset \text{ker}f$. Then define $g:V/W\to W'$ to be the map $[v]\mapsto f(v)$.
$g$ is well-defined because if $[v]\in V/W$ and $v_1,v_2\in V$ are both representatives of $[v]$, then there exists $w\in W$ such that $v_1=v_2+w$.
Hence $f(v_1)=f(v_2+w)=f(v_2)+f(w)=f(v_2)$, where the second equality uses the linearity of $f$ and the third uses $f(w)=0$, which holds since $w\in W \subseteq \text{ker}f$.
The linearity of $g$ comes from linearity of $f$.
Also, by definition $f=g\circ \pi$.
It remains to prove $g$ is unique.
Suppose $h:V/W\to W'$ is such that $h\circ \pi=f=g\circ \pi$. Then $h([v])=g([v])$ for all $v\in V$. Since $\pi$ is surjective, $h=g.\:\Box$
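A concrete instance may help; the spaces and map below are chosen for illustration and are not from the original text.

```latex
V = \mathbb{R}^2, \qquad W = \{(t,0) : t \in \mathbb{R}\}, \qquad
f : \mathbb{R}^2 \to \mathbb{R}, \quad f(x,y) = y.
% Here W \subseteq \ker f, and the induced map on the quotient is
g : V/W \to \mathbb{R}, \qquad g\big([(x,y)]\big) = f(x,y) = y,
% which is well defined because any two representatives of [(x,y)]
% differ by an element of W, on which f vanishes; f = g \circ \pi
% holds by construction.
```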
Homomorphism theorem
Let $X,Y$ be vector spaces over $\mathbb{F}$ and $f:X \to Y$ a linear map. Then $X/\text{ker}f \cong \text{im}f$, namely $X/\text{ker}f$ is isomorphic to $\text{im}f$.
Proof: Consider the mapping $\overline{f}:X/\text{ker}f \to Y$ defined by $\overline{f}:[x]=x+\text{ker}f \mapsto f(x)$.
A commutative diagram would express this as
$X \xrightarrow{\qquad f \qquad} Y\\ \; \pi \searrow \quad \qquad \nearrow \overline{f}\\ \qquad X/ \text{ker}f$
We can use similar arguments as the previous example to prove that $\overline{f}$ is well-defined and linear.
Now suppose $\overline{f}([x])=0$. Then $f(x)=0$, which implies $x\in \text{ker}f$. This means $[x]=0_{X/\text{ker}f}$, so $\overline{f}$ has trivial kernel and is therefore injective. Hence if we restrict the codomain of $\overline{f}$ to $\text{im}f$, then $\overline{f}$ becomes a bijection, and so $X/\text{ker}f \cong \text{im}f.\:\Box$
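For a matrix map $f:\mathbb{R}^4\to\mathbb{R}^3$ this isomorphism can be checked on dimensions, since $\dim(X/\text{ker}f)=\dim X-\dim\text{ker}f$ and $\dim\text{im}f=\text{rank}\,f$. The matrix below is an illustrative assumption:

```python
import numpy as np

# f: R^4 -> R^3 given by a matrix whose third row is the sum of the
# first two, so rank(F) = 2 (illustrative example).
F = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(F)     # dim(im f)
dim_X = F.shape[1]
dim_ker = dim_X - rank              # rank-nullity theorem
dim_quotient = dim_X - dim_ker      # dim(X / ker f)
print(dim_quotient == rank)         # True, as the theorem predicts
```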
Complexification of vector spaces
Universal property of free groups
Let $G$ be a group with a generating set $X \subset G$. Then $G$ is free on $X$ if and only if every map $\phi:X\to H$ from $X$ into a group $H$ can be extended to a unique homomorphism $\overline{\phi}:G\to H$, so that the diagram below commutes:
$X \xrightarrow{\quad \phi \quad} H\\ \; i \searrow \quad \;\; \nearrow \overline{\phi}\\ \qquad \; G$
where $X \xrightarrow{i} G$ is the inclusion of $X$ into $G$.
As you may have noticed, extension by linearity, the universal property of the quotient, the universal property of free groups, and the universal property of complexification all share a similar commutative diagram. Each comes with three sets and three maps (one of them injective or surjective, another a natural/standard embedding, and lastly a unique map). Such characterizations are instances of universal properties in category theory (many of them can be phrased in terms of initial objects). Check out the formal definition of universal property.
Applications:
The reason we can use the Fourier transform to solve differential equations is that the following diagram commutes.
$\begin{array}[c]{ccc}\qquad \text{problem}&\xrightarrow{\text{Fourier transform}}&\text{transformed problem}\\ \scriptstyle{\text{solve (hard)}}\Bigg\downarrow& &\Bigg\downarrow \scriptstyle{\text{solve (easy)}}\\ \qquad \text{solution} & \xleftarrow{\text{Inverse Fourier transform}} & \text{transformed solution} \end{array}$
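The commuting square can be demonstrated on the simplest transformed problem, differentiation: differentiating a periodic function directly (the "hard" route, here done by hand) agrees with going to Fourier space, multiplying by $ik$, and coming back. The specific function is an illustrative assumption:

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)                  # the "problem": differentiate u
du_direct = 3 * np.cos(3 * x)      # direct route: calculus by hand

# Fourier route: transform, multiply by i*k, transform back.
k = np.fft.fftfreq(n, d=1.0 / n)   # integer frequencies for period 2*pi
du_fft = np.fft.ifft(1j * k * np.fft.fft(u)).real

print(np.allclose(du_direct, du_fft))  # True: both routes agree
```

Differentiation becomes multiplication after the transform, which is exactly why the right-hand side of the square is the "easy" route.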
Outside the realm of mathematics, we can also see instances of commutative diagrams. A famous example is Hess's law in physical chemistry, which states that the enthalpy change in a chemical reaction is independent of the pathway taken between the initial and final states. What this means is that we have the following commutative diagram:
$\text{reactants A} \xrightarrow{\Large \text{route 1}} \text{products B}\\ \quad \qquad\ \searrow \quad \qquad \qquad \nearrow\\ \qquad \quad \quad \text{intermediates}\\ \qquad \qquad \quad\text{(route 2)}$
In fact, we have been using this kind of 'reasoning' all along.
Examples:
- In the derivation of normal distribution, we express the same probability in two different ways.
- Some trigonometric identities can be proved by employing this idea: evaluating the integral $\int \sin 2x \cos x \:dx$ in two different ways yields the identity $\cos 3x=4\cos^3 x-3\cos x$.
- Proofs using combinatorial arguments count the same quantity in two different ways.
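The trigonometric example above can be checked numerically: integrating $\sin 2x \cos x$ by the product-to-sum formula gives $-\frac{1}{6}\cos 3x-\frac{1}{2}\cos x$, while substituting $u=\cos x$ gives $-\frac{2}{3}\cos^3 x$, and comparing the two antiderivatives yields the identity.

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 100)

F1 = -np.cos(3 * x) / 6 - np.cos(x) / 2   # route 1: product-to-sum formula
F2 = -2 * np.cos(x) ** 3 / 3              # route 2: u = cos x substitution

# The two antiderivatives agree (here the constant of integration is 0),
# which is equivalent to cos(3x) = 4 cos(x)^3 - 3 cos(x).
print(np.allclose(F1, F2))                                             # True
print(np.allclose(np.cos(3 * x), 4 * np.cos(x) ** 3 - 3 * np.cos(x)))  # True
```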
More to explore:
Category theory
Universal property
More examples of commutative diagrams
Definition of commutative diagrams
Free groups
References:
Commutative diagram
Universal property of free groups [p6]
Examples of commutative diagrams [p4-6,28]
Quotient space [p1-3]
Extension by linearity
From Vector Spaces to Function Spaces: Introduction to Functional Analysis with Applications by Yutaka Yamamoto [p16,17,29]