i) Show that for any $n\in \Bbb N$, $\int_0^{\pi/2}\sin^n x\;dx=\dfrac{n-1}{n}\int_0^{\pi/2}\sin^{n-2} x\;dx$.
ii) Show that for any $k\in \Bbb N$, $$\int_0^{\pi/2}\sin^{2k} x\;dx=\dfrac{(2k-1)(2k-3)\cdots 3\cdot 1}{(2k)(2k-2)\cdots 4\cdot 2}\dfrac{\pi}{2}=\dfrac{(2k)!}{(2^k k!)^2}\dfrac{\pi}{2}$$ and $$\int_0^{\pi/2}\sin^{2k+1} x\;dx=\dfrac{2k(2k-2)\cdots 4\cdot 2}{(2k+1)(2k-1)\cdots 3\cdot 1}=\dfrac{(2^k k!)^2}{(2k+1)!}.$$ iii) For $k\in \Bbb N$, let $$\mu_k=\dfrac{\int_0^{\pi/2}\sin^{2k} x\;dx}{\int_0^{\pi/2}\sin^{2k+1} x\;dx}.$$ Show that $1\leq \mu_k\leq \dfrac{2k+1}{2k}$ for each $k\in \Bbb N$ and consequently that $\mu_k\to 1$ as $k\to \infty$. Deduce that $$\sqrt{\pi}=\lim_{k\to \infty} \dfrac{(k!)^2 2^{2k}}{(2k)!\sqrt{k}}.$$ Thus $\pi\sim \dfrac{(k!)^4 2^{4k}}{((2k)!)^2 k}$. (Hint: $\sin^{2k+1}x\leq \sin^{2k}x\leq \sin^{2k-1}x$ for all $x\in[0,\pi/2]$.)
i) $$\begin{align}\int_0^{\pi/2}\sin^n x\;dx&=-\int_0^{\pi/2}\sin^{n-1} x\;d(\cos x)\\
&=-\sin^{n-1}x\cos x\bigg|_0^{\pi/2}+(n-1)\int_0^{\pi/2}\cos^2 x\sin^{n-2} x\;dx\\
&=0+(n-1)\int_0^{\pi/2}\sin^{n-2} x\;dx-(n-1)\int_0^{\pi/2}\sin^n x\;dx\\
\Rightarrow \int_0^{\pi/2}\sin^n x\;dx&=\dfrac{n-1}{n}\int_0^{\pi/2}\sin^{n-2} x\;dx
\end{align}$$ ii) $$\begin{align} \int_0^{\pi/2}\sin^{2k} x\;dx&=\dfrac{2k-1}{2k}\int_0^{\pi/2}\sin^{2k-2} x\;dx\\
&=\dfrac{2k-1}{2k}\dfrac{2k-3}{2k-2}\int_0^{\pi/2}\sin^{2k-4} x\;dx\\
&=\dfrac{2k-1}{2k}\dfrac{2k-3}{2k-2}\cdots \dfrac{5}{6}\int_0^{\pi/2}\sin^4 x\;dx\\
&=\dfrac{2k-1}{2k}\dfrac{2k-3}{2k-2}\cdots \dfrac{3}{4} \int_0^{\pi/2}\sin^2 x\;dx\\
&=\dfrac{2k-1}{2k}\dfrac{2k-3}{2k-2}\cdots \dfrac{3}{4} \dfrac{1}{2}\int_0^{\pi/2}\sin^0 x\;dx\\
&=\dfrac{(2k-1)(2k-3)\cdots 3\cdot 1}{(2k)(2k-2)\cdots 4\cdot 2}\dfrac{\pi}{2}\\
&=\dfrac{(2k)(2k-1)(2k-2)(2k-3)\cdots 3\cdot 2\cdot 1}{[(2k)(2k-2)\cdots 4\cdot 2]^2}\dfrac{\pi}{2}\\
&=\dfrac{(2k)!}{(2^k\cdot [k(k-1)\cdots 2\cdot 1])^2}\dfrac{\pi}{2}\\
&=\dfrac{(2k)!}{(2^k k!)^2}\dfrac{\pi}{2}
\end{align}$$ The other case is similar.
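These closed forms are easy to sanity-check numerically. The sketch below (Python; the helper name `sine_power_integral` is ours, not from the post) compares a midpoint-rule approximation of $\int_0^{\pi/2}\sin^n x\;dx$ against the two formulas from part ii:

```python
from math import sin, pi, factorial

def sine_power_integral(n, steps=100_000):
    """Midpoint-rule approximation of the integral of sin^n x over [0, pi/2]."""
    h = (pi / 2) / steps
    return h * sum(sin((i + 0.5) * h) ** n for i in range(steps))

for k in (1, 2, 5):
    # Closed forms from part ii for even and odd powers
    even = factorial(2 * k) / (2 ** k * factorial(k)) ** 2 * pi / 2
    odd = (2 ** k * factorial(k)) ** 2 / factorial(2 * k + 1)
    assert abs(sine_power_integral(2 * k) - even) < 1e-6
    assert abs(sine_power_integral(2 * k + 1) - odd) < 1e-6
```

For example, $k=1$ gives $\int_0^{\pi/2}\sin^2 x\;dx=\pi/4$ and $\int_0^{\pi/2}\sin^3 x\;dx=2/3$, both of which the check recovers.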
Question: Why is the product formula for the odd powers of sine the 'reciprocal' of the one for the even powers (apart from the factor $\pi/2$)?
iii) It is simple to prove the claim with the hint (which is true because $\sin x\leq 1$ when $x\in [0,\pi/2]$). For all $x\in[0,\pi/2]$, $$\sin^{2k+1}x\leq \sin^{2k}x\leq \sin^{2k-1}x\\ \int_0^{\pi/2}\sin^{2k+1}x\;dx\leq \int_0^{\pi/2}\sin^{2k}x\;dx\leq \int_0^{\pi/2}\sin^{2k-1}x\;dx\\ \dfrac{\int_0^{\pi/2}\sin^{2k+1}x\;dx}{\int_0^{\pi/2}\sin^{2k+1}x\;dx}\leq \dfrac{\int_0^{\pi/2}\sin^{2k}x\;dx}{\int_0^{\pi/2}\sin^{2k+1}x\;dx}\leq \dfrac{\int_0^{\pi/2}\sin^{2k-1}x\;dx}{\int_0^{\pi/2}\sin^{2k+1}x\;dx}\\ 1\leq \mu_k\leq \dfrac{\int_0^{\pi/2}\sin^{2k-1}x\;dx}{\int_0^{\pi/2}\sin^{2k+1}x\;dx}$$ We have $\dfrac{\int_0^{\pi/2}\sin^{2k-1}x\;dx}{\int_0^{\pi/2}\sin^{2k+1}x\;dx}=\dfrac{[2^{k-1}(k-1)!]^2}{(2k-1)!}\dfrac{(2k+1)!}{(2^k k!)^2}=\dfrac{(2k+1)2k}{4k^2}=\dfrac{2k+1}{2k}$, hence the claim.
By the sandwich theorem, since $\lim_{k\to \infty} \dfrac{2k+1}{2k}=1$, we have $\mu_k\to 1$ as $k\to \infty$. But $\mu_k\to 1$ implies $\sqrt{\mu_k}\to 1$.
Question: Why does $\mu_k$ tend to $1$ intuitively?
We can look at the graphs of powers of $\sin x$. When the powers are close to each other, their graphs are very similar. That's why the ratio of the two integrals tends to $1$.
Now, $$\mu_k=\dfrac{\pi}{2}\dfrac{(2k)!(2k+1)!}{(2^k k!)^4}\\ \sqrt{\mu_k}=\dfrac{\sqrt{\pi}}{\sqrt{2}}\dfrac{\sqrt{(2k)!(2k+1)!}}{(2^k k!)^2}=\sqrt{\pi}\dfrac{\sqrt{2k+1}}{\sqrt{2}}\dfrac{(2k)!}{(2^k k!)^2} \to 1.$$ Since $\dfrac{\sqrt{2k+1}}{\sqrt{2k}}\to 1$, we may replace $\dfrac{\sqrt{2k+1}}{\sqrt{2}}$ by $\sqrt{k}$ without changing the limit. It follows that $$\sqrt{\pi}=\lim_{k\to \infty} \dfrac{(2^k k!)^2}{(2k)!\sqrt{k}}$$ For the approximation of $\pi$, just take the square of the limit.
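As a quick numerical illustration (a Python sketch; the names `mu` and `pi_approx` are ours), one can check the bounds $1\leq \mu_k\leq \frac{2k+1}{2k}$ and watch the approximation of $\pi$ converge:

```python
from math import factorial, pi

def mu(k):
    # mu_k = (pi/2) * (2k)!(2k+1)! / (2^k k!)^4, from the closed forms above.
    # Divide the big integers first to avoid float overflow.
    num = factorial(2 * k) * factorial(2 * k + 1)
    den = (2 ** k * factorial(k)) ** 4
    return pi / 2 * (num / den)

def pi_approx(k):
    # Wallis-type approximation: pi ~ (k!)^4 2^(4k) / ((2k)!^2 k)
    return factorial(k) ** 4 * 2 ** (4 * k) / (factorial(2 * k) ** 2 * k)

for k in (1, 10, 100):
    assert 1 <= mu(k) <= (2 * k + 1) / (2 * k) + 1e-12
# Convergence is slow: the error is on the order of 1/k.
assert abs(pi_approx(1000) - pi) < 2e-3
```

The slow $O(1/k)$ convergence is why the Wallis product is of mostly theoretical interest rather than a practical way to compute $\pi$.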
The Wallis formula is one of the many formulas that approximate $\pi$. There are other proofs of this formula. See the notes by Steven R. Dunbar.
More to explore
notes by Steven R. Dunbar
math fun facts
notes by Ben Lynn
paper
Friday, 6 May 2016
Wallis Formula
Sunday, 1 May 2016
Interesting results related to $a+b-ab$
Abstract Algebra
Consider a unital ring $R$ with identity $1$. If $a\in R$ has a right inverse $b$, then writing $a=1-z$ and $b=1-w$, we have $$1=ab=(1-z)(1-w)=1-z-w+zw.$$ The condition on $z$ and $w$ is thus $$z+w-zw=0.$$ Since this condition does not involve the identity, we can use it for an arbitrary ring.
Definitions: Let $R$ be an arbitrary ring. An element $z\in R$ is called right (left) quasi-regular if there exists an element $w$ in $R$ such that $z+w-zw=0$ $(z+w-wz=0)$. The element $w$ is called a right (left) quasi-inverse of $z$.
Let $R$ be a unital ring with identity $1$. Then $z\in R$ is called right (left) quasi-regular if $1-z$ has a right (left) inverse.
If $z$ is both left and right quasi-regular, then $z$ is quasi-regular.
Let $R$ be a commutative ring and define a binary composition (known as the circle composition) in $R$ by $$a\cdot b=a+b-ab.$$ One can check that this composition is associative, so $(R,\cdot)$ is a monoid. Here in $(R,\cdot)$, the identity is not $1$, but $0$: $a\cdot 0=a=0\cdot a$. The set of quasi-regular elements $Q$ consists of the elements $a$ s.t. $a\cdot b=0$ and $b\cdot a=0$ for some $b$, namely the units of $(R,\cdot)$. We know that the set of units of a monoid forms a group. It follows that the set of quasi-regular elements together with the circle composition, $(Q,\cdot)$, is a group.
Claim: The mapping $$\phi:Q\to U(R)\\ a\mapsto 1-a$$ is an isomorphism of $Q$ onto $U(R)$. [$U(R)$ denotes the group of units of $R$.]
Proof: It is clear that the map is a bijection with inverse $a\mapsto 1-a$. ($1-a\mapsto 1-(1-a)=a$)
$\phi(a\cdot b)=\phi(a+b-ab)=1-a-b+ab=(1-a)(1-b)=\phi(a)\phi(b)$. $\Box$
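The claim can be verified concretely in a small ring such as $\Bbb Z_{12}$. The brute-force Python sketch below (the names `circle`, `units`, `Q`, and `phi` are ours) checks that $\phi$ is a bijection from $Q$ onto the units and that it respects the two compositions:

```python
n = 12  # work in Z/12

def circle(a, b):
    # circle composition a.b = a + b - ab, reduced mod n
    return (a + b - a * b) % n

def phi(a):
    # the claimed isomorphism a |-> 1 - a
    return (1 - a) % n

units = {u for u in range(n) if any((u * v) % n == 1 for v in range(n))}
Q = {a for a in range(n) if any(circle(a, b) == 0 == circle(b, a) for b in range(n))}

assert {phi(a) for a in Q} == units          # bijection onto U(Z/12)
for a in Q:
    for b in Q:
        # homomorphism: phi(a.b) = phi(a)phi(b)
        assert phi(circle(a, b)) == (phi(a) * phi(b)) % n
```

In $\Bbb Z_{12}$ the units are $\{1,5,7,11\}$ and the quasi-regular elements are $\{0,2,6,8\}$, matching under $a\mapsto 1-a$.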
Claim: Any nilpotent element is quasi-regular.
Proof: If $r^n=0$, and $s=1+r+r^2+\cdots+r^{n-1}$, then $(1-r)s=s(1-r)=1$, so nilpotent elements are quasi-regular. $\Box$
Corollary: If $r$ is nilpotent, then both $1+r$ and $1-r$ are units. [For $1+r$, take $s=\sum_{i=0}^{n-1} (-1)^i r^i$]
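A minimal check of the corollary in $\Bbb Z_{16}$, where $r=4$ is nilpotent ($4^2\equiv 0$); the geometric-series inverses from the proof are computed directly (a Python sketch, variable names are ours):

```python
n, r, N = 16, 4, 2  # in Z/16, r = 4 satisfies r**N == 0

# Inverse of 1 - r: s = 1 + r + ... + r^(N-1)
s_minus = sum(r ** i for i in range(N)) % n
# Inverse of 1 + r: alternating geometric series, as in the corollary
s_plus = sum((-1) ** i * r ** i for i in range(N)) % n

assert ((1 - r) * s_minus) % n == 1
assert ((1 + r) * s_plus) % n == 1
```

Here $1-r\equiv 13$ with inverse $5$, and $1+r\equiv 5$ with inverse $13$, so both are indeed units.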
Logic
Consider a truth function s.t. $T(A)=1$ if the statement $A$ is true; $T(A)=0$ if $A$ is false. To generalise this, let $T(A)=a$, $T(B)=b$. $$T(A\cup B)=a+b-ab\\ T(A\cap B)=ab$$ The use of $T$ allows one to reduce logical problems to algebraic equations. For more information, one can refer to a text on Boolean algebra. There are similar results for characteristic functions and sets.
$$\chi_{A\cup B}=\chi_A+\chi_B-\chi_{A\cap B}\\
\chi_{A\cap B}=\chi_A\chi_B\\
\chi_{A\triangle B}=\chi_A+\chi_B-2\chi_{A\cap B}\\ \\ |A\cup B|=|A|+|B|-|A\cap B|$$
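These identities are easy to confirm on concrete finite sets; the Python sketch below (the helper name `chi` is ours) checks each one pointwise:

```python
U = range(10)                      # ambient universe
A, B = {1, 2, 3, 4}, {3, 4, 5}

def chi(S):
    """Characteristic function of the set S."""
    return lambda x: 1 if x in S else 0

cA, cB = chi(A), chi(B)
for x in U:
    assert chi(A | B)(x) == cA(x) + cB(x) - cA(x) * cB(x)   # union
    assert chi(A & B)(x) == cA(x) * cB(x)                   # intersection
    assert chi(A ^ B)(x) == cA(x) + cB(x) - 2 * cA(x) * cB(x)  # symmetric difference

# Summing chi over the universe gives inclusion-exclusion for cardinalities
assert len(A | B) == len(A) + len(B) - len(A & B)
```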
More to explore:
idempotents
orthogonal idempotents
Jacobson radical
Saturday, 30 April 2016
Find all $x\in \Bbb Z_{85}$ s.t. $(x+2)^{100}=0$.
I: Taking Logarithm
Solving $(x+2)^{100}=0$ for $x\in \Bbb Z_{85}$ is equivalent to finding $x\in \Bbb Z_{85}$ s.t. $(x+2)^{100}=85y$. $$\begin{align}(x+2)^{100}&=85y\\ 100\log(x+2)&=\log(85y)\\ \log(x+2)&=\dfrac{\log(85y)}{100}\\ x+2&=10^{\log(85y)/100}\end{align}$$ The only integer values of $10^{\log(85y)/100}$ are $85,85^2,\cdots,$ (powers of $85$). [For example, when $y=85^{99}$, $10^{\log(85y)/100}=10^{\log(85)}=85$. When $y=85^{199}$, $10^{\log(85y)/100}=10^{2\log(85)}=85^2$.] It follows that $x=85-2,85^2-2,\cdots,$ (powers of $85$)$-2$. But all these values are equal to $-2\equiv_{85} 83$. Therefore, the only $x\in \Bbb Z_{85}$ satisfying the equation is $83$.
II: Solving System of Congruences
Solving $(x+2)^{100}=0$ for $x\in \Bbb Z_{85}$ is equivalent to solving $(x+2)^{100}\equiv_{85} 0$. Note that $85=5\cdot 17$. We can solve the following system of congruences: $$(x+2)^{100}\equiv_5 0\\ (x+2)^{100}\equiv_{17} 0.$$ Some calculations reveal that $$x\equiv_5 3\\ x\equiv_{17} 15.$$ Therefore, there exists $y\in \Bbb Z$ s.t. $x=3+5y$ and thus $$3+5y\equiv_{17} 15\\ 5y\equiv_{17} 12\\ y\equiv_{17} 12\cdot 7\\ y\equiv_{17} 16.$$ Finally, there exists $z\in \Bbb Z$ s.t. $y=16+17z$, so $x=3+5(16+17z)=83+85z\equiv_{85} 83$. Alternatively, by the Chinese Remainder Theorem, the system has a unique solution $x$ modulo $85$. We have $$n_1=5,n_2=17;\;\;\;\text{and}\;\;\;n_1'=17,n_2'=5\\t_1=-2,t_2=7\\x=a_1t_1n_1'+a_2t_2n_2'=3\cdot (-2)\cdot 17+15\cdot 7\cdot 5=423\equiv_{85} 83.$$
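Since $\Bbb Z_{85}$ is finite, the answer can also be confirmed by brute force; a short Python check using the built-in three-argument `pow` for modular exponentiation:

```python
# Try every residue class in Z_85 and keep those with (x+2)^100 = 0 mod 85.
solutions = [x for x in range(85) if pow(x + 2, 100, 85) == 0]
assert solutions == [83]
```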
Friday, 29 April 2016
Dimension of Vector Space
Determine whether each of the following is a vector space and find the dimension and a basis for each that is a vector space:
i) $\mathbb{C}$ over $\mathbb{C}$.
ii) $\mathbb{C}$ over $\mathbb{R}$.
iii) $\mathbb{R}$ over $\mathbb{C}$.
iv) $\mathbb{R}$ over $\mathbb{Q}$.
v) $\mathbb{Q}$ over $\mathbb{R}$.
vi) $\mathbb{Q}$ over $\mathbb{Z}$.
vii) $\mathbb{S}=\{a+b\sqrt{2}+c\sqrt{5}\:|\:a,b,c \in \mathbb{Q}\}$ over $\mathbb{Q}$, $\mathbb{R}$, or $\mathbb{C}$.
Answers:
i) Yes. $\{1\}$ is a basis. Dimension is $1$.
[Say $1+2i,1+i\in \Bbb C$. $(1+2i)\cdot (1+i)=-1+3i=1\cdot (-1+3i)$, where $1$ is the basis element and $-1+3i$ is in the field $\Bbb C$.]
ii) Yes. $\{1,i\}$ is a basis. Dimension is $2$.
[Say $1+i\in \Bbb C$ and $2\in \Bbb R$. $2\cdot (1+i)=2+2i=2\cdot 1+2\cdot i$, where $1,i$ are elements in the basis and $2$ is in the field $\Bbb R$.]
iii) No. $i\in \Bbb C$ and $1\in \Bbb R$, but $i \cdot 1=i\notin \Bbb R$.
iv) Yes. $\{1,\pi,\pi^2,\cdots\}$ is linearly independent over $\Bbb Q$ because $\pi$ is transcendental. Dimension is infinite.
v) No. $\sqrt{2}\in \Bbb R$ and $1\in \Bbb Q$, but $\sqrt{2} \cdot 1=\sqrt{2}\notin \Bbb Q$.
vi) No. $\Bbb Z$ is not a field.
vii) Yes only over $\Bbb Q$. $\{1,\sqrt{2},\sqrt{5}\}$ is a basis. Dimension is $3$.
From iii and v, we see that the scalar field must always be 'smaller' than (contained in) the field it acts on as a vector space. In fact, this is related to the concept of field extension (to be discussed in our next post).
Let $V=\{(x,y)\:|\:x,y\in \Bbb C\}$. Under the standard addition and scalar multiplication for ordered pairs of complex numbers, is $V$ a vector space over $\Bbb C$? Over $\Bbb R$? Over $\Bbb Q$? If so, find the dimension of $V$.
Answers:
From the previous question, we know $\{(1,0),(0,1)\}$ form a basis for $V$ over $\Bbb C$. Dimension is $2$. $\{(1,0),(i,0),(0,1),(0,i)\}$ form a basis for $V$ over $\Bbb R$. Dimension is $4$. Lastly, the dimension of $V$ over $\Bbb Q$ is infinite.
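The real-dimension count for $V$ can be illustrated directly: every pair of complex numbers is determined by four real coordinates relative to the basis $\{(1,0),(i,0),(0,1),(0,i)\}$. A small Python sketch (variable names are ours):

```python
# An arbitrary element of V = C^2
x, y = 3 - 2j, 1 + 5j

# Its four real coordinates in the basis {(1,0), (i,0), (0,1), (0,i)} over R
coords = [x.real, x.imag, y.real, y.imag]

# Reconstruct (x, y) as a real linear combination of the basis vectors
rebuilt = (coords[0] * 1 + coords[1] * 1j,
           coords[2] * 1 + coords[3] * 1j)
assert rebuilt == (x, y)
assert len(coords) == 4  # dimension of V over R
```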
Saturday, 23 April 2016
A problem on diagonalisation
Give examples of the following types of operators defined by a $2\times 2$ matrix or explain why such an operator can't exist:
i) diagonalisable, invertible,
ii) diagonalisable, not invertible,
iii) not diagonalisable, invertible,
iv) not diagonalisable, not invertible.
Let's just consider the case in $\Bbb R$.
i) $$\boxed{\begin{pmatrix}a&0\\0&b\end{pmatrix},\quad \text{a and b are not necessarily different},\quad a,b\neq 0}$$ Any diagonal matrix $D$ is diagonalisable: $$I_n DI_n=D.\\ \boxed{\begin{pmatrix}a&b\\b&a\end{pmatrix},\quad a\neq b,\quad a\neq -b,\quad\text{a can be zero.}}$$ Calculations: $(a-\lambda)^2-b^2=0\Rightarrow \lambda=a-b,a+b$. $$\ker\begin{pmatrix}a-(a+b)&b\\b&a-(a+b)\end{pmatrix}=\text{span}\bigg\{\begin{pmatrix}1\\1\end{pmatrix}\bigg\}\\ \ker\begin{pmatrix}a-(a-b)&b\\b&a-(a-b)\end{pmatrix}=\text{span}\bigg\{\begin{pmatrix}1\\-1\end{pmatrix}\bigg\}$$ The two eigenvectors form an eigenbasis. Alternatively, one can use the result: if $A\in M_{n\times n}(\Bbb F)$ has $n$ distinct eigenvalues in $\Bbb F$, then $A$ is diagonalisable over $\Bbb F$. $$\boxed{\begin{pmatrix}a&b\\b&c\end{pmatrix},\quad a\neq c,\quad b\neq 0,\quad\text{a can be zero.}}$$ The calculation is similar, but involves the use of quadratic formula. Since $a\neq c$ and $b\neq 0$, we have two distinct eigenvalues. Symmetric matrices are thus diagonalisable. $$\boxed{\begin{pmatrix}a&b\\0&c\end{pmatrix},\quad a\neq c,\quad b\neq 0}$$ Upper triangular matrices with distinct diagonal entries are diagonalisable because of the same reason: they have distinct eigenvalues. The above matrices are invertible since their determinants are non-zero.
ii) $$\boxed{\begin{pmatrix}a&0\\0&0\end{pmatrix},\quad a\neq 0}$$ Again, any diagonal matrix is diagonalisable. $$\boxed{\begin{pmatrix}a&a\\a&a\end{pmatrix},\quad a\neq 0}$$ Calculation: $(a-\lambda)^2-a^2=0\Rightarrow \lambda=0,2a$. When $a=1$, the matrix is called matrix of ones. $$\boxed{\begin{pmatrix}a&a\\b&b\end{pmatrix}, \begin{pmatrix}a&b\\a&b\end{pmatrix},\quad a+b\neq 0,\;\;\text{a,b not both zero.}}$$Calculation: $(a-\lambda)(b-\lambda)-ab=0\Rightarrow \lambda=0,a+b$.
iii) $$\boxed{\begin{pmatrix}a&b\\0&a\end{pmatrix},\begin{pmatrix}a&0\\b&a\end{pmatrix},\quad \text{a and b are not necessarily different},\quad a,b\neq 0}$$ These are known as the shear transformations, which are not diagonalisable. Proof by contradiction: Suppose $\begin{pmatrix}a&b\\0&a\end{pmatrix}$ is diagonalisable. Then there exist a diagonal matrix $D$ and an invertible matrix $V$ s.t. $$\begin{pmatrix}a&b\\0&a\end{pmatrix}=V^{-1}DV.$$ We know just by looking at the matrix that $a$ is an eigenvalue with multiplicity $2$. So $D=\begin{pmatrix}a&0\\0&a\end{pmatrix}=aI_2$. We then have $\begin{pmatrix}a&b\\0&a\end{pmatrix}=aV^{-1}I_2V=aI_2=\begin{pmatrix}a&0\\0&a\end{pmatrix}$ which is a contradiction.
iv) $$\boxed{\begin{pmatrix}0&a\\0&0\end{pmatrix},\quad a\neq 0}$$ Its only eigenvalue is $0$ of multiplicity $2$.
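For $2\times 2$ real matrices, the four cases can be tested mechanically: a matrix is diagonalisable over $\Bbb R$ iff the discriminant of its characteristic polynomial is positive, or the matrix is already scalar. A Python sketch (the helpers `diagonalisable` and `invertible` are ours, assuming exact integer entries) classifying one example from each case:

```python
def diagonalisable(A):
    """Diagonalisable over R: two distinct real eigenvalues, or scalar."""
    (a, b), (c, d) = A
    disc = (a - d) ** 2 + 4 * b * c   # discriminant of t^2 - (a+d)t + (ad-bc)
    scalar = b == c == 0 and a == d   # repeated eigenvalue is fine only if A = aI
    return disc > 0 or scalar

def invertible(A):
    (a, b), (c, d) = A
    return a * d - b * c != 0

D, J, S, N = [[3, 0], [0, 5]], [[1, 1], [1, 1]], [[2, 1], [0, 2]], [[0, 1], [0, 0]]
assert diagonalisable(D) and invertible(D)          # (i)   diagonal
assert diagonalisable(J) and not invertible(J)      # (ii)  matrix of ones
assert not diagonalisable(S) and invertible(S)      # (iii) shear
assert not diagonalisable(N) and not invertible(N)  # (iv)  nilpotent
```

The `scalar` branch encodes the fact used in the shear proof above: a $2\times 2$ matrix with a repeated eigenvalue $a$ is diagonalisable only if it equals $aI_2$.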
From this problem, we see that invertibility is neither necessary nor sufficient for diagonalisability! Another important fact that one can observe from (ii) and (iv) is that if a square matrix $A$ is not invertible, then $\lambda=0$ is an eigenvalue of $A$. The converse also holds: if $\lambda=0$ is an eigenvalue of $A$, then $A$ is not invertible.
Claim: $A\in M_{n\times n}(\Bbb F)$ is singular iff $\lambda=0$ is an eigenvalue of $A$.
Proof: $\lambda=0$ is a solution of the characteristic equation $\lambda^n+c_1\lambda^{n-1}+\cdots+c_n=0$ iff $c_n=0$. By definition, $\det(\lambda I-A)=\lambda^n+c_1\lambda^{n-1}+\cdots+c_n$. When $\lambda=0$, we have $\det(\lambda I-A)=\det(-A)=(-1)^n\det(A)=c_n$. We thus have $\det(A)=0$ iff $c_n=0$ iff $\lambda=0$ is an eigenvalue of $A$.$\;\Box$
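In the $2\times 2$ case the claim is transparent: the characteristic polynomial is $\lambda^2-\operatorname{tr}(A)\lambda+\det(A)$, so $\lambda=0$ is a root exactly when $\det(A)=0$. A minimal Python check (the helper name is ours):

```python
def char_poly_at_zero(A):
    """Value of det(tI - A) at t = 0 for a 2x2 matrix; equals det(-A) = det(A)."""
    (a, b), (c, d) = A
    return a * d - b * c

assert char_poly_at_zero([[1, 2], [2, 4]]) == 0   # singular: 0 is an eigenvalue
assert char_poly_at_zero([[1, 2], [3, 4]]) != 0   # invertible: 0 is not
```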
Here's a summary:
i) diagonalisable, invertible: symmetric, upper triangular, diagonal
ii) diagonalisable, not invertible: singular diagonal, scalar multiples of matrix of ones (note that a non-zero nilpotent matrix is never diagonalisable)
iii) not diagonalisable, invertible: shear
iv) not diagonalisable, not invertible: nilpotent.
I tried to enumerate all possibilities but failed. I can't imagine the amount of work it takes to characterise diagonalisation for $3\times 3$ matrices...
Questions:
$$\boxed{\begin{pmatrix}a&b\\c&d\end{pmatrix},\quad (a-d)^2=-4bc,\quad ad-bc\neq 0}$$ has repeated eigenvalues, so I consider it fitting (iii), but can the kernel corresponding to that one eigenvalue be spanned by two eigenvectors? Similar questions for $$\boxed{\begin{pmatrix}a&a\\a&b\end{pmatrix},\quad 5a^2+b^2=2ab,\quad a(b-a)\neq 0}\\
\boxed{\begin{pmatrix}a&a\\b&c\end{pmatrix},\quad a^2+c^2=2ac-4ab,\quad a(c-b)\neq 0}\\
\boxed{\begin{pmatrix}0&a\\b&c\end{pmatrix},\quad c^2=-4ab,\quad ab\neq 0}$$ I'm not even sure if there are other possibilities for (iii) and (iv).