Give examples of the following types of operators defined by a 2\times 2 matrix or explain why such an operator can't exist:
i) diagonalisable, invertible,
ii) diagonalisable, not invertible,
iii) not diagonalisable, invertible,
iv) not diagonalisable, not invertible.
Let's just consider the case over \Bbb R.
i) \boxed{\begin{pmatrix}a&0\\0&b\end{pmatrix},\quad a,b\neq 0,\quad \text{a and b are not necessarily different}}

Any diagonal matrix D is diagonalisable: I_n DI_n=D.

\boxed{\begin{pmatrix}a&b\\b&a\end{pmatrix},\quad a\neq b,\quad a\neq -b,\quad\text{a can be zero.}}

Calculation: (a-\lambda)^2-b^2=0\Rightarrow \lambda=a-b,\,a+b.
\ker\begin{pmatrix}a-(a+b)&b\\b&a-(a+b)\end{pmatrix}=\text{span}\bigg\{\begin{pmatrix}1\\1\end{pmatrix}\bigg\}
\ker\begin{pmatrix}a-(a-b)&b\\b&a-(a-b)\end{pmatrix}=\text{span}\bigg\{\begin{pmatrix}1\\-1\end{pmatrix}\bigg\}
The two eigenvectors form an eigenbasis. Alternatively, one can use the result: if A\in M_{n\times n}(\Bbb F) has n distinct eigenvalues in \Bbb F, then A is diagonalisable over \Bbb F.

\boxed{\begin{pmatrix}a&b\\b&c\end{pmatrix},\quad b\neq 0,\quad ac\neq b^2,\quad\text{a can be zero.}}

The calculation is similar but uses the quadratic formula: the discriminant is (a-c)^2+4b^2, which is positive whenever b\neq 0, so there are two distinct eigenvalues and the matrix is diagonalisable. (In fact, by the spectral theorem every real symmetric matrix is diagonalisable.) The extra condition ac\neq b^2 makes the determinant non-zero.

\boxed{\begin{pmatrix}a&b\\0&c\end{pmatrix},\quad a\neq c,\quad a,c\neq 0,\quad b\neq 0}

Upper triangular matrices with distinct diagonal entries are diagonalisable for the same reason: the eigenvalues are the diagonal entries, which are distinct. The condition a,c\neq 0 makes the determinant ac non-zero.

The above matrices are invertible since their determinants are non-zero.
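These claims are easy to sanity-check numerically. A minimal sketch in Python with NumPy, using one member of the symmetric family \begin{pmatrix}a&b\\b&a\end{pmatrix} (the values a = 2, b = 1 are an arbitrary choice satisfying a\neq\pm b):

```python
import numpy as np

# One member of the family (a b; b a) with a = 2, b = 1 (so a != ±b).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigendecomposition: w holds the eigenvalues, the columns of V the eigenvectors.
w, V = np.linalg.eig(A)
print(sorted(w))  # the predicted eigenvalues a - b = 1 and a + b = 3

# Diagonalisable: V^{-1} A V is diagonal, with the eigenvalues on the diagonal.
assert np.allclose(np.linalg.inv(V) @ A @ V, np.diag(w))

# Invertible: det A = a^2 - b^2 = 3 is non-zero.
assert not np.isclose(np.linalg.det(A), 0.0)
```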
ii) \boxed{\begin{pmatrix}a&0\\0&0\end{pmatrix},\quad a\neq 0}

Again, any diagonal matrix is diagonalisable.

\boxed{\begin{pmatrix}a&a\\a&a\end{pmatrix},\quad a\neq 0}

Calculation: (a-\lambda)^2-a^2=0\Rightarrow \lambda=0,\,2a. When a=1, this is called the matrix of ones.

\boxed{\begin{pmatrix}a&a\\b&b\end{pmatrix}, \begin{pmatrix}a&b\\a&b\end{pmatrix},\quad a+b\neq 0}

Calculation: (a-\lambda)(b-\lambda)-ab=0\Rightarrow \lambda=0,\,a+b. (The condition a+b\neq 0 already rules out a=b=0.)
iii) \boxed{\begin{pmatrix}a&b\\0&a\end{pmatrix},\begin{pmatrix}a&0\\b&a\end{pmatrix},\quad a,b\neq 0,\quad \text{a and b are not necessarily different}}

These are known as the shear transformations, which are not diagonalisable. Proof by contradiction: suppose \begin{pmatrix}a&b\\0&a\end{pmatrix} is diagonalisable. Then there exist a diagonal matrix D and an invertible matrix V s.t. \begin{pmatrix}a&b\\0&a\end{pmatrix}=V^{-1}DV. Just by looking at the matrix, we see that a is an eigenvalue of algebraic multiplicity 2, so D=\begin{pmatrix}a&0\\0&a\end{pmatrix}=aI_2. We then have \begin{pmatrix}a&b\\0&a\end{pmatrix}=aV^{-1}I_2V=aI_2=\begin{pmatrix}a&0\\0&a\end{pmatrix}, which contradicts b\neq 0. The condition a\neq 0 gives determinant a^2\neq 0, so the matrices are invertible.
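The defectiveness of the shear can also be verified numerically: the geometric multiplicity of the repeated eigenvalue a is \dim\ker(A-aI), i.e. 2 minus the rank of A-aI. A sketch with the arbitrary non-zero choices a = 3, b = 1:

```python
import numpy as np

# Shear with a = 3, b = 1.
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])

lam = 3.0                                  # the repeated eigenvalue a
E = A - lam * np.eye(2)                    # A - aI = (0 1; 0 0)
geo_mult = 2 - np.linalg.matrix_rank(E)    # dimension of the eigenspace
print(geo_mult)                            # 1 < 2: no eigenbasis, so not diagonalisable

assert not np.isclose(np.linalg.det(A), 0.0)  # det = a^2 = 9: still invertible
```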
iv) \boxed{\begin{pmatrix}0&a\\0&0\end{pmatrix},\quad a\neq 0} Its only eigenvalue is 0, of algebraic multiplicity 2, while the eigenspace \ker\begin{pmatrix}0&a\\0&0\end{pmatrix}=\text{span}\bigg\{\begin{pmatrix}1\\0\end{pmatrix}\bigg\} is one-dimensional, so there is no eigenbasis; the determinant is 0, so the matrix is not invertible.
From this problem, we see that invertibility is neither necessary nor sufficient for diagonalisability! Another important fact one can observe from (ii) and (iv) is that if a square matrix A is not invertible, then \lambda=0 is an eigenvalue of A. The converse also holds: if \lambda=0 is an eigenvalue of A, then A is not invertible.
Claim: A\in M_{n\times n}(\Bbb F) is singular iff \lambda=0 is an eigenvalue of A.
Proof: \lambda=0 is a solution of the characteristic equation \lambda^n+c_1\lambda^{n-1}+\cdots+c_n=0 iff c_n=0. By definition, \det(\lambda I-A)=\lambda^n+c_1\lambda^{n-1}+\cdots+c_n. When \lambda=0, we have \det(\lambda I-A)=\det(-A)=(-1)^n\det(A)=c_n. We thus have \det(A)=0 iff c_n=0 iff \lambda=0 is an eigenvalue of A.\;\Box
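The claim is easy to illustrate numerically, checking both directions on matrices from (ii) and (i):

```python
import numpy as np

# Singular matrix from (ii): the matrix of ones (a = b = 1).
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
assert np.isclose(np.linalg.det(A), 0.0)              # det A = 0: singular
assert np.any(np.isclose(np.linalg.eigvals(A), 0.0))  # so 0 is an eigenvalue

# Invertible matrix from (i): upper triangular with distinct non-zero diagonal.
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])
assert not np.isclose(np.linalg.det(B), 0.0)              # det B = 6 != 0
assert not np.any(np.isclose(np.linalg.eigvals(B), 0.0))  # no zero eigenvalue
```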
Here's a summary:
i) diagonalisable, invertible: symmetric, upper triangular, diagonal
ii) diagonalisable, not invertible: singular diagonal, rank-one (e.g. scalar multiples of the matrix of ones)
iii) not diagonalisable, invertible: shear
iv) not diagonalisable, not invertible: nilpotent.
I tried to enumerate all possibilities but failed. I can't imagine the amount of work it takes to characterise diagonalisation for 3\times 3 matrices...
Questions:
\boxed{\begin{pmatrix}a&b\\c&d\end{pmatrix},\quad (a-d)^2=-4bc,\quad ad-bc\neq 0} has a repeated eigenvalue, so I consider it fitting (iii), but can the eigenspace corresponding to that single eigenvalue be spanned by two eigenvectors? Similar questions for \boxed{\begin{pmatrix}a&a\\a&b\end{pmatrix},\quad 5a^2+b^2=2ab,\quad a(b-a)\neq 0}\\
\boxed{\begin{pmatrix}a&a\\b&c\end{pmatrix},\quad a^2+c^2=2ac-4ab,\quad a(c-b)\neq 0}\\
\boxed{\begin{pmatrix}0&a\\b&c\end{pmatrix},\quad c^2=-4ab,\quad ab\neq 0} I'm not even sure if there are other possibilities for (iii) and (iv).
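One member of the first family can at least be checked numerically. Taking the arbitrary choice a=1, b=1, c=-1, d=3 gives (a-d)^2=4=-4bc and \det=4\neq 0; the repeated eigenvalue is 2, and its eigenspace turns out to be one-dimensional, so this instance does fit (iii). (A 2\times 2 matrix whose repeated eigenvalue \lambda has a two-dimensional eigenspace must equal \lambda I, which would force b=c=0 and a=d.)

```python
import numpy as np

# One member of (a b; c d) with (a-d)^2 = -4bc and det != 0:
# a = 1, b = 1, c = -1, d = 3.
A = np.array([[1.0, 1.0],
              [-1.0, 3.0]])

w = np.linalg.eigvals(A)
assert np.allclose(w, 2.0)  # repeated eigenvalue 2

lam = 2.0
geo_mult = 2 - np.linalg.matrix_rank(A - lam * np.eye(2))
print(geo_mult)                               # 1: only one independent eigenvector
assert not np.isclose(np.linalg.det(A), 0.0)  # invertible, so this fits (iii)
```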