
a) Consider the transformation $T$ in the space $M^{2\times2}$ of $2\times2$ matrices, $T(A)=A^{\top}$. Find all its eigenvalues and eigenvectors. Is it possible to diagonalize this transformation?

b) Can you do the same problem but in the space of $n\times n$ matrices?

Since every linear transformation can be represented as a matrix, there should be a matrix that, when it multiplies a $2 \times 2$ matrix, turns it into its transpose. But based on what I've read so far, no such matrix exists. I might be misunderstanding the question, but if we can't find this matrix, how are we supposed to find its eigenvalues and eigenvectors?

Comments:

  • Note that the space of $2\times2$ matrices is $4$-dimensional, so the matrix representing a transformation in the space of $2\times2$ matrices will be $4\times4$.
  • You've just got to think about how you would represent a $2\times2$ matrix as a column vector; I guess it would live in $\mathbb R^4$. Since the transpose swaps two elements in a matrix, the corresponding transformation of $\mathbb R^4$ would do the same. The $4\times4$ matrices that do this are called permutation matrices. If you study those, I think you'll find your answer.
  • If $(A+B)^t=A^t+B^t$, and $(cA)^t=cA^t$, then it's linear, right, William?
  • You have a map $T$ from a vector space to itself. You have a definition of what it means for that map to be linear. You apply that definition to that map, using what you know about the transpose. Then, you tell me whether or not it's a linear transformation on that vector space.
  • You have only read the first half of that top answer. Please go back and read the second half, William.

2 Answers


To solve this problem, my suggestion is to use an isomorphism with $\mathbb R^4$ first. Since the domain is $M^{2\times2}$, a natural basis is $\begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0\\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0\\ 0 & 1 \end{bmatrix}$, which I name $\mathcal{B}=\{v_1,v_2,v_3,v_4\}$ in that order.

Then the transpose gives $Tv_1=v_1,\;Tv_2=v_3,\;Tv_3=v_2,\;Tv_4=v_4$.

Thus $[T]_{\mathcal{B}} =\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\\end{bmatrix}$, and the characteristic polynomial is $\det(\lambda I-[T]_\mathcal{B})=(\lambda-1)^3(\lambda+1)$.

Then compute $\operatorname{Ker}(T-\lambda I)$ for each eigenvalue.

For $\lambda = 1$, $\text{Ker}(T-I)=\text{span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \end{bmatrix} \right\}$.

For $\lambda = -1$, $\text{Ker}(T+I)=\text{span}\left\{ \begin{bmatrix} 0 \\ 1 \\ -1 \\ 0 \end{bmatrix}\right\}$.

Since these four eigenvectors together span $\mathbb R^4$, you can diagonalize the transformation.
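
As a sanity check (not part of the argument above), here is a small numpy sketch that builds $[T]_\mathcal{B}$ and confirms the eigenvalues $1,1,1,-1$ and the diagonalization; the variable names are my own.

```python
import numpy as np

# Matrix of T(A) = A^T in the ordered basis B = {v1, v2, v3, v4} above
T_B = np.array([
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
], dtype=float)

# T_B is symmetric, so eigh returns an orthonormal eigenbasis
eigvals, P = np.linalg.eigh(T_B)
print(np.round(eigvals, 6))      # [-1.  1.  1.  1.]

# P^{-1} [T]_B P is diagonal, so the transformation is diagonalizable
D = np.linalg.inv(P) @ T_B @ P
print(np.round(D, 6))            # diag(-1, 1, 1, 1)
```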

When the problem moves to $M^{n\times n}$, you write a basis in the same way and order it as $\mathcal{B}=\{v_1,\dots,v_{n^2}\}$, preferably in a relatively symmetric way (so that $T$ visibly swaps the vectors corresponding to $E_{ij}$ and $E_{ji}$ and fixes those corresponding to $E_{ii}$). Then $[T]_{\mathcal{B}}$ is again a permutation matrix and the characteristic polynomial is easy to compute: the eigenvalue $1$ eigenspace is the space of symmetric matrices, of dimension $n(n+1)/2$, and the eigenvalue $-1$ eigenspace is the space of antisymmetric matrices, of dimension $n(n-1)/2$, so $T$ is diagonalizable for every $n$.
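
If you want to see this pattern concretely for general $n$, here is a short sketch (my own helper `transpose_matrix`, using the row-stacked basis $E_{11},E_{12},\dots,E_{nn}$) that builds the permutation matrix of $T$ and counts the eigenvalue multiplicities:

```python
import numpy as np

def transpose_matrix(n):
    """Permutation matrix of T(A) = A^T on n x n matrices, in the row-stacked basis E_ij."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[n * j + i, n * i + j] = 1   # T sends E_ij to E_ji
    return K

for n in range(2, 6):
    eigvals = np.linalg.eigvalsh(transpose_matrix(n))
    ones = int(np.sum(np.isclose(eigvals, 1)))         # symmetric part, n(n+1)/2
    minus_ones = int(np.sum(np.isclose(eigvals, -1)))  # antisymmetric part, n(n-1)/2
    print(n, ones, minus_ones)   # 2 3 1, 3 6 3, 4 10 6, 5 15 10
```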


When you're treating matrices as objects in a vector space, instead of as representations of a transformation, it's often useful to think of them as vectors with components.

A $2\times2$ matrix has $4$ components, a $3\times3$ matrix has $9$ components, a $4\times4$ matrix has $16$ components, and so on.

Every 2x2 matrix can be written uniquely as:

$$\begin{bmatrix}a & b \\ c & d\end{bmatrix} = a\begin{bmatrix}1 & 0 \\ 0 &0 \end{bmatrix} + b\begin{bmatrix}0 & 1 \\ 0 &0 \end{bmatrix} + c\begin{bmatrix}0 & 0 \\ 1 &0 \end{bmatrix}+d\begin{bmatrix}0 & 0 \\ 0 &1 \end{bmatrix}$$

In other words, these four matrices on the right hand side form a basis for the space.


Because we're treating these matrices as objects in a vector space, you are free to write them as vectors if you like.

$$\begin{bmatrix}a & b \\ c & d\end{bmatrix} \leadsto \begin{bmatrix}a \\ b\\ c\\d\end{bmatrix}$$

Written in this form, transposition is the transformation:

$$T : \begin{bmatrix}a \\ b\\ c\\d\end{bmatrix} \mapsto \begin{bmatrix}a \\ c\\ b\\d\end{bmatrix}$$

which you can write in matrix form as:

$$\underbrace{\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 1& 0 & 0\\ 0&0&0&1\end{bmatrix}}_T\begin{bmatrix}a \\ b\\ c\\d\end{bmatrix} = \begin{bmatrix}a \\ c\\ b\\d\end{bmatrix}$$

and you can do the rest.
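
If it helps, here is a tiny numpy check (with my own example values) that the $4\times4$ matrix above really performs transposition on the flattened vector:

```python
import numpy as np

# The 4x4 matrix representing transposition on the coordinates (a, b, c, d)
T = np.array([
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
])

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
vec_A  = A.reshape(-1)      # (a, b, c, d) = (1, 2, 3, 4)
vec_At = A.T.reshape(-1)    # (a, c, b, d) = (1, 3, 2, 4)

print(np.array_equal(T @ vec_A, vec_At))   # True
```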


Of course, you can also get some intuition about transposition by thinking directly about eigenvectors, without relying on specific components.

If a linear transformation scales an object by a factor of $\lambda\neq 0$, then that object is an eigenvector with eigenvalue $\lambda$. (As a special case, if a linear transformation has no effect on an object, it's an eigenvector with eigenvalue $\lambda=1$.)

Let's exclude the zero matrix from consideration in everything that follows. There are three kinds of matrices that are unaffected by transposition $T$:

$$\begin{bmatrix}a & 0\\ 0 & 0\end{bmatrix}, \begin{bmatrix}0 & 0\\ 0 & b\end{bmatrix}, \begin{bmatrix}0 & c\\ c & 0\end{bmatrix}$$

As a result, these are eigenvectors of $T$, and they all have eigenvalue $1$. Their linear combinations form a three-dimensional space of matrices that are all eigenvectors with eigenvalue $1$:

$$\begin{bmatrix}a & c \\ c & b\end{bmatrix}\qquad \forall a, b, c$$
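
A quick way to confirm the same thing numerically (with my own example matrices): any symmetric matrix is fixed by transposition, hence an eigenvector with eigenvalue $1$, and, as "the rest" hinted at above, any nonzero antisymmetric matrix is an eigenvector with eigenvalue $-1$:

```python
import numpy as np

# A symmetric matrix is unchanged by transposition: eigenvalue 1
S = np.array([[2.0, 5.0],
              [5.0, -1.0]])
print(np.array_equal(S.T, S))      # True, i.e. T(S) = 1 * S

# An antisymmetric matrix is negated by transposition: eigenvalue -1
W = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.array_equal(W.T, -W))     # True, i.e. T(W) = -1 * W
```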
