Tutorial on Numerical Computation: doing maths with a computer

Eigensystems

In this part we review the basic Octave functions for obtaining the eigenvectors and eigenvalues of a matrix, i.e., for diagonalizing it.

Summary of the basic theory

(Important note: this section is merely a reminder, not a substitute for a real discussion of eigensystems. For details please refer to a serious book, such as David Lay’s Linear Algebra and Its Applications.)

An eigenvector of a matrix is a vector that, when multiplied by the matrix, results in a new vector parallel to itself. Let $A$ be an $N\times N$ matrix; then $v$ is an eigenvector if

$$ A v = \lambda v. $$

The number $\lambda$ is called the associated eigenvalue. Thus, the action of $A$ on the eigenvectors is simple: they get multiplied by the corresponding eigenvalue, that’s all.

(Most) matrices can be diagonalized, i.e., they can be brought to the form

$$ A = U D U^{-1}, $$

where $U$ is the matrix whose columns are the eigenvectors, $U^{-1}$ is its inverse, and $D$ is a diagonal matrix whose entries are the eigenvalues, in the same order as the eigenvectors.

Why is that form useful? One of the main reasons is that powers of the matrix $A$ become very easy to obtain. For example, $A^2$:

$$ A^2 = U D U^{-1}\, U D U^{-1} = U D^2 U^{-1}, $$

where we have used the fact that $UU^{-1}=U^{-1}U=I$, the identity matrix. But the square of a diagonal matrix is very easy to obtain: just square the eigenvalues. The same holds for any power of the matrix:

$$ A^n = U D^n U^{-1}. $$
And when do we need powers of a matrix? For example, when iterating a linear map. Consider that you’re given the initial vector $v_0=(1,1)^T$ and you’re asked about

$$ v_n = A^n v_0, $$

where

$$ A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}. $$

The first few values are easy to compute by hand:

$$ v_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 5 \\ 3 \end{pmatrix}, \ldots $$

So, the final question: how do we obtain $v_{10}$?

Eigensystems in Octave

Let’s play a bit with Octave. We define

> A = [ 1 1; 1 0 ];
> v0 = [ 1; 1];
> A^5*v0
ans =

   13
    8

One can readily observe that the vector entries are consecutive Fibonacci numbers, but we will leave the proof for the reader. :)
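If we prefer not to trust the pattern, we can also iterate the map step by step. This is just a quick check, reusing the $A$ and $v_0$ defined above:

> v = v0;
> for n = 1:5, v = A*v; disp(v'), end

Each step prints a pair of consecutive Fibonacci numbers, and the last one reproduces the (13, 8) found above.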

So, let us now ask Octave about the eigenvectors and eigenvalues of $A$.

> [U,D]=eig(A)
U =

   0.52573  -0.85065
  -0.85065  -0.52573

D =

Diagonal Matrix

  -0.61803         0
         0   1.61803

We asked Octave to store the answer into $U$ and $D$. The eigenvectors are given by the columns of $U$, [0.52573; -0.85065] and [-0.85065; -0.52573]. The corresponding eigenvalues are -0.61803 and 1.61803. We can check by applying $A$ on one of the eigenvectors:

> e1 = [ 0.52573; -0.85065 ];
> u1 = A * e1 
u1 =

    -0.32492
     0.52573

Did it work? How can we tell? We can divide the two vectors entry by entry, using the element-wise operator ./

> u1 ./ e1
ans =

    -0.61803
    -0.61803

which means that both vectors are parallel, and the quotient between their entries is the corresponding eigenvalue. Things are working nicely.
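We can also check all the eigenpairs at once, without retyping any numbers: the defining relation $AU = UD$ means that the following residuals should be zero up to rounding error (about $10^{-16}$). This is just a quick sketch using the $U$ and $D$ returned above:

> norm(A*U(:,1) - D(1,1)*U(:,1))   % residual for the first eigenpair
> norm(A*U - U*D)                  % all eigenpairs at once

Both return numbers of the order of machine precision, as expected.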

Powers of a matrix

First of all, let us try to rebuild matrix $A$ from its eigenvectors and eigenvalues. As stated above, $A=UDU^{-1}$. This is the most important equation regarding eigensystems.

> U * D * inv(U)
ans =

           1           1
           1  -8.7076e-17

So, up to a rounding error of order $10^{-16}$ in the bottom-right entry, we get the original matrix back. That’s fine.

Now, let us raise the eigenvalues to the desired power

> U * D^10 * inv(U)
ans =

          89          55
          55          34

(which, interestingly, only contains Fibonacci numbers!) This compares well to a direct computation of $A^{10}$:

> A^10
ans =

          89          55
          55          34

And now we can use it to find $v_{10}$:

> U * D^10 * inv(U) * v0
ans =

         144
          89
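As a quick consistency check, the direct computation gives the same vector, and 89 and 144 are again consecutive Fibonacci numbers:

> A^10 * v0   % should return the same 144 and 89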

Why eigensystems?

So, why all the fuss, when we could simply have computed $A^{10}$ directly? First of all, because diagonalizing is numerically faster than raising a matrix to a high power. Second (and most importantly), because eigenvalues appear in many branches of mathematics, physics and engineering in their own right. A brief list: they help solve differential equations, they yield the possible energy values of a quantum system, they give the axes of an ellipse (or ellipsoid), they provide the natural frequencies of oscillation of any structure, they show the rates of growth in stochastic phenomena… and many other applications.

So, how does Octave find the eigenvalues and eigenvectors? The answer goes beyond this tutorial. Numerical diagonalization is a difficult yet very useful problem, and numerical analysts have developed very efficient techniques for it. Very sketchily: they proceed by performing geometric transformations on the matrix, such as rotations and reflections in suitably chosen planes. It is one of the few cases where teachers will tell you: you don’t really need to worry about the details (at least not until you know much more math!).