Finite Vector Space Representations

The scope of this document

Linear vector space concepts are very useful to the data assimilator! This document contains a derivation of some important and fundamental results in linear vector space representations. The vector representation of fields and the matrix representation of linear operators are derived, together with the rules for transforming vectors and matrices to alternative representations, including their eigenrepresentation. Systems described by orthogonal basis members (in connection with hermitian operators) and skew basis members (in connection with non-hermitian operators) are discussed. The document ends with a description of operators that span two vector spaces, and the corresponding application of singular vectors.

Instructions to download and print

One of the following can be downloaded and printed.

A4 postscript gzipped download here,
A4 pdf file download here,
A5 postscript gzipped (booklet format) download here.

The booklet format is designed to be printed on a duplex postscript printer, using the special option to flip along the short edge when printing the back of the paper. The result is a 21-page A5 booklet when folded down the middle. In Reading's Meteorology Dept., use the unix utility launched by typing
qtcups postscript-file-name
and choose the printer and the special duplex printing option from the window that will appear. The A4 versions can be printed as usual.

As a taster, the first section of the document is reproduced below.

What is a linear vector space?

The linear vector space representation of fields and operators is a powerful component in the toolbox of the mathematician, physicist, engineer and statistician. It enters in the analysis of a huge range of problems. Whether they come under the heading of "generalised fourier methods", "matrix methods", or "linear algebra", these techniques are especially useful in the modern day because they allow problems to be formulated in a manner that is directly suitable for numerical solution by computer.

Vectors and matrices are structures that allow information to be represented in a systematic way. For our purposes a vector is a quantity that is a collection of numerical information. Along with the vector is its representation (the meanings attached to each element), which defines a linear vector space. Examples arise in basic physics, some quantities being fundamentally vector in nature, such as velocity or force, which exist in two- or three-dimensional space. The representation in these examples is often of three cartesian directions of space (the bases), and the vector can be plotted, having a direction and a length. In such an example, the particular representation is not rigid. The representation can be changed by choosing different cartesian directions, by a rotation of the original axes. In the new representation, the components of the vector will have changed, not because the vector itself has changed, but because the representation has, and replotting the vector will yield the same absolute direction and length as it did before.
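The invariance under rotation of the axes can be checked numerically. The following sketch (the vector and rotation angle are arbitrary choices for illustration) rotates the basis of a two-dimensional vector and confirms that the components change while the length does not:

```python
import numpy as np

# A hypothetical 2-D vector (e.g. a velocity) in the original cartesian basis.
v = np.array([3.0, 4.0])

# Rotate the axes by 30 degrees: the components in the new basis are R @ v.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
v_new = R @ v

# The components change, but the length (and hence the vector itself) does not.
print(v_new)
print(np.linalg.norm(v), np.linalg.norm(v_new))  # both 5.0
```

The rotation matrix R is orthogonal (its inverse is its transpose), which is exactly the property that preserves lengths and angles under the change of representation.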

Vectors can have wider, but abstract, applications too. Commonly a field is represented as a vector. An example might be a two-dimensional field of some quantity constructed of a number of values on a finite grid covering the domain of the field. A possible representation of a vector might then be a sequence of kronecker delta functions at each position on the grid (these are a simple choice of bases). By associating each delta function with an element of the vector, the elements become the values of the field at each grid point. As in the case of the velocity or force, the vector will have a direction and length, not in two- or three-dimensional space, but in an abstract n-dimensional space, where n is the number of grid points in the representation. This has a powerful conceptual value as all of the algebra that deals with vectors can now be applied to fields.
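In the delta-function representation, forming the vector from the field amounts to listing the grid-point values in a fixed order. A minimal sketch (the grid size is an arbitrary choice for illustration):

```python
import numpy as np

# A hypothetical 2-D field sampled on a small 3 x 4 grid.
field = np.arange(12.0).reshape(3, 4)

# In the kronecker delta (grid-point) representation, the field becomes a
# vector whose i-th element is the field value at the i-th grid point.
v = field.ravel()

print(v.shape)  # (12,) -- a vector in an abstract 12-dimensional space
```

Once the field is flattened in this way, all of the machinery of vector algebra (inner products, norms, matrix operators) applies to it directly.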

In this abstract application, there is the analogue of rotation of the basis. A new 'rotated' set of axes could be constructed by projecting the field onto an alternative basis set. A familiar example is the fourier decomposition of the field, which would yield a set of fourier coefficients that correspond to a representation of plane waves, each with a different wavenumber, instead of delta functions (the large dimensionality of the vector space makes it difficult to actually imagine the fourier transformation as a rotation!). Thus, by performing a fourier transform, we have represented the same field (i.e. vector) as an alternative set of coefficients. We can have either the vector in its real-space delta function representation, or the vector in its fourier-space representation.
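The fourier transform as a 'rotation' of the basis can be illustrated numerically. In the sketch below (the field and grid size are arbitrary choices), the orthonormal convention is used so that the transform is unitary, and the length of the vector is the same in both representations:

```python
import numpy as np

# A field on an 8-point periodic grid (delta-function representation).
x = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
f = np.sin(x) + 0.5 * np.cos(2.0 * x)

# Orthonormal discrete fourier transform: the same vector in its
# fourier-space (plane-wave) representation.
f_hat = np.fft.fft(f, norm="ortho")

# Because the transform is unitary (a 'rotation'), the length is unchanged.
print(np.allclose(np.linalg.norm(f), np.linalg.norm(f_hat)))  # True

# Transforming back recovers the original representation.
print(np.allclose(np.fft.ifft(f_hat, norm="ortho").real, f))  # True
```

The equality of the two norms is Parseval's theorem, which is precisely the statement that the change of representation preserves the 'length' of the vector.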

Just as vectors can be used to represent fields, matrices are (linear) operators that act on these fields. For the purposes of this document, operators may be thought of as coming in two flavours. First there are the transformation or rotation operators - these are the kinds of operators that rotate the basis, transforming the representation. Acting with such a matrix does not fundamentally yield a new vector, but yields the same vector in a new representation, specified by the matrix. The fourier transform is an example of such an operator. The second kind of operator might be referred to as a physical or statistical operator, which performs a specific physical or statistical role. Many linear mathematical operations on the field have a matrix analogue, e.g. the laplacian operator, or the convolution operator, to name but two.
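As a sketch of the second flavour of operator, the one-dimensional laplacian (second difference) can be written as a matrix acting on the field vector. The grid size, unit grid spacing and periodic boundary conditions below are illustrative assumptions:

```python
import numpy as np

# Matrix analogue of the 1-D laplacian (second difference) on a 5-point
# grid with periodic boundaries, assuming unit grid spacing.
n = 5
L = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
L[0, -1] = L[-1, 0] = 1.0  # periodic wrap-around terms

# Acting with the matrix applies the operator at every grid point at once.
f = np.sin(2.0 * np.pi * np.arange(n) / n)
laplacian_f = L @ f

print(L.shape, laplacian_f.shape)  # (5, 5) (5,)
```

Unlike a rotation operator, this matrix genuinely produces a new field (the laplacian of the input), rather than the same field in a different representation.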
