WHAT USE ARE ABSTRACT SPACES?

W. W. Sawyer

In school we learn classical mathematics, the mathematics known before the year 1900. It has a direct relation to the actual world. Arithmetic is related to counting and measurement, algebra to general truths about calculations, geometry and trigonometry to the sizes and shapes of objects, calculus to the velocities of moving objects and a host of other applications. Students have no difficulty in knowing what it is about and what it is for.

Students who go on to modern mathematics, the mathematics of the present century, either in university or by private study, find it considerably different. There is no immediate contact with reality, and they may find it hard to visualize what is being discussed. The difficulty of understanding is increased by the fact that textbooks are often written in a form admirably suited to the needs of the research worker, but offering little help to someone approaching the subject for the first time. A student's difficulties can be greatly reduced or even eliminated if the formal study of the textbook is preceded by a preview of the subject and an account of the way in which the modern approach came into existence. The present article is an attempt to provide such assistance.

The Beginnings of Modern Mathematics.

Early in this century, mathematicians, looking back on the work of earlier centuries, were able to identify certain common ingredients in different branches of mathematics, in much the same way that modern chemists were able to recognize the same vitamin in many different foods. Like the chemists, they decided to isolate and study these ingredients.

Vectors, Old and New.

We can illustrate this by considering the modern use of the word "vector". Originally "vector" had a very definite meaning. A vector could be represented by a line segment, AB, in 2 or 3 dimensions, with an arrow to indicate the direction from A to B.
It had applications to displacements, velocities, accelerations and forces. Vectors had certain properties that made them interesting to study. In modern usage, the word "vector" is applied to anything that has these properties. Thus it does not indicate any particular type of object; it refers to a certain aspect of many situations.

The properties we require are very few. A vector can be multiplied by a number; two vectors can be added. Any collection of mathematical objects in which these two operations can be defined is called "a vector space". Of course the definitions of addition and multiplication cannot be completely random. They must be such that, when working with them, we can forget that they are not our usual addition and multiplication, and even so, we shall not be led to incorrect results. In a formal treatise this requirement would be spelt out in a number of axioms.

The position of a point is specified by the vector that comes from the origin to it. Often, when an origin has been agreed, we shall speak of "the point u", where u stands for a vector. This will mean the point at the end of the vector, u, originating at the origin.

Optional Properties.

In the original usage of vectors, lengths and angles played a part. It would be possible to include in our description of what we meant by "vector space" the requirement that length and angle should be defined. Indeed we shall soon bring in such requirements, but these will then be regarded as singling out a special (though admittedly important) type of vector space. For the present, however, we are not doing this. We are agreeing to accept as a vector space a situation in which these concepts are lacking. An extreme example would be if we supposed that marking the point (x,y) on graph paper indicated that we were thinking of x cats and y dogs. It would not matter if the graph paper was marked out in squares or in parallelograms. There is no reason for saying that a cat must be perpendicular to a dog.
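The two operations, and nothing more, are all the cats-and-dogs example needs. A minimal sketch in Python (the pair encoding and the names add and scale are my own, purely illustrative):

```python
# An 'animal vector' is a pair (cats, dogs).  Only the two vector-space
# operations are defined: addition, and multiplication by a number.
# No length and no angle -- a cat is not perpendicular to a dog.

def add(u, v):
    """Add componentwise: cats with cats, dogs with dogs."""
    return (u[0] + v[0], u[1] + v[1])

def scale(k, u):
    """Multiply each component by the number k."""
    return (k * u[0], k * u[1])

u = (3, 2)   # 3 cats, 2 dogs
v = (1, 4)   # 1 cat, 4 dogs

print(add(u, v))    # (4, 6)
print(scale(2, u))  # (6, 4)
```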
There might be some objection to calling this example a vector space, since the numbers multiplying vectors are supposed to include fractions and negative numbers, which are not appropriate when applied to animals. However, perhaps this rather crude illustration will serve to emphasise that the basic concept of vector space includes systems with no resemblance at all to anything in Euclidean geometry.

Theorems on Vector Spaces.

It might seem that there is so little information in the definition that there will be nothing at all to say about the most general vector space, but this is not so. We can, for instance, prove that in such a space the diagonals of a parallelogram bisect each other. To begin with, we can attach a meaning to two vectors having the same direction. We say that ku has the same direction as u, and that the line joining the origin to u consists of all the points ku. The line through v parallel to the line just mentioned consists of all the points v+ku. The points v, v+u, v+2u, v+3u, and so on, are evenly spaced along this line. In particular, if w = v+2u, then v+u is the midpoint between v and w, and is given by (1/2)(v+w).

The points 0, a, b, and a+b are corners of a parallelogram. The midpoint of 0 and a+b is (1/2)(a+b), and this is also the midpoint of a and b, which proves the theorem mentioned above.

Examples of Vector Spaces.

We now look at some examples of vector spaces. We begin with two very familiar examples, then we have a less familiar example, and after that something totally unexpected.

(i). Space of 2 dimensions.
(ii). Space of 3 dimensions.
(iii). Space of n dimensions.
(iv). Function space.

For example, if s stands for the function
s(x) = sin x, then it is natural to interpret t = 3s as meaning that t(x) = 3 sin x. Thus the procedures we use, almost unconsciously, when making calculations with functions are of the type appropriate for work with vectors. We may speak of functions as vectors, and of all the functions defined on a given interval as forming a vector space.

Functions that can be expanded in a power series are known as analytic. For such functions there is a very close analogy with the n-dimensional vectors considered in (iii). For example

(1-x)^{-1} = 1 + x + x^{2} + x^{3} + x^{4} + x^{5} + ...
(1-x)^{-2} = 1 + 2x + 3x^{2} + 4x^{3} + 5x^{4} + 6x^{5} + ...
(1-x)^{-1} + (1-x)^{-2} = 2 + 3x + 4x^{2} + 5x^{3} + 6x^{4} + 7x^{5} + ...

In the top row we see the coefficients (1, 1, 1, 1, 1, 1, ...). In the second row we see (1, 2, 3, 4, 5, 6, ...). If we add these by the usual vector rule we get (2, 3, 4, 5, 6, 7, ...), as seen in the bottom row. Thus we may think of these series as specifying vectors in space of infinite dimensions, and as being added in the way usual for vectors.

Distances.

A decisive step in the development of mathematics was the publication in 1906 of the thesis of Maurice Frechet. It seems to have come about in the following way. In the 19th century and earlier, work had been done on the calculus of variations. This resembles ordinary calculus in that we are trying to make something a maximum or a minimum; it differs in that the unknown is not a number but a function. For instance, a chain with its ends secured hangs in the way that makes its centre of gravity as low as possible. Its equation being y = f(x), our aim is to find the f(x) that will make this happen. Hadamard pointed out that we can imagine real numbers as inhabiting a line and complex numbers as inhabiting a plane, and these geometrical pictures help us in our reasoning. When we are trying to pick out a particular function, we have no geometrical picture of the multitude of possible functions from which it is to be chosen.
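The coefficient-by-coefficient addition can be imitated directly. A small sketch (truncating each series to its first six coefficients; the variable names are my own):

```python
# Coefficients of (1-x)^(-1) and (1-x)^(-2), treated as vectors and
# added by the usual componentwise rule.
n = 6
a = [1] * n                    # (1-x)^(-1): 1 + x + x^2 + ...
b = [k + 1 for k in range(n)]  # (1-x)^(-2): 1 + 2x + 3x^2 + ...
s = [p + q for p, q in zip(a, b)]
print(s)  # [2, 3, 4, 5, 6, 7] -- the coefficients of the sum
```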
It would be very helpful if we could arrive at such a picture. Frechet took up this question. He examined the classical work on real and complex numbers, and observed that distance played a decisive role. In analysis we are much concerned with questions about limits. For both real and complex numbers, a sequence of numbers z_{n} is tending to a limit L if the distance of z_{n} from L is tending to zero. He decided that, if it was possible to find a satisfactory definition of distance between two mathematical objects (of any kind), it would be possible to find theorems about these objects analogous to the theorems about real and complex numbers.

The first question then is "What is a satisfactory definition of distance?" He looked at the traditional proofs and found the only properties of distance used were the following very simple ones:

1. Distance is measured by a real number, which is never negative.
2. A distance is zero if, and only if, it is the distance between a point and itself.
3. The distance from A to B is the same as the distance from B to A.
4. You cannot shorten your journey by breaking it. If you go from A to C, and then from C to B, the total distance cannot be less than the distance from A to B. (It may of course be equal, if C lies on the direct route from A to B.) This is known as the triangle axiom. It corresponds to Euclid's remark that the sum of the lengths of two sides of a triangle must exceed the length of the third side.

Frechet's investigation was extraordinarily fruitful. It proved possible to find a satisfactory definition for the distance between two matrices, two transformations, two functions, or two operations that may involve differentiation and integration. At one blow, this opens the door to a whole series of results concerning the most varied situations. It is often possible to find more than one definition of distance for given objects.
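The four properties can be checked mechanically for any proposed distance function. A sketch using the ordinary distance in the plane (the sample points are arbitrary test data of my own choosing):

```python
# Check Frechet's four axioms on a handful of points in the plane,
# using the usual Euclidean distance.
import itertools
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

points = [(0, 0), (3, 4), (-1, 2), (5, -2)]

for a, b, c in itertools.product(points, repeat=3):
    assert dist(a, b) >= 0                        # 1. never negative
    assert (dist(a, b) == 0) == (a == b)          # 2. zero only for a point and itself
    assert dist(a, b) == dist(b, a)               # 3. symmetric
    assert dist(a, c) + dist(c, b) >= dist(a, b)  # 4. the triangle axiom
print("all four axioms hold on the sample points")
```

The same loop, pointed at a different dist, tests any other candidate definition of distance.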
For instance, on a chessboard we can define distance as the minimum number of moves a king needs to get from one square to another. We get a different definition if we consider a rook instead of a king. Options can equally well arise in more serious mathematical contexts.

The distance from A to B is the length of AB. We now consider defining length. Of the many possible definitions we shall here consider only those that lead to our usual geometry, or to a geometry very similar to it. In all the spaces now to be listed, Pythagoras' Theorem is true in some sense. All these spaces are vector spaces. The symbol ||u|| will be used for the length of the vector u. As v-u is the vector that goes from the point u to the point v, ||v-u|| gives the distance of the point u from the point v.

Length.

1. Euclidean space of 2 dimensions. ||u||^{2} = u_{1}^{2} + u_{2}^{2}. With the help of Pythagoras' Theorem we can define "perpendicular". We shall say v is perpendicular to u if the points 0, u, v form a right-angled triangle with a right angle at the origin, 0. Whether it is right-angled or not we test by using Pythagoras. The length of the hypotenuse is the distance from u to v, that is, ||v-u||. As v-u is (v_{1}-u_{1}, v_{2}-u_{2}), the length c of the hypotenuse is given by

c^{2} = (v_{1}-u_{1})^{2} + (v_{2}-u_{2})^{2}   (1)

The sum of the squares on the other two sides is

||u||^{2} + ||v||^{2} = (u_{1}^{2} + u_{2}^{2}) + (v_{1}^{2} + v_{2}^{2})   (2)

The condition for a right angle is that the expressions in (1) and (2) are equal. On writing out the equation, we find that all the squared terms cancel. Dividing what remains by 2 we reach the equation u_{1}v_{1} + u_{2}v_{2} = 0. We recognize the expression here as the dot product, u.v. Accordingly, in each geometry below we shall define u.v as the expression that turns up in this way. Thus, in each geometry, u.v = 0 will be the condition for u being perpendicular to v.

2.
Euclidean space of 3 dimensions. ||u||^{2} = u_{1}^{2} + u_{2}^{2} + u_{3}^{2}. The square on the hypotenuse will be

c^{2} = (v_{1}-u_{1})^{2} + (v_{2}-u_{2})^{2} + (v_{3}-u_{3})^{2}

The sum of the squares on the other two sides is

(u_{1}^{2} + u_{2}^{2} + u_{3}^{2}) + (v_{1}^{2} + v_{2}^{2} + v_{3}^{2})

Equating these, cancelling the squares and dividing by 2 we obtain the definition of u.v as

u.v = u_{1}v_{1} + u_{2}v_{2} + u_{3}v_{3}

3. Euclidean space of n dimensions. u.v = u_{1}v_{1} + u_{2}v_{2} + ... + u_{n}v_{n}

4. Function space. Suppose f is a function defined on an interval from p to q. Divide the interval into n equal parts, and let x_{1}, x_{2}, ..., x_{n} be the midpoints of these parts. The numbers (f(x_{1}), f(x_{2}), ..., f(x_{n})) specify a point in n-dimensional space, and the square of its length is

f(x_{1})^{2} + f(x_{2})^{2} + ... + f(x_{n})^{2}

The distances between the points so obtained for various functions might give us a useful way of measuring the distances between the functions. However, the method is rather rough and ready. How big should n be? Why choose the midpoint of each interval? Now a sum resembling that just written would appear if we were making an estimate of the value
of ∫_{p}^{q} f(x)^{2} dx. This suggests that we might define the length of the function f by

||f||^{2} = ∫_{p}^{q} f(x)^{2} dx

This leads to something quite novel. We can define a dot product for this space, and thus find a meaning for one function being perpendicular to another. The work follows the same pattern as before, but with integrals instead of sums. When we were finding the condition for u to be perpendicular to v, the length of the hypotenuse was ||v-u||, so the condition was

||v-u||^{2} = ||u||^{2} + ||v||^{2}

If u corresponds to f(x) and v to g(x), then v-u will correspond to g(x) - f(x). With the definition of length just found this condition becomes c^{2} = a^{2} + b^{2}, where

c^{2} = ∫_{p}^{q} [g(x) - f(x)]^{2} dx,  a^{2} = ∫_{p}^{q} f(x)^{2} dx,  b^{2} = ∫_{p}^{q} g(x)^{2} dx

We now need to multiply out the bracket that appears in c^{2}:

c^{2} = ∫_{p}^{q} [g(x)^{2} - 2f(x)g(x) + f(x)^{2}] dx

The integrals of f(x)^{2} and g(x)^{2} appear
on both sides of the equation, and cancel. The condition for perpendicularity thus reduces to ∫_{p}^{q} f(x)g(x) dx = 0, and accordingly we define the dot product of two functions by

f.g = ∫_{p}^{q} f(x)g(x) dx

We shall say that the functions are perpendicular if this dot product is zero. "Orthogonal" is a synonym for "perpendicular", and it is the custom today to speak of orthogonal functions rather than perpendicular ones. I do not know the reason for this. Perhaps the idea of functions being perpendicular is felt to be rather shocking, and the more learned word is used to lessen the shock.

The Usefulness of Perpendicularity.

It is often found in attacking a problem that the axes of coordinates with which we have started are not the best for further work, and it becomes necessary to go over to some other system. If u, v, w are to be unit vectors in the directions of the new axes, the point with coordinates X, Y, Z for the new system will be Xu + Yv + Zw. By equating this to (x,y,z), the specification of the point in the old system, we obtain 3 equations, by solving which we can find the new coordinates X, Y, Z. In 3 dimensions this may not be too bad; the corresponding problem in 10 dimensions, for instance, could be rather trying. However, if the vectors along the axes are perpendicular in both the old and the new system of axes, a much simpler method is available.

Suppose, for example, that in 3 dimensions the new axes are to have the vectors u, v, w where u = (1,1,1), v = (0,1,-1), and w = (2,-1,-1). It is easily verified from the dot products that these vectors are perpendicular to each other. Let the vector s be (x,y,z) in the old system. In the new system we are to have s = Xu + Yv + Zw. Take the dot product of this equation with u. On the right-hand side u.v and u.w are zero, so the result is

s.u = X(u.u)   (3)

The dot products are easy to work out: s.u = x+y+z and u.u = 3. We thus find x+y+z = 3X, which gives X immediately. In the same way, by taking dot products with v and w we can single out Y and Z.

Fourier Series.
For many problems it is important to expand a function in a series consisting of sines. We may be given a function in the interval (0, π) and wish to find the constants c_{1}, c_{2}, c_{3}, ... in the series

f(x) = c_{1} sin x + c_{2} sin 2x + c_{3} sin 3x + ...   (4)

A device which has been known since about 1750 for finding these constants is the following. (There are certain logical difficulties in this procedure which will not be discussed here. It is not always allowable to integrate an infinite series term by term.) It was noticed that, if m and n are different whole numbers, then

∫_{0}^{π} sin mx sin nx dx = 0   (5)

To find, say, c_{3}, we multiply equation (4) by sin 3x and integrate from 0 to π. This will wipe out all the terms except that containing c_{3}, and we shall have

∫_{0}^{π} f(x) sin 3x dx = c_{3} ∫_{0}^{π} sin^{2} 3x dx   (6)

As soon as we have worked out the integrals we shall have the value of c_{3}. Here we have singled out c_{3} in much the same way that we singled out X in the 3-dimensional problem. The analogy in fact is extremely close. If we take u_{n} as the vector representing sin nx, equation (5) can be written u_{m}.u_{n} = 0, which means that the sines are represented by mutually perpendicular vectors, while equation (6) says f.u_{3} = c_{3} u_{3}.u_{3}, which is of exactly the same form as equation (3) earlier. Thus finding the coefficients in a Fourier series turns out to be just the problem of expressing a vector in a new system of perpendicular axes.

Now the modern approach starts to work for us. Fourier theory appears in a new light, as a method using projection onto orthogonal axes. But this immediately suggests a thought: there are infinitely many ways of choosing a set of perpendicular directions.
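The device of equation (6) can be tried numerically. A sketch in Python (the test function 2 sin x + 5 sin 3x, and the crude midpoint-rule integrator, are my own choices for illustration):

```python
# Recover c_3 by the projection device: divide the integral of
# f(x) sin 3x by the integral of sin^2 3x, both taken over (0, pi).
import math

def integrate(g, a, b, n=10000):
    """Midpoint-rule estimate of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

def f(x):
    return 2 * math.sin(x) + 5 * math.sin(3 * x)

num = integrate(lambda x: f(x) * math.sin(3 * x), 0, math.pi)  # plays the role of f.u_3
den = integrate(lambda x: math.sin(3 * x) ** 2, 0, math.pi)    # plays the role of u_3.u_3
print(num / den)  # close to 5, the coefficient of sin 3x in f
```

The sin x term contributes nothing to the numerator, exactly as equation (5) promises.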
To each choice of axes there will correspond a way of finding the coefficients in a series, by a procedure much like that used with Fourier series. Many such choices have been studied. One of them is particularly simple, and could be used to compose problems in elementary calculus. In it the orthogonal functions are not sines, as with Fourier, but simply polynomials.

Orthogonal Polynomials.

With [-1,1] as the basic interval, let

F_{0}(x) = 1, F_{1}(x) = x, F_{2}(x) = 3x^{2} - 1, F_{3}(x) = 5x^{3} - 3x, F_{4}(x) = 35x^{4} - 30x^{2} + 3.

You can check that these are orthogonal. The work can be reduced by using the following observation: the dot product involves the vectors linearly. For example,

f.(au + bv + cw) = a(f.u) + b(f.v) + c(f.w)

This means that, if f is perpendicular to u, to v and to w, it is bound to be perpendicular to au + bv + cw, for any a, b, c. This holds for any number of vectors in the bracket; if f is perpendicular to each of them, it is perpendicular to any linear mixture of them. So, if we check that F_{4}(x), for example, is perpendicular to 1, to x, to x^{2} and to x^{3}, it is bound to be perpendicular to any linear combination of these, hence to any polynomial of degree less than 4, hence in particular to F_{0}(x), to F_{1}(x), to F_{2}(x) and to F_{3}(x). The same idea can be applied to F_{3}(x).

Further polynomials in this sequence can be obtained by taking

F_{n}(x) = (d/dx)^{n} (x^{2} - 1)^{n}

That F_{n}(x) is perpendicular to x^{m} if m < n can be proved by integration by parts. If multiplied by certain constants, these polynomials give the Legendre polynomials, which play an important part in electromagnetic theory and other branches of science. They can be used to build series in much the same way that sines are for Fourier series. Like Fourier series, they are capable of representing functions that have discontinuities.
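The orthogonality of F_{0}, ..., F_{4} can be confirmed by direct computation. A sketch using exact rational arithmetic (representing each polynomial by its list of coefficients is my own device, not part of the article):

```python
# Verify that the polynomials F_0, ..., F_4 are mutually orthogonal
# over [-1, 1].  A polynomial is a coefficient list [a0, a1, a2, ...],
# and the dot product is the exact integral of the product.
from fractions import Fraction

def dot(p, q):
    """Integral over [-1, 1] of p(x)q(x), computed exactly."""
    total = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:  # odd powers of x integrate to 0 over [-1, 1]
                total += Fraction(2 * a * b, i + j + 1)
    return total

F = [
    [1],                 # F_0(x) = 1
    [0, 1],              # F_1(x) = x
    [-1, 0, 3],          # F_2(x) = 3x^2 - 1
    [0, -3, 0, 5],       # F_3(x) = 5x^3 - 3x
    [3, 0, -30, 0, 35],  # F_4(x) = 35x^4 - 30x^2 + 3
]

for i in range(len(F)):
    for j in range(i):
        print(i, j, dot(F[i], F[j]))  # every pair gives 0
```

The helper integrates x^{i+j} exactly: zero when i+j is odd, 2/(i+j+1) when it is even.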
These notes have dealt with some parts of modern mathematics, showing the ideas that led to them and the applications that flow from them. It is hoped that these notes will make progress easier for anyone venturing into this unfamiliar region for the first time.

W. W. Sawyer

Copyright © W. W. Sawyer & Mark Alder 2000
Version: 22nd March 2001

