Linear algebra homework

Start at Exercise 1.4, question number four, and finish all of the problems. The list of questions is included as a picture, and the textbook is included as a PDF.


Undergraduate Texts in Mathematics

Serge Lang
Introduction to Linear Algebra
Second Edition

Springer
New York Berlin Heidelberg Hong Kong London Milan Paris Tokyo

Undergraduate Texts in Mathematics

Editors
S. Axler
F. W. Gehring
K. A. Ribet

Springer Books on Elementary Mathematics by Serge Lang
MATH! Encounters with High School Students
1985, ISBN 96129-1
The Beauty of Doing Mathematics
1985, ISBN 96149-6
Geometry: A High School Course (with G. Murrow), Second Edition
1988, ISBN 96654-4
Basic Mathematics
1988, ISBN 96787-7
A First Course in Calculus, Fifth Edition
1986, ISBN 96201-8
Calculus of Several Variables, Third Edition
1987, ISBN 96405-3
Introduction to Linear Algebra, Second Edition
1986, ISBN 96205-0
Linear Algebra, Third Edition
1987, ISBN 96412-6
Undergraduate Algebra, Second Edition
1990, ISBN 97279-X
Undergraduate Analysis, Second Edition
1997, ISBN 94841-4
Complex Analysis, Fourth Edition
1999, ISBN 98592-1
Real and Functional Analysis, Third Edition
1993, ISBN 94001-4

Serge Lang
Introduction to Linear Algebra
Second Edition
With 66 Illustrations
Springer


Serge Lang
Department of Mathematics
Yale University
New Haven, CT 06520
U.S.A.
Editorial Board
S. Axler
Department of Mathematics
Michigan State University
East Lansing, MI 48824
U.S.A.
K.A. Ribet
Department of Mathematics
University of California
at Berkeley
Berkeley, CA 94720-3840
U.S.A.
F. W. Gehring
Department of Mathematics
University of Michigan
Ann Arbor, MI 48109
U.S.A.
Mathematics Subject Classification (2000): 15-01
Library of Congress Cataloging in Publication Data
Lang, Serge, 1927-
Introduction to linear algebra.
(Undergraduate texts in mathematics)
Includes index.
1. Algebras, Linear. I. Title. II. Series.
QA184.L37 1986 512′.5 85-14758
Printed on acid-free paper.
The first edition of this book was published by Addison-Wesley Publishing Company, Inc., in 1970.
© 1970, 1986 by Springer-Verlag New York Inc.
All rights reserved. This work may not be translated or copied in whole or in part without the written
permission of the publisher (Springer-Verlag, 175 Fifth Avenue, New York, New York 10010, U.S.A.),
except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any
form of information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed is forbidden.
The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the
former are not especially identified, is not to be taken as a sign that such names, as understood by
the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.
Printed in the United States of America (ASC/EB)
9 8 7    SPIN 10977149
Springer-Verlag is a part of Springer Science+Business Media
springeronline.com

Preface
This book is meant as a short text in linear algebra for a one-term
course. Except for an occasional example or exercise the text is logically
independent of calculus, and could be taught early. In practice, I expect
it to be used mostly for students who have had two or three terms of
calculus. The course could also be given simultaneously with, or im-
mediately after, the first course in calculus.
I have included some examples concerning vector spaces of functions,
but these could be omitted throughout without impairing the under-
standing of the rest of the book, for those who wish to concentrate
exclusively on euclidean space. Furthermore, the reader who does not
like arbitrary n can always assume that n = 1, 2, or 3 and omit other
interpretations. However, such a reader should note that working with a
general n simplifies some formulas, say by making them shorter, and should get used to this
as rapidly as possible. Furthermore, since one does want to cover both
the case n = 2 and n = 3 at the very least, using n to denote either
number avoids very tedious repetitions.
The first chapter is designed to serve several purposes. First, and
most basically, it establishes the fundamental connection between linear
algebra and geometric intuition. There are indeed two aspects (at least)
to linear algebra: the formal manipulative aspect of computations with
matrices, and the geometric interpretation. I do not wish to prejudice
one in favor of the other, and I believe that grounding formal manipula-
tions in geometric contexts gives a very valuable background for those
who use linear algebra. Second, this first chapter gives immediately
concrete examples, with coordinates, for linear combinations, perpendicu-
larity, and other notions developed later in the book. In addition to the
geometric context, discussion of these notions provides examples for

subspaces, and also gives a fundamental interpretation for linear equa-
tions. Thus the first chapter gives a quick overview of many topics in
the book. The content of the first chapter is also the most fundamental
part of what is used in calculus courses concerning functions of several
variables, which can do a lot of things without the more general ma-
trices. If students have covered the material of Chapter I in another
course, or if the instructor wishes to emphasize matrices right away, then
the first chapter can be skipped, or can be used selectively for examples
and motivation.
After this introductory chapter, we start with linear equations,
matrices, and Gauss elimination. This chapter emphasizes computational
aspects of linear algebra. Then we deal with vector spaces, linear maps
and scalar products, and their relations to matrices. This mixes both the
computational and theoretical aspects.
Determinants are treated much more briefly than in the first edition,
and several proofs are omitted. Students interested in theory can refer to
a more complete treatment in theoretical books on linear algebra.
I have included a chapter on eigenvalues and eigenvectors. This gives
practice for notions studied previously, and leads into material which is
used constantly in all parts of mathematics and its applications.
I am much indebted to Toby Orloff and Daniel Horn for their useful
comments and corrections as they were teaching the course from a pre-
liminary version of this book. I thank Allen Altman and Gimli Khazad
for lists of corrections.

Contents

CHAPTER I. Vectors
§1. Definition of Points in Space
§2. Located Vectors
§3. Scalar Product
§4. The Norm of a Vector
§5. Parametric Lines
§6. Planes

CHAPTER II. Matrices and Linear Equations
§1. Matrices
§2. Multiplication of Matrices
§3. Homogeneous Linear Equations and Elimination
§4. Row Operations and Gauss Elimination
§5. Row Operations and Elementary Matrices
§6. Linear Combinations

CHAPTER III. Vector Spaces
§1. Definitions
§2. Linear Combinations
§3. Convex Sets
§4. Linear Independence
§5. Dimension
§6. The Rank of a Matrix

CHAPTER IV. Linear Mappings
§1. Mappings
§2. Linear Mappings
§3. The Kernel and Image of a Linear Map
§4. The Rank and Linear Equations Again
§5. The Matrix Associated with a Linear Map
Appendix: Change of Bases

CHAPTER V. Composition and Inverse Mappings
§1. Composition of Linear Maps
§2. Inverses

CHAPTER VI. Scalar Products and Orthogonality
§1. Scalar Products
§2. Orthogonal Bases
§3. Bilinear Maps and Matrices

CHAPTER VII. Determinants
§1. Determinants of Order 2
§2. 3 × 3 and n × n Determinants
§3. The Rank of a Matrix and Subdeterminants
§4. Cramer's Rule
§5. Inverse of a Matrix
§6. Determinants as Area and Volume

CHAPTER VIII. Eigenvectors and Eigenvalues
§1. Eigenvectors and Eigenvalues
§2. The Characteristic Polynomial
§3. Eigenvalues and Eigenvectors of Symmetric Matrices
§4. Diagonalization of a Symmetric Linear Map
Appendix: Complex Numbers

Answers to Exercises
Index

CHAPTER I

Vectors
The concept of a vector is basic for the study of functions of several
variables. It provides geometric motivation for everything that follows.
Hence the properties of vectors, both algebraic and geometric, will be
discussed in full.
One significant feature of all the statements and proofs of this part is
that they are neither easier nor harder to prove in 3-space than they are
in 2-space.
I, §1. Definition of Points in Space
We know that a number can be used to represent a point on a line,
once a unit length is selected.
A pair of numbers (i.e. a couple of numbers) (x, y) can be used to
represent a point in the plane.
These can be pictured as follows:
Figure 1: (a) a point on a line; (b) a point (x, y) in a plane.
We now observe that a triple of numbers (x, y, z) can be used to
represent a point in space, that is 3-dimensional space, or 3-space. We
simply introduce one more axis. Figure 2 illustrates this.

Figure 2: the point (x, y, z) in 3-space, with x-, y-, and z-axes.
Instead of using x, y, z we could also use (x1, x2, x3). The line could
be called 1-space, and the plane could be called 2-space.
Thus we can say that a single number represents a point in 1-space.
A couple represents a point in 2-space. A triple represents a point in 3-space.
Although we cannot draw a picture to go further, there is nothing to
prevent us from considering a quadruple of numbers

(x1, x2, x3, x4)

and decreeing that this is a point in 4-space. A quintuple would be a
point in 5-space, then would come a sextuple, septuple, octuple, ....
We let ourselves be carried away and define a point in n-space to be
an n-tuple of numbers

(x1, x2, ..., xn),

if n is a positive integer. We shall denote such an n-tuple by a capital
letter X, and try to keep small letters for numbers and capital letters for
points. We call the numbers x1, ..., xn the coordinates of the point X.
For example, in 3-space, 2 is the first coordinate of the point (2, 3, -4),
and -4 is its third coordinate. We denote n-space by R^n.
Most of our examples will take place when n = 2 or n = 3. Thus the
reader may visualize either of these two cases throughout the book.
However, three comments must be made.
First, we have to handle both n = 2 and n = 3, so that in order to avoid a
lot of repetitions, it is useful to have a notation which covers both these
cases simultaneously, even if we often repeat the formulation of certain
results separately for both cases.

Second, no theorem or formula is simpler by making the assumption
that n = 2 or 3.
Third, the case n = 4 does occur in physics.
Example 1. One classical example of 3-space is of course the space we
live in. After we have selected an origin and a coordinate system, we can
describe the position of a point (body, particle, etc.) by 3 coordi-
nates. Furthermore, as was known long ago, it is convenient to extend
this space to a 4-dimensional space, with the fourth coordinate as time,
the time origin being selected, say, as the birth of Christ-although this
is purely arbitrary (it might be more convenient to select the birth of the
solar system, or the birth of the earth as the origin, if we could deter-
mine these accurately). Then a point with negative time coordinate is a
BC point, and a point with positive time coordinate is an AD point.
Don't get the idea that "time is the fourth dimension", however. The
above 4-dimensional space is only one possible example. In economics,
for instance, one uses a very different space, taking for coordinates, say,
the number of dollars expended in an industry. For instance, we could
deal with a 7-dimensional space with coordinates corresponding to the
following industries:
1. Steel  2. Auto  3. Farm products  4. Fish
5. Chemicals  6. Clothing  7. Transportation.
We agree that a megabuck per year is the unit of measurement. Then a
point
(1,000, 800, 550, 300, 700, 200, 900)
in this 7-space would mean that the steel industry spent one billion
dollars in the given year, and that the chemical industry spent 700 mil-
lion dollars in that year.
The idea of regarding time as a fourth dimension is an old one.
Already in the Encyclopédie of Diderot, dating back to the eighteenth
century, d'Alembert writes in his article on "dimension":

Cette manière de considérer les quantités de plus de trois dimensions est
aussi exacte que l'autre, car les lettres peuvent toujours être regardées
comme représentant des nombres rationnels ou non. J'ai dit plus haut qu'il
n'était pas possible de concevoir plus de trois dimensions. Un homme
d'esprit de ma connaissance croit qu'on pourrait cependant regarder la
durée comme une quatrième dimension, et que le produit temps par la
solidité serait en quelque manière un produit de quatre dimensions; cette
idée peut être contestée, mais elle a, ce me semble, quelque mérite, quand
ce ne serait que celui de la nouveauté.
Encyclopédie, Vol. 4 (1754), p. 1010

Translated, this means:
This way of considering quantities having more than three dimensions is
just as right as the other, because algebraic letters can always be viewed as
representing numbers, whether rational or not. I said above that it was
not possible to conceive more than three dimensions. A clever gentleman
with whom I am acquainted believes that nevertheless, one could view
duration as a fourth dimension, and that the product time by solidity
would be somehow a product of four dimensions. This idea may be chal-
lenged, but it has, it seems to me, some merit, were it only that of being
new.
Observe how d'Alembert refers to a "clever gentleman" when he apparently
means himself. He is being rather careful in proposing what must
have been at the time a far-out idea, which became more prevalent in
the twentieth century.
D'Alembert also visualized clearly higher dimensional spaces as "products"
of lower dimensional spaces. For instance, we can view 3-space as
putting side by side the first two coordinates (x1, x2) and then the third
x3. Thus we write

R^3 = R^2 × R.

We use the product sign, which should not be confused with other
"products", like the product of numbers. The word "product" is used in
two contexts. Similarly, we can write

R^4 = R^3 × R.

There are other ways of expressing R^4 as a product, namely

R^4 = R^2 × R^2.

This means that we view separately the first two coordinates (x1, x2) and
the last two coordinates (x3, x4). We shall come back to such products
later.
We shall now define how to add points. If A, B are two points, say
in 3-space,

A = (a1, a2, a3) and B = (b1, b2, b3),

then we define A + B to be the point whose coordinates are

(a1 + b1, a2 + b2, a3 + b3).

Example 2. In the plane, if A = (1, 2) and B = (-3, 5), then

A + B = (-2, 7).

In 3-space, if A = (-1, π, 3) and B = (√2, 7, -2), then

A + B = (√2 - 1, π + 7, 1).

Using a neutral n to cover both the cases of 2-space and 3-space, the
points would be written

A = (a1, ..., an) and B = (b1, ..., bn),

and we define A + B to be the point whose coordinates are

(a1 + b1, ..., an + bn).
We observe that the following rules are satisfied:
1. (A + B) + C = A + (B + C).
2. A + B = B + A.
3. If we let

O = (0, 0, ..., 0)

be the point all of whose coordinates are 0, then

O + A = A + O = A

for all A.
4. Let A = (a1, ..., an) and let -A = (-a1, ..., -an). Then

A + (-A) = O.
All these properties are very simple, and are true because they are
true for numbers, and addition of n-tuples is defined in terms of addition
of their components, which are numbers.
Note. Do not confuse the number 0 and the n-tuple (0, ..., 0). We
usually denote this n-tuple by O, and also call it zero, because no
difficulty can occur in practice.
We shall now interpret addition and multiplication by numbers geo-
metrically in the plane (you can visualize simultaneously what happens
in 3-space).
Example 3. Let A = (2, 3) and B = (-1, 1). Then

A + B = (1, 4).

The figure looks like a parallelogram (Fig. 3).
Figure 3: the points (-1, 1), (2, 3), and (1, 4) form a parallelogram with the origin.
Example 4. Let A = (3, 1) and B = (1, 2). Then

A + B = (4, 3).

We see again that the geometric representation of our addition looks like
a parallelogram (Fig. 4).

Figure 4: A, B, and A + B.
The reason why the figure looks like a parallelogram can be given in
terms of plane geometry as follows. We obtain B = (1, 2) by starting
from the origin 0 = (0, 0), and moving 1 unit to the right and 2 up. To
get A + B, we start from A, and again move 1 unit to the right and 2
up. Thus the line segments between 0 and B, and between A and A + B
are the hypotenuses of right triangles whose corresponding legs are of
the same length, and parallel. The above segments are therefore parallel
and of the same length, as illustrated in Fig. 5.
Figure 5: the segments from O to B and from A to A + B are parallel and of the same length.

Example 5. If A = (3, 1) again, then -A = (-3, -1). If we plot this
point, we see that -A has opposite direction to A. We may view -A
as the reflection of A through the origin.

Figure 6: A and -A.
We shall now consider multiplication of A by a number. If c is any
number, we define cA to be the point whose coordinates are

cA = (ca1, ..., can).

Example 6. If A = (2, -1, 5) and c = 7, then cA = (14, -7, 35).

It is easy to verify the rules:

5. c(A + B) = cA + cB.
6. If c1, c2 are numbers, then

(c1 + c2)A = c1 A + c2 A and c1(c2 A) = (c1 c2)A.

Also note that

(-1)A = -A.
What is the geometric representation of multiplication by a number?
Example 7. Let A = (1, 2) and c = 3. Then

cA = (3, 6),

as in Fig. 7(a).
Multiplication by 3 amounts to stretching A by 3. Similarly, (1/2)A
amounts to stretching A by 1/2, i.e. shrinking A to half its size. In general,
if t is a number, t > 0, we interpret tA as a point in the same direction
as A from the origin, but t times the distance. In fact, we define A and
B to have the same direction if there exists a number c > 0 such that
A = cB. We emphasize that this means A and B have the same direction
with respect to the origin. For simplicity of language, we omit the words
"with respect to the origin".
Multiplication by a negative number reverses the direction. Thus
-3A would be represented as in Fig. 7(b).

Figure 7: (a) 3A = (3, 6); (b) -3A.
We define A, B (neither of which is zero) to have opposite directions if
there is a number c < 0 such that cA = B. Thus when B = -A, then A, B
have opposite direction.

Exercises I, §1

Find A + B, A - B, 3A, -2B in each of the following cases. Draw the points of
Exercises 1 and 2 on a sheet of graph paper.
1. A = (2, -1), B = (-1, 1)
2. A = (-1, 3), B = (0, 4)
3. A = (2, -1, 5), B = (-1, 1, 1)
4. A = (-1, -2, 3), B = (-1, 3, -4)
5. A = (π, 3, -1), B = (2π, -3, 7)
6. A = (15, -2, 4), B = (π, 3, -1)
7. Let A = (1, 2) and B = (3, 1). Draw A + B, A + 2B, A + 3B, A - B,
A - 2B, A - 3B on a sheet of graph paper.
8. Let A, B be as in Exercise 1. Draw the points A + 2B, A + 3B, A - 2B,
A - 3B, A + (1/2)B on a sheet of graph paper.
9. Let A and B be as drawn in Fig. 8. Draw the point A - B.

Figure 8: four configurations (a)-(d) of points A and B.

I, §2. Located Vectors

We define a located vector to be an ordered pair of points which we
write AB (with an arrow over it; this is not a product). We visualize this
as an arrow between A and B. We call A the beginning point and B the
end point of the located vector (Fig. 9).

Figure 9: the located vector from A to B in the plane, with legs b1 - a1 and b2 - a2.

We observe that in the plane,

b1 = a1 + (b1 - a1).

Similarly,

b2 = a2 + (b2 - a2).

This means that

B = A + (B - A).

Let AB and CD be two located vectors. We shall say that they are
equivalent if B - A = D - C. Every located vector AB is equivalent to
one whose beginning point is the origin, because AB is equivalent to
O(B - A). Clearly this is the only located vector whose beginning point
is the origin and which is equivalent to AB. If you visualize the parallelogram
law in the plane, then it is clear that equivalence of two located
vectors can be interpreted geometrically by saying that the lengths of the
line segments determined by the pair of points are equal, and that the
"directions" in which they point are the same.
In the next figures, we have drawn the located vectors O(B - A),
AB, and O(A - B), BA.

Figure 10: AB and O(B - A). Figure 11: BA and O(A - B).

Example 1. Let P = (1, -1, 3) and Q = (2, 4, 1). Then PQ is equivalent
to OC, where C = Q - P = (1, 5, -2). If A = (4, -2, 5) and B = (5, 3, 3),
then PQ is equivalent to AB because

Q - P = B - A = (1, 5, -2).

Given a located vector OC whose beginning point is the origin, we
shall say that it is located at the origin. Given any located vector AB,
we shall say that it is located at A.
A located vector at the origin is entirely determined by its end point.
In view of this, we shall call an n-tuple either a point or a vector, depending
on the interpretation which we have in mind.
Two located vectors AB and PQ are said to be parallel if there is a
number c ≠ 0 such that B - A = c(Q - P). They are said to have the
same direction if there is a number c > 0 such that B - A = c(Q - P),
and have opposite direction if there is a number c < 0 such that

B - A = c(Q - P).

In the next pictures, we illustrate parallel located vectors.

Figure 12: (a) same direction; (b) opposite direction.

Example 2. Let P = (3, 7) and Q = (-4, 2). Let A = (5, 1) and
B = (-16, -14). Then

Q - P = (-7, -5) and B - A = (-21, -15).

Hence PQ is parallel to AB, because B - A = 3(Q - P). Since 3 > 0,
we even see that PQ and AB have the same direction.
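As a small illustration of these definitions, the following plain-Python sketch (ours, not the text's, using the values of Example 2) tests equivalence and parallelism of located vectors numerically:

```python
# Located vectors PQ and AB are equivalent when B - A = Q - P, and
# parallel when B - A = c(Q - P) for some c != 0.

def diff(X, Y):
    """Componentwise Y - X, the n-tuple locating Y from X."""
    return tuple(y - x for x, y in zip(X, Y))

P, Q = (3, 7), (-4, 2)
A, B = (5, 1), (-16, -14)

QP = diff(P, Q)    # Q - P = (-7, -5)
AB = diff(A, B)    # B - A = (-21, -15)

# B - A = 3(Q - P), so PQ and AB are parallel, with the same direction.
c = AB[0] / QP[0]
assert c == 3.0
assert all(abs(b - c * q) < 1e-12 for q, b in zip(QP, AB))
```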
In a similar manner, any definition made concerning n-tuples can be
carried over to located vectors. For instance, in the next section, we
shall define what it means for n-tuples to be perpendicular.
Figure 13: perpendicular located vectors, with B - A perpendicular to Q - P.

Then we can say that two located vectors AB and PQ are perpendicular
if B – A is perpendicular to Q – P. In Fig. 13, we have drawn a picture
of such vectors in the plane.
Exercises I, §2

In each case, determine which located vectors PQ and AB are equivalent.
1. P = (1, -1), Q = (4, 3), A = (-1, 5), B = (5, 2).
2. P = (1, 4), Q = (-3, 5), A = (5, 7), B = (1, 8).
3. P = (1, -1, 5), Q = (-2, 3, -4), A = (3, 1, 1), B = (0, 5, 10).
4. P = (2, 3, -4), Q = (-1, 3, 5), A = (-2, 3, -1), B = (-5, 3, 8).

In each case, determine which located vectors PQ and AB are parallel.
5. P = (1, -1), Q = (4, 3), A = (-1, 5), B = (7, 1).
6. P = (1, 4), Q = (-3, 5), A = (5, 7), B = (9, 6).
7. P = (1, -1, 5), Q = (-2, 3, -4), A = (3, 1, 1), B = (-3, 9, -17).
8. P = (2, 3, -4), Q = (-1, 3, 5), A = (-2, 3, -1), B = (-11, 3, -28).
9. Draw the located vectors of Exercises 1, 2, 5, and 6 on a sheet of paper to
illustrate these exercises. Also draw the located vectors QP and BA. Draw
the points Q - P, B - A, P - Q, and A - B.
I, §3. Scalar Product
It is understood that throughout a discussion we select vectors always in
the same n-dimensional space. You may think of the cases n = 2 and
n = 3 only.
In 2-space, let A = (a1, a2) and B = (b1, b2). We define their scalar
product to be

A·B = a1 b1 + a2 b2.

In 3-space, let A = (a1, a2, a3) and B = (b1, b2, b3). We define their
scalar product to be

A·B = a1 b1 + a2 b2 + a3 b3.

In n-space, covering both cases with one notation, let A = (a1, ..., an)
and B = (b1, ..., bn) be two vectors. We define their scalar or dot product
A·B to be

a1 b1 + ... + an bn.

This product is a number. For instance, if

A = (1, 3, -2) and B = (-1, 4, -3),

then

A·B = -1 + 12 + 6 = 17.
For the moment, we do not give a geometric interpretation to this scalar
product. We shall do this later. We derive first some important proper-
ties. The basic ones are:
SP 1. We have A·B = B·A.
SP 2. If A, B, C are three vectors, then

A·(B + C) = A·B + A·C = (B + C)·A.

SP 3. If x is a number, then

(xA)·B = x(A·B) and A·(xB) = x(A·B).

SP 4. If A = O is the zero vector, then A·A = 0, and otherwise

A·A > 0.
We shall now prove these properties.
Concerning the first, we have

A·B = a1 b1 + ... + an bn = b1 a1 + ... + bn an = B·A,

because for any two numbers a, b, we have ab = ba. This proves the
first property.
For SP 2, let C = (c1, ..., cn). Then

B + C = (b1 + c1, ..., bn + cn)

and

A·(B + C) = a1(b1 + c1) + ... + an(bn + cn).

Reordering the terms yields

a1 b1 + ... + an bn + a1 c1 + ... + an cn,
which is none other than A·B + A·C. This proves what we wanted.
We leave property SP 3 as an exercise.
Finally, for SP 4, we observe that if one coordinate ai of A is not
equal to 0, then there is a term ai^2 ≠ 0 and ai^2 > 0 in the scalar product

A·A = a1^2 + ... + an^2.

Since every term is ≥ 0, it follows that the sum is > 0, as was to be
shown.
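The scalar product and the four properties SP 1 through SP 4 can be spot-checked numerically. A short plain-Python sketch of ours (the sample vectors beyond the example above are hypothetical):

```python
# The dot product and a check of SP 1-SP 4 on sample vectors.

def dot(A, B):
    """A·B = a1*b1 + ... + an*bn."""
    return sum(a * b for a, b in zip(A, B))

def add(A, B):
    return tuple(a + b for a, b in zip(A, B))

def scale(x, A):
    return tuple(x * a for a in A)

A, B, C, x = (1, 3, -2), (-1, 4, -3), (2, 0, 5), 7

assert dot(A, B) == 17                                   # the example above
assert dot(A, B) == dot(B, A)                            # SP 1
assert dot(A, add(B, C)) == dot(A, B) + dot(A, C)        # SP 2
assert dot(scale(x, A), B) == x * dot(A, B) == dot(A, scale(x, B))  # SP 3
assert dot(A, A) > 0 and dot((0, 0, 0), (0, 0, 0)) == 0  # SP 4
```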
In much of the work which we shall do concerning vectors, we shall
use only the ordinary properties of addition, multiplication by numbers,
and the four properties of the scalar product. We shall give a formal
discussion of these later. For the moment, observe that there are other
objects with which you are familiar and which can be added, subtracted,
and multiplied by numbers, for instance the continuous functions on an
interval [a, b] (cf. Example 2 of Chapter VI, §1).
Instead of writing A·A for the scalar product of a vector with itself, it
will be convenient to write also A^2. (This is the only instance when we
allow ourselves such a notation. Thus A^3 has no meaning.) As an exercise,
verify the following identities:

(A + B)^2 = A^2 + 2A·B + B^2,
(A - B)^2 = A^2 - 2A·B + B^2.

A dot product A·B may very well be equal to 0 without either A or
B being the zero vector. For instance, let

A = (1, 2, 3) and B = (2, 1, -4/3).

Then

A·B = 0.
We define two vectors A, B to be perpendicular (or as we shall also
say, orthogonal) if A·B = 0. For the moment, it is not clear that in the
plane, this definition coincides with our intuitive geometric notion of
perpendicularity. We shall convince you that it does in the next section.
Here we merely note an example. Say in R^3, let

E1 = (1, 0, 0), E2 = (0, 1, 0), E3 = (0, 0, 1)

be the three unit vectors, as shown on the diagram (Fig. 14).

Figure 14: the unit vectors E1, E2, E3 along the x-, y-, and z-axes.

Then we see that E1·E2 = 0, and similarly Ei·Ej = 0 if i ≠ j. And
these vectors look perpendicular. If A = (a1, a2, a3), then we observe that
the i-th component of A, namely

ai = A·Ei,

is the dot product of A with the i-th unit vector. We see that A is
perpendicular to Ei (according to our definition of perpendicularity with
the dot product) if and only if its i-th component is equal to 0.
Exercises I, §3
1. Find A·A for each of the following n-tuples.
(a) A = (2, -1), B = (-1, 1)
(b) A = (-1, 3), B = (0, 4)
(c) A = (2, -1, 5), B = (-1, 1, 1)
(d) A = (-1, -2, 3), B = (-1, 3, -4)
(e) A = (π, 3, -1), B = (2π, -3, 7)
(f) A = (15, -2, 4), B = (π, 3, -1)
2. Find A·B for each of the above n-tuples.
3. Using only the four properties of the scalar product, verify in detail the
identities given in the text for (A + B)^2 and (A - B)^2.
4. Which of the following pairs of vectors are perpendicular?
(a) (1, -1, 1) and (2, 1, 5)
(b) (1, -1, 1) and (2, 3, 1)
(c) (-5, 2, 7) and (3, -1, 2)
(d) (π, 2, 1) and (2, -π, 0)
5. Let A be a vector perpendicular to every vector X. Show that A = O.
I, §4. The Norm of a Vector
We define the norm of a vector A, and denote by ||A||, the number

||A|| = √(A·A).

Since A·A ≥ 0, we can take the square root. The norm is also sometimes
called the magnitude of A.
When n = 2 and A = (a, b), then

||A|| = √(a^2 + b^2),

as in the following picture (Fig. 15).

Figure 15: the norm of A = (a, b) as the hypotenuse of a right triangle with legs a and b.

Example 1. If A = (1, 2), then

||A|| = √(1 + 4) = √5.

If n = 3 and A = (a1, a2, a3), then

||A|| = √(a1^2 + a2^2 + a3^2).

Example 2. If A = (-1, 2, 3), then

||A|| = √(1 + 4 + 9) = √14.

If n = 3, then the picture looks like Fig. 16, with A = (x, y, z).

Figure 16: the point A = (x, y, z), its projection (x, y) on the plane, and the diagonal w.

If we first look at the two components (x, y), then the length of the
segment between (0, 0) and (x, y) is equal to w = √(x^2 + y^2), as indicated.
Then again the norm of A by the Pythagoras theorem would be

||A|| = √(w^2 + z^2) = √(x^2 + y^2 + z^2).

Thus when n = 3, our definition of norm is compatible with the geometry
of the Pythagoras theorem.
In terms of coordinates, A = (a1, ..., an), we see that

||A|| = √(a1^2 + ... + an^2).

If A ≠ O, then ||A|| ≠ 0, because some coordinate ai ≠ 0, so that ai^2 > 0,
and hence a1^2 + ... + an^2 > 0, so ||A|| ≠ 0.
Observe that for any vector A we have

||A|| = ||-A||.

This is due to the fact that

(-A)·(-A) = (-1)^2 A·A = A·A,

because (-1)^2 = 1. Of course, this is as it should be from the picture:

Figure 17: A and -A have the same norm.

Recall that A and -A are said to have opposite direction. However,
they have the same norm (magnitude, as is sometimes said when speaking
of vectors).
Let A, B be two points. We define the distance between A and B to
be

||A - B|| = √((A - B)·(A - B)).

This definition coincides with our geometric intuition when A, B are
points in the plane (Fig. 18). It is the same thing as the length of the
located vector AB or the located vector BA.

Figure 18: the length of the segment from A to B is ||A - B|| = ||B - A||.
Example 3. Let A = (-1, 2) and B = (3, 4). Then the length of the
located vector AB is ||B - A||. But B - A = (4, 2). Thus

||B - A|| = √(16 + 4) = √20.

In the picture, we see that the horizontal side has length 4 and the
vertical side has length 2. Thus our definitions reflect our geometric
intuition derived from Pythagoras.
Figure 19: A = (-1, 2) and B = (3, 4) plotted on coordinate axes.
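A small plain-Python sketch of ours checking the norm and distance formulas against Examples 1 through 3:

```python
# Norm and distance, checked against Examples 1-3.
import math

def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def norm(A):
    """||A|| = sqrt(A·A)."""
    return math.sqrt(dot(A, A))

def distance(A, B):
    """||A - B||, the distance between points A and B."""
    return norm(tuple(a - b for a, b in zip(A, B)))

assert math.isclose(norm((1, 2)), math.sqrt(5))                 # Example 1
assert math.isclose(norm((-1, 2, 3)), math.sqrt(14))            # Example 2
assert math.isclose(distance((-1, 2), (3, 4)), math.sqrt(20))   # Example 3
```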
Let P be a point in the plane, and let a be a number > 0. The set of
points X such that

||X - P|| < a

will be called the open disc of radius a centered at P. The set of points
X such that

||X - P|| ≤ a

will be called the closed disc of radius a and center P. The set of points
X such that

||X - P|| = a

is called the circle of radius a and center P. These are illustrated in
Fig. 20.

Figure 20: a disc and a circle of radius a centered at P.

In 3-dimensional space, the set of points X such that

||X - P|| < a

will be called the open ball of radius a and center P. The set of points
X such that

||X - P|| ≤ a

will be called the closed ball of radius a and center P. The set of points
X such that

||X - P|| = a

will be called the sphere of radius a and center P. In higher dimensional
space, one uses this same terminology of ball and sphere. Figure 21
illustrates a sphere and a ball in 3-space.

Figure 21: a sphere and a ball.

The sphere is the outer shell, and the ball consists of the region inside
the shell. The open ball consists of the region inside the shell excluding
the shell itself. The closed ball consists of the region inside the shell and
the shell itself.
From the geometry of the situation, it is also reasonable to expect
that if c > 0, then ||cA|| = c||A||, i.e. if we stretch a vector A by multiplying
by a positive number c, then the length stretches also by that
amount. We verify this formally using our definition of the length.

Theorem 4.1. Let x be a number. Then

||xA|| = |x| ||A||

(absolute value of x times the norm of A).

Proof. By definition, we have

||xA||^2 = (xA)·(xA),

which is equal to

x^2 (A·A)

by the properties of the scalar product. Taking the square root now
yields what we want.

Let S1 be the sphere of radius 1, centered at the origin. Let a be a
number > 0. If X is a point of the sphere S1, then aX is a point of the
sphere of radius a, because

||aX|| = a||X|| = a.

In this manner, we get all points of the sphere of radius a. (Proof?)
Thus the sphere of radius a is obtained by stretching the sphere of radius
1, through multiplication by a.
A similar remark applies to the open and closed balls of radius a,
they being obtained from the open and closed balls of radius 1 through
multiplication by a.

Figure 22: the disc of radius 1 and the disc of radius a.

We shall say that a vector E is a unit vector if ||E|| = 1. Given any
vector A, let a = ||A||. If a ≠ 0, then

(1/a)A

is a unit vector, because

||(1/a)A|| = (1/a)||A|| = a/a = 1.
We say that two vectors A, B (neither of which is O) have the same
direction if there is a number c > 0 such that cA = B. In view of this
definition, we see that the vector

E = (1/||A||)A

is a unit vector in the direction of A (provided A ≠ O).

Figure 23: A and the unit vector E = (1/||A||)A in its direction.
If E is the unit vector in the direction of A, and ||A|| = a, then

A = aE.

Example 4. Let A = (1, 2, -3). Then ||A|| = √14. Hence the unit
vector in the direction of A is the vector

E = (1/√14, 2/√14, -3/√14).
Warning. There are as many unit vectors as there are directions. The
three standard unit vectors in 3-space, namely
E1 = (1,0,0), E2 = (0, 1, 0), E3 = (0,0, 1)
are merely the three unit vectors in the directions of the coordinate axes.
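A brief plain-Python sketch of ours showing normalization, checked against Example 4:

```python
# E = (1/||A||) A is the unit vector in the direction of A.
import math

def norm(A):
    return math.sqrt(sum(a * a for a in A))

def unit(A):
    """Unit vector in the direction of A (A must be nonzero)."""
    n = norm(A)
    return tuple(a / n for a in A)

A = (1, 2, -3)
E = unit(A)
s = math.sqrt(14)
assert all(math.isclose(e, x) for e, x in zip(E, (1 / s, 2 / s, -3 / s)))
assert math.isclose(norm(E), 1.0)                               # E is a unit vector
assert all(math.isclose(a, norm(A) * e) for a, e in zip(A, E))  # A = aE
```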

We are also in the position to justify our definition of perpendicularity.
Given A, B in the plane, the condition that

||A + B|| = ||A - B||

(illustrated in Fig. 24(b)) coincides with the geometric property that A
should be perpendicular to B.

Figure 24 (a), (b): the norms ||A + B|| and ||A - B||.

We shall prove:
||A + B|| = ||A - B|| if and only if A·B = 0.

Let <=> denote "if and only if". Then

||A + B|| = ||A - B|| <=> ||A + B||^2 = ||A - B||^2
                      <=> A^2 + 2A·B + B^2 = A^2 - 2A·B + B^2
                      <=> 4A·B = 0
                      <=> A·B = 0.

This proves what we wanted.
General Pythagoras theorem. If A and B are perpendicular, then

||A + B||^2 = ||A||^2 + ||B||^2.

The theorem is illustrated on Fig. 25.

Figure 25: the right triangle with sides A, B and hypotenuse A + B.

To prove this, we use the definitions, namely

||A + B||^2 = (A + B)·(A + B) = A^2 + 2A·B + B^2 = ||A||^2 + ||B||^2,

because A·B = 0, and A·A = ||A||^2, B·B = ||B||^2 by definition.
Remark. If A is perpendicular to B, and x is any number, then A is
also perpendicular to xB, because

A·(xB) = x(A·B) = 0.
We shall now use the notion of perpendicularity to derive the notion
of projection. Let A, B be two vectors and B ≠ O. Let P be the point
on the line through OB such that PA is perpendicular to OB, as
shown on Fig. 26(a).

Figure 26: (a) the projection P of A on the line through O and B; (b) the perpendicular vector A - cB.
We can write

P = cB

for some number c. We want to find this number c explicitly in terms of
A and B. The condition PA ⊥ OB means that

A - P is perpendicular to B,

and since P = cB this means that

(A - cB)·B = 0,

in other words,

A·B - cB·B = 0.

We can solve for c, and we find A·B = cB·B, so that

c = A·B / B·B.

Conversely, if we take this value for c, and then use distributivity, dotting
A - cB with B yields 0, so that A - cB is perpendicular to B.
Hence we have seen that there is a unique number c such that A - cB is
perpendicular to B, and c is given by the above formula.

Definition. The component of A along B is the number

c = A·B / B·B.

The projection of A along B is the vector

cB = (A·B / B·B)B.
Example 5. Suppose

B = Ei = (0, ..., 0, 1, 0, ..., 0)

is the i-th unit vector, with 1 in the i-th component and 0 in all other
components. Then the component of A along Ei is

c = A·Ei / Ei·Ei = A·Ei = ai.

Thus A·Ei is the ordinary i-th component of A.
More generally, if B is a unit vector, not necessarily one of the Ei, then
we have simply

c = A·B,

because B·B = 1 by definition of a unit vector.
Example 6. Let A = (1, 2, -3) and B = (1, 1, 2). Then the component
of A along B is the number

c = A·B / B·B = -3/6 = -1/2.

Hence the projection of A along B is the vector

cB = (-1/2, -1/2, -1).
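A short plain-Python sketch of ours of the component and projection formulas, checked against Example 6:

```python
# Component c = (A·B)/(B·B) and projection cB of A along B.

def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def component(A, B):
    """The component of A along B."""
    return dot(A, B) / dot(B, B)

def projection(A, B):
    """The projection cB of A along B."""
    c = component(A, B)
    return tuple(c * b for b in B)

A, B = (1, 2, -3), (1, 1, 2)
assert component(A, B) == -0.5
assert projection(A, B) == (-0.5, -0.5, -1.0)

# A - cB is perpendicular to B, as derived above:
cB = projection(A, B)
assert dot(tuple(a - p for a, p in zip(A, cB)), B) == 0
```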
Our construction gives an immediate geometric interpretation for the
scalar product. Namely, assume A ≠ O and look at the angle θ between
A and B (Fig. 27). Then from plane geometry we see that

cos θ = c||B|| / ||A||,

or, substituting the value for c obtained above,

A·B = ||A|| ||B|| cos θ and cos θ = A·B / (||A|| ||B||).

Figure 27: the angle θ between A and B.
In some treatments of vectors, one takes the relation

A·B = ||A|| ||B|| cos θ

as definition of the scalar product. This is subject to the following
disadvantages, not to say objections:
(a) The four properties of the scalar product SP 1 through SP 4 are
then by no means obvious.
(b) Even in 3-space, one has to rely on geometric intuition to obtain
the cosine of the angle between A and B, and this intuition is
less clear than in the plane. In higher dimensional space, it fails
even more.
(c) It is extremely hard to work with such a definition to obtain
further properties of the scalar product.
Thus we prefer to lay obvious algebraic foundations, and then recover
very simply all the properties. We used plane geometry to see the
expression

A·B = ||A|| ||B|| cos θ.
After working out some examples, we shall prove the inequality which
allows us to justify this in n-space.
Example 7. Let A = (1, 2, -3) and B = (2, 1, 5). Find the cosine of
the angle θ between A and B.
By definition,

cos θ = A·B / (||A|| ||B||) = (2 + 2 - 15) / (√14 √30) = -11 / (√14 √30).
Example 8. Find the cosine of the angle between the two located
vectors PQ and PR, where

P = (1, 2, -3), Q = (-2, 1, 5), R = (1, 1, -4).

The picture looks like this:

Figure 28: the located vectors PQ and PR from the point P.

We let

A = Q - P = (-3, -1, 8) and B = R - P = (0, -1, -1).

Then the angle between PQ and PR is the same as that between A and
B. Hence its cosine is equal to

cos θ = A·B / (||A|| ||B||) = (0 + 1 - 8) / (√74 √2) = -7 / (√74 √2).
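A small plain-Python sketch of ours reproducing the computation of Example 8:

```python
# cos(theta) = A·B / (||A|| ||B||) for the located vectors of Example 8.
import math

def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def norm(A):
    return math.sqrt(dot(A, A))

P, Q, R = (1, 2, -3), (-2, 1, 5), (1, 1, -4)
A = tuple(q - p for p, q in zip(P, Q))   # Q - P = (-3, -1, 8)
B = tuple(r - p for p, r in zip(P, R))   # R - P = (0, -1, -1)

cos_theta = dot(A, B) / (norm(A) * norm(B))
assert math.isclose(cos_theta, -7 / math.sqrt(74 * 2))
theta = math.acos(cos_theta)             # the angle itself, in radians
```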
We shall prove further properties of the norm and scalar product
using our results on perpendicularity. First note a special case. If

Ei = (0, ..., 0, 1, 0, ..., 0)

is the i-th unit vector of R^n, and

A = (a1, ..., an),

then

A·Ei = ai

is the i-th component of A, i.e. the component of A along Ei. We have

|ai| = √(ai^2) ≤ √(a1^2 + ... + an^2) = ||A||,

so that the absolute value of each component of A is at most equal to
the length of A.
We don't have to deal only with the special unit vector as above. Let
E be any unit vector, that is a vector of norm 1. Let c be the component
of A along E. We saw that

c = A·E.

Then A - cE is perpendicular to E, and

A = (A - cE) + cE.

Then A - cE is also perpendicular to cE, and by the Pythagoras
theorem, we find

||A||^2 = ||A - cE||^2 + ||cE||^2 = ||A - cE||^2 + c^2.
Thus we have the inequality c^2 ≤ ||A||^2, and |c| ≤ ||A||.
In the next theorem, we generalize this inequality to a dot product
A·B when B is not necessarily a unit vector.

Theorem 4.2. Let A, B be two vectors in R^n. Then

|A·B| ≤ ||A|| ||B||.

Proof. If B = O, then both sides of the inequality are equal to 0, and
so our assertion is obvious. Suppose that B ≠ O. Let c be the component
of A along B, so c = (A·B)/(B·B). We write

A = (A - cB) + cB.

By Pythagoras,

||A||^2 = ||A - cB||^2 + c^2 ||B||^2.

Hence c^2 ||B||^2 ≤ ||A||^2. But

c^2 ||B||^2 = ((A·B)^2 / (B·B)^2) ||B||^2 = (|A·B|^2 / ||B||^4) ||B||^2 = |A·B|^2 / ||B||^2.

Therefore

|A·B|^2 / ||B||^2 ≤ ||A||^2.

Multiply by ||B||^2 and take the square root to conclude the proof.

In view of Theorem 4.2, we see that for vectors A, B in n-space, the
number

A·B / (||A|| ||B||)

has absolute value ≤ 1. Consequently,

-1 ≤ A·B / (||A|| ||B||) ≤ 1,

and there exists a unique angle θ such that 0 ≤ θ ≤ π, and such that

cos θ = A·B / (||A|| ||B||).

We define this angle to be the angle between A and B.
The inequality of Theorem 4.2 is known as the Schwarz inequality.

Theorem 4.3. Let A, B be vectors. Then

||A + B|| ≤ ||A|| + ||B||.

Proof. Both sides of this inequality are positive or 0. Hence it will
suffice to prove that their squares satisfy the desired inequality, in other
words,

(A + B)·(A + B) ≤ (||A|| + ||B||)^2.

To do this, we consider

(A + B)·(A + B) = A·A + 2A·B + B·B.

In view of our previous result, this satisfies the inequality

||A||^2 + 2||A|| ||B|| + ||B||^2,

and the right-hand side is none other than

(||A|| + ||B||)^2.

Our theorem is proved.

Theorem 4.3 is known as the triangle inequality. The reason for this
is that if we draw a triangle as in Fig. 29, then Theorem 4.3 expresses
the fact that the length of one side is ≤ the sum of the lengths of the
other two sides.

Figure 29: the triangle with vertices O, B, and A + B.

Remark. All the proofs do not use coordinates, only properties SP 1
through SP 4 of the dot product. Hence they remain valid in more
general situations, see Chapter VI. In n-space, they give us inequalities
which are by no means obvious when expressed in terms of coordinates.
For instance, the Schwarz inequality reads, in terms of coordinates:

(a1 b1 + ... + an bn)^2 ≤ (a1^2 + ... + an^2)(b1^2 + ... + bn^2).

Just try to prove this directly, without the "geometric" intuition of
Pythagoras, and see how far you get.

Exercises I, §4

1. Find the norm of the vector A in the following cases.
(a) A = (2, -1), B = (-1, 1)
(b) A = (-1, 3), B = (0, 4)
(c) A = (2, -1, 5), B = (-1, 1, 1)
(d) A = (-1, -2, 3), B = (-1, 3, -4)
(e) A = (π, 3, -1), B = (2π, -3, 7)
(f) A = (15, -2, 4), B = (π, 3, -1)
2. Find the norm of vector B in the above cases.
3. Find the projection of A along B in the above cases.
4. Find the projection of B along A in the above cases.
5. Find the cosine between the following vectors A and B.
(a) A = (1, -2) and B = (5, 3)
(b) A = (-3, 4) and B = (2, -1)
(c) A = (1, -2, 3) and B = (-3, 1, 5)
(d) A = (-2, 1, 4) and B = (-1, -1, 3)
(e) A = (-1, 1, 0) and B = (2, 1, -1)
6. Determine the cosine of the angles of the triangle whose vertices are
(a) (2, -1, 1), (1, -3, -5), (3, -4, -4).
(b) (3, 1, 1), (-1, 2, 1), (2, -2, 5).
7. Let A1, ..., Ar be non-zero vectors which are mutually perpendicular, in other
words Ai·Aj = 0 if i ≠ j. Let c1, ..., cr be numbers such that

c1 A1 + ... + cr Ar = O.

Show that all ci = 0.
8. For any vectors A, B, prove the following relations:
(a) ||A + B||^2 + ||A - B||^2 = 2||A||^2 + 2||B||^2.
(b) ||A + B||^2 = ||A||^2 + ||B||^2 + 2A·B.
(c) ||A + B||^2 - ||A - B||^2 = 4A·B.
Interpret (a) as a "parallelogram law".
9. Show that if θ is the angle between A and B, then

||A - B||^2 = ||A||^2 + ||B||^2 - 2||A|| ||B|| cos θ.

10. Let A, B, C be three non-zero vectors. If A·B = A·C, show by an example
that we do not necessarily have B = C.

I, §5. Parametric Lines

We define the parametric equation or parametric representation of a
straight line passing through a point P in the direction of a vector
A ≠ O to be

X = P + tA,

where t runs through all numbers (Fig. 30).

Figure 30: the line through P in the direction of A.

When we give such a parametric representation, we may think of a bug
starting from a point P at time t = 0, and moving in the direction of A.
At time t, the bug is at the position P + tA. Thus we may interpret
physically the parametric representation as a description of motion, in
which A is interpreted as the velocity of the bug. At a given time t, the
bug is at the point

X(t) = P + tA,

which is called the position of the bug at time t.
This parametric representation is also useful to describe the set of
points lying on the line segment between two given points. Let P, Q be
two points. Then the segment between P and Q consists of all the points

S(t) = P + t(Q - P) with 0 ≤ t ≤ 1.

[...]

CHAPTER II

Matrices and Linear Equations

II, §1. Matrices

Let m, n be two integers ≥ 1. An array of numbers
( a11 a12 a13 ... a1n )
( a21 a22 a23 ... a2n )
( ...               )
( am1 am2 am3 ... amn )

is called a matrix. We can abbreviate the notation for this matrix by
writing it (aij), i = 1, ..., m and j = 1, ..., n. We say that it is an m by n
matrix, or an m × n matrix. The matrix has m rows and n columns. For
instance, the first column is the vertical n-tuple

( a11 )
( a21 )
( ... )
( am1 )

and the second row is (a21, a22, ..., a2n). We call aij the ij-entry or
ij-component of the matrix.
Look back at Chapter I, §1. The example of 7-space taken from economics
gives rise to a 7 × 7 matrix (aij) (i, j = 1, ..., 7), if we define aij to
be the amount spent by the i-th industry on the j-th industry. Thus
keeping the notation of that example, if a25 = 50, this means that the
auto industry bought 50 million dollars worth of stuff from the chemical
industry during the given year.
Example 1. The following is a 2 × 3 matrix:

(  1  1 -2 )
( -1  4 -5 )

It has two rows and three columns. The rows are (1, 1, -2) and
(-1, 4, -5). The columns are

(  1 )   ( 1 )   ( -2 )
( -1 ),  ( 4 ),  ( -5 ).

Thus the rows of a matrix may be viewed as n-tuples, and the columns
may be viewed as vertical m-tuples. A vertical m-tuple is also called a
column vector.

A vector (x1, ..., xn) is a 1 × n matrix. A column vector

( x1 )
( .. )
( xn )

is an n × 1 matrix.
When we write a matrix in the form (aij), then i denotes the row and
j denotes the column. In Example 1, we have for instance

a11 = 1,  a23 = -5.
A single number (a) may be viewed as a 1 × 1 matrix.
Let (aij), i = 1, ..., m and j = 1, ..., n, be a matrix. If m = n, then we
say that it is a square matrix. Thus

[a 2 × 2 matrix and a 3 × 3 matrix, illegible in the scan]

are both square matrices.
We define the zero matrix to be the matrix such that aij = 0 for all
i, j. It looks like this:

( 0 0 0 ... 0 )
( 0 0 0 ... 0 )
( ...        )
( 0 0 0 ... 0 )

We shall write it O. We note that we have met so far with the zero
number, zero vector, and zero matrix.
We shall now define addition of matrices and multiplication of
matrices by numbers.
We define addition of matrices only when they have the same size.
Thus let m, n be fixed integers ≥ 1. Let A = (aij) and B = (bij) be two
m × n matrices. We define A + B to be the matrix whose entry in the
i-th row and j-th column is aij + bij. In other words, we add matrices of
the same size componentwise.

Example 2. Let

A = ( 1 -1  0 )   and   B = ( 5  1 -1 )
    ( 2  3  4 )             ( 2  1 -1 ).

Then

A + B = ( 6  0 -1 )
        ( 4  4  3 ).

If A, B are both 1 × n matrices, i.e. n-tuples, then we note that our
addition of matrices coincides with the addition which we defined in
Chapter I for n-tuples.
If O is the zero matrix, then for any matrix A (of the same size, of
course), we have O + A = A + O = A.
This is trivially verified. We shall now define the multiplication of a
matrix by a number. Let c be a number, and A = (aij) be a matrix. We
define cA to be the matrix whose ij-component is caij. We write

cA = (caij).

Thus we multiply each component of A by c.

Example 3. Let A, B be as in Example 2. Let c = 2. Then

2A = ( 2 -2  0 )   and   2B = ( 10  2 -2 )
     ( 4  6  8 )              (  4  2 -2 ).

We also have

(-1)A = -A = ( -1  1  0 )
             ( -2 -3 -4 ).
In general, for any matrix A = (aij) we let -A (minus A) be the matrix
(-aij). Since we have the relation aij - aij = 0 for numbers, we also get
the relation

A + (-A) = O

for matrices. The matrix -A is also called the additive inverse of A.
We define one more notion related to a matrix. Let A = (aij) be an
m × n matrix. The n × m matrix B = (bji) such that bji = aij is called the
transpose of A, and is also denoted by tA. Taking the transpose of a
matrix amounts to changing rows into columns and vice versa. If A is
the matrix which we wrote down at the beginning of this section, then tA
is the matrix

( a11 a21 a31 ... am1 )
( a12 a22 a32 ... am2 )
( ...                )
( a1n a2n a3n ... amn )

To take a special case:

If A = [a 2 × 3 matrix, illegible in the scan], then tA is the corresponding
3 × 2 matrix.

If A = (2, 1, -4) is a row vector, then

tA = (  2 )
     (  1 )
     ( -4 )

is a column vector.
A matrix A which is equal to its transpose, that is A = tA, is called
symmetric. Such a matrix is necessarily a square matrix.

Remark on notation. I have written the transpose sign on the left,
because in many situations one considers the inverse of a matrix, written
A^{-1}, and then it is easier to write tA^{-1} rather than (A^{-1})^t or (A^t)^{-1},
which are in fact equal. The mathematical community has no consensus
as to where the transpose sign should be placed, on the right or left.
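The notions of this section can be illustrated with NumPy (a library choice of ours, not part of the text). A minimal sketch using the matrix of Example 1:

```python
# Shape, entries, transpose, and a symmetry check.
import numpy as np

A = np.array([[1, 1, -2],
              [-1, 4, -5]])           # the 2 x 3 matrix of Example 1

assert A.shape == (2, 3)              # m rows, n columns
assert A[0, 0] == 1 and A[1, 2] == -5 # a11 = 1, a23 = -5

tA = A.T                              # transpose: rows become columns
assert tA.shape == (3, 2)

S = A @ tA                            # A tA is square and symmetric
assert np.array_equal(S, S.T)
```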
Exercises II, §1

1. Let

A = [a 2 × 3 matrix, illegible in the scan]   and   B = [a 2 × 3 matrix, illegible in the scan].

Find A + B, 3B, -2B, A + 2B, 2A + B, A - B, A - 2B, B - A.
2. Let

A = [a matrix, illegible in the scan]   and   B = [a matrix, illegible in the scan].

Find A + B, 3B, -2B, A + 2B, A - B, B - A.
3. (a) Write down the row vectors and column vectors of the matrices A, B in
Exercise 1.
(b) Write down the row vectors and column vectors of the matrices A, B in
Exercise 2.
4. (a) In Exercise 1, find tA and tB.
(b) In Exercise 2, find tA and tB.
5. If A, B are arbitrary m × n matrices, show that

t(A + B) = tA + tB.

6. If c is a number, show that t(cA) = c(tA).
7. If A = (aij) is a square matrix, then the elements aii are called the diagonal
elements. How do the diagonal elements of A and tA differ?
8. Find t(A + B) and tA + tB in Exercise 2.
9. Find A + tA and B + tB in Exercise 2.
10. (a) Show that for any square matrix, the matrix A + tA is symmetric.
(b) Define a matrix A to be skew-symmetric if tA = -A. Show that for any
square matrix A, the matrix A - tA is skew-symmetric.
(c) If a matrix is skew-symmetric, what can you say about its diagonal
elements?
11. Let

E1 = (1, 0, ..., 0), E2 = (0, 1, 0, ..., 0), ..., En = (0, ..., 0, 1)

be the standard unit vectors of R^n. Let x1, ..., xn be numbers. What is
x1 E1 + ... + xn En? Show that if

x1 E1 + ... + xn En = O,

then xi = 0 for all i.
II, §2. Multiplication of Matrices

We shall now define the product of matrices. Let A = (aij), i = 1, ..., m
and j = 1, ..., n, be an m × n matrix. Let B = (bjk), j = 1, ..., n and
k = 1, ..., s, be an n × s matrix:

A = ( a11 ... a1n )        B = ( b11 ... b1s )
    ( ...        )             ( ...        )
    ( am1 ... amn ),           ( bn1 ... bns ).

We define the product AB to be the m × s matrix whose ik-coordinate is

Σ (j = 1 to n) aij bjk = ai1 b1k + ai2 b2k + ... + ain bnk.

If A1, ..., Am are the row vectors of the matrix A, and if B^1, ..., B^s are the
column vectors of the matrix B, then the ik-coordinate of the product
AB is equal to Ai·B^k. Thus

AB = ( A1·B^1 ... A1·B^s )
     ( ...              )
     ( Am·B^1 ... Am·B^s ).
Multiplication of matrices is therefore a generalization of the dot
product.

Example. Let

A = ( 2 1 5 )      B = (  3 4 )
    ( 1 3 2 ),         ( -1 2 )
                       (  2 1 ).

Then AB is a 2 × 2 matrix, and computations show that

AB = ( 15 15 )
     (  4 12 ).
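The definition translates into code directly: the ik-entry of AB is the dot product of the i-th row of A with the k-th column of B. A minimal plain-Python sketch of ours, checked against the example as reconstructed above:

```python
# Matrix product from the definition: (AB)_ik = sum_j A_ij B_jk.

def matmul(A, B):
    """Product of an m x n matrix A and an n x s matrix B (lists of rows)."""
    m, n, s = len(A), len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(n))
             for k in range(s)]
            for i in range(m)]

A = [[2, 1, 5],
     [1, 3, 2]]
B = [[3, 4],
     [-1, 2],
     [2, 1]]
assert matmul(A, B) == [[15, 15], [4, 12]]
```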
Example. Let C be a 2 × 2 matrix (its entries are illegible in the scan),
and let A, B be as in the preceding example. Then BC is a 3 × 2 matrix,
and one can form the 2 × 2 matrix A(BC). Compute (AB)C and A(BC).
What do you find?

If X = (x1, ..., xm) is a row vector, i.e. a 1 × m matrix, then we can
form the product XA, which looks like this:

                ( a11 ... a1n )
(x1, ..., xm)   ( ...        )   =  (y1, ..., yn),
                ( am1 ... amn )

where

yj = x1 a1j + ... + xm amj.

In this case, XA is a 1 × n matrix, i.e. a row vector.

On the other hand, if X is a column vector,

X = ( x1 )
    ( .. )
    ( xn ),

then AX = Y, where Y is also a column vector, whose coordinates are
given by

yi = Σ (j = 1 to n) aij xj = ai1 x1 + ... + ain xn.

Visually, the multiplication AX = Y looks like

( a11 ... a1n ) ( x1 )   ( y1 )
( ...        ) ( .. )  = ( .. )
( am1 ... amn ) ( xn )   ( ym ).
Example. Linear equations. Matrices give a convenient way of writing
linear equations. You should already have considered systems of linear
equations. For instance, one equation like:

3x - 2y + 3z = 1,

with three unknowns x, y, z. Or a system of two equations in three
unknowns:

(*)    3x - 2y + 3z = 1,
       -x + 7y - 4z = -5.

In this example we let the matrix of coefficients be

A = (  3 -2  3 )
    ( -1  7 -4 ).

Let B be the column vector of the numbers appearing on the right-hand
side, so

B = (  1 )
    ( -5 ).

Let the vector of unknowns be the column vector

X = ( x )
    ( y )
    ( z ).

Then you can see that the system of two simultaneous equations can be
written in the form

AX = B.
Example. The first equation of (*) represents the equality of the first
components of AX and B, whereas the second equation of (*) represents
the equality of the second components of AX and B.
In general, let A = (aij) be an m × n matrix, and let B be a column
vector of size m. Let

X = ( x1 )
    ( x2 )
    ( .. )
    ( xn )

be a column vector of size n. Then the system of linear equations

a11 x1 + ... + a1n xn = b1,
a21 x1 + ... + a2n xn = b2,
...
am1 x1 + ... + amn xn = bm

can be written in the more efficient way

AX = B,

by the definition of multiplication of matrices. We shall see later how to
solve such systems. We say that there are m equations and n unknowns,
or n variables.
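A small NumPy sketch of ours (the library is an assumed choice) showing that the product AX reproduces the left-hand sides of the two equations of (*); the values of the unknowns are hypothetical:

```python
# Writing the system (*) as AX = B.
import numpy as np

A = np.array([[3, -2, 3],
              [-1, 7, -4]])     # matrix of coefficients
B = np.array([1, -5])           # right-hand side

x, y, z = 1.0, 2.0, 3.0         # hypothetical values of the unknowns
X = np.array([x, y, z])
lhs = A @ X
# Each component of AX is exactly one equation's left-hand side:
assert np.isclose(lhs[0], 3*x - 2*y + 3*z)
assert np.isclose(lhs[1], -x + 7*y - 4*z)
# X solves the system precisely when AX equals B componentwise.
print(np.array_equal(A @ X, B))
```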
Example. Markov matrices. A matrix can often be used to represent
a practical situation. Suppose we deal with three cities, say Los Angeles,
Chicago, and Boston, denoted by LA, Ch, and Bo. Suppose that in any
given year, some people leave each one of these cities to go to one of the
others. The percentages of people leaving and going are given as follows,
for each year:

1/4 of LA goes to Bo and 1/7 of LA goes to Ch.
1/5 of Ch goes to LA and 1/3 of Ch goes to Bo.
1/6 of Bo goes to LA and 1/8 of Bo goes to Ch.

Let xn, yn, zn be the populations of LA, Ch, and Bo, respectively, in the
n-th year. Then we can express the population in the (n + 1)-th year as
follows.
In the (n + 1)-th year, 1/4 of the LA population leaves for Boston, and
1/7 leaves for Chicago. The total fraction leaving LA during the year is
therefore

1/4 + 1/7 = 11/28.

Hence the total fraction remaining in LA is

1 - 11/28 = 17/28.

Hence the population in LA for the (n + 1)-th year is

x_{n+1} = (17/28)xn + (1/5)yn + (1/6)zn.

Similarly, the fraction leaving Chicago each year is

1/5 + 1/3 = 8/15,

so the fraction remaining is 7/15. Finally, the fraction leaving Boston
each year is

1/6 + 1/8 = 7/24,

so the fraction remaining in Boston is 17/24. Thus

y_{n+1} = (1/7)xn + (7/15)yn + (1/8)zn,
z_{n+1} = (1/4)xn + (1/3)yn + (17/24)zn.

Let A be the matrix

A = ( 17/28  1/5   1/6   )
    ( 1/7    7/15  1/8   )
    ( 1/4    1/3   17/24 ).

Then we can write down more simply the population shift by the
expression

X_{n+1} = A Xn,

where

Xn = ( xn )
     ( yn )
     ( zn ).
The change from Xn to X_{n+1} is called a Markov process. This is due to
the special property of the matrix A, all of whose components are ≥ 0,
and such that the sum of all the elements in each column is equal to 1.
Such a matrix is called a Markov matrix.
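A short NumPy sketch of ours of the population shift X_{n+1} = A Xn with the Markov matrix above; the initial populations are hypothetical:

```python
# One year of migration at a time: X_{n+1} = A X_n.
import numpy as np

A = np.array([[17/28, 1/5,  1/6],
              [1/7,   7/15, 1/8],
              [1/4,   1/3,  17/24]])

# Columns sum to 1 and entries are nonnegative: a Markov matrix.
assert np.allclose(A.sum(axis=0), 1.0)

X = np.array([100.0, 80.0, 60.0])   # hypothetical LA, Ch, Bo populations
for _ in range(5):
    X = A @ X                       # advance one year
print(X)
assert np.isclose(X.sum(), 240.0)   # total population is preserved
```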
If A is a square matrix, then we can form the product AA, which will
be a square matrix of the same size as A. It is denoted by A^2. Similarly,
we can form A^3, A^4, and in general, A^n for any positive integer n. Thus
A^n is the product of A with itself n times.
We can define the unit n × n matrix to be the matrix having diagonal
components all equal to 1, and all other components equal to 0. Thus
the unit n × n matrix, denoted by In, looks like this:

In = ( 1 0 0 ... 0 )
     ( 0 1 0 ... 0 )
     ( 0 0 1 ... 0 )
     ( ...         )
     ( 0 0 0 ... 1 ).

We can then define A^0 = I (the unit matrix of the same size as A). Note
that for any two integers r, s ≥ 0 we have the usual relation

A^{r+s} = A^r A^s.

For example, in the Markov process described above, we may express
the population vector in the (n + 1)-th year as

X_{n+1} = A^n X1,

where X1 is the population vector in the first year.
Warning. It is not always true that AB = BA. For instance, compute
AB and BA in the following cases:

A = [a 2 × 2 matrix]   B = [a 2 × 2 matrix]   (entries illegible in the scan).

You will find two different values. This is expressed by saying that
multiplication of matrices is not necessarily commutative. Of course, in some
special cases, we do have AB = BA. For instance, powers of A commute,
i.e. we have A^r A^s = A^s A^r, as already pointed out above.

Distributive law. Let A, B, C be matrices. Assume that A, B can be
multiplied, and A, C can be multiplied, and B, C can be added. Then A,
B + C can be multiplied, and we have

A(B + C) = AB + AC.

If x is a number, then

A(xB) = x(AB).

Proof. Let Ai be the i-th row of A and let B^k, C^k be the k-th column
of B and C, respectively. Then B^k + C^k is the k-th column of B + C.
By definition, the ik-component of A(B + C) is Ai·(B^k + C^k). Since

Ai·(B^k + C^k) = Ai·B^k + Ai·C^k,

our first assertion follows. As for the second, observe that the k-th
column of xB is xB^k. Since

Ai·(xB^k) = x(Ai·B^k),

our second assertion follows.
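Both this law and the associative law proved next are easy to spot-check numerically. A short NumPy sketch of ours, with randomly chosen integer matrices of compatible sizes:

```python
# Numeric checks of A(B + C) = AB + AC, A(xB) = x(AB), and (AB)C = A(BC).
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))
C = rng.integers(-5, 5, size=(3, 4))
x = 7

assert np.array_equal(A @ (B + C), A @ B + A @ C)   # distributive law
assert np.array_equal(A @ (x * B), x * (A @ B))

D = rng.integers(-5, 5, size=(4, 2))
assert np.array_equal((A @ B) @ D, A @ (B @ D))     # associative law
```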
Associative law. Let A, B, C be matrices such that A, B can be multi-
plied and B, C can be multiplied. Then A, BC can be multiplied. So
can AB, C, and we have
(AB)C = A(BC).
Proof Let A = (a ij) be an m x n matrix, let B = (b jk ) be an n x r
matrix, and let C = (C kl ) be an r x s matrix. The product AB is an m x r
matrix, whose ik-component is equal to the sum
We shall abbreviate this sum using our L notation by writing
n
L aijb jk ·
j= 1
By definition, the ii-component of (AB)C is equal to

The sum on the right can also be described as the sum of all terms

a_ij b_jk c_kl,

where j, k range over all integers 1 ≤ j ≤ n and 1 ≤ k ≤ r. If we compute the il-component of A(BC) in the same way, we obtain the sum of precisely the same terms, so the two are equal. This proves the associative law.

Let θ be a number. Rotation by the angle θ is represented by the matrix

R(θ) = (cos θ  -sin θ)
       (sin θ   cos θ)

Indeed, if X = t(cos φ, sin φ) is a unit vector making an angle φ with the x-axis, then the addition formulas for sine and cosine show that

R(θ)X = t(cos(θ + φ), sin(θ + φ)),

which is X rotated by the angle θ. Since

R(θ)(rX) = rR(θ)X,

we see that multiplication by R(θ) also has the effect of rotating rX by an angle θ. Thus rotation by an angle θ can be represented by the matrix R(θ).

R(θ)X = t(cos(θ + φ), sin(θ + φ))
X = t(cos φ, sin φ)
Figure 1

Note that for typographical reasons we have written the vector tX horizontally, but have put a little t as an upper left superscript to denote transpose, so X is a column vector.

Example. The matrix corresponding to rotation by an angle of π/3 is given by

R(π/3) = (cos π/3  -sin π/3) = (1/2   -√3/2)
         (sin π/3   cos π/3)   (√3/2   1/2 )

Example. Let X = t(2, 5). If you rotate X by an angle of π/3, find the coordinates of the rotated vector.
These coordinates are:

R(π/3)X = (1/2   -√3/2) (2) = (1 - 5√3/2)
          (√3/2   1/2 ) (5)   (√3 + 5/2 )
Warning. Note how we multiply the column vector on the left with the matrix R(θ). If you want to work with row vectors, then take the transpose and verify directly that

(2, 5) ( 1/2   √3/2) = (1 - 5√3/2, √3 + 5/2).
       (-√3/2  1/2 )

So the matrix R(θ) gets transposed. The minus sign is now in the lower left-hand corner.
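A floating-point check of this example (a sketch; numpy assumed):

import numpy as np

def R(theta):
    """Rotation matrix: multiplies column vectors on the left."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

X = np.array([2.0, 5.0])
Y = R(np.pi / 3) @ X

# Compare with the coordinates computed above.
expected = np.array([1 - 5 * np.sqrt(3) / 2, np.sqrt(3) + 5 / 2])
assert np.allclose(Y, expected)

# Rotation preserves length (Exercise 32): ||R(theta)X|| = ||X||.
assert np.isclose(np.linalg.norm(Y), np.linalg.norm(X))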
Exercises II, §2
The following exercises give mostly routine practice in the multiplication of matrices. However, they also illustrate some more theoretical aspects of this multiplication. Therefore they should all be worked out. Specifically:
Exercises 7 through 12 illustrate multiplication by the standard unit vectors.
Exercises 14 through 19 illustrate multiplication of triangular matrices.
Exercises 24 through 27 illustrate how addition of numbers is transformed into multiplication of matrices.
Exercises 27 through 32 illustrate rotations.
Exercises 33 through 37 illustrate elementary matrices, and should be worked out before studying §5.
1. Let I be the unit n × n matrix. Let A be an n × r matrix. What is IA? If A is an m × n matrix, what is AI?
2. Let O be the matrix all of whose coordinates are 0. Let A be a matrix of a size such that the product AO is defined. What is AO?

3. In each one of the following cases, find (AB)C and A(BC).
(a)-(c) [matrices illegible in this copy]
4. Let A, B be square matrices of the same size, and assume that AB = BA. Show that

(A + B)^2 = A^2 + 2AB + B^2   and   (A + B)(A - B) = A^2 - B^2,

using the distributive law.
5. Let A, B be the matrices as shown [illegible in this copy]. Find AB and BA.
6. Let

C = (7  0)
    (0  7)

Let A, B be as in Exercise 5. Find CA, AC, CB, and BC. State the general rule including this exercise as a special case.
7. Let X = (1, 0, 0) and let
A = [3 × 3 matrix illegible in this copy]
What is XA?
8. Let X = (0, 1, 0), and let A be an arbitrary 3 × 3 matrix. How would you describe XA? What if X = (0, 0, 1)? Generalize to similar statements concerning n × n matrices and their products with unit vectors.
9. Let
A = [matrix illegible in this copy]
Find AX for each of the following values of X.
(a)-(c) [column vectors illegible in this copy]

10. Let
A = [matrix illegible in this copy]
Find AX for each of the values of X given in Exercise 9.
11. Let A and X be as shown [illegible in this copy]. What is AX?
12. Let X be a column vector having all its components equal to 0 except the
j-th component which is equal to 1. Let A be an arbitrary matrix, whose size
is such that we can form the product AX. What is AX?
13. Let X be the indicated column vector, and A the indicated matrix. Find AX as a column vector.
(a)-(d) [matrices and vectors illegible in this copy]
14. Let A = [2 × 2 matrix illegible in this copy]. Find the product AS for each one of the following matrices S. Describe in words the effect on A of this product.
(a)-(b) [matrices illegible in this copy]
15. Let A be as in Exercise 14. Find the product SA for each one of the following matrices S. Describe in words the effect of this product on A.
(a)-(b) [matrices illegible in this copy]
16. (a) Let A be the matrix
[3 × 3 matrix illegible in this copy]
Find A^2, A^3. Generalize to 4 × 4 matrices.

(b) Let A be the matrix
[3 × 3 matrix illegible in this copy]
Compute A^2, A^3, A^4.
17. Let
A = [matrix illegible in this copy]
Find A^2, A^3, A^4.
18. Let A be a diagonal matrix, with diagonal elements a_1, ..., a_n. What is A^2, A^3, A^k for any positive integer k?
19. Let
A = [matrix illegible in this copy]
Find A^3.
20. (a) Find a 2 × 2 matrix A such that

A^2 = -I = (-1   0)
           ( 0  -1)

(b) Determine all 2 × 2 matrices A such that A^2 = O.
21. Let A be a square matrix.
(a) If A^2 = O, show that I - A is invertible.
(b) If A^3 = O, show that I - A is invertible.
(c) In general, if A^n = O for some positive integer n, show that I - A is invertible. [Hint: Think of the geometric series.]
(d) Suppose that A^2 + 2A + I = O. Show that A is invertible.
(e) Suppose that A^3 - A + I = O. Show that A is invertible.
22. Let A, B be two square matrices of the same size. We say that A is similar to B if there exists an invertible matrix T such that B = TAT^{-1}. Suppose this is the case. Prove:
(a) B is similar to A.
(b) A is invertible if and only if B is invertible.
(c) tA is similar to tB.
(d) Suppose A^n = O and B is an invertible matrix of the same size as A. Show that (BAB^{-1})^n = O.
23. Let A be a square matrix which is of the form

(a_11   *   ...    *  )
(  0   a_22 ...    *  )
(.....................)
(  0   ...   0   a_nn )

The notation means that all elements below the diagonal are equal to 0, and the elements above the diagonal are arbitrary. One may express this property by saying that

a_ij = 0   if i > j.

Such a matrix is called upper triangular. If A, B are upper triangular matrices (of the same size), what can you say about the diagonal elements of AB?
Exercises 24 through 27 give examples where addition of numbers is transformed into multiplication of matrices.
24. Let a, b be numbers, and let

A = (1  a)    and    B = (1  b)
    (0  1)               (0  1)

What is AB? What is A^2, A^3? What is A^n where n is a positive integer?
25. Show that the matrix A in Exercise 24 has an inverse. What is this inverse?
26. Show that if A, B are n × n matrices which have inverses, then AB has an inverse.
27. Rotations. Let R(θ) be the matrix given by

R(θ) = (cos θ  -sin θ)
       (sin θ   cos θ)

(a) Show that for any two numbers θ_1, θ_2 we have

R(θ_1 + θ_2) = R(θ_1)R(θ_2).

[You will have to use the addition formulas for sine and cosine.]
(b) Show that the matrix R(θ) has an inverse, and write down this inverse.
(c) Let A = R(θ). Show that

A^2 = (cos 2θ  -sin 2θ)
      (sin 2θ   cos 2θ)

(d) Determine A^n for any positive integer n. Use induction.
28. Find the matrix R(θ) associated with the rotation for each of the following values of θ.
(a) π/2 (b) π/4 (c) π (d) -π (e) -π/3 (f) π/6 (g) 5π/4
29. In general, let θ > 0. What is the matrix associated with the rotation by an angle -θ (i.e. clockwise rotation by θ)?

30. Let X = t(1, 2) be a point of the plane. If you rotate X by an angle of π/4, what are the coordinates of the new point?
31. Same question when X = t(-1, 3) and the rotation is by an angle of π/2.
32. For any vector X in R2 let Y = R(θ)X be its rotation by an angle θ. Show that ||Y|| = ||X||.
The following exercises on elementary matrices should be done before studying §5.
33. Elementary matrices. Let
A = [matrix illegible in this copy]
Let U be the matrix as shown. In each case find UA.
[the matrices U are illegible in this copy]
34. Let E be the matrix as shown. Find EA where A is the same matrix as in the preceding exercise.
(a)-(d) [matrices illegible in this copy]

35. Let E be the matrix as shown. Find EA where A is the same matrix as in the preceding exercise and Exercise 33.
(a)-(d) [matrices illegible in this copy]
36.
Let A = (a_ij) be an m × n matrix,

    (a_11 ... a_1n)
A = (.............)
    (a_m1 ... a_mn)

Let 1 ≤ r ≤ m and 1 ≤ s ≤ m. Let I_rs be the matrix whose rs-component is 1 and such that all other components are equal to 0.
(a) What is I_rs A?
(b) Suppose r ≠ s. What is (I_rs + I_sr)A?
(c) Suppose r ≠ s. Let I_jj be the matrix whose jj-component is 1 and such that all other components are 0. Let

E_rs = I_rs + I_sr + sum of all I_jj for j ≠ r, j ≠ s.

What is E_rs A?
37. Again let r ≠ s.
(a) Let E = I + 3I_rs. What is EA?
(b) Let c be any number. Let E = I + cI_rs. What is EA?
The rest of the chapter will be mostly concerned with linear equations, and especially homogeneous ones. We shall find three ways of interpreting such equations, illustrating three different ways of thinking about matrices and vectors.
II, §3. Homogeneous Linear Equations and Elimination
In this section, we look at linear equations by one method of elimina-
tion. In the next section, we shall discuss another method.
We shall be interested in the case when the number of unknowns is
greater than the number of equations, and we shall see that in that case,
there always exists a non-trivial solution.
Before dealing with the general case, we shall study examples.

Example 1. Suppose that we have a single equation, like

2x + y - 4z = 0.

We wish to find a solution with not all of x, y, z equal to 0. An equivalent equation is

2x = -y + 4z.

To find a non-trivial solution, we give all the variables except the first a special value ≠ 0, say y = 1, z = 1. We then solve for x. We find

2x = -y + 4z = 3,

whence x = 3/2.
Example 2. Consider a pair of equations, say

(1) 2x + 3y - z = 0,
(2) x + y + z = 0.

We reduce the problem of solving these simultaneous equations to the preceding case of one equation, by eliminating one variable. Thus we multiply the second equation by 2 and subtract it from the first equation, getting

(3) y - 3z = 0.

Now we meet one equation in more than one variable. We give z any value ≠ 0, say z = 1, and solve for y, namely y = 3. We then solve for x from the second equation, namely x = -y - z, and obtain x = -4. The values which we have obtained for x, y, z are also solutions of the first equation, because the first equation is (in an obvious sense) the sum of equation (2) multiplied by 2, and equation (3).
Example 3. We wish to find a solution for the system of equations

3x - 2y + z + 2w = 0,
x + y - z - w = 0,
2x - 2y + 3z = 0.

Again we use the elimination method. Multiply the second equation by 2 and subtract it from the third. We find

-4y + 5z + 2w = 0.

Multiply the second equation by 3 and subtract it from the first. We find

-5y + 4z + 5w = 0.

We have now eliminated x from our equations, and find two equations in three unknowns, y, z, w. We eliminate y from these two equations as follows: Multiply the top one by 5, multiply the bottom one by 4, and subtract them. We get

9z - 10w = 0.

Now give an arbitrary value ≠ 0 to w, say w = 1. Then we can solve for z, namely

z = 10/9.

Going back to the equations before that, we solve for y, using

4y = 5z + 2w.

This yields

y = 17/9.

Finally we solve for x using say the second of the original set of three equations, so that

x = -y + z + w,

or numerically,

x = 2/9.

Thus we have found:

w = 1,  z = 10/9,  y = 17/9,  x = 2/9.

Note that we had three equations in four unknowns. By a successive elimination of variables, we reduced these equations to two equations in three unknowns, and then one equation in two unknowns.
Using precisely the same method, suppose that we start with three
equations in five unknowns. Eliminating one variable will yield two
equations in four unknowns. Eliminating another variable will yield one
equation in three unknowns. We can then solve this equation, and pro-
ceed backwards to get values for the previous variables just as we have
shown in the examples.

In general, suppose that we start with m equations in n unknowns, and n > m. We eliminate one of the variables, say x_1, and obtain a system of m - 1 equations in n - 1 unknowns. We eliminate a second variable, say x_2, and obtain a system of m - 2 equations in n - 2 unknowns. Proceeding stepwise, we eliminate m - 1 variables, ending up with 1 equation in n - m + 1 unknowns. We then give non-trivial arbitrary values to all the remaining variables but one, solve for this last variable, and then proceed backwards to solve successively for each one of the eliminated variables as we did in our examples. Thus we have an effective way of finding a non-trivial solution for the original system.
We shall phrase this in terms of induction in a precise manner.
Let A = (a_ij), i = 1, ..., m and j = 1, ..., n, be a matrix. Let b_1, ..., b_m be numbers. Equations like

(*)    a_11 x_1 + ... + a_1n x_n = b_1,
       ................................
       a_m1 x_1 + ... + a_mn x_n = b_m

are called linear equations. We also say that (*) is a system of linear equations. The system is said to be homogeneous if all the numbers b_1, ..., b_m are equal to 0. The number n is called the number of unknowns, and m is the number of equations.
The system of equations

(**)   a_11 x_1 + ... + a_1n x_n = 0,
       ................................
       a_m1 x_1 + ... + a_mn x_n = 0

will be called the homogeneous system associated with (*). In this section, we study the homogeneous system (**).
The system (**) always has a solution, namely the solution obtained by letting all x_i = 0. This solution will be called the trivial solution. A solution (x_1, ..., x_n) such that some x_i is ≠ 0 is called non-trivial.
Consider our system of homogeneous equations (**). Let A_1, ..., A_m be the row vectors of the matrix (a_ij). Then we can rewrite our equations (**) in the form

A_1 · X = 0,
...
A_m · X = 0.

Therefore the set of solutions of the system of linear equations can be interpreted as the set of all n-tuples X which are perpendicular to the row vectors of the matrix A. Geometrically, to find a solution of (**) amounts to finding a vector X which is perpendicular to A_1, ..., A_m. Using the notation of the dot product will make it easier to formulate the proof of our main theorem, namely:

Theorem 3.1. Let

a_11 x_1 + ... + a_1n x_n = 0,
................................
a_m1 x_1 + ... + a_mn x_n = 0

be a system of m linear equations in n unknowns, and assume that n > m. Then the system has a non-trivial solution.
Proof. The proof will be carried out by induction.
Consider first the case of one equation in n unknowns, n > 1:

a_1 x_1 + ... + a_n x_n = 0.

If all coefficients a_1, ..., a_n are equal to 0, then any value of the variables will be a solution, and a non-trivial solution certainly exists. Suppose that some coefficient a_i is ≠ 0. After renumbering the variables and the coefficients, we may assume that it is a_1. Then we give x_2, ..., x_n arbitrary values, for instance we let x_2 = ... = x_n = 1, and solve for x_1, letting

x_1 = -(a_2 + ... + a_n)/a_1.

In that manner, we obtain a non-trivial solution for our system of equations.
Let us now assume that our theorem is true for a system of m - 1 equations in more than m - 1 unknowns. We shall prove that it is true for m equations in n unknowns when n > m. We consider the system (**).
If all coefficients (a_ij) are equal to 0, we can give any non-zero value to our variables to get a solution. If some coefficient is not equal to 0, then after renumbering the equations and the variables, we may assume that it is a_11. We shall subtract a multiple of the first equation from the others to eliminate x_1. Namely, we consider the system of equations

(***)   A_2 · X - (a_21/a_11) A_1 · X = 0,
        ..................................
        A_m · X - (a_m1/a_11) A_1 · X = 0,

which can also be written in the form

(A_i - (a_i1/a_11) A_1) · X = 0,   i = 2, ..., m.

In this system, the coefficient of x_1 is equal to 0. Hence we may view (***) as a system of m - 1 equations in n - 1 unknowns, and we have

n - 1 > m - 1.

According to our assumption, we can find a non-trivial solution (x_2, ..., x_n) for this system. We can then solve for x_1 in the first equation, namely

x_1 = -(1/a_11)(a_12 x_2 + ... + a_1n x_n).

In that way, we find a solution of A_1 · X = 0. But according to (***), we have

A_i · X = (a_i1/a_11) A_1 · X

for i = 2, ..., m. Hence A_i · X = 0 for i = 2, ..., m, and therefore we have found a non-trivial solution to our original system (**).
The argument we have just given allows us to proceed stepwise from one equation to two equations, then from two to three, and so forth. This concludes the proof.
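As a concrete check of the method (and of the numbers found in Example 3 of this section), a short sketch with exact fractions:

from fractions import Fraction as F

# The system of Example 3: rows of coefficients for (x, y, z, w).
rows = [[F(3), F(-2), F(1), F(2)],
        [F(1), F(1), F(-1), F(-1)],
        [F(2), F(-2), F(3), F(0)]]

# The non-trivial solution found in the text.
x, y, z, w = F(2, 9), F(17, 9), F(10, 9), F(1)

for a in rows:
    assert a[0]*x + a[1]*y + a[2]*z + a[3]*w == 0
print("non-trivial solution:", (x, y, z, w))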
Exercises II, §3
1. Let

E_1 = (1, 0, ..., 0), E_2 = (0, 1, 0, ..., 0), ..., E_n = (0, ..., 0, 1)

be the standard unit vectors of Rn. Let X be an n-tuple. If X · E_i = 0 for all i, show that X = O.
2. Let A_1, ..., A_m be vectors in Rn. Let X, Y be solutions of the system of equations

X · A_i = 0   and   Y · A_i = 0   for i = 1, ..., m.

Show that X + Y is also a solution. If c is a number, show that cX is a solution.
3. In Exercise 2, suppose that X is perpendicular to each one of the vectors A_1, ..., A_m. Let c_1, ..., c_m be numbers. A vector

c_1 A_1 + ... + c_m A_m

is called a linear combination of A_1, ..., A_m. Show that X is perpendicular to such a vector.

4. Consider the inhomogeneous system (*) consisting of all X such that X · A_i = b_i for i = 1, ..., m. If X and X′ are two solutions of this system, show that there exists a solution Y of the homogeneous system (**) such that X′ = X + Y. Conversely, if X is any solution of (*), and Y a solution of (**), show that X + Y is a solution of (*).
5. Find at least one non-trivial solution for each one of the following systems of equations. Since there are many choices involved, we don't give answers.
(a) 3x + y + z = 0
(b) 3x + y + z = 0
    x + y + z = 0
(c) 2x - 3y + 4z = 0
    3x + y + z = 0
(d) 2x + y + 4z + w = 0
    -3x + 2y - 3z + w = 0
    x + y + z = 0
(e) -x + 2y - 4z + w = 0
    x + 3y + z - w = 0
(f) -2x + 3y + z + 4w = 0
    x + y + 2z + 3w = 0
    2x + y + z - 2w = 0
6. Show that the only solutions of the following systems of equations are trivial.
(a) 2x + 3y = 0
    x - y = 0
(b) 4x + 5y = 0
    -6x + 7y = 0
(c) 3x + 4y - 2z = 0
    x + y + z = 0
    -x - 3y + 5z = 0
(d) 4x - 7y + 3z = 0
    x + y = 0
    y - 6z = 0
(e) 7x - 2y + 5z + w = 0
    x - y + z = 0
    y - 2z + w = 0
    x + z + w = 0
(f) -3x + y + z = 0
    x - y + z - 2w = 0
    x - z + w = 0
    -x + y - 3w = 0
II, §4. Row Operations and Gauss Elimination
Consider the system of linear equations

3x - 2y + z + 2w = 1,
x + y - z - w = -2,
2x - y + 3z = 4.

The matrix of coefficients is

(3  -2   1   2)
(1   1  -1  -1)
(2  -1   3   0)

By the augmented matrix we shall mean the matrix obtained by inserting the column

( 1)
(-2)
( 4)

as a last column, so the augmented matrix is

(3  -2   1   2   1)
(1   1  -1  -1  -2)
(2  -1   3   0   4)

In general, let AX = B be a system of m linear equations in n unknowns, which we write in full:

a_11 x_1 + ... + a_1n x_n = b_1,
................................
a_m1 x_1 + ... + a_mn x_n = b_m.

Then we define the augmented matrix to be the m by n + 1 matrix:

(a_11 ... a_1n  b_1)
(..................)
(a_m1 ... a_mn  b_m)
In the examples of homogeneous linear equations of the preceding
section, you will notice that we performed the following operations,
called elementary row operations:
Multiply one equation by a non-zero number.
Add one equation to another.
Interchange two equations.
These operations are reflected in operations on the augmented matrix of
coefficients, which are also called elementary row operations:
Multiply one row by a non-zero number.
Add one row to another.
Interchange two rows.
Suppose that a system of linear equations is changed by an elementary row operation. Then the solutions of the new system are exactly the same as the solutions of the old system. By making row operations, we can hope to simplify the shape of the system so that it is easier to find the solutions.
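The three row operations translate directly into code; a minimal sketch on a matrix stored as a list of rows (the function names are ad hoc, not from the text):

from fractions import Fraction as F

def scale(M, r, c):
    """Multiply row r by a non-zero number c."""
    assert c != 0
    M[r] = [c * a for a in M[r]]

def add_multiple(M, r, s, c=1):
    """Add c times row s to row r."""
    M[r] = [a + c * b for a, b in zip(M[r], M[s])]

def swap(M, r, s):
    """Interchange rows r and s."""
    M[r], M[s] = M[s], M[r]

# The augmented matrix of the system above (rows indexed from 0).
M = [[F(3), F(-2), F(1), F(2), F(1)],
     [F(1), F(1), F(-1), F(-1), F(-2)],
     [F(2), F(-1), F(3), F(0), F(4)]]
add_multiple(M, 0, 1, -3)   # subtract 3 times the second row from the first
print(M[0])                  # [0, -5, 4, 5, 7]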
Let us define two matrices to be row equivalent if one can be obtained from the other by a succession of elementary row operations. If A is the matrix of coefficients of a system of linear equations, and B the column vector as above, so that

(A, B)

is the augmented matrix, and if (A′, B′) is row equivalent to (A, B), then the solutions of the system

AX = B

are the same as the solutions of the system

A′X = B′.

To obtain an equivalent system (A′, B′) as simple as possible, we use a method which we first illustrate in a concrete case.
Example. Consider the augmented matrix in the above example. We have the following row equivalences:

(3  -2   1   2   1)
(1   1  -1  -1  -2)
(2  -1   3   0   4)

Subtract 3 times second row from first row:

(0  -5   4   5   7)
(1   1  -1  -1  -2)
(2  -1   3   0   4)

Subtract 2 times second row from third row:

(0  -5   4   5   7)
(1   1  -1  -1  -2)
(0  -3   5   2   8)

Interchange first and second row; multiply second row by -1:

(1   1  -1  -1  -2)
(0   5  -4  -5  -7)
(0  -3   5   2   8)

Multiply second row by 3; multiply third row by 5:

(1    1   -1   -1   -2)
(0   15  -12  -15  -21)
(0  -15   25   10   40)

Add second row to third row:

(1   1   -1   -1   -2)
(0  15  -12  -15  -21)
(0   0   13   -5   19)
What we have achieved is to make each successive row start with a non-zero entry at least one step further than the preceding row. This makes it very simple to solve the equations. The new system whose augmented matrix is the matrix obtained last can be written in the form:

x + y - z - w = -2,
15y - 12z - 15w = -21,
13z - 5w = 19.

This is now in a form where we can solve by giving w an arbitrary value and solving for z from the third equation. Then we solve for y from the second, and x from the first. With the formulas, this gives:

z = (19 + 5w)/13,

y = (-21 + 12z + 15w)/15,

x = -2 - y + z + w.

We can give w any value to start with, and then determine values for x, y, z. Thus we see that the solutions depend on one free parameter. Later we shall express this property by saying that the set of solutions has dimension 1.
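This back substitution, with w as the free parameter, can be written out as follows (a sketch; exact fractions assumed):

from fractions import Fraction as F

def solve(w):
    """Back substitution in the echelon system above."""
    z = (19 + 5 * w) / F(13)
    y = (-21 + 12 * z + 15 * w) / F(15)
    x = -2 - y + z + w
    return x, y, z, w

# Each value of the free parameter w gives one solution of the system.
x, y, z, w = solve(F(0))
assert x + y - z - w == -2          # second original equation
assert 3*x - 2*y + z + 2*w == 1     # first original equation
assert 2*x - y + 3*z == 4           # third original equation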
For the moment, we give a general name to the above procedure. Let
M be a matrix. We shall say that M is in row echelon form if it has the
following property:
Whenever two successive rows do not consist entirely of zeros, then the
second row starts with a non-zero entry at least one step further to the
right than the first row. All the rows consisting entirely of zeros are
at the bottom of the matrix.
In the previous example we transformed a matrix into another which is in row echelon form. The non-zero coefficients occurring furthest to the left in each row are called the leading coefficients. In the above example, the leading coefficients are 1, 15, 13. One may perform one more change by dividing each row by the leading coefficient. Then the above matrix is row equivalent to

(1   1   -1     -1     -2   )
(0   1   -4/5   -1     -7/5 )
(0   0    1     -5/13  19/13)
In this last matrix, the leading coefficient of each row is equal to 1. One could make further row operations to insert further zeros, for instance subtract the second row from the first, and then add 4/5 times the third row to the second. This yields:

(1   0   -1/5    0      -3/5 )
(0   1    0     -17/13  -3/13)
(0   0    1     -5/13   19/13)

Unless the matrix is rigged so that the fractions do not look too horrible, it is usually a pain to do this further row equivalence by hand, but a machine would not care.
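For instance, assuming the sympy library is available, the whole reduction to such a fully reduced form is a single call:

from sympy import Matrix

M = Matrix([[3, -2, 1, 2, 1],
            [1, 1, -1, -1, -2],
            [2, -1, 3, 0, 4]])

R, pivots = M.rref()   # reduced row echelon form, exact rational arithmetic
print(R)               # last column gives x, y, z in terms of the free variable w
print(pivots)          # columns of the leading 1s: (0, 1, 2)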
Example. The following matrix is in row echelon form.

(0  2  -3  4   1   7)
(0  0   0  5   2  -4)
(0  0   0  0  -3   1)
(0  0   0  0   0   0)

Suppose that this matrix is the augmented matrix of a system of linear equations; then we can solve the linear equations by giving some variables an arbitrary value as we did. Indeed, the equations are:

2y - 3z + 4w + t = 7,
5w + 2t = -4,
-3t = 1.

Then the solutions are

t = -1/3,
w = (-4 - 2t)/5,
z = any arbitrarily given value,
y = (7 + 3z - 4w - t)/2,
x = any arbitrarily given value.

The method of changing a matrix by row equivalences to put it in row echelon form works in general.

Theorem 4.1. Every matrix is row equivalent to a matrix in row echelon form.

Proof. Select a non-zero entry furthest to the left in the matrix. If this entry is not in the first column, this means that the matrix consists entirely of zeros to the left of this entry, and we can forget about them. So suppose this non-zero entry is in the first column. After an interchange of rows, we can find an equivalent matrix such that the upper left-hand corner is not 0. Say the matrix is

(a_11  a_12  ...  a_1n)
(a_21  a_22  ...  a_2n)
(.....................)
(a_m1  a_m2  ...  a_mn)

and a_11 ≠ 0. We multiply the first row by a_21/a_11 and subtract it from the second row. Similarly, we multiply the first row by a_i1/a_11 and subtract it from the i-th row. Then we obtain a matrix which has zeros in the first column except for a_11. Thus the original matrix is row equivalent to a matrix of the form

(a_11  a_12  ...  a_1n)
( 0                   )
( .         B         )
( 0                   )

We then repeat the procedure with the smaller matrix B, which has m - 1 rows. We can continue until the matrix is in row echelon form (formally by induction). This concludes the proof.
Observe that the proof is just another way of formulating the elimina-
tion argument of §3.
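The proof is constructive and translates directly into a procedure; the sketch below follows it with exact fractions (it makes no attempt at numerical stability, and its echelon form may differ from the one in the text by row scaling):

from fractions import Fraction as F

def row_echelon(M):
    """Return a row echelon form of M by the procedure of Theorem 4.1."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a row at or below r with a non-zero entry in column c
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]          # interchange rows
        for i in range(r + 1, rows):             # clear the entries below the pivot
            factor = M[i][c] / M[r][c]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

A = [[F(x) for x in row] for row in
     [[3, -2, 1, 2, 1], [1, 1, -1, -1, -2], [2, -1, 3, 0, 4]]]
for row in row_echelon(A):
    print(row)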
We give another proof of the fundamental theorem:
Theorem 4.2. Let

a_11 x_1 + ... + a_1n x_n = 0,
................................
a_m1 x_1 + ... + a_mn x_n = 0

be a system of m homogeneous linear equations in n unknowns with n > m. Then there exists a non-trivial solution.
Proof. Let A = (a_ij) be the matrix of coefficients. Then A is row equivalent to a matrix A′ in row echelon form, and the new system of equations reads

a_{k_1} x_{k_1} + S_{k_1}(X) = 0,
.................................
a_{k_r} x_{k_r} + S_{k_r}(X) = 0,

where a_{k_1} ≠ 0, ..., a_{k_r} ≠ 0 are the non-zero coefficients of the variables occurring furthest to the left in each successive row, and S_{k_1}(X), ..., S_{k_r}(X) indicate sums of variables with certain coefficients, such that if a variable x_j occurs in S_{k_i}(X), then j > k_i. Since by assumption the total number of variables n is strictly greater than the number of equations, we must have r < n. Hence there are n - r variables other than x_{k_1}, ..., x_{k_r}, and n - r > 0. We give these variables arbitrary values, which we can of course select not all equal to 0. Then we solve for the variables x_{k_r}, x_{k_{r-1}}, ..., x_{k_1}, starting with the bottom equation and working back up, for instance

x_{k_r} = -S_{k_r}(X)/a_{k_r},
x_{k_{r-1}} = -S_{k_{r-1}}(X)/a_{k_{r-1}},   and so forth.

This gives us the non-trivial solution, and proves the theorem.
Observe that the pattern follows exactly that of the examples, but with
a notation dealing with the general case.
Exercises II, §4
In each of the following cases find a row equivalent matrix in row echelon form.
1. (a) [matrix illegible in this copy]
2. (a) [matrix illegible in this copy]
3. (a) [matrix illegible in this copy]
4. Write down the coefficient matrix of the linear equations of Exercise 5 in §3,
and in each case give a row equivalent matrix in echelon form. Solve the
linear equations in each case by this method.
II, §5. Row Operations and Elementary Matrices
Before reading this section, work out the numerical examples given in
Exercises 33 through 37 of §2.
The row operations which we used to solve linear equations can be
represented by matrix operations. Let 1 < r < m and 1 < s < m. Let Irs be the square m x m matrix which has component 1 in the rs place, and o elsewhere: 0·········0 . . Irs = 0···1 ... 0 rs 0······ ···0 Let A = (aij) be any m x n matrix. What IS the effect of mUltiplying IrsA? r{ 0···· ·····0 a 11 ... a In ls = 0 ···0 Jr. 0···1 ···0 as1 ... asn rs as1 ... asn 0······ ···0 0 ···0 ~ S amI'" amn The definition of mUltiplication of matrices shows that I rsA is the matrix obtained by putting the s-th row of A in the r-th row, and zeros else- where. If r = s then Irr has a component 1 on the diagonal place, and 0 elsewhere. Multiplication by I rr then leaves the r-th row fixed, and re- places all the other rows by zeros. 78 MATRICES AND LINEAR EQUATIONS [II, §5] If r # s let Then Then IrsA puts the s-th row of A in the r-th place, and IsrA puts the r-th row of A in the s-th place. All other rows are replaced by zero. Th us J rs interchanges the r- th row and the s- th row, and replaces all other rows by zero. Example. Let 1 o o ~) and ( 3 2 -1) A= 1 4 2. -2 5 1 If you perform the matrix multiplication, you will see directly that J A interchanges the first and second row of A, and replaces the third row by zero. On the other hand, let E=(! 1 o o Then EA is the matrix obtained from A by interchanging the first and second row, and leaving the third row fixed. We can express E as a sum: where Irs is the matrix which has rs-component 1, and all other compon- ents 0 as before. Observe that E is obtained from the unit matrix by interchanging the first two rows, and leaving the third row unchanged. Thus the operation of interchanging the first two rows of A is carried out by mUltiplication with the matrix E obtained by doing this operation on the unit matrix. This is a special case of the following general fact. Theorem 5.1. Let E be the matrix obtained from the unit n x n matrix by interchanging two rows. Let A be an n x n matrix. Then EA is the matrix obtained from A by interchanging these two rows. [II, §5] ROW OPERATIONS AND ELEMENTARY MATRICES 79 Proof The proof is carried out according to the pattern of the exam- ple, it is only a question of which symbols are used. Suppose that we interchange the r-th and s-th row. Then we can write E = Irs + Isr + sum of the matrices I jj with j 1= r, j 1= s. Thus E differs from the unit matrix by interchanging the r-th and s-th rows. Then with j 1= r, j 1= s. By the previous discussion, this is precisely the matrix obtained by interchanging the r-th and s-th rows of A, and leaving all the other rows unchanged. The same type of discussion also yields the next result. Theorem 5.2. Let E be the matrix obtained from the unit n x n matrix by multiplying the r-th row with a number c and adding it to the s-th row, r =1= s. Let A be an n x n matrix. Then EA is obtained from A by multiplying the r-th row of A by c and adding it to the s-th row of A. Proof We can write E = I + cIsr . Then EA = A + cIsrA. We know that IsrA puts the r-th row of A in the s-th place, and multiplication by c mUltiplies this row by c. All other rows besides the s-th row in cI srA are equal to O. Adding A + cIsrA therefore has the effect of adding c times the r-th row of A to the s-th row of A, as was to be shown. Example. Let 1 0 4 0 0 1 0 0 E= 0 0 1 0 0 0 0 1 Then E is obtained from the unit matrix by adding 4 times the third row to the first row. Take any 4 x n matrix A and compute EA. 
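The same computations can be checked on a machine; a sketch in the 3 × 3 case (numpy assumed; the text's last example is 4 × 4, and indices in the code are 0-based rather than 1-based):

import numpy as np

def I_rs(m, r, s):
    """m x m matrix with a 1 in the (r, s) place and 0 elsewhere."""
    M = np.zeros((m, m), dtype=int)
    M[r, s] = 1
    return M

A = np.array([[3, 2, -1],
              [1, 4, 2],
              [-2, 5, 1]])

# I_rs A puts the s-th row of A in the r-th row, with zeros elsewhere.
print(I_rs(3, 0, 1) @ A)   # first row becomes (1, 4, 2), the rest zero

# E = I + 4*I_rs adds 4 times the s-th row to the r-th row.
E = np.eye(3, dtype=int) + 4 * I_rs(3, 0, 2)
print(E @ A)               # first row becomes (3, 2, -1) + 4*(-2, 5, 1)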
You will find that EA is obtained by multiplying the third row of A by 4 and adding it to the first row of A. 80 MATRICES AND LINEAR EQUATIONS [II, §5] More generally, we can let Ers{c) for r =I s be the elementary matrix. Ers{c) = I + cIrs. r ~ 1 0 0 0 s 0 1 0 0 0 c 1 0 0 0 0 1 It differs from the unit matrix by having rs-component equal to c. The effect of multiplication on the left by Ers{c) is to add c times the s-th row to the r-th row. By an elementary matrix, we shall mean anyone of the following three types: (a) A matrix obtained from the unit matrix by multiplying the r-th diagonal component with a, number c =I O. (b) A matrix obtained from the unit matrix by interchanging two rows (say the r-th and s-th row, r =I s). (c) A matrix Ers{c) = I + cIrs with r =I shaving rs-component c for r =I s, and all other components 0 except the diagonal components which are equal to 1. These three types reflect the row operations discussed in the preceding section. Multiplication by a matrix of type (a) mUltiplies the r-th row by the number c. Multiplication by a matrix of type (b) interchanges the r-th and s-th row. Multiplication by a matrix of type (c) adds c times the s-th row to the r-th row. Proposition 5.3. An elementary matrix is invertible. Proof For type (a), the inverse matrix has r-th diagonal component c - 1, because multiplying a row first by c and then by c - 1 leaves the row unchanged. For type (b), we note that by interchanging the r-th and s-th row twice we return to the same matrix we started with. F'or type (c), as in Theorem 5.2, let E be the matrix which adds c times the s-th row to the r-th row of the unit matrix. Let D be the matrix which adds - c times the s-th row to the r-th row of the unit [II, §5] ROW OPERATIONS AND ELEMENTARY MATRICES 81 matrix (for r =1= s). Then DE IS the unit matrix, and so IS ED, so E IS invertible. Example. The following elementary matrices are Inverse to each other: 1 0 4 0 1 0 -4 0 E= 0 1 0 0 E- 1 = 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 We shall find an effective way of finding the inverse of a square ma- trix if it has one. This is based on the following properties. If A, B are square matrices of the same size and have inverses, then so does the product AB, and This is immediate, because Similarly, for any number of factors: Proposition 5.4. If A 1, ... ,Ak are invertible matrices of the same size, then their product has an inverse, and Note that in the right-hand side, we take the product of the Inverses In reverse order. Then A ···AA-1···A-1-I 1 k k 1- because we can collapse AkA;1 to I, then A k_ 1A;_11 to I and so forth. Since an elementary matrix has an inverse, we conclude that any pro- duct of elementary matrices has an inverse. Proposition 5.5. Let A be a square matrix, and let A' be row equivalent to A. Then A has an inverse if and only if A' has an inverse. Proof There exist elementary matrices E b ... ,Ek such that Suppose that A has an inverse. Then the right-hand side has an inverse by Proposition 5.4 since the right-hand side is a product of invertible matrices. Hence A' has an inverse. This proves the proposition. 82 MATRICES AND LINEAR EQUATIONS [II, §5] We are now in a position to find an inverse for a square matrix A if it has one. By Theorem 4.1 we know that A is row equivalent to a matrix A' in echelon form. If one row of A' is zero, then by the defini- tion of echelon form, the last row must be zero, and A' is not invertible, hence A is not invertible. 
If all the rows of A' are non-zero, then A' is a triangular matrix with non-zero diagonal components. It now suffices to find an inverse for such a matrix. In fact, we prove: Theorem 5.6. A square matrix A is invertible if and only if A is row equivalent to the unit matrix. Any upper triangular matrix with non- zero diagonal elements is invertible. Proof. Suppose that A is row equivalent to the unit matrix. Then A is invertible by Proposition 5.5. Suppose that A is invertible. We have just seen that A is row equivalent to an upper triangular matrix with non- zero elements on the diagonal. Suppose A is such a matrix: all a 12 a l " o a22 a2n By assumption we have all··· ann i= O. We multiply the i-th row with aii 1. We obtain a triangular matrix such that all the diagonal compon- ents are equal to 1. Thus to prove the theorem, it suffices to do it in this case, and we may assume that A has the form o 0 1 We multiply the last row by ain and subtract it from the i-th row for i = 1, ... ,n - 1. This makes all the elements of the last column equal to o except for the lower right-hand corner, which is 1. We repeat this procedure with the next to the last row, and continue upward. This means that by row equivalences, we can replace all the components which lie strictly above the diagonal by o. We then terminate with the unit matrix, which is therefore row equivalent with the original matrix. This proves the theorem. Corollary 5.7. Let A be an invertible matrix. Then A can be expressed as a product of elementary matrices. [II, §5] ROW OPERATIONS AND ELEMENTARY MATRICES 83 Proof This is because A is row equivalent to the unit matrix, and row operations are represented by multiplication with elementary ma- trices, so there exist E l' ... ,Ek such that Then A = Ell ... Ei: 1, thus proving the corollary. When A is so expressed, we also get an expression for the inverse of A, namely The elementary matrices E 1 , ••• ,Ek are those which are used to change A to the unit matrix. Example. Let A =(~ -3 1 o o 1 o We want to find an inverse for A. We perform the following row opera- tions, corresponding to the multiplication by elementary matrices as shown. Interchange first two rows. 1 -1) (O~ 1 -3 1, 0 o 1 0 Subtract 2 times first row from second row. Subtract 2 times first row from third row. (~ 1 -1) G 1 ~} -5 3 , -2 -2 3 -2 Subtract 2/5 times second row from third row. 1 -1) -5 3, o 9/5 1 -2 -6/5 84 MATRICES AND LINEAR EQUATIONS [II, §5] Subtract 5/3 of third row from second row. Add 5/9 of third row to first row. G 1 0) (-2/9 1/3 5/9) -5 o , 5/3 0 -~/3 . 0 9/5 -2/5 -6/5 Add 1/5 of second row to first row. (~ 0 0) ( 1/9 1/3 2/9) -5 o , 5/3 0 -~/3 . 0 9/5 -2/5 -6/5 Multiply second row by -1/5. Multiply third row by 5/9. (~ 0 ~} ( 1/9 1/3 2/9) 1 -1/3 0 1/3 . 0 -2/9 -2/3 5/9 Then A - 1 is the matrix on the right, that is ( 1/9 A -1 = -1/3 -2/9 1/3 o -2/3 2/9) 1/3 . 5/9 You can check this by direct multiplication with A to find the unit matrix. If A is a square matrix and we consider an inhomogeneous system of linear equations AX=B, then we can use the inverse to solve the system, if A is invertible. In- deed, in this case, we multiply both sides on the left by A - 1 and we find This also proves: Proposition 5.S. Let AX = B be a system of n linear equations in n unknowns. Assume that the matrix of coefficients A is invertible. Then there is a unique solution X to the system, and [II, §6] LINEAR COMBINATIONS 85 Exercises II, §5 1. Using elementary row operations, find inverses for the following matrices. 
(a) (~ -:) (b) (_ ~ -1 🙂 3 2 -2 4 (c) ( 2 4 ~) (d) (~ 2 -~) -1 3 0 2 2 (e) (-~ 5 n (f) (_: -:) 0 5 7 2 Note: For another way of finding inverses, see the chapter on determinants. 2. Let r -=F s. Show that I;s = O. 3. Let r -=F s. Let Ers(c) = I + clrs ' Show that Ers (c )Ers (c') = Ers( c + c'). II, §6. Linear Combinations Let AI, ... ,An be m-tuples in Rm. Let XI""'Xn be numbers. Then we call a linear combination of A I, ... ,An; and we call Xl"" ,xn the coefficients of the linear combination. A similar definition applies to a linear combina- tion of row vectors. The linear combination is called non-trivial if not all the coefficients Xl' ... ,xn are equal to O. Consider once more a system of linear homogeneous equations Our system of homogeneous equations can also be written in the form all a 12 0 a 2l a2 2 0 Xl + X 2 + ... + Xn - amI 0 86 MATRICES AND LINEAR EQUATIONS [II, §6] or more concisely: where A 1, ... ,A n are the column vectors of the matrix of coefficients, which is A = (aij)' Thus the problem of finding a non-trivial solution for the system of homogeneous linear equations is equivalent to finding a non-trivial linear combination of A 1, ... ,An which is equal to O. Vectors A 1, ... ,An are called linearly dependent if there exist numbers x l' ... ,xn not all equal to 0 such that Thus a non-trivial solution (x l' ... ,xn) is an n-tuple which gives a linear combination of A 1, ... ,An equal to 0, i.e. a relation of linear dependence between the columns of A. We may thus summarize the description of the set of solutions of the system of homogeneous linear equations in a table. (a) It consists of those vectors X giving linear relations between the columns of A. (b) It consists of those vectors X perpendicular to the rows of A, that is X· Ai = 0 for all i. (c) It consists of those vectors X such that AX = O. Vectors A 1, ... ,A n are called linearly independent if, gIven any linear combination of them which is equal to 0, i.e. then we must necessarily have x j = 0 for all j = 1, ... ,no This means that there is no non-trivial relation of linear dependence among the vectors A 1, ... ,An. Example. The standard unit vectors E1 = (1, 0, ... ,0), ... ,En = (0, ... ,0, 1) of Rn are linearly independent. Indeed, let x l' ... 'X n be numbers such that [II, §6] LINEAR COMBINATIONS 87 The left-hand side is just the n-tuple (Xl' ... ,Xn). If this n-tuple is 0, then all components are 0, so Xi = 0 for all i. This proves that E b ... ,En are linearly independent. We shall study the notions of linear dependence and independence more systematically in the next chapter. They were mentioned here just to have a complete table for the three basic interpretations of a system of linear equations, and to introduce the notion in a concrete special case before giving the general definitions in vector spaces. Exercise II, §6 1. (a) Let A = (a ij ), B = (b jk ) and let AB = C with C = (C ik ). Let C k be the k-th column of C. Express Ck as a linear combination of the columns of A. Describe precisely which are the coefficients, coming from the matrix B. (b) Let AX = Ck where X is some column of B. Which column is it? CHAPTER III Vector Spaces As usual, a collection of objects will be called a set. A member of the collection is also called an element of the set. It is useful in practice to use short symbols to denote certain sets. For instance we denote by R the set of all numbers. To say that "x is a number" or that "x is an element of R" amounts to the same thing. 
The set of n-tuples of numbers will be denoted by Rn. Thus" X is an element of Rn" and" X is an n-tuple" mean the same thing. Instead of saying that u is an element of a set S, we shall also frequently say that u lies in S and we write u E S. If Sand S' are two sets, and if every element of S' is an element of S, then we say that S' is a subset of S. Thus the set of rational numbers is a subset of the set of (real) numbers. To say that S is a subset of S' is to say that S is part of S'. To denote the fact that S is a subset of S', we write S c S'. If S b S 2 are sets, then the intersection of S 1 and S 2, denoted by S1 n S2, is the set of elements which lie in both S1 and S2. The union of S 1 and S 2, denoted by S 1 U S 2, is the set of elements which lie in S 1 or S2· III, §1. Definitions In mathematics, we meet several types of objects which can be added and multiplied by numbers. Among these are vectors (of the same dimension) and functions. It is now convenient to define in general a notion which includes these as a special case. A vector space V is a set of objects which can be added and multi- plied by numbers, in such a way that the sum of two elements of V is [III, §l] DEFINITIONS 89 again an element of V, the product of an element of V by a number is an element of V, and the following properties are satisfied: VS 1. Given the elements u, v, w of V, we have (u + v) + w = u + (v + w). VS 2. There is an element of V, denoted by 0, such that for all elements u of V. VS 3. Given an element u of V, the element {- 1)u is such that u + (-l)u = o. VS 4. F or all elements u, v of V, we have u + v = v + u. VS 5. If c is a number, then c{u + v) = cu + cv. VS 6. If a, b are two numbers, then {a + b)v = av + bv. VS 7. If a, b are two numbers, then (ab)v = a{bv). VS 8. For all elements u of V, we have 1· u = u (1 here is the number one). We have used all these rules when dealing with vectors, or with func- tions but we wish to be more systematic from now on, and hence have made a list of them. Further properties which can be easily deduced from these are given in the exercises and will be assumed from now on. The algebraic properties of elements of an arbitrary vector space are very similar to those of elements of R2, R3 , or Rn. Consequently it is customary to call elements of an arbitrary vector space also vectors. If u, v are vectors (i.e. elements of the arbitrary vector space V), then the sum u + {-1)v is usually written u - v. We also write - v instead of ( - 1 )v. Example 1. Fix two positive integers m, n. Let V be the set of all m x n matrices. We also denote V by Mat{m x n). Then V is a vector 90 VECTOR SPACES [III, §1] space. It is easy to verify that all properties VS 1 through VS 8 are satisfied by our rules for addition of matrices and multiplication of matrices by numbers. The main thing to observe here is that addition of matrices is defined in terms of the components, and for the addition of components, the conditions analogous to VS 1 through VS 4 are satisfied. They are standard properties of numbers. Similarly, VS 5 through VS 8 are true for multiplication of matrices by numbers, because the corresponding properties for the multiplication of numbers are true. Example 2. Let V be the set of all functions defined for all numbers. If f, g are two functions, then we know how to form their sum f + g. It is the function whose value at a number t is f(t) + g(t). We also know how to multiply f by a number c. It is the function cf whose values at a number t is cf(t). 
In dealing with functions, we have used properties VS 1 through VS 8 many times. We now realize that the set of functions is a vector space. The function f such that f(t) = 0 for all t is the zero function. We emphasize the condition for all t. If a function has some of its values equal to zero, but other values not equal to 0, then it is not the zero function. In practice, a number of elementary properties concerning addition of elements in a vector space are obvious because of the concrete way the vector space is given in terms of numbers, for instance as in the previous two examples. We shall now see briefly how to prove such properties just from the axioms. It is possible to add several elements of a vector space. Suppose we wish to add four elements, say u, v, w, z. We first add any two of them, then a third, and finally a fourth. Using the rules VS 1 and VS 4, we see that it does not matter in which order we perform the additions. This is exactly the same situation as we had with vectors. For example, we have (u + v) + w) + z = (u + (v + w») + z = (v + w) + u) + z = (v + w) + (u + z), etc. Thus it is customary to leave out the parentheses, and write simply u + v + w + z. The same remark applies to the sum of any number n of elements of V. We shall use 0 to denote the number zero, and 0 to denote the element of any vector space V satisfying property VS 2. We also call it [III, §1] DEFINITIONS 91 zero, but there is never any possibility of confusion. We observe that this zero element 0 is uniquely determined by condition VS 2. Indeed, if v + w = v then adding - v to both sides yields -v + v + w = -v + v = 0, and the left-hand side is just 0 + w = w, so w = o. Observe that for any element v in V we have Ov = o. Proof o = v + ( - l)v = (1 - l)v = Ov. Similarly, if c is a number, then cO = o. Proof We have cO = c(O + 0) = cO + cO. Add - cO to both sides to get cO = o. Subspaces Let V be a vector space, and let W be a subset of V. Assume that W satisfies the following conditions. (i) If v, ware elements of W, their sum v + w is also an element of W (ii) If v is an element of Wand c a number, then cv is an element of W (iii) The element 0 of V is also an element of W Then W itself is a vector space. Indeed, properties VS 1 through VS 8, being satisfied for all elements of V, are satisfied also for the elements of W. We shall call W a subspace of V. Example 3. Let V = Rn and let W be the set of vectors in V whose last coordinate is equal to O. Then W is a subspace of V, which we could identify with Rn - 1. Example 4. Let A be a vector in R3. Let W be the set of all elements B in R3 such that B· A = 0, i.e. such that B is perpendicular to A. Then W is a subspace of R3. To see this, note that O· A = 0, so that 0 is in W Next, suppose that B, C are perpendicular to A. Then (B + C)· A = B· A + C· A = 0, 92 VECTOR SPACES [III, §1] so that B + C is also perpendicular to A. Finally, if x is a number, then (xB)·A = x(B·A) = 0, so that xB IS perpendicular to A. This proves that W is a subspace of R3. More generally, if A is a vector in Rn, then the set of all elements B in Rn such that B· A = 0 is a subspace of Rn. The proof is the same as when n = 3. Example 5. Let Sym(n x n) be the set of all symmetric n x n matrices. Then Sym(n x n) is a subspace of the space of all n x n matrices. Indeed, if A, B are symmetric and c is a number, then A + B and cA are symmetric. Also the zero matrix is symmetric. Example 6. If f, g are two continuous functions, then f + g is con- tinuous. 
If c is a number, then cf is continuous. The zero function is continuous. Hence the continuous functions form a subspace of the vector space of all functions. If f, g are two differentiable functions, then their sum f + g is differen- tiable. If c is a number, then cf is differentiable. The zero function is differentiable. Hence the differentiable functions form a subspace of the vector space of all functions. Furthermore, every differentiable function is continuous. Hence the differentiable functions form a subspace of the vector space of continuous functions. Example 7. Let V be a vector space and let U, W be subspaces. We denote by U n W the intersection of U and W, i.e. the set of elements which lie both in U and W. Then U n W is a subspace. For instance, if U, Ware two planes in 3-space passing through the origin, then in general, their intersection will be a straight line passing through the ori- gin, as shown in Fig. 1. [III, §2] LINEAR COMBINATIONS 93 Example 8. Let U, W be subspaces of a vector space V. By U+W we denote the set of all elements u + w with U E U and WE W. Then we leave it to the reader to verify that U + W is a subspace of V, said to be generated by U and W, and called the sum of U and W. Exercises III, § 1 1. Let A l' ... ,A, be vectors in Rn. Let W be the set of vectors B in Rn such that B· Ai = 0 for every i = 1, ... ,r. Show that W is a subspace of Rn. 2. Show that the following sets of elements in R2 form subspaces. (a) The set of all (x, y) such that x = y. (b) The set of all (x, y) such that x - y = o. (c) The set of all (x, y) such that x + 4y = O. 3. Show that the following sets of elements in R 3 form subspaces. (a) The set of all (x, y, z) such that x + y + z = O. (b) The set of all (x, y, z) such that x = y and 2y = z. (c) The set of all (x, y, z) such that x + y = 3z. 4. If U, Ware subspaces of a vector space V, show that U n Wand U + Ware subspaces. 5. Let V be a subspace of Rn. Let W be the set of elements of Rn which are perpendicular to every element of V. Show that W is a subspace of Rn. This subspace W is often denoted by V 1., and is called V perp, or also the orthogonal complement of V. III, §2. Linear Combinations Let V be a vector space, and let v1 , ••• ,Vn be elements of V. We shall say that V 1 , ••. ,Vn generate V if given an element v E V there exist numbers Xl' ... ,xn such that Example 1. Let E 1 , ••• ,En be the standard unit vectors in R n , so Ei has component 1 in the i-th place, and component 0 in all other places. 94 VECTOR SPACES [III, §2] Then E l , ... ,En generate Rn. Proof: gIven X = (Xl' ... ,xn) ERn. Then n X = L xiEi, i= 1 so there exist numbers satisfying the condition of the definition. Let V be an arbitrary vector space, and let V l , ... ,Vn be elements of V. Let Xl' ... ,xn be numbers. An expression of type is called a linear combination of v l , ... ,Vn • The numbers Xl'." ,Xn are then called the coefficients of the linear combination. The set of all linear combinations of Vl' ... ,Vn is a subspace of V. Proof Let W be the set of all such linear combinations. Let Yl'··· ,Yn be numbers. Then (X 1 V 1 + ... + Xn vn) + (y 1 V 1 + . .. + Y n vn) = (Xl + Yl)V l + ... + (xn + Yn)v n· Thus the sum of two elements of W is again an element of W, i.e. a linear combination of Vl' ... ,Vn. Furthermore, if c is a number, then is a linear combination of V l , ... ,Vn , and hence is an element of W Finally, o == OV l + ... + OVn is an element of W This proves that W is a subspace of V. 
The subspace W consisting of all linear combinations of V l , ... ,Vn IS called the subspace generated by V l ,.·. ,Vn • Example 2. Let V l be a non-zero element of a vector space V, and let w be any element of V. The set of elements with tER [III, §2] LINEAR COMBINATIONS 95 is called the line passing through w in the direction of V 1 • We have al- ready met such lines in Chapter I, §5. If w = 0, then the line consisting of all scalar multiples tV 1 with t E R is a subspace, generated by V1 • Let VI' v2 be elements of a vector space V, and assume that neither is a scalar multiple of the other. The subspace generated by V 1 , V 2 is called the plane generated by VI' V 2 • It consists of all linear combinations with t 1, t2 arbitrary numbers. This plane passes through the origin, as one sees by putting t 1 = t2 = o. Plane passing through the origin Figure 2 We obtain the most general notion of a plane by the following opera- tion. Let S be an arbitrary subset of V. Let P be an element of V. If we add P to all elements of S, then we obtain what is called the translation of S by P. It consists of all elements P + V with V in S. Example 3. Let V 1, V2 be elements of a vector space V such that neither is a scalar multiple of the other. Let P be an element of V. We define the plane passing through P, parallel to V 1, V 2 to be the set of all elements where t 1, t2 are arbitrary numbers. This notion of plane is the analogue, with two elements VI' v2 , of the notion of parametrized line considered in Chapter I. Warning. Usually such a plane does not pass through the orIgIn, as shown on Fig. 3. Thus such a plane is not a subspace of V. If we take P = 0, however, then the plane is a subspace. 96 VECTOR SPACES o~ ______________ __ Plane not passing through the origin Figure 3 [III, §2] Sometimes it is interesting to restrict the coefficients of a linear com- bination. We give a number of examples below. Example 4. Let V be a vector space and let v, u be elements of V. We define the line segment between v and v + u to be the set of all points v + tu, 0 < t < 1. This line segment is illustrated in the following picture. v+u v+tu v Figure 4 For instance, if t = !, then v + !u is the point midway between v and v + u. Similarly, if t = t, then v + tu is the point one third of the way between v and v + u (Fig. 5). v+u v+u v+!u v+iu v+tu v v (a) (b) Figure 5 [III, §2] LINEAR COMBINATIONS 97 If v, ware elements of V, let u = w - v. Then the line segment between v and w is the set of all points v + tu, or v + t(w - v), o < t < 1. w v+t(w-v) v Figure 6 Observe that we can rewrite the expression for these points in the form (1) (1 - t)v + tw, o < t < 1, and letting s = 1 - t, t = 1 - s, we can also write it as sv + (1 - s)w, O 6
determine two half planes; one of them lies below the line and the other lies above the line, as shown on Fig. 12.

Figure 12

Let A = (2, -3). We can, and should, write the linear inequalities in the form
A·X> 6 and A·X < 6, where X = (x, y). Prove as Exercise 2 that each half plane is convex. This is clear intuitively from the picture, at least in R2, but your proof should be valid for the analogous situation in Rn. Theorem 3.1. Let P l' ... ,P n be points oj' a vector space V. Let S be the set of all linear combinations with 0 < ti and t1 + ... + tn = 1. Then S is convex. Proof Let and t1 + ... + tn = 1, Sl + ... + Sn = 1. 102 VECTOR SPACES Let 0 < t < 1. Then: We have 0 < (1 - t)ti + tSi for all i, and (1 - t)t1 + tS 1 + ... + (1 - t)tn + tSn = (1 - t)( t 1 + ... + t n) + t( S 1 + . . . + Sn) = (1 - t) + t = 1. This proves our theorem. [III, §3] In the next theorem, we shall prove that the set of all linear combina- tions with is the smallest convex set containing P 1, ... ,Pn • For example, suppose that P 1, P 2, P 3 are three points in the plane not on a line. Then it is geometrically clear that the smallest convex set containing these three points is the triangle having these points as vertices. Figure 13 Thus it is natural to take as definition of a triangle the following pro- perty, valid in any vector space. Let P 1, P 2, P 3 be three points in a vector space V, not lying on a line. Then the triangle spanned by these points is the set of all combina- tions When we deal with more than three points, then the set of linear combinations as in Theorem 3.1 looks as in the following figure. [III, §3] CONVEX SETS 103 Figure 14 We shall call the convex set of Theorem 3.1 the convex set spanned by P 1, ... ,P n. Although we shall not need the next result, it shows that this convex set is the smallest convex set containing all the points P 1, ... ,P n. Omit the proof if you can't handle the argument by induction. Theorem 3.2. Let P l' ... ,P n be points of a vector space V. Any convex set which contains P l' ... ,P n also contains all linear combinations with 0 ~ ti for all i and t 1 + ... + tn == 1. Proof We prove this by induction. If n == 1, then t 1 == 1, and our assertion is obvious. Assume the theorem proved for some integer n - 1 ~ 1. We shall prove it for n. Let t l' ... ,tn be numbers satisfying the conditions of the theorem. Let S' be a convex set containing p 1, ... ,Pn • We must show that S' contains all linear combinations If tn == 1, then our assertion is trivial because t1 == ... == tn- 1 == o. Sup- pose that tn i= 1. Then the linear combination t 1 P 1 + ... + tn P n is equal to Let t· 1 s· ==-- 1 1 - t. 1 for i == 1, ... ,n - 1. Then Si ~ 0 and s 1 + ... + sn _ 1 == 1 so that by induction, we conclude that the point 104 VECTOR SPACES [III, §4] lies in S'. But then lies in S' by definition of a convex set, as was to be shown. Exercises III, §3 1. Let S be the parallelogram consIstIng of all linear combinations t 1 V 1 + t 2 v2 with 0 ~ t 1 ~ 1 and 0 ~ t2 ~ 1. Prove that S is convex. 2. Let A be a non-zero vector in Rn and let c be a fixed number. Show that the set of all elements X in Rn such that A· X ~ c is convex. 3. Let S be a convex set in a vector space. If c is a number, denote by cS the set of all elements cv with v in S. Show that cS is convex. 4. Let S 1 and S2 be convex sets. Show that the intersection SIn S2 is convex. 5. Let S be a convex set in a vector space V. Let w be an arbitrary element of V. Let w + S be the set of all elements w + v with v in S. Show that w + S is convex. III, §4. Linear Independence Let V be a vector space, and let Vb ... ,vn be elements of V. We shall say that Vb ... ,Vn are linearly dependent if there exist numbers a b ... 
III, §4. Linear Independence

Let V be a vector space, and let v1, ... ,vn be elements of V. We shall say that v1, ... ,vn are linearly dependent if there exist numbers a1, ... ,an not all equal to 0 such that

a1v1 + ... + anvn = 0.

If there do not exist such numbers, then we say that v1, ... ,vn are linearly independent. In other words, vectors v1, ... ,vn are linearly independent if and only if the following condition is satisfied: Let a1, ... ,an be numbers such that

a1v1 + ... + anvn = 0;

then ai = 0 for all i = 1, ... ,n.

Example 1. Let V = Rn and consider the vectors

E1 = (1, 0, ... ,0), ..., En = (0, 0, ... ,1).

Then E1, ... ,En are linearly independent. Indeed, let a1, ... ,an be numbers such that a1E1 + ... + anEn = 0. Since

a1E1 + ... + anEn = (a1, ... ,an),

it follows that all ai = 0.

Example 2. Show that the vectors (1, 1) and (−3, 2) are linearly independent. Let a, b be two numbers such that

a(1, 1) + b(−3, 2) = 0.

Writing this equation in terms of components, we find

a − 3b = 0, a + 2b = 0.

This is a system of two equations which we solve for a and b. Subtracting the second from the first, we get −5b = 0, whence b = 0. Substituting in either equation, we find a = 0. Hence a, b are both 0, and our vectors are linearly independent.

If elements v1, ... ,vn of V generate V and in addition are linearly independent, then {v1, ... ,vn} is called a basis of V. We shall also say that the elements v1, ... ,vn constitute or form a basis of V.

Example 3. The vectors E1, ... ,En of Example 1 form a basis of Rn. To prove this we have to prove that they are linearly independent, which was already done in Example 1; and that they generate Rn. Given an element A = (a1, ... ,an) of Rn we can write A as a linear combination

A = a1E1 + ... + anEn,

so by definition, E1, ... ,En generate Rn. Hence they form a basis.

However, there are many other bases. Let us look at n = 2. We shall find out that any two vectors which are not parallel form a basis of R2. Let us first consider an example. If v1, v2 are as drawn, they form a basis of R2.

[Figure 15: two non-parallel vectors v1, v2 in the plane]

Example 4. Show that the vectors (1, 1) and (−1, 2) form a basis of R2. We have to show that they are linearly independent and that they generate R2. To prove linear independence, suppose that a, b are numbers such that

a(1, 1) + b(−1, 2) = (0, 0).

Then

a − b = 0, a + 2b = 0.

Subtracting the first equation from the second yields 3b = 0, so that b = 0. But then from the first equation, a = 0, thus proving that our vectors are linearly independent.

Next, we must show that (1, 1) and (−1, 2) generate R2. Let (s, t) be an arbitrary element of R2. We have to show that there exist numbers x, y such that

x(1, 1) + y(−1, 2) = (s, t).

In other words, we must solve the system of equations

x − y = s, x + 2y = t.

Again subtract the first equation from the second. We find

3y = t − s,

whence

y = (t − s)/3,

and finally

x = y + s = (t − s)/3 + s.

This proves that (1, 1) and (−1, 2) generate R2, and concludes the proof that they form a basis of R2.

The general story for R2 is expressed in the following theorem.

Theorem 4.1. Let (a, b) and (c, d) be two vectors in R2.
(i) They are linearly dependent if and only if ad − bc = 0.
(ii) If they are linearly independent, then they form a basis of R2.

Proof. First work it out as an exercise (see Exercise 4). If you can't do it, you will find the proof in the answer section. It parallels closely the procedure of Example 4.
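Theorem 4.1(i) gives a purely computational test for pairs of vectors in R2. A short sketch of my own, reusing the vectors of Examples 2 and 4:

```python
def linearly_dependent_r2(v, w):
    """Theorem 4.1(i): (a, b) and (c, d) are linearly dependent
    exactly when ad - bc = 0."""
    (a, b), (c, d) = v, w
    return a * d - b * c == 0

print(linearly_dependent_r2((1, 1), (-3, 2)))  # False: independent (Example 2)
print(linearly_dependent_r2((1, 1), (-1, 2)))  # False: a basis of R2 (Example 4)
print(linearly_dependent_r2((1, 2), (2, 4)))   # True: (2, 4) = 2(1, 2)
```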
Let V be a vector space, and let {v1, ... ,vn} be a basis of V. The elements of V can be represented by n-tuples relative to this basis, as follows. If an element v of V is written as a linear combination

v = x1v1 + ... + xnvn

of the basis elements, then we call (x1, ... ,xn) the coordinates of v with respect to our basis, and we call xi the i-th coordinate. The coordinates with respect to the usual basis E1, ... ,En of Rn are simply the coordinates as defined in Chapter I, §1. The following theorem shows that there can only be one set of coordinates for a given vector.

Theorem 4.2. Let V be a vector space. Let v1, ... ,vn be linearly independent elements of V. Let x1, ... ,xn and y1, ... ,yn be numbers such that

x1v1 + ... + xnvn = y1v1 + ... + ynvn.

Then we must have xi = yi for all i = 1, ... ,n.

Proof. Subtract the right-hand side from the left-hand side. We get

x1v1 − y1v1 + ... + xnvn − ynvn = 0.

We can write this relation also in the form

(x1 − y1)v1 + ... + (xn − yn)vn = 0.

By definition, we must have xi − yi = 0 for all i = 1, ... ,n, thereby proving our assertion.

The theorem expresses the fact that when an element is written as a linear combination of v1, ... ,vn, then its coefficients x1, ... ,xn are uniquely determined. This is true only when v1, ... ,vn are linearly independent.

Example 5. Find the coordinates of (1, 0) with respect to the two vectors (1, 1) and (−1, 2). We must find numbers a, b such that

a(1, 1) + b(−1, 2) = (1, 0).

Writing this equation in terms of coordinates, we find

a − b = 1, a + 2b = 0.

Solving for a and b in the usual manner yields b = −1/3 and a = 2/3. Hence the coordinates of (1, 0) with respect to (1, 1) and (−1, 2) are (2/3, −1/3).

Example 6. The two functions e^t, e^{2t} are linearly independent. To prove this, suppose that there are numbers a, b such that

a e^t + b e^{2t} = 0

(for all values of t). Differentiate this relation. We obtain

a e^t + 2b e^{2t} = 0.

Subtract the first from the second relation. We obtain b e^{2t} = 0, and hence b = 0. From the first relation, it follows that a e^t = 0, and hence a = 0. Hence e^t, e^{2t} are linearly independent.

Example 7. Let V be the vector space of all functions of a variable t. Let f1, ... ,fn be n functions. To say that they are linearly dependent is to say that there exist n numbers a1, ... ,an not all equal to 0 such that

a1 f1(t) + ... + an fn(t) = 0

for all values of t.

Warning. We emphasize that linear dependence for functions means that the above relation holds for all values of t. For instance, consider the relation

a sin t + b cos t = 0,

where a, b are two fixed numbers not both zero. There may be some values of t for which the above equation is satisfied. For instance, if a ≠ 0 we then can solve

sin t / cos t = −b/a,

or in other words, tan t = −b/a, to get at least one solution. However, the above relation cannot hold for all values of t, and consequently sin t, cos t are linearly independent, as functions.

Example 8. Let V be the vector space of functions generated by the two functions e^t, e^{2t}. Then the coordinates of the function 3e^t + 5e^{2t} with respect to the basis {e^t, e^{2t}} are (3, 5).

When dealing with two vectors v, w there is another convenient way of expressing linear independence.

Theorem 4.3. Let v, w be elements of a vector space V. They are linearly dependent if and only if one of them is a scalar multiple of the other, i.e. there is a number c such that we have v = cw or w = cv.

Proof. Left as an exercise, cf. Exercise 5.

In the light of this theorem, the condition imposed in various examples in the preceding section could be formulated in terms of two vectors being linearly independent.
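Finding coordinates with respect to a basis, as in Example 5, amounts to solving a linear system whose columns are the basis vectors. A brief numpy sketch of my own:

```python
import numpy as np

# Example 5: coordinates of (1, 0) with respect to the basis (1, 1), (-1, 2).
# Columns of B are the basis vectors, so B @ (a, b) = target.
B = np.array([[1.0, -1.0],
              [1.0,  2.0]])
target = np.array([1.0, 0.0])

coords = np.linalg.solve(B, target)
print(coords)  # [ 0.6667 -0.3333 ], i.e. a = 2/3, b = -1/3
```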
Exercises III, §4

1. Show that the following vectors are linearly independent.
(a) (1, 1, 1) and (0, 1, −2)   (b) (1, 0) and (1, 1)
(c) (−1, 1, 0) and (0, 1, 2)   (d) (2, −1) and (1, 0)
(e) (π, 0) and (0, 1)          (f) (1, 2) and (1, 3)
(g) (1, 1, 0), (1, 1, 1), and (0, 1, −1)
(h) (0, 1, 1), (0, 2, 1), and (1, 5, 3)

2. Express the given vector X as a linear combination of the given vectors A, B, and find the coordinates of X with respect to A, B.
(a) X = (1, 0), A = (1, 1), B = (0, 1)
(b) X = (2, 1), A = (1, −1), B = (1, 1)
(c) X = (1, 1), A = (2, 1), B = (−1, 0)
(d) X = (4, 3), A = (2, 1), B = (−1, 0)

3. Find the coordinates of the vector X with respect to the vectors A, B, C.
(a) X = (1, 0, 0), A = (1, 1, 1), B = (−1, 1, 0), C = (1, 0, −1)
(b) X = (1, 1, 1), A = (0, 1, −1), B = (1, 1, 0), C = (1, 0, 2)
(c) X = (0, 0, 1), A = (1, 1, 1), B = (−1, 1, 0), C = (1, 0, −1)

4. Let (a, b) and (c, d) be two vectors in R2.
(i) If ad − bc ≠ 0, show that they are linearly independent.
(ii) If they are linearly independent, show that ad − bc ≠ 0.
(iii) If ad − bc ≠ 0, show that they form a basis of R2.

5. (a) Let v, w be elements of a vector space. If v, w are linearly dependent, show that there is a number c such that w = cv, or v = cw.
(b) Conversely, let v, w be elements of a vector space, and assume that there exists a number c such that w = cv. Show that v, w are linearly dependent.

6. Let A1, ... ,Ar be vectors in Rn, and assume that they are mutually perpendicular, in other words Ai ⊥ Aj if i ≠ j. Also assume that none of them is 0. Prove that they are linearly independent.

7. Consider the vector space of all functions of a variable t. Show that the following pairs of functions are linearly independent.
(a) 1, t   (b) t, t^2   (c) t, t^4   (d) e^t, t   (e) t e^t, e^{2t}
(f) sin t, cos t   (g) t, sin t   (h) sin t, sin 2t   (i) cos t, cos 3t

8. Consider the vector space of functions defined for t > 0. Show that the following pairs of functions are linearly independent.
(a) t, 1/t   (b) e^t, log t
9. What are the coordinates of the function 3 sin t + 5 cos t = f(t) with respect
to the basis {sin t, cos t}?
10. Let D be the derivative d/dt. Let f(t) be as in Exercise 9. What are the
coordinates of the function Df(t) with respect to the basis of Exercise 9?
In each of the following cases, exhibit a basis for the given space, and prove
that it is a basis.
11. The space of 2 x 2 matrices.
12. The space of m x n matrices.
13. The space of n x n matrices all of whose components are 0 except possibly
the diagonal components.
14. The upper triangular matrices, i.e. matrices of the following type:
( a11 a12 ... a1n )
(  0  a22 ... a2n )
(  .   .       .  )
(  0   0  ... ann )
15. (a) The space of symmetric 2 x 2 matrices.
(b) The space of symmetric 3 x 3 matrices.
16. The space of symmetric n x n matrices.
III, §5. Dimension
We ask the question: Can we find three linearly independent elements in
R2? For instance, are the elements

A = (1, 2), B = (−5, 7), C = (10, 4)
linearly independent? If you write down the linear equations expressing
the relation
xA + yB + zC = 0,

you will find that you can solve them for x, y, z not all equal to 0. Namely,
these equations are:

x − 5y + 10z = 0,
2x + 7y + 4z = 0.
This is a system of two homogeneous equations in three unknowns, and
we know by Theorem 2.1 of Chapter II that we can find a non-trivial
solution (x, y, z) not all equal to zero. Hence A, B, C are linearly depen-
dent.
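The non-trivial solution can also be produced numerically. A sketch of my own, using the SVD to extract the null space:

```python
import numpy as np

# Columns are A = (1, 2), B = (-5, 7), C = (10, 4).
M = np.array([[1.0, -5.0, 10.0],
              [2.0,  7.0,  4.0]])

# Two homogeneous equations in three unknowns: the null space is
# non-trivial, and the last right-singular vector spans it.
_, _, vt = np.linalg.svd(M)
x = vt[-1]
print(x)      # a non-trivial (x, y, z)
print(M @ x)  # ~ [0, 0], so xA + yB + zC = 0
```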
We shall see in a moment that this is a general phenomenon. In Rn,
we cannot find more than n linearly independent vectors. Furthermore,
we shall see that any n linearly independent elements of Rn must gener-
ate Rn, and hence form a basis. Finally, we shall also see that if one
basis of a vector space has n elements, and another basis has m elements,
then m = n. In short, two bases must have the same number of elements.
This property will allow us to define the dimension of a vector space
as the number of elements in any basis. We now develop these ideas
systematically.
Theorem 5.1. Let V be a vector space, and let {v1, ... ,vm} generate V.
Let w1, ... ,wn be elements of V and assume that n > m. Then w1, ... ,wn
are linearly dependent.

Proof. Since {v1, ... ,vm} generate V, there exist numbers (aij) such that
we can write

w1 = a11v1 + ... + am1vm, ..., wn = a1nv1 + ... + amnvm.

If x1, ... ,xn are numbers, then

x1w1 + ... + xnwn = (x1a11 + ... + xna1n)v1 + ... + (x1am1 + ... + xnamn)vm

(just add up the coefficients of v1, ... ,vm vertically downward). According
to Theorem 2.1 of Chapter II, the system of equations

x1a11 + ... + xna1n = 0,
...
x1am1 + ... + xnamn = 0

has a non-trivial solution, because n > m. In view of the preceding
remark, such a solution (x1, ... ,xn) is such that

x1w1 + ... + xnwn = 0,

as desired.
Theorem 5.2. Let V be a vector space and suppose that one basis has n
elements, and another basis has m elements. Then m = n.
Proof We apply Theorem 5.1 to the two bases. Theorem 5.1 implies
that both alternatives n > m and m > n are impossible, and hence m = n.
Let V be a vector space having a basis consisting of n elements. We
shall say that n is the dimension of V. If V consists of 0 alone, then V
does not have a basis, and we shall say that V has dimension 0.
We may now reformulate the definitions of a line and a plane in
an arbitrary vector space V. A line passing through the origin is
simply a one-dimensional subspace. A plane passing through the origin
is simply a two-dimensional subspace.
An arbitrary line is obtained as the translation of a one-dimensional
subspace. An arbitrary plane is obtained as the translation of a two-
dimensional subspace. When a basis {v1} has been selected for a one-
dimensional space, then the points on a line are expressed in the usual
form

P + t1v1, with all possible numbers t1.

When a basis {v1, v2} has been selected for a two-dimensional space, then
the points on a plane are expressed in the form

P + t1v1 + t2v2, with all possible numbers t1, t2.
Let {v1, ... ,vn} be a set of elements of a vector space V. Let r be a
positive integer ≤ n. We shall say that {v1, ... ,vr} is a maximal subset of
linearly independent elements if v1, ... ,vr are linearly independent, and if
in addition, given any vi with i > r, the elements v1, ... ,vr, vi are linearly
dependent.
The next theorem gives us a useful criterion to determine when a set
of elements of a vector space is a basis.
Theorem 5.3. Let {v1, ... ,vn} be a set of generators of a vector space V.
Let {v1, ... ,vr} be a maximal subset of linearly independent elements.
Then {v1, ... ,vr} is a basis of V.

Proof. We must prove that v1, ... ,vr generate V. We shall first prove
that each vi (for i > r) is a linear combination of v1, ... ,vr. By hypothesis,
given vi, there exist numbers x1, ... ,xr, y not all 0 such that

x1v1 + ... + xrvr + yvi = 0.

Furthermore, y ≠ 0, because otherwise, we would have a relation of linear
dependence for v1, ... ,vr. Hence we can solve for vi, namely

vi = −(x1/y)v1 − ... − (xr/y)vr,

thereby showing that vi is a linear combination of v1, ... ,vr.

Next, let v be any element of V. There exist numbers c1, ... ,cn such
that

v = c1v1 + ... + cnvn.

In this relation, we can replace each vi (i > r) by a linear combination of
v1, ... ,vr. If we do this, and then collect terms, we find that we have
expressed v as a linear combination of v1, ... ,vr. This proves that
v1, ... ,vr generate V, and hence form a basis of V.
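The proof of Theorem 5.3 suggests a procedure: walk through the generators and keep each one that is independent of those kept so far. A sketch of my own for vectors in Rn, using the rank of a matrix (defined in §6 below) as the independence test:

```python
import numpy as np

def maximal_independent_subset(vectors):
    """Greedily keep each vector that is linearly independent of the
    vectors already kept; the result is a basis of the span
    (Theorem 5.3)."""
    kept = []
    for v in vectors:
        candidate = kept + [np.asarray(v, dtype=float)]
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            kept = candidate
    return kept

gens = [(1, 2), (-5, 7), (10, 4)]   # the dependent triple from above
print(maximal_independent_subset(gens))  # two vectors: a basis of R2
```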
We shall now give criteria which allow us to tell when elements of a
vector space constitute a basis.
Let v1, ... ,vn be linearly independent elements of a vector space V. We
shall say that they form a maximal set of linearly independent elements of
V if given any element w of V, the elements w, v1, ... ,vn are linearly
dependent.
Theorem 5.4. Let V be a vector space, and {v1, ... ,vn} a maximal set of
linearly independent elements of V. Then {v1, ... ,vn} is a basis of V.

Proof. We must now show that v1, ... ,vn generate V, i.e. that every
element of V can be expressed as a linear combination of v1, ... ,vn. Let w
be an element of V. The elements w, v1, ... ,vn of V must be linearly
dependent by hypothesis, and hence there exist numbers x0, x1, ... ,xn not
all 0 such that

x0w + x1v1 + ... + xnvn = 0.

We cannot have x0 = 0, because if that were the case, we would obtain a
relation of linear dependence among v1, ... ,vn. Therefore we can solve
for w in terms of v1, ... ,vn, namely

w = −(x1/x0)v1 − ... − (xn/x0)vn.

This proves that w is a linear combination of v1, ... ,vn, and hence that
{v1, ... ,vn} is a basis.
Theorem 5.5. Let V be a vector space of dimension n, and let v1, ... ,vn
be linearly independent elements of V. Then v1, ... ,vn constitute a basis
of V.

Proof. According to Theorem 5.1, {v1, ... ,vn} is a maximal set of
linearly independent elements of V. Hence it is a basis by Theorem 5.4.
Theorem 5.6. Let V be a vector space of dimension n and let W be a
subspace, also of dimension n. Then W = V.
Proof. A basis of W consists of n linearly independent elements of V, and hence is also a basis of V by Theorem 5.5. Thus every element of V lies in W, so W = V.
Theorem 5.7. Let V be a vector space of dimension n. Let r be a
positive integer with r < n, and let v1, ... ,vr be linearly independent elements of V. Then one can find elements v_{r+1}, ... ,vn such that

{v1, ... ,vn}

is a basis of V.

Proof. Since r < n we know that {v1, ... ,vr} cannot form a basis of V, and thus cannot be a maximal set of linearly independent elements of V. In particular, we can find v_{r+1} in V such that

v1, ... ,vr, v_{r+1}

are linearly independent. If r + 1 < n, we can repeat the argument. We can thus proceed stepwise (by induction) until we obtain n linearly independent elements {v1, ... ,vn}. These must be a basis by Theorem 5.4, and our corollary is proved.

Theorem 5.8. Let V be a vector space having a basis consisting of n elements. Let W be a subspace which does not consist of 0 alone. Then W has a basis, and the dimension of W is ≤ n.

Proof. Let w1 be a non-zero element of W. If {w1} is not a maximal set of linearly independent elements of W, we can find an element w2 of W such that w1, w2 are linearly independent. Proceeding in this manner, one element at a time, there must be an integer m ≤ n such that we can find linearly independent elements w1, w2, ... ,wm, and such that

{w1, ... ,wm}

is a maximal set of linearly independent elements of W (by Theorem 5.1 we cannot go on indefinitely finding linearly independent elements, and the number of such elements is at most n). If we now use Theorem 5.4, we conclude that {w1, ... ,wm} is a basis for W.

Exercises III, §5

1. What is the dimension of the following spaces (refer to Exercises 11 through 16 of the preceding section):
(a) 2 × 2 matrices
(b) m × n matrices
(c) n × n matrices all of whose components are 0 except possibly on the diagonal.
(d) Upper triangular n × n matrices.
(e) Symmetric 2 × 2 matrices.
(f) Symmetric 3 × 3 matrices.
(g) Symmetric n × n matrices.

2. Let V be a subspace of R2. What are the possible dimensions for V? Show that if V ≠ R2, then either V = {0}, or V is a straight line passing through the origin.

3. Let V be a subspace of R3. What are the possible dimensions for V? Show that if V ≠ R3, then either V = {0}, or V is a straight line passing through the origin, or V is a plane passing through the origin.

III, §6. The Rank of a Matrix

Let

A = ( a11 ... a1n )
    (  .        . )
    ( am1 ... amn )

be an m × n matrix. The columns of A generate a vector space, which is a subspace of Rm. The dimension of that subspace is called the column rank of A. In light of Theorem 5.4, the column rank is equal to the maximum number of linearly independent columns. Similarly, the rows of A generate a subspace of Rn, and the dimension of this subspace is called the row rank. Again by Theorem 5.4, the row rank is equal to the maximum number of linearly independent rows.

We shall prove below that these two ranks are equal to each other. We shall give two proofs. The first, in this section, depends on certain operations on the rows and columns of a matrix. Later we shall give a more geometric proof using the notion of perpendicularity.

We define the row space of A to be the subspace generated by the rows of A. We define the column space of A to be the subspace generated by the columns.

Consider the following operations on the rows of a matrix.

Row 1. Adding a scalar multiple of one row to another.
Row 2. Interchanging rows.
Row 3. Multiplying one row by a non-zero scalar.

These are called the row operations (sometimes, the elementary row operations). We have similar operations for columns, which will be denoted by Col 1, Col 2, Col 3 respectively.
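A minimal numpy sketch of my own showing the three row operations, each acting on the matrix in place:

```python
import numpy as np

def row_add(a, i, j, c):
    """Row 1: add c times row j to row i."""
    a[i] += c * a[j]

def row_swap(a, i, j):
    """Row 2: interchange rows i and j."""
    a[[i, j]] = a[[j, i]]

def row_scale(a, i, c):
    """Row 3: multiply row i by a non-zero scalar c."""
    assert c != 0
    a[i] *= c

A = np.array([[1.0, 2.0], [3.0, 4.0]])
row_add(A, 1, 0, -3.0)   # clear the entry below the pivot
print(A)                 # [[ 1.  2.] [ 0. -2.]]
```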
We shall study the effect of these operations on the ranks. First observe that each one of the above operations has an inverse operation in the sense that by performing similar operations we can revert to the original matrix. For instance, let us change a matrix A by adding c times the second row to the first. We obtain a new matrix B whose rows are

A1 + cA2, A2, ... ,Am.

If we now add −cA2 to the first row of B, we get back A1. A similar argument can be applied to any two rows. If we interchange two rows, then interchange them again, we revert to the original matrix. If we multiply a row by a number c ≠ 0, then multiplying again by c^{-1} yields the original row.

Theorem 6.1. Row and column operations do not change the row rank of a matrix, nor do they change the column rank.

Proof. First we note that interchanging rows of a matrix does not affect the row rank since the subspace generated by the rows is the same, no matter in what order we take the rows. Next, suppose we add a scalar multiple of one row to another. We keep the notation before the theorem, so the new rows are

A1 + cA2, A2, ... ,Am.

Any linear combination of the rows of B, namely any linear combination of

A1 + cA2, A2, ... ,Am,

is also a linear combination of A1, A2, ... ,Am. Consequently the row space of B is contained in the row space of A. Hence by Theorem 5.6, we have

row rank of B ≤ row rank of A.

Since A is also obtained from B by a similar operation, we get the reverse inequality

row rank of A ≤ row rank of B.

Hence these two row ranks are equal. Third, if we multiply a row Ai by c ≠ 0, we get the new row cAi. But Ai = c^{-1}(cAi), so the row spaces of the matrix A and the new matrix obtained by multiplying the row by c are the same. Hence the third operation also does not change the row rank. We could have given the above argument with any pair of rows Ai, Aj (i ≠ j), so we have seen that row operations do not change the row rank.

We now prove that they do not change the column rank. Again consider the matrix B obtained by adding a scalar multiple of the second row to the first, so that B has rows

A1 + cA2, A2, ... ,Am.

Let B1, ... ,Bn be the columns of this new matrix B. We shall see that the relations of linear dependence between the columns of B are precisely the same as the relations of linear dependence between the columns of A. In other words:

A vector X = (x1, ... ,xn) gives a relation of linear dependence between the columns of B if and only if X gives a relation of linear dependence between the columns of A.

Proof. We know from Chapter II, §2 that a relation of linear dependence among the columns can be written in terms of the dot product with the rows of the matrix. So suppose we have a relation

x1B1 + ... + xnBn = 0.

This is equivalent with the fact that

X·Bi = 0 for i = 1, ... ,m.

Therefore

X·(A1 + cA2) = 0, X·A2 = 0, ..., X·Am = 0.

The first equation can be written

X·A1 + cX·A2 = 0.

Since X·A2 = 0 we conclude that X·A1 = 0. Hence X is perpendicular to the rows of A. Hence X gives a linear relation among the columns of A. The converse is proved similarly.

The above statement proves that if r among the columns of B are linearly independent, then r among the columns of A are also linearly independent, and conversely. Therefore A and B have the same column rank. We leave the verification that the other row operations do not change the column ranks to the reader. Similarly, one proves that the column operations do not change the row rank. The situation is symmetric between rows and columns. This concludes the proof of the theorem.

Theorem 6.2. Let A be a matrix of row rank r.
By a succession of row and column operations, the matrix can be transformed to the matrix having components equal to 1 on the diagonal of the first r rows and columns, and 0 everywhere else:

( 1 0 ... 0 0 ... 0 )
( 0 1 ... 0 0 ... 0 )
( .       .        )
( 0 0 ... 1 0 ... 0 )
( 0 0 ... 0 0 ... 0 )
( .                )
( 0 0 ... 0 0 ... 0 )

with the unit r × r matrix in the upper left. In particular, the row rank is equal to the column rank.

Proof. Suppose r ≠ 0 so the matrix is not the zero matrix. Some component is not zero. After interchanging rows and columns, we may assume that this component is in the upper left-hand corner, that is this component is equal to a11 ≠ 0. Now we go down the first column. We multiply the first row by a21/a11 and subtract it from the second row. We then obtain a new matrix with 0 in the first place of the second row. Next we multiply the first row by a31/a11 and subtract it from the third row. Then our new matrix has first component equal to 0 in the third row. Proceeding in the same way, we can transform the matrix so that it is of the form

( a11 a12 ... a1n )
(  0  a22 ... a2n )
(  .   .       .  )
(  0  am2 ... amn )

Next, we subtract appropriate multiples of the first column from the second, third, ..., n-th column to get zeros in the first row. This transforms the matrix to a matrix of type

( a11  0  ...  0  )
(  0  a22 ... a2n )
(  .   .       .  )
(  0  am2 ... amn )

Now we have an (m − 1) × (n − 1) matrix in the lower right. If we perform row and column operations on all but the first row and column, then first we do not disturb the first component a11; and second we can repeat the argument, in order to obtain a matrix of the form

( a11  0   0  ...  0  )
(  0  a22  0  ...  0  )
(  0   0  a33 ... a3n )
(  .   .   .       .  )

Proceeding stepwise by induction we reach a matrix of the form

( a11  0  ...  0  0 ... 0 )
(  0  a22 ...  0  0 ... 0 )
(  .       .              )
(  0   0  ... ass 0 ... 0 )
(  0   0  ...  0  0 ... 0 )

with diagonal elements a11, ... ,ass which are ≠ 0. We divide the first row by a11, the second row by a22, etc. We then obtain a matrix with the unit s × s matrix in the upper left-hand corner, and zeros everywhere else. Since row and column operations do not change the row or column rank, it follows that r = s, and also that the row rank is equal to the column rank. This proves the theorem.

Since we have proved that the row rank is equal to the column rank, we can now omit "row" or "column" and just speak of the rank of a matrix. Thus by definition the rank of a matrix is equal to the dimension of the space generated by the rows.

Remark. Although the systematic procedure provides an effective method to find the rank, in practice one can usually take shortcuts to get as many zeros as possible by making row and column operations, so that at some point it becomes obvious what the rank of the matrix is. Of course, one can also use the simple mechanism of linear equations to find the rank.

Example. Find the rank of the matrix

( 2 1 )
( 0 1 )

There are only two rows, so the rank is at most 2. On the other hand, the two columns

(2, 0) and (1, 1)

are linearly independent, for if a, b are numbers such that

a(2, 0) + b(1, 1) = (0, 0),

then

2a + b = 0, b = 0,

so that a = 0. Therefore the two columns are linearly independent, and the rank is equal to 2. Later we shall also see that determinants give a computational way of determining when vectors are linearly independent, and thus can be used to determine the rank.

Example. Find the rank of the matrix

(  1  2 -3 )
(  2  1  0 )
( -2 -1  3 )
( -1  4 -2 )

We subtract twice the first column from the second and add 3 times the first column to the third. This gives

(  1  0  0 )
(  2 -3  6 )
( -2  3 -3 )
( -1  6 -5 )

We add 2 times the second column to the third. This gives

(  1  0  0 )
(  2 -3  0 )
( -2  3  3 )
( -1  6  7 )

This matrix is in column echelon form, and it is immediate that the first three rows or columns are linearly independent. Since there are only three columns, it follows that the rank is 3.
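The same answer comes out of a one-line numerical check. This sketch of my own uses numpy's matrix_rank as a stand-in for the column reduction above, not as the book's method:

```python
import numpy as np

A = np.array([[ 1,  2, -3],
              [ 2,  1,  0],
              [-2, -1,  3],
              [-1,  4, -2]])
print(np.linalg.matrix_rank(A))  # 3, agreeing with the column reduction
```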
Exercises III, §6

1. Find the rank of the following matrices.
(a)–(i): [nine small matrices, illegible in this copy]

2. Let A be a triangular matrix

( a11 a12 ... a1n )
(  0  a22 ... a2n )
(  .   .       .  )
(  0   0  ... ann )

Assume that none of the diagonal elements is equal to 0. What is the rank of A?

3. Let A be an m × n matrix and let B be an n × r matrix, so we can form the product AB.
(a) Show that the columns of AB are linear combinations of the columns of A. Thus prove that rank AB ≤ rank A.
(b) Prove that rank AB ≤ rank B. [Hint: Use the fact that rank AB = rank t(AB) and rank B = rank tB.]

CHAPTER IV

Linear Mappings

We shall first define the general notion of a mapping, which generalizes the notion of a function. Among mappings, the linear mappings are the most important. A good deal of mathematics is devoted to reducing questions concerning arbitrary mappings to linear mappings. For one thing, they are interesting in themselves, and many mappings are linear. On the other hand, it is often possible to approximate an arbitrary mapping by a linear one, whose study is much easier than the study of the original mapping. This is done in the calculus of several variables.

IV, §1. Mappings

Let S, S' be two sets. A mapping from S to S' is an association which to every element of S associates an element of S'. Instead of saying that F is a mapping from S into S', we shall often write the symbols F: S → S'. A mapping will also be called a map, for the sake of brevity.

A function is a special type of mapping, namely it is a mapping from a set into the set of numbers, i.e. into R.

We extend to mappings some of the terminology we have used for functions. For instance, if T: S → S' is a mapping, and if u is an element of S, then we denote by T(u), or Tu, the element of S' associated to u by T. We call T(u) the value of T at u, or also the image of u under T. The symbols T(u) are read "T of u". The set of all elements T(u), when u ranges over all elements of S, is called the image of T. If W is a subset of S, then the set of elements T(w), when w ranges over all elements of W, is called the image of W under T, and is denoted by T(W).

Let F: S → S' be a map from a set S into a set S'. If x is an element of S, we often write

x ↦ F(x)

with a special arrow ↦ to denote the image of x under F. Thus, for instance, we would speak of the map F such that F(x) = x^2 as the map x ↦ x^2.

Example 1. For any set S we have the identity mapping I: S → S. It is defined by I(x) = x for all x.

Example 2. Let S and S' be both equal to R. Let f: R → R be the function f(x) = x^2 (i.e. the function whose value at a number x is x^2). Then f is a mapping from R into R. Its image is the set of numbers ≥ 0.
Example 3. Let S be the set of numbers > 0, and let S’ = R. Let
g: S → S' be the function such that g(x) = x^{1/2}. Then g is a mapping
from S into R.
Example 4. Let S be the set of functions having derivatives of all
orders on the interval 0 < t < 1, and let S' = S. Then the derivative D = d/dt is a mapping from S into S. Indeed, our map D associates the function df/dt = Df to the function f. According to our terminology, Df is the value of the mapping D at f.

Example 5. Let S be the set R3, i.e. the set of 3-tuples. Let A = (2, 3, −1). Let L: R3 → R be the mapping whose value at a vector X = (x, y, z) is A·X. Then L(X) = A·X. If X = (1, 1, −1), then the value of L at X is 6.

Just as we did with functions, we describe a mapping by giving its values. Thus, instead of making the statement in Example 5 describing the mapping L, we would also say: Let L: R3 → R be the mapping L(X) = A·X. This is somewhat incorrect, but is briefer, and does not usually give rise to confusion. More correctly, we can write

X ↦ L(X) or X ↦ A·X

with the special arrow ↦ to denote the effect of the map L on the element X.

Example 6. Let F: R2 → R2 be the mapping given by F(x, y) = (2x, 2y). Describe the image under F of the points lying on the circle x^2 + y^2 = 1.

Let (x, y) be a point on the circle of radius 1. Let u = 2x and v = 2y. Then u, v satisfy the relation (u/2)^2 + (v/2)^2 = 1, or in other words,

u^2 + v^2 = 4.

Hence (u, v) is a point on the circle of radius 2. Therefore the image under F of the circle of radius 1 is a subset of the circle of radius 2. Conversely, given a point (u, v) such that

u^2 + v^2 = 4,

let x = u/2 and y = v/2. Then the point (x, y) satisfies the equation x^2 + y^2 = 1, and hence is a point on the circle of radius 1. Furthermore, F(x, y) = (u, v). Hence every point on the circle of radius 2 is the image of some point on the circle of radius 1. We conclude finally that the image of the circle of radius 1 under F is precisely the circle of radius 2.

Note. In general, let S, S' be two sets. To prove that S = S', one frequently proves that S is a subset of S' and that S' is a subset of S. This is what we did in the preceding argument.

Example 7. This example is particularly important in geometric applications. Let V be a vector space, and let u be a fixed element of V. We let

Tu: V → V

be the map such that Tu(v) = v + u. We call Tu the translation by u. If S is any subset of V, then Tu(S) is called the translation of S by u, and consists of all vectors v + u, with v ∈ S. We often denote it by S + u. In the next picture, we draw a set S and its translation by a vector u.

[Figure 1: a set S and its translation S + u]

Example 8. Rotation counterclockwise around the origin by an angle θ is a mapping, which we may denote by Rθ. Let θ = π/2. The image of the point (1, 0) under the rotation Rπ/2 is the point (0, 1). We may write this as Rπ/2(1, 0) = (0, 1).

Example 9. Let S be a set. A mapping from S into R will be called a function, and the set of such functions will be called the set of functions defined on S. Let f, g be two functions defined on S. We can define their sum just as we did for functions of numbers, namely f + g is the function whose value at an element t of S is f(t) + g(t). We can also define the product of f by a number c. It is the function whose value at t is cf(t). Then the set of mappings from S into R is a vector space.

Example 10. Let S be a set and let V be a vector space. Let F, G be two mappings from S into V. We can define their sum in the same way as we defined the sum of functions, namely the sum F + G is the mapping whose value at an element t of S is F(t) + G(t). We also define the product of F by a number c to be the mapping whose value at an element t of S is cF(t). It is easy to verify that conditions VS 1 through VS 8 are satisfied.
Exercises IV, §1

1. In Example 4, give Df as a function of x when f is the function:
(a) f(x) = sin x (b) f(x) = e^x (c) f(x) = log x

2. Let P = (0, 1). Let R be rotation by π/4. Give the coordinates of the image of P under R, i.e. give R(P).

3. In Example 5, give L(X) when X is the vector:
(a) (1, 2, −3) (b) (−1, 5, 0) (c) (2, 1, 1)

4. Let F: R → R2 be the mapping such that F(t) = (e^t, t). What is F(1), F(0), F(−1)?

5. Let G: R → R2 be the mapping such that G(t) = (t, 2t). Let F be as in Exercise 4. What is (F + G)(1), (F + G)(2), (F + G)(0)?

6. Let F be as in Exercise 4. What is (2F)(0), (πF)(1)?

7. Let A = (1, 1, −1, 3). Let F: R4 → R be the mapping such that for any vector X = (x1, x2, x3, x4) we have F(X) = X·A + 2. What is the value of F(X) when (a) X = (1, 1, 0, −1) and (b) X = (2, 3, −1, 1)?

In Exercises 8 through 12, refer to Example 6. In each case, to prove that the image is equal to a certain set S, you must prove that the image is contained in S, and also that every element of S is in the image.

8. Let F: R2 → R2 be the mapping defined by F(x, y) = (2x, 3y). Describe the image of the points lying on the circle x^2 + y^2 = 1.

9. Let F: R2 → R2 be the mapping defined by F(x, y) = (xy, y). Describe the image under F of the straight line x = 2.

10. Let F be the mapping defined by F(x, y) = (e^x cos y, e^x sin y). Describe the image under F of the line x = 1. Describe more generally the image under F of a line x = c, where c is a constant.

11. Let F be the mapping defined by F(t, u) = (cos t, sin t, u). Describe geometrically the image of the (t, u)-plane under F.

12. Let F be the mapping defined by F(x, y) = (x/3, y/4). What is the image under F of the ellipse

x^2/9 + y^2/16 = 1?

IV, §2. Linear Mappings

Let V, W be two vector spaces. A linear mapping

L: V → W

is a mapping which satisfies the following two properties. First, for any elements u, v in V, and any scalar c, we have:

LM 1. L(u + v) = L(u) + L(v).
LM 2. L(cu) = cL(u).

Example 1. The most important linear mapping of this course is described as follows. Let A be a given m × n matrix. Define

LA: Rn → Rm

by the formula

LA(X) = AX.

Then LA is linear. Indeed, this is nothing but a summary way of expressing the properties

A(X + Y) = AX + AY and A(cX) = cAX

for any vertical X, Y in Rn and any number c.

Example 2. The dot product is essentially a special case of the first example. Let A = (a1, ... ,an) be a fixed vector, and define

LA(X) = A·X.

Then LA is a linear map from Rn into R, because

A·(X + Y) = A·X + A·Y and A·(cX) = c(A·X).

Note that the dot product can also be viewed as multiplication of matrices if we view A as a row vector, and X as a column vector.

Example 3. Let V be any vector space. The mapping which associates to any element u of V this element itself is obviously a linear mapping, which is called the identity mapping. We denote it by I. Thus I(u) = u.

Example 4. Let V, W be any vector spaces. The mapping which associates the element 0 in W to any element u of V is called the zero mapping and is obviously linear.

Example 5. Let V be the set of functions which have derivatives of all orders. Then the derivative D: V → V is a linear mapping. This is simply a brief way of summarizing standard properties of the derivative, namely

D(f + g) = Df + Dg, D(cf) = cD(f).
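A quick numerical illustration of Example 1 above. The matrix and vectors below are arbitrary choices of my own; linearity of LA is just the distributivity of matrix multiplication.

```python
import numpy as np

A = np.array([[1.0, 2.0,  0.0],
              [0.0, 1.0, -1.0]])   # a 2 x 3 matrix, so L_A : R^3 -> R^2

def L(X):
    """The linear map L_A(X) = AX of Example 1."""
    return A @ X

X = np.array([1.0, -1.0, 2.0])
Y = np.array([0.5,  3.0, 1.0])
c = 4.0

# LM 1 and LM 2 hold because A(X + Y) = AX + AY and A(cX) = cAX.
assert np.allclose(L(X + Y), L(X) + L(Y))
assert np.allclose(L(c * X), c * L(X))
print(L(X))  # [-1. -3.]
```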
Example 6. Let V = R3 be the vector space of vectors in 3-space. Let V' = R2 be the vector space of vectors in 2-space. We can define a mapping

F: R3 → R2

by the projection, namely F(x, y, z) = (x, y). We leave it to you to check that the conditions LM 1 and LM 2 are satisfied.

More generally, suppose n = r + s is expressed as a sum of two positive integers. We can separate the coordinates (x1, ... ,xn) into two bunches (x1, ... ,xr, x_{r+1}, ... ,x_{r+s}), namely the first r coordinates, and the last s coordinates. Let

F: Rn → Rr

be the map such that

F(x1, ... ,xn) = (x1, ... ,xr).

Then you can verify easily that F is linear. We call F the projection on the first r coordinates. Similarly, we would have a projection on the last s coordinates, by means of the linear map L such that

L(x1, ... ,xn) = (x_{r+1}, ... ,xn).

Example 7. In the calculus of several variables, one defines the gradient of a function f to be

grad f(X) = (∂f/∂x1, ... ,∂f/∂xn).

Then for two functions f, g, we have

grad(f + g) = grad f + grad g,

and for any number c,

grad(cf) = c·grad f.

Thus grad is a linear map.

Let L: V → W be a linear mapping. Let u, v, w be elements of V. Then

L(u + v + w) = L(u) + L(v) + L(w).

This can be seen stepwise, using the definition of linear mappings. Thus

L(u + v + w) = L(u + v) + L(w) = L(u) + L(v) + L(w).

Similarly, given a sum of more than three elements, an analogous property is satisfied. For instance, let u1, ... ,un be elements of V. Then

L(u1 + ... + un) = L(u1) + ... + L(un).

The sum on the right can be taken in any order. A formal proof can easily be given by induction, and we omit it. If a1, ... ,an are numbers, then

L(a1u1 + ... + anun) = a1L(u1) + ... + anL(un).

We show this for three elements:

L(a1u + a2v + a3w) = L(a1u) + L(a2v) + L(a3w) = a1L(u) + a2L(v) + a3L(w).

With the notation of summation signs, we would write

L( Σ_{i=1}^{n} aiui ) = Σ_{i=1}^{n} aiL(ui).

In practice, the following properties will be obviously satisfied, but it turns out they can be proved from the axioms of linear maps and vector spaces.

LM 3. Let L: V → W be a linear map. Then L(0) = 0.

Proof. We have L(0) = L(0 + 0) = L(0) + L(0). Subtracting L(0) from both sides yields 0 = L(0), as desired.

LM 4. Let L: V → W be a linear map. Then L(−v) = −L(v).

Proof. We have

0 = L(0) = L(v − v) = L(v) + L(−v).

Add −L(v) to both sides to get the desired assertion.

We observe that the values of a linear map are determined by knowing the values on the elements of a basis.

Example 8. Let L: R2 → R2 be a linear map. Suppose that

L(1, 1) = (1, 4) and L(2, −1) = (−2, 3).

Find L(3, −1). To do this, we write (3, −1) as a linear combination of (1, 1) and (2, −1). Thus we have to solve

(3, −1) = x(1, 1) + y(2, −1).

This amounts to solving

x + 2y = 3, x − y = −1.

The solution is x = 1/3, y = 4/3. Hence

L(3, −1) = xL(1, 1) + yL(2, −1) = (1/3)(1, 4) + (4/3)(−2, 3) = (−7/3, 16/3).

Example 9. Let V be a vector space, and let L: V → R be a linear map. We contend that the set S of all elements v in V such that L(v) < 0 is convex.

Proof. Let L(v) < 0 and L(w) < 0. Let 0 < t < 1. Then

L(tv + (1 − t)w) = tL(v) + (1 − t)L(w).

Then tL(v) < 0 and (1 − t)L(w) < 0, so tL(v) + (1 − t)L(w) < 0, whence tv + (1 − t)w lies in S. If t = 0 or t = 1, then tv + (1 − t)w is equal to v or w and this also lies in S. This proves our assertion. For a generalization of this example, see Exercise 14.
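Example 8 above can be replayed in code, a sketch of my own: solve for the combination, then apply linearity.

```python
import numpy as np

# Columns are the vectors on which L is known: (1, 1) and (2, -1).
B = np.array([[1.0,  2.0],
              [1.0, -1.0]])
known_values = np.array([[ 1.0, 4.0],    # L(1, 1)
                         [-2.0, 3.0]])   # L(2, -1)

x, y = np.linalg.solve(B, np.array([3.0, -1.0]))  # (3,-1) = x(1,1) + y(2,-1)
result = x * known_values[0] + y * known_values[1]
print(x, y)    # 0.333..., 1.333...
print(result)  # [-2.333...  5.333...], i.e. (-7/3, 16/3)
```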
The coordinates of a linear map

Let first

F: V → Rn

be any mapping. Then each value F(v) is an element of Rn, and so has coordinates. Thus we can write

F(v) = (F1(v), ... ,Fn(v)).

Each Fi is a function of V into R, which we write

Fi: V → R.

Example 10. Let F: R2 → R3 be the mapping

F(x, y) = (2x − y, 3x + 4y, x − 5y).

Then

F1(x, y) = 2x − y, F2(x, y) = 3x + 4y, F3(x, y) = x − 5y.

Observe that each coordinate function can be expressed in terms of a dot product. For instance, let

A1 = (2, −1), A2 = (3, 4), A3 = (1, −5).

Then

Fi(x, y) = Ai·(x, y) for i = 1, 2, 3.

Each function is linear. Quite generally:

Proposition 2.1. Let F: V → Rn be a mapping of a vector space V into Rn. Then F is linear if and only if each coordinate function Fi: V → R is linear, for i = 1, ... ,n.

Proof. For v, w ∈ V we have

F(v + w) = (F1(v + w), ... ,Fn(v + w)),
F(v) = (F1(v), ... ,Fn(v)),
F(w) = (F1(w), ... ,Fn(w)).

Thus F(v + w) = F(v) + F(w) if and only if Fi(v + w) = Fi(v) + Fi(w) for all i = 1, ... ,n, by the definition of addition of n-tuples. The same argument shows that if c ∈ R, then F(cv) = cF(v) if and only if

Fi(cv) = cFi(v)

for all i = 1, ... ,n. This proves the proposition.

Example 10 (continued). The mapping of Example 10 is linear because each coordinate function is linear. Actually, if you write the vector (x, y) vertically, you should realize that the mapping F is in fact equal to LA for some matrix A. What is this matrix A?

The vector space of linear maps

Let V, W be two vector spaces. We consider the set of all linear mappings from V into W, and denote this set by ℒ(V, W), or simply ℒ if the reference to V and W is clear. We shall define the addition of linear mappings and their multiplication by numbers in such a way as to make ℒ into a vector space.

Let L: V → W and let F: V → W be two linear mappings. We define their sum L + F to be the map whose value at an element u of V is L(u) + F(u). Thus we may write

(L + F)(u) = L(u) + F(u).

The map L + F is then a linear map. Indeed, it is easy to verify that the two conditions which define a linear map are satisfied. For any elements u, v of V, we have

(L + F)(u + v) = L(u + v) + F(u + v)
              = L(u) + L(v) + F(u) + F(v)
              = L(u) + F(u) + L(v) + F(v)
              = (L + F)(u) + (L + F)(v).

Furthermore, if c is a number, then

(L + F)(cu) = L(cu) + F(cu) = cL(u) + cF(u) = c[L(u) + F(u)] = c[(L + F)(u)].

Hence L + F is a linear map. If a is a number, and L: V → W is a linear map, we define a map aL from V into W by giving its value at an element u of V, namely (aL)(u) = aL(u). Then it is easily verified that aL is a linear map. We leave this as an exercise.

We have just defined operations of addition and multiplication by numbers in our set ℒ. Furthermore, if L: V → W is a linear map, i.e. an element of ℒ, then we can define −L to be (−1)L, i.e. the product of the number −1 by L. Finally, we have the zero-map, which to every element of V associates the element 0 of W. Then ℒ is a vector space. In other words, the set of linear maps from V into W is itself a vector space. The verification that the rules VS 1 through VS 8 for a vector space are satisfied is easy and is left to the reader.

Example 11. Let V = W be the vector space of functions which have derivatives of all orders. Let D be the derivative, and let I be the identity. If f is in V, then

(D + I)f = Df + f.

Thus, when f(x) = e^x, then (D + I)f is the function whose value at x is e^x + e^x = 2e^x. If f(x) = sin x, then (D + 3I)f is the function such that

((D + 3I)f)(x) = (Df)(x) + 3If(x) = cos x + 3 sin x.

We note that 3·I is a linear map, whose value at f is 3f. Thus (D + 3·I)f = Df + 3f. At any number x, the value of (D + 3·I)f is Df(x) + 3f(x). We can also write (D + 3I)f = Df + 3f.
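Operators like D + 3I from Example 11 can be played with symbolically. A sketch of my own, using sympy (which is not part of the text):

```python
import sympy as sp

x = sp.symbols('x')

def apply_D_plus_3I(f):
    """The operator D + 3I of Example 11: f -> f' + 3f."""
    return sp.diff(f, x) + 3 * f

print(apply_D_plus_3I(sp.sin(x)))  # cos(x) + 3*sin(x)
print(apply_D_plus_3I(sp.exp(x)))  # 4*exp(x)
```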
Exercises IV, §2

1. Determine which of the following mappings F are linear.
(a) F: R3 → R2 defined by F(x, y, z) = (x, z).
(b) F: R4 → R4 defined by F(X) = −X.
(c) F: R3 → R3 defined by F(X) = X + (0, −1, 0).
(d) F: R2 → R2 defined by F(x, y) = (2x + y, y).
(e) F: R2 → R2 defined by F(x, y) = (2x, y − x).
(f) F: R2 → R2 defined by F(x, y) = (y, x).
(g) F: R2 → R defined by F(x, y) = xy.

2. Which of the mappings in Exercises 4, 7, 8, 9 of §1 are linear?

3. Let V, W be two vector spaces and let F: V → W be a linear map. Let U be the subset of V consisting of all elements v such that F(v) = 0. Prove that U is a subspace of V.

4. Let L: V → W be a linear map. Prove that the image of L is a subspace of W. [This will be done in the next section, but try it now to give you practice.]

5. Let A, B be two m × n matrices. Assume that

AX = BX

for all n-tuples X. Show that A = B. This can also be stated in the form: If LA = LB then A = B.

6. Let Tu: V → V be the translation by a vector u. For which vectors u is Tu a linear map?

7. Let L: V → W be a linear map.
(a) If S is a line in V, show that the image L(S) is either a line in W or a point.
(b) If S is a line segment in V, between the points P and Q, show that the image L(S) is either a point or a line segment in W. Between which points in W?
(c) Let v1, v2 be linearly independent elements of V. Assume that L(v1) and L(v2) are linearly independent in W. Let P be an element of V, and let S be the parallelogram

P + t1v1 + t2v2, with 0 ≤ ti ≤ 1 for i = 1, 2.

Show that the image L(S) is a parallelogram in W.
(d) Let v, w be linearly independent elements of a vector space V. Let F: V → W be a linear map. Assume that F(v), F(w) are linearly dependent. Show that the image under F of the parallelogram spanned by v and w is either a point or a line segment.

8. Let E1 = (1, 0) and E2 = (0, 1) as usual. Let F be a linear map from R2 into itself such that

F(E1) = … and F(E2) = (−1, 2).

Let S be the square whose corners are at (0, 0), (1, 0), (1, 1), and (0, 1). Show that the image of this square under F is a parallelogram.

9. Let A, B be two non-zero vectors in the plane such that there is no constant c ≠ 0 such that B = cA. Let L be a linear mapping of the plane into itself such that L(E1) = A and L(E2) = B. Describe the image under L of the rectangle whose corners are (0, 1), (3, 0), (0, 0), and (3, 1).

10. Let L: R2 → R2 be a linear map, having the following effect on the indicated vectors:
(a) L(3, 1) = (1, 2) and L(−1, 0) = (1, 1)
(b) L(4, 1) = (1, 1) and L(1, 1) = (3, −2)
(c) L(1, 1) = (2, 1) and L(−1, 1) = (6, 3).
In each case compute L(1, 0).

11. Let L be as in (a), (b), (c), of Exercise 10. Find L(0, 1).

12. Let V, W be two vector spaces, and F: V → W a linear map. Let w1, ... ,wn be elements of W which are linearly independent, and let v1, ... ,vn be elements of V such that F(vi) = wi for i = 1, ... ,n. Show that v1, ... ,vn are linearly independent.

13. (a) Let V be a vector space and F: V → R a linear map. Let W be the subset of V consisting of all elements v such that F(v) = 0. Assume that W ≠ V, and let v0 be an element of V which does not lie in W. Show that every element of V can be written as a sum w + cv0, with some w in W and some number c.
(b) Show that W is a subspace of V. Let {v1, ... ,vn} be a basis of W. Show that {v0, v1, ... ,vn} is a basis of V.

Convex sets

14. Show that the image of a convex set under a linear map is convex.

15. Let L: V → W be a linear map. Let T be a convex set in W and let S be the set of elements v ∈ V such that L(v) ∈ T. Show that S is convex.
Remark. Why do these exercises give a more general proof of what you should already have worked out previously? For instance: Let A ∈ Rn and let c be a number. Then the set of all X ∈ Rn such that X·A ≤ c is convex. Also if S is a convex set and c is a number, then cS is convex. How do these statements fit as special cases of Exercises 14 and 15?

16. Let S be a convex set in V and let u ∈ V. Let Tu: V → V be the translation by u. Show that the image Tu(S) is convex.

Eigenvectors and eigenvalues. Let V be a vector space, and let L: V → V be a linear map. An eigenvector v for L is an element of V such that there exists a scalar c with the property

L(v) = cv.

The scalar c is called an eigenvalue of v with respect to L. If v ≠ 0 then c is uniquely determined. When V is a vector space whose elements are functions, then an eigenvector is also called an eigenfunction.

17. (a) Let V be the space of differentiable functions on R. Let f(t) = e^{ct}, where c is some number. Let L be the derivative d/dt. Show that f is an eigenfunction for L. What is the eigenvalue?
(b) Let L be the second derivative, that is

L(f) = d^2 f/dt^2

for any function f. Show that the functions sin t and cos t are eigenfunctions of L. What are the eigenvalues?

18. Let L: V → V be a linear map, and let W be the subset of elements of V consisting of all eigenvectors of L with a given eigenvalue c. Show that W is a subspace.

19. Let L: V → V be a linear map. Let v1, ... ,vn be non-zero eigenvectors for L, with eigenvalues c1, ... ,cn respectively. Assume that c1, ... ,cn are distinct. Prove that v1, ... ,vn are linearly independent. [Hint: Use induction.]

IV, §3. The Kernel and Image of a Linear Map

Let F: V → W be a linear map. The image of F is the set of elements w in W such that there exists an element v of V such that F(v) = w.

The image of F is a subspace of W.

Proof. Observe first that F(0) = 0, and hence 0 is in the image. Next, suppose that w1, w2 are in the image. Then there exist elements v1, v2 of V such that F(v1) = w1 and F(v2) = w2. Hence

F(v1 + v2) = F(v1) + F(v2) = w1 + w2,

thereby proving that w1 + w2 is in the image. If c is a number, then

F(cv1) = cF(v1) = cw1.

Hence cw1 is in the image. This proves that the image is a subspace of W.

Let V, W be vector spaces, and let F: V → W be a linear map. The set of elements v ∈ V such that F(v) = 0 is called the kernel of F.

The kernel of F is a subspace of V.

Proof. Since F(0) = 0, we see that 0 is in the kernel. Let v, w be in the kernel. Then

F(v + w) = F(v) + F(w) = 0 + 0 = 0,

so that v + w is in the kernel. If c is a number, then F(cv) = cF(v) = 0, so that cv is also in the kernel. Hence the kernel is a subspace.

Example 1. Let L: R3 → R be the map such that

L(x, y, z) = 3x − 2y + z.

Thus if A = (3, −2, 1), then we can write

L(X) = X·A = A·X.

Then the kernel of L is the set of solutions of the equation

3x − 2y + z = 0.

Of course, this generalizes to n-space. If A is an arbitrary vector in Rn, we can define the linear map

LA: Rn → R

such that LA(X) = A·X. Its kernel can be interpreted as the set of all X which are perpendicular to A.

Example 2. Let P: R3 → R2 be the projection, such that P(x, y, z) = (x, y). Then P is a linear map whose kernel consists of all vectors in R3 whose first two coordinates are equal to 0, i.e. all vectors (0, 0, z) with arbitrary component z.
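A numerical view of the kernel in Example 1, a sketch of my own: the null space of the 1 × 3 matrix (3, −2, 1) is the plane of solutions of 3x − 2y + z = 0.

```python
import numpy as np

A = np.array([[3.0, -2.0, 1.0]])   # L(X) = A . X as a 1 x 3 matrix

# Right-singular vectors beyond the rank span the null space.
_, _, vt = np.linalg.svd(A)
kernel_basis = vt[1:]              # two vectors spanning the plane
print(kernel_basis @ A[0])         # ~ [0, 0]: both satisfy 3x - 2y + z = 0
```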
Example 3. Let A be an m × n matrix, and let

LA: Rn → Rm

be the linear map such that LA(X) = AX. Then the kernel of LA is precisely the subspace of solutions X of the linear equations

AX = 0.

Example 4. Differential equations. Let D be the derivative. If the real variable is denoted by x, then we may also write D = d/dx. The derivative may be iterated, so the second derivative is denoted by D^2 (or (d/dx)^2). When applied to a function, we write D^2 f, so that

D^2 f = d^2 f/dx^2.

Similarly for D^3, D^4, ... ,D^n for the n-th derivative. Now let V be the vector space of functions which admit derivatives of all orders. Let a1, ... ,am be numbers, and let g be an element of V, that is an infinitely differentiable function. Consider the problem of finding a solution f to the differential equation

d^m f/dx^m + a1 d^{m−1} f/dx^{m−1} + ... + am f = g.

We may rewrite this equation without the variable x, in the form

(D^m + a1 D^{m−1} + ... + am I)f = g.

Each derivative D^k is a linear map from V to itself. Let

L = D^m + a1 D^{m−1} + ... + am I.

Then L is a sum of linear maps, and is itself a linear map. Thus the differential equation may be rewritten in the form

L(f) = g.

This is now in a similar notation to that used for solving linear equations. Furthermore, this equation is in "non-homogeneous" form. The associated homogeneous equation is the equation

L(f) = 0,

where the right-hand side is the zero function. Let W be the kernel of L. Then W is the set (space) of solutions of the homogeneous equation. If there exists one solution f0 for the non-homogeneous equation L(f) = g, then all solutions are obtained by the translation

f0 + W = set of all functions f0 + f with f in W.

See Exercise 5.

In several previous exercises we looked at the image of lines, planes, parallelograms under a linear map. For example, if we consider the plane spanned by two linearly independent vectors v1, v2 in V, and

L: V → W

is a linear map, then the image of that plane will be a plane provided L(v1), L(v2) are also linearly independent. We can give a criterion for this in terms of the kernel, and the criterion is valid quite generally as follows.

Theorem 3.1. Let F: V → W be a linear map whose kernel is {0}. If v1, ... ,vn are linearly independent elements of V, then F(v1), ... ,F(vn) are linearly independent elements of W.

Proof. Let x1, ... ,xn be numbers such that

x1F(v1) + ... + xnF(vn) = 0.

By linearity, we get

F(x1v1 + ... + xnvn) = 0.

Hence x1v1 + ... + xnvn = 0. Since v1, ... ,vn are linearly independent it follows that xi = 0 for i = 1, ... ,n. This proves our theorem.

We often abbreviate kernel and image by writing Ker and Im respectively.

The next theorem relates the dimensions of the kernel and image of a linear map, with the dimension of the space on which the map is defined.

Theorem 3.2. Let V be a vector space. Let L: V → W be a linear map of V into another space W. Let n be the dimension of V, q the dimension of the kernel of L, and s the dimension of the image of L. Then n = q + s. In other words,

dim V = dim Ker L + dim Im L.

Proof. If the image of L consists of 0 only, then our assertion is trivial. We may therefore assume that s > 0. Let {w1, ... ,ws} be a basis
of the image of L. Let v1, ... ,vs be elements of V such that L(vi) = wi for
i = 1, ... ,s. If the kernel is not {0}, let {u1, ... ,uq} be a basis of the
kernel. If the kernel is {0}, it is understood that all reference to
{u1, ... ,uq} is to be omitted in what follows. We contend that

{v1, ... ,vs, u1, ... ,uq}

is a basis of V. This will suffice to prove our assertion. Let v be any
element of V. Then there exist numbers x1, ... ,xs such that

L(v) = x1w1 + ... + xsws,

because {w1, ... ,ws} is a basis of the image of L. By linearity,

L(v) = L(x1v1 + ... + xsvs),

and again by linearity, subtracting the right-hand side from the left-hand
side, it follows that

L(v − x1v1 − ... − xsvs) = 0.

Hence v − x1v1 − ... − xsvs lies in the kernel of L, and there exist
numbers y1, ... ,yq such that

v − x1v1 − ... − xsvs = y1u1 + ... + yquq.

Hence

v = x1v1 + ... + xsvs + y1u1 + ... + yquq

is a linear combination of v1, ... ,vs, u1, ... ,uq. This proves that these
s + q elements of V generate V.

We now show that they are linearly independent, and hence that they
constitute a basis. Suppose that there exists a linear relation:

x1v1 + ... + xsvs + y1u1 + ... + yquq = 0.

Applying L to this relation, and using the fact that L(uj) = 0 for
j = 1, ... ,q, we obtain

x1L(v1) + ... + xsL(vs) = 0.

But L(v1), ... ,L(vs) are none other than w1, ... ,ws, which have been assumed linearly independent. Hence xi = 0 for i = 1, ... ,s. Hence

y1u1 + ... + yquq = 0.

But u1, ... ,uq constitute a basis of the kernel of L, and in particular, are
linearly independent. Hence all yj = 0 for j = 1, ... ,q. This concludes the
proof of our assertion.
Example 1 (continued). The linear map L: R3 → R of Example 1 is
given by the formula

L(x, y, z) = 3x − 2y + z.

Its kernel consists of all solutions of the equation

3x − 2y + z = 0.

Its image is a subspace of R, is not {0}, and hence consists of all of R.
Thus its image has dimension 1. Hence its kernel has dimension 2.
Example 2 (continued). The image of the projection P: R3 → R2
in Example 2 is all of R2, and the kernel has dimension 1.
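Both counts match Theorem 3.2, and one can check them numerically. A sketch of my own: the rank gives dim Im, and dim Ker = n − rank.

```python
import numpy as np

def ker_im_dims(A):
    """Return (dim Ker, dim Im) for L_A on R^n, as in Theorem 3.2."""
    m, n = A.shape
    s = np.linalg.matrix_rank(A)   # dimension of the image
    return n - s, s

print(ker_im_dims(np.array([[3.0, -2.0, 1.0]])))           # (2, 1), Example 1
print(ker_im_dims(np.array([[1.0, 0, 0], [0, 1.0, 0]])))   # (1, 2), Example 2
```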
Exercises IV, §3
Let L: V ~ W be a linear map.
1. (a) If S is a one-dimensional subspace of V, show that the image L(S) is
either a point or a line.
(b) If S is a two-dimensional subspace of V, show that the image L(S) is
either a plane, a line or a point.
2. (a) If S is an arbitrary line in V (cf. Chapter III, §2) show that the image of
S is either a point or a line.
(b) If S is an arbitrary plane in V, show that the image of S is either a plane,
a line or a point.
3. (a) Let F: V → W be a linear map, whose kernel is {0}. Assume that V and
W have both the same dimension n. Show that the image of F is all of
W.
(b) Let F: V → W be a linear map and assume that the image of F is all of
W. Assume that V and W have the same dimension n. Show that the
kernel of F is {0}.
4. Let L: V → W be a linear map. Assume dim V > dim W. Show that the
kernel of L is not {0}.
5. Let L: V → W be a linear map. Let w be an element of W. Let v0 be an
element of V such that L(v0) = w. Show that any solution of the equation
L(X) = w is of type v0 + u, where u is an element of the kernel of L.
6. Let V be the vector space of functions which have derivatives of all orders,
and let D: V ~ V be the derivative. What is the kernel of D?
7. Let D2 be the second derivative (i.e. the iteration of D taken twice). What is
the kernel of D2? In general, what is the kernel of Dn (n-th derivative)?
8. (a) Let V, D be as in Exercise 6. Let L = D - I, where I is the identity mapping of V. What is the kernel of L?
(b) Same question for L = D - aI, where a is a number.

9. (a) What is the dimension of the subspace of R^n consisting of those vectors A = (a_1, ..., a_n) such that a_1 + ... + a_n = 0?
(b) What is the dimension of the subspace of the space of n × n matrices (a_ij) such that

a_11 + ... + a_nn = ∑_{i=1}^{n} a_ii = 0?
10. An n × n matrix A is called skew-symmetric if tA = -A. Show that any n × n matrix A can be written as a sum

A = B + C,

where B is symmetric and C is skew-symmetric. [Hint: Let B = (A + tA)/2.] Show that if A = B_1 + C_1, where B_1 is symmetric and C_1 is skew-symmetric, then B = B_1 and C = C_1.
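The hint already contains the whole construction of Exercise 10; here is a small numerical sketch of it (assuming numpy), with a hypothetical 2 × 2 matrix chosen only for illustration.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [5.0, 3.0]])            # hypothetical example matrix
    B = (A + A.T) / 2                     # symmetric part
    C = (A - A.T) / 2                     # skew-symmetric part
    assert np.allclose(A, B + C)
    assert np.allclose(B, B.T)
    assert np.allclose(C, -C.T)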
11. Let M be the space of all n × n matrices. Let P: M → M be the map such that

P(A) = (A + tA)/2.
(a) Show that P is linear.
(b) Show that the kernel of P consists of the space of skew-symmetric ma-
trices.
(c) Show that the image of P consists of all symmetric matrices. [Watch out. You have to prove two things: For any matrix A, P(A) is symmetric. Conversely, given a symmetric matrix B, there exists a matrix A such that B = P(A). What is the simplest possibility for such A?]
(d) You should have determined the dimension of the space of symmetric
matrices previously, and found n(n + 1)/2. What then is the dimension of
the space of skew-symmetric matrices?
(e) Exhibit a basis for the space of skew-symmetric matrices.
12. Let M be the space of all n × n matrices. Let Q: M → M be the map such that

Q(A) = (A - tA)/2.

(a) Show that Q is linear.
(b) Describe the kernel of Q, and determine its dimension.
(c) What is the image of Q?

13. A function (real valued, of a real variable) is called even if f(-x) = f(x). It is called odd if f(-x) = -f(x).
(a) Verify that sin x is an odd function, and cos x is an even function.
(b) Let V be the vector space of all functions. Define the map P: V → V by (Pf)(x) = [f(x) + f(-x)]/2.
[…] ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ > 0 if v ≠ 0.
A scalar product satisfying this condition is called positive definite. For the rest of this section we assume that V is a vector space with a
positive definite scalar product.

Example 1. Let V = R^n, and define ⟨X, Y⟩ = X·Y, the usual dot product. Then this scalar product is positive definite.
[…] Let f(x) = sin kx, where k is a positive integer. Then

‖f‖ = ( ∫_{-π}^{π} sin² kx dx )^{1/2} = √π.

If g is any continuous function on [-π, π], then the component of g along f is also called the Fourier coefficient of g with respect to f, and is equal to

⟨g, f⟩/⟨f, f⟩ = (1/π) ∫_{-π}^{π} g(x) sin kx dx.
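The integral above can be approximated by a Riemann sum; the following sketch (assuming numpy) computes this Fourier coefficient for the hypothetical choice g(x) = x.

    import numpy as np

    k = 3
    x = np.linspace(-np.pi, np.pi, 20001)
    dx = x[1] - x[0]
    g = x                                   # hypothetical g; any continuous g works
    c = np.sum(g * np.sin(k * x)) * dx / np.pi
    print(c)                                # about 2*(-1)**(k+1)/k, here ~0.6667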

As with the case of n-space, we define the projection of v along w to be the vector cw, because of our usual picture:
Figure 1
Exactly the same arguments which we gave in Chapter I can now be
used to get the Schwarz inequality, namely:
Theorem 1.1. For all v, w ∈ V we have

|⟨v, w⟩| ≤ ‖v‖ ‖w‖.

Proof. If w = O, then both sides are equal to 0 and our inequality is obvious. Next, assume that w ≠ O. Let c be the component of v along w. We write

v = v - cw + cw.

Then v - cw is perpendicular to cw, so by Pythagoras,

‖v‖² = ‖v - cw‖² + ‖cw‖² = ‖v - cw‖² + |c|² ‖w‖².

Therefore |c|² ‖w‖² ≤ ‖v‖², and taking square roots yields |c| ‖w‖ ≤ ‖v‖. But c = ⟨v, w⟩/⟨w, w⟩ = ⟨v, w⟩/‖w‖², so this last inequality is precisely |⟨v, w⟩| ≤ ‖v‖ ‖w‖, as was to be shown.
[…]
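As a quick numerical sanity check of the Schwarz inequality for the standard dot product on R^n (one instance of a positive definite scalar product), assuming numpy:

    import numpy as np

    rng = np.random.default_rng(0)
    v = rng.normal(size=5)
    w = rng.normal(size=5)
    assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w)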
The Fourier coefficient of f with respect to g_k is

c_k = (1/π) ∫_0^{2π} f(x) cos kx dx

for k > 0.
If we take v_k = g_k for k = 1, ..., n then Theorem 1.3 tells us that the linear combination

c_0 + c_1 cos x + c_2 cos 2x + ... + c_n cos nx

gives the best approximation to the function f among all possible linear combinations

a_0 + a_1 cos x + ... + a_n cos nx

with arbitrary real numbers a_0, a_1, ..., a_n. Such a sum is called a partial sum of the Fourier series.
Similarly, we could take linear combinations of the functions sin kx.
This leads into the theory of Fourier series. We do not go into this
deeper here. We merely wanted to point out the analogy, and the useful-
ness of the geometric language and formalism in dealing with these
objects.
The next theorem is known as the Bessel inequality.
Theorem 1.4. If VI’ •.• ,Vn are mutually perpendicular unit vectors, and if
ci is the Fourier coefficient of v with respect to Vi’ then
Proof We have
n
L cf < Ilv112. i= I o < 0 for all X =1= 0, then the matrix A is called positive
definite.
(c) Give an example of a 2 x 2 matrix which is symmetric and positive
definite.
(d) Let a > 0, and let

A = ( a  b
      b  d ).

Prove that A is positive definite if and only if ad - b² > 0. [Hint: Let X = t(x, y) and complete the square in the expression tXAX.]
(e) If a < 0 show that A is not positive definite.
3. Determine whether the following matrices are positive definite.
(a) ( 3  -1
     -1   2 )
(b) …  (c) …  (d) …  (e) …  (f) …
The trace of a matrix
4. Let A be an n × n matrix. Define the trace of A to be the sum of the diagonal elements. Thus if A = (a_ij), then

tr(A) = ∑_{i=1}^{n} a_ii.

For instance, if A is a 2 × 2 matrix with diagonal elements 1 and 4, then tr(A) = 1 + 4 = 5; for the second displayed matrix, tr(A) = 9.
Compute the trace of the following matrices:
(a) …  (b) …  (c) …
5. (a) For any square matrix A show that tr(A) = tr(tA).
(b) Show that the trace is a linear map.
6. If A is a symmetric square matrix, show that tr(AA) ≥ 0, and tr(AA) = 0 if and only if A = 0.
7. Let A, B be the indicated matrices. Show that tr(AB) = tr(BA).
(a) A = … , B = …  (b) A = … , B = …
8. (a) Prove in general that if A, B are square n × n matrices, then tr(AB) = tr(BA).
(b) If C is an n × n matrix which has an inverse, then tr(C⁻¹AC) = tr(A).
9. Let V be the vector space of symmetric n × n matrices. For A, B ∈ V define the symbol

⟨A, B⟩ = tr(AB),

where tr is the trace (sum of the diagonal elements). Show that the previous properties in particular imply that this defines a positive definite scalar product on V.
Exercises 10 through 13 deal with the scalar product in the context of calculus.
10. Let V be the space of continuous functions on [0, 2π], and let the scalar product be given by the integral over this interval as in the text, that is

⟨f, g⟩ = ∫_0^{2π} f(x)g(x) dx.

Let g_n(x) = cos nx for n ≥ 0 and h_m(x) = sin mx for m ≥ 1.
(a) Show that ‖g_0‖ = √(2π), and ‖g_n‖ = ‖h_n‖ = √π for n ≥ 1.
(b) Show that g_n ⊥ g_m if m ≠ n, and that g_n ⊥ h_m for all m, n. Hint: Use formulas like

sin A cos B = (1/2)[sin(A + B) + sin(A - B)],
cos A cos B = (1/2)[cos(A + B) + cos(A - B)].

11. Let f(x) = x on the interval [0, 2π]. Find ⟨f, g_n⟩ and ⟨f, h_n⟩ for the functions g_n, h_n of Exercise 10. Find the Fourier coefficients of f with respect to g_n and h_n.
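Exercise 8 in particular can be tested numerically before it is proved. A minimal sketch, assuming numpy (the random matrices are placeholders):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(3, 3))
    B = rng.normal(size=(3, 3))
    C = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # almost surely invertible
    assert np.isclose(np.trace(A @ B), np.trace(B @ A))
    assert np.isclose(np.trace(np.linalg.inv(C) @ A @ C), np.trace(A))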
12. Same question as in Exercise 11 if f(x) = x². (Exercises 10 through 13 give you a review of some elementary integrals from calculus.)
13. (a) Let f(x) = x on the interval [0, 2π]. Find ‖f‖.
(b) Let f(x) = x² on the same interval. Find ‖f‖.
VI, §2. Orthogonal Bases
Let V be a vector space with a positive definite scalar product throughout this section. A basis {v_1, ..., v_n} of V is said to be orthogonal if its elements are mutually perpendicular, i.e. if ⟨v_i, v_j⟩ = 0 whenever i ≠ j.
[…] Let ⟨X, Y⟩ denote the standard scalar product on R^n. Thus by definition

⟨X, Y⟩ = tXY.

Similarly, let A be an n × n matrix. Then

⟨AX, Y⟩ = t(AX)Y = tX tA Y.

Thus we obtain the formula

⟨AX, Y⟩ = ⟨X, tAY⟩.

[…] ⟨f, g⟩ = ∫ f(t)g(t) dt.
5. Let V be the subspace of functions generated by the two functions f(t) = t and g(t) = t². Find an orthonormal basis for V.
6. Let V be the subspace generated by the three functions 1, t, t² (where 1 is the constant function). Find an orthonormal basis for V.
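Exercises 5 and 6 are solved with the Gram-Schmidt process. The sketch below (assuming numpy) carries out the same process for column vectors in R^n with the dot product; for the function spaces of the exercises one would replace the dot product by the integral scalar product.

    import numpy as np

    def gram_schmidt(vectors):
        # Returns an orthonormal basis for the span of the given vectors.
        basis = []
        for v in vectors:
            w = v - sum((v @ e) * e for e in basis)   # subtract projections
            norm = np.linalg.norm(w)
            if norm > 1e-12:                          # skip dependent vectors
                basis.append(w / norm)
        return basis

    E = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                      np.array([1.0, 0.0, 1.0])])
    G = np.array(E) @ np.array(E).T                   # Gram matrix
    print(np.round(G, 6))                             # the 2 x 2 identity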
7. Let V be a finite dimensional vector space with a positive definite scalar product. Let W be a subspace. Show that

V = W + W⊥   and   W ∩ W⊥ = {O}.

In the terminology of the preceding chapter, this means that V is the direct
sum of Wand its orthogonal complement. [Use Theorem 2.3.]
8. In Exercise 7, show that (W⊥)⊥ = W. Why is this immediate from Theorem 2.3?
9. (a) Let V be the space of symmetric n × n matrices. For A, B ∈ V define

⟨A, B⟩ = tr(AB),

where tr is the trace (sum of diagonal elements). Show that this satisfies all the properties of a positive definite scalar product. (You might already have done this as an exercise in a previous section.)
(b) Let W be the subspace of matrices A such that tr(A) = 0. What is the dimension of the orthogonal complement of W, relative to the scalar product in part (a)? Give an explicit basis for this orthogonal complement.
10. Let A be a symmetric n × n matrix. Let X, Y ∈ R^n be eigenvectors for A, that is suppose that there exist numbers a, b such that AX = aX and AY = bY. Assume that a ≠ b. Prove that X, Y are perpendicular.
VI, §3. Bilinear Maps and Matrices
Let U, V, W be vector spaces, and let

g: U × V → W

be a map. We say that g is bilinear if for each fixed u ∈ U the map

v ↦ g(u, v)

is linear, and for each fixed v ∈ V, the map

u ↦ g(u, v)

is linear. The first condition written out reads

g(u, v_1 + v_2) = g(u, v_1) + g(u, v_2),
g(u, cv) = cg(u, v),

and similarly for the second condition on the other side.
Example. Let A be an m × n matrix, A = (a_ij). We can define a map

g_A: R^m × R^n → R

by letting

g_A(X, Y) = tXAY,

which written out looks like this: […]
Our vectors X and Y are supposed to be column vectors, so that tX is a row vector, as shown. Then tXA is a row vector, and tXAY is a 1 × 1 matrix, i.e. a number. Thus g_A maps pairs of vectors into the reals. Such a map g_A satisfies properties similar to those of a scalar product. If we fix X, then the map Y ↦ tXAY is linear, and if we fix Y, then the map X ↦ tXAY is also linear. In other words, say fixing X, we have

g_A(X, Y + Y′) = g_A(X, Y) + g_A(X, Y′),
g_A(X, cY) = cg_A(X, Y),

and similarly on the other side. This is merely a reformulation of properties of multiplication of matrices, namely

tXA(Y + Y′) = tXAY + tXAY′,
tXA(cY) = c tXAY.
It is convenient to write out the multiplication tXAY as a sum. Note that the j-th component of tXA is

∑_{i=1}^{m} x_i a_ij,

and thus

tXAY = ∑_{j=1}^{n} ∑_{i=1}^{m} x_i a_ij y_j = ∑_{j=1}^{n} ∑_{i=1}^{m} a_ij x_i y_j.

Example. Let
A = G -~)
If X = GJ and Y = GJ then
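Evaluating g_A(X, Y) = tXAY is a single chain of matrix products. The sketch below (assuming numpy, and with hypothetical entries, since the entries of the example above are garbled in the source) checks the product against the double sum:

    import numpy as np

    A = np.array([[2.0, -1.0],
                  [1.0,  3.0]])            # hypothetical entries
    X = np.array([1.0, 2.0])
    Y = np.array([3.0, 1.0])
    print(X @ A @ Y)                       # tX A Y, a number
    print(sum(A[i, j] * X[i] * Y[j]
              for i in range(2) for j in range(2)))   # the same double sum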

Theorem 3.1. Given a bilinear map g: R^m × R^n → R, there exists a unique matrix A such that g = g_A, i.e. such that

g(X, Y) = tXAY.
Proof. The statement of Theorem 3.1 is similar to the statement representing linear maps by matrices, and its proof is an extension of previous proofs. Remember that we used the standard bases for R^n to prove these previous results, and we used coordinates. We do the same here. Let E_1, ..., E_m be the standard unit vectors for R^m, and let U_1, ..., U_n be the standard unit vectors for R^n. We can then write any X ∈ R^m as

X = ∑_{i=1}^{m} x_i E_i,

and any Y ∈ R^n as

Y = ∑_{j=1}^{n} y_j U_j.

Using the linearity on the left, we find

g(X, Y) = ∑_{i=1}^{m} x_i g(E_i, y_1 U_1 + ... + y_n U_n).

Using the linearity on the right, we find

g(X, Y) = ∑_{i=1}^{m} ∑_{j=1}^{n} x_i y_j g(E_i, U_j).

Let

a_ij = g(E_i, U_j).

Then we see that

g(X, Y) = ∑_{i=1}^{m} ∑_{j=1}^{n} a_ij x_i y_j,

which is precisely the expression we obtained for the product tXAY, where A is the matrix (a_ij). This proves that g = g_A for the choice of a_ij given above.
The uniqueness is also easy to see, and may be formulated as follows.

Uniqueness. If A, B are m × n matrices such that for all vectors X, Y (of the appropriate dimension) we have

tXAY = tXBY,

then A = B.

Proof. Since the above relation holds for all vectors X, Y, it holds in particular for the unit vectors. Thus we apply the relation when X = E_i and Y = U_j. Then the rule for multiplication of matrices shows that

tE_i A U_j = a_ij   and   tE_i B U_j = b_ij.

Hence a_ij = b_ij for all indices i, j. This shows that A = B.
Remark. Bilinear maps can be added and multiplied by scalars. The sum of two bilinear maps is again bilinear, and the product by a scalar is again bilinear. Hence bilinear maps form a vector space. Verify the rules

g_{A+B} = g_A + g_B   and   g_{cA} = c g_A.

Then Theorem 3.1 can be expressed by saying that the association

A ↦ g_A

is an isomorphism between the space of m × n matrices, and the space of bilinear maps from R^m × R^n into R.
Application to calculus. If you have had the calculus of several variables, you have associated with a function f of n variables the matrix of second partial derivatives (∂²f/∂x_i ∂x_j). This matrix may be viewed as the matrix associated with a bilinear map, which is called the Hessian. Note that this matrix is symmetric, since it is proved that for sufficiently smooth functions the partials commute, that is

∂²f/∂x_i ∂x_j = ∂²f/∂x_j ∂x_i.
Exercises VI, §3
1. Let A be an n × n matrix, and assume that A is symmetric, i.e. A = tA. Let

O. However, we shall prove:
Theorem 6.1. The area of the parallelogram spanned by v, w is equal to
the absolute value of the determinant, namely |D(v, w)|.
To prove Theorem 6.1, we introduce the notion of oriented area. Let
P(v, w) be the parallelogram spanned by v and w. We denote by
Volo(v, w) the area of P(v, w) if the determinant D(v, w) > 0, and minus
the area of P(v, w) if the determinant D(v, w) < o. Thus at least Volo(v, w) has the same sign as the determinant, and we call Volo(v, w) the oriented area. We denote by V ole v, w) the area of the parallelogram spanned by v, w. Hence Volo(v, w)'= + Vol(v, w). To prove Theorem 6.1, it will suffice to prove: The oriented area is equal to the determinant. In other words, Volo(v, w) = D(v, w). Now to prove this, it will suffice to prove that Yolo satisfies the three properties characteristic of a determinant, namely: 1. V 010 is linear in each variable v and w. 2. Volo(v, v) = 0 for all v. 3. Volo(E1, E2) = 1 if E1, E2 are the standard unit vectors. We know that these three properties characterize determinants, and this was proved in Theorem 1.1. For the convenience of the reader, we repeat the argument here very briefly. We assume that we have a func- [VII, §6] DETERMINANTS AS AREA AND VOLUME 223 tion g satisfying these three properties (with g replacing Volo). Then for any vectors and we have g(aE1 + eE 2, bEl + dE 2) == abg(E1, E1) + adg(E1, E2) + ebg(E2, E1) + edg(E2, E2). The first and fourth term are equal to O. By Exercise 1, and hence g(v, w) == (ad - be)g(E1, E2) == ad - be. This proves what we wanted. In order to prove that Vol o satisfies the three properties, we shall use simple properties of area (or volume) like the following: The area of a line segment is equal to o. If A is a certain region, then the area of A is the same as the area of a translation of A, i.e. the same as the area of the region Aw (consisting of all points v + w with v E A). If A, Bare regions which are disjoint or such that their common points have area equal to 0, then Vol(A u B) == Vol(A) + Vol(B). Consider now Volo. The last two properties are obvious. Indeed, the parallelogram spanned by v, v is simply a line segment, and its 2-dimen- sional area is therefore equal to O. Thus property 2 is satisfied. As for the third property, the parallelogram spanned by the unit vectors E1, E2 is simply the unit square, whose area is 1. Hence in this case we have The harder property is the first. The reader who has not already done so, should now read the geometric applications of Chapter III, §2 before reading the rest of this proof, which we shall base on geometric consider- ations concerning area. We shall need a lemma. Lemma 6.2. If v, ware linearly dependent, then Volo(v, w) == o. Proof Suppose that we can write av + bw == 0 224 DETERMINANTS o Figure 2 with a or b =1= O. Say a =1= O. Then b v = -- w = cw a [VII, §6] w so that v, w lie on the same straight line, and the parallelogram spanned by v, w is a line segme'nt (Fig. 2). Hence Volo(v, w) = 0, thus proving the lemma. We also know that when v, ware linearly dependent, then D(v, w) = 0, so in this trivial case, our theorem is proved. In the subsequent lemmas, we assume that v, ware linearly independent. Lemma 6.3. Assume that v, ware linearly independent, and let n be a positive integer. Then Vol(nv, w) = n Vol(v, w). Proof. The parallelogram spanned by nv and w consists of n parallelo- grams as shown in the following picture. Figure 3 These n parallelograms are simply the translations of P(v, w) by v, 2v, ... , (n - l)v, and each translation of P(v, w) has the same area as P(v, w). These translations have only line segments in common, and hence Vol(nv, w) = n Vol(v, w) as desired. [VII, §6] DETERMINANTS AS AREA AND VOLUME 225 Corollary 6.4. Assume that v, ware linearly independent and let n be a positive integer. Then vot(~ v, w) = ~ Vol(v, w). 
If m, n are positive integers, then vot(: v, w) = : Vol(v, w). Proof Let V 1 = (l/n)v. By the lemma, we know that Vol(nvb w) = n Vol(v b w). This is merely a reformulation of our first assertion, since nV 1 = v. As for the second assertion, we write min = m· lin and apply the proved state- ments successively: v ot( m· ~ v, w ) = m V ot( ~ v, w ) 1 = m·- Vol(v, w) n m = - Vol(v, w). n Lemma 6.5. Vol( -v, w) = Vol(v, w). Proof The parallelogram spanned by - v and w is a translation by - v of the parallelogram P(v, w). Hence P(v, w) and P( - v, w) have the same area. (Cf. Fig. 4.) P(v, w) P( -v, w) -v Figure 4 o Figure 5 226 DETERMINANTS [VII, §6] Lemma 6.6. If c is any real number> 0, then
Vol(cv, w) = c Vol(v, w).
Proof Let r, r’ be rational numbers such that 0 < r < c < r' (Fig. 5). Then P(rv, w) c P(cv, w) c P(r'v, w). Hence by Lemma 6.3, r Vol(v, w) = Vol(rv, w) < Vol(cv, w) < Vol(r'v, w) = r' Vol(v, w). Letting rand r' approach c as a limit, we find that Vol(cv, w) = c Vol(v, w), as was to be shown. From Lemmas 6.5 and 6.6 we can now prove that Volo(cv, w) = c Volo(v, w) for any real number c, and any vectors v, w. Indeed, if v, ware linearly dependent, then both sides are equal to o. If v, ware linearly indepen- dent, we use the definition of Yolo and Lemmas 6.5, 6.6. Say D(v, w) > 0
and c is negative, c = -d. Then D(cv, w) < 0 and consequently Volo(cv, w) = - Vol(cv, w) = - Vole -dv, w) = - Vol(dv, w) = --dVol(v,w) = c Vol(v, w) = c Volo(v, w). A similar argument works when D(v, w) < O. We have therefore proved one of the conditions of linearity of the function Volo. The analogous property of course works on the other side, namely Volo(v, cw) = c Volo(v, w). For the other condition, we again have a lemma. [VII, §6] DETERMINANTS AS ARE,A AND VOLUME 227 Lemma 6.7. Assume that v, ware linearly independent. Then Vol(v + w, w) == Vol(v, w). Proof We have to prove that the parallelogram spanned by v, w has the same area as the parallelogram spanned by v + w, w. v+w A+w v A Figure 6 The parallelogram spanned by v, w consists of two triangles A and B as shown in the picture. The parallelogram spanned by v + wand w con- sists of the triangles B and the translation of A by w. Since A and A + w have the same area, we get: Vol(v, w) == Vol(A) + Vol(B) == Vol(A + w) + Vol(B) == Vol(v + w, w), as was to be shown. We are now in a position to deal with the second property of linear- ity. Let w be a fixed non-zero vector in the plane, and let v be a vector such that {v, w} is a basis of the plane. We shall prove that for any numbers c, d we have (1) Volo(cv + dw, w) == c Volo(v, w). Indeed, if d == 0, this is nothing but what we have shown previously. If d =1= 0, then again by what has been shown previously, d Volo(cv + dw, w) == Volo(cv + dw, dw) == c Volo(v, dw) == cd Volo(v, w). Canceling d yields relation (1). 228 DETERMINANTS From this last formula, the linearity now follows. Indeed, if then and Volo(v l + v2 , w) = Volo((cl + c2 )v + (d l + d2 )w, w) = (c l + C2) Volo(v, w) = clVolo(v, w) + C2 Volo(v, w) = Volo(v l, w) + Volo(v2' w). This concludes the proof of the fact that Volo(v, w) = D(v, w), and hence of Theorem 6.1. [VII, §6] Remark 1. The proof given above is slightly long, but each step is quite simple. Furthermore, when one wishes to generalize the proof to higher dimensional space (even 3-space), one can give an entirely similar proof. The reason for this is that the conditions characterizing a deter- minant involve only two coordinates at a time and thus always take place in some 2-dimensional plane. Keeping all but two coordinates fixed, the above proof then can be extended at once. Thus for instance in 3-space, let us denote by P( u, v, w) the box spanned by vectors u, v, w (Fig. 7), namely all combinations with Let V ol( u, v, w) be the volume of this box. Figure 7 [VII, §6] DETERMINANTS AS AREA AND VOLUME 229 Theorem 6.8. The volume of the box spanned by u, v, w is the absolute value o.f the determinant 1 D(u, v, w) I. That is, Vol(u, v, w) = ID(u, v, w)l. The proof follows exactly the same pattern as in the two-dimensional case. Indeed, the volume of the cube spanned by the unit vectors is 1. If two of the vectors u, v, ware equal, then the box is actually a 2-di- mensional parallelogram, whose 3-dimensional volume is O. Finally, the proof of linearity is the same, because all the action took place either in one or in two variables. The other variables can just be carried on in the notation but they did not enter in an essential way in the proof. Similarly, one can define n-dimensional volumes, and the correspond- ing theorem runs as follows. Theorem 6.9. Let V l , ... ,Vn be elements of Rn. Let Vol(v l , ... ,vn ) be the n-dimensional volume of the n-dimensional box spanned by V 1,.·. ,Vn • Then Of course, the n-dimensional box spanned by V 1, ••. 
,Vn IS the set of linear combinations with Remark 2. We have used geometric properties of area to carry out the above proof. One can lay foundations for all this purely analytically. If the reader is interested, cf. my book Undergraduate Analysis. Remark 3. In the special case of dimension 2, one could actually have given a simpler proof that the determinant is equal to the area. But we chose to give the slightly more complicated proof because it is the one which generalizes to the 3-dimensional, or n-dimensional case. We interpret Theorem 6.1 in terms of linear maps. Given vectors v, w in the plane, we know that there exists a unique linear map. L:R2-+R2 such that L(El) = v and L(E2) = w. In fact, if then the matrix associated with the linear map is 230 DETERMINANTS [VII, §6] Furthermore, if we denote by C the unit square spanned by E1, E2, and by P the parallelogram spanned by v, W, then P is the image under L of C, that is L(C) = P. Indeed, as we have seen, for 0 < ti < 1 we have If we define the determinant of a linear map to be the determinant of its associated matrix, we conclude that (Area of P) = IDet(L)I. To take a numerical example, the area of the parallelogram spanned by the vectors (2, 1) and (3, - 1) (Fig. 8) is equal to the absolute value of and hence is eq ual to 5. 2 3 1 =-5 -1 Figure 8 Theorem 6.10. Let P be a parallelogram spanned by two vectors. Let L: R2 -+ R2 be a linear map. Then Area of L(P) = IDet LI (Area of P). Proof Suppose that P is spanned by two vectors v, w. Then L(P) is spanned by L(v) and L(w). (Cf. Fig. 9.) There IS a linear map L 1 : R2 -+ R2 such that and Then P = L 1(C), where C is the unit square, and [VII, §6] DETERMINANTS AS AREA AND VOLUME 231 By what we proved above in (*), we obtain Vol L(P) = IDet(LoL1)1 = I Det(L) Det(L1)1 = IDet(L)IVol(P), thus proving our assertion. Corollary 6.11. For any rectangle R with sides parallel to the axes, and any linear map L: R2 -+ R2 we have Vol L(R) = I Det(L) I Vol(R). Proof Let c l' C 2 be the lengths of the sides of R. Let R 1 be the rectangle spanned by c1E 1 and c2E2. Then R is the translation of Rl by some vector, say R = Rl + u. Then L(R) = L(Rl + u) = L(R 1) + L(u) is the translation of L(R 1) by L(u). (Cf. Fig. 10.) Since area does not change under translation, we need only apply Theorem 6.1 to conclude the proof. 232 DETERMINANTS [VII, §6] Exercises VII, §6 1. If g(v, w) satisfies the first two axioms of a determinant, prove that g(v, w) = - g(w, v) for all vectors v, w. This fact was used in the uniqueness proof. [Hint: Ex- pand g(v + w, v + w) = O.J 2. Find the area of the parallelogram spanned by the following vectors. (a) (2, 1) and ( - 4, 5) (b) (3, 4) and (- 2, - 3) 3. Find the area of the parallelogram such that three corners of the parallelo- gram are given by the following points. (a) (1, 1), (2, -1), (4,6) (b) (-3,2), (1,4), (-2, -7) (c) (2, 5), (-1, 4), (1, 2) (d) (1, 1), (1, 0), (2, 3) 4. Find the volume of the parallelepiped spanned by the following vectors in 3-space. (a) (1, 1, 3), (1, 2, -1), (1, 4, 1) (c) (-1, 2, 1), (2, 0, 1), (1, 3, 0) (b) (1, -1, 4), (1, 1, 0), (-1, 2, 5) (d) (-2,2,1), (0,1,0), (-4, 3, 2) CHAPTER VIII Eigenvectors and Eigenvalues This chapter gives the basic elementary properties of eigenvectors and eigenvalues. We get an application of determinants in computing the characteristic polynomial. 
In §3, we also get an elegant mixture of calcu- lus and linear algebra by relating eigenvectors with the problem of find- ing the maximum and minimum of a quadratic function on the sphere. Most students taking linear algebra will have had some calculus, but the proof using complex numbers instead of the maximum principle can be used to get real eigenvalues of a symmetric matrix if the calculus has to be avoided. Basic properties of the complex numbers will be recalled in an appendix. VIII, §1. Eigenvectors and Eigenvalues Let V be a vector space and let A:V--+V be a linear map of V into itself. An element v E V is called an eigenvector of A if there exists a number A such that Av = AV. If v =1= 0 then A is uniquely determined, because Al v = A2 v implies Al = A2. In this case, we say that A is an eigenvalue of A belonging to the eigenvector v. We also say that v is an eigenvector with the eigenvalue A. Instead of eigenvector and eigenvalue, one also uses the terms characteristic vector and charac- teristic value. If A is a square n x n matrix then an eigenvector of A is by definition an eigenvector of the linear map of R n into itself represented by this 234 EIGENVECTORS AND EIGENVALUES [VIII, § 1] matrix. Thus an eigenvector X of A is a (column) vector of Rn for which there exists A E R such that AX = AX. Example 1. Let V be the vector space over R consisting of all infini- tely differentiable functions. Let A E R. Then the function f such that f(t) = eAt is an eigenvector of the derivative d/dt because df /dt = Ae At. Example 2. Let be a diagonal matrix. Then every unit vector Ei (i = 1, ... ,n) IS an eigenvector of A. In fact, we have AEi = aiEi: o o 0 o Example 3. If A: V -+ V is a linear map, and v is an eigenvector of A, then for any non-zero scalar c, cv is also an eigenvector of A, with the same eigenvalue. Theorem 1.1. Let V be a vector space and let A: V -+ V be a linear map. Let A E R. Let VA be the subspace of V generated by all eigenvectors of A having A as eigenvalue. Then every non-zero element of VA is an eigenvector of A having A as eigenvalue. If c E K then A(cv l ) = CAVl = CAV l = ACV l . This proves our theorem. The subspace VA in Theorem 1.1 is called the eigenspace of A belong- ing to A. Note. If Vl' V2 are eigenvectors of A with different eigenvalues At =1= A2 then of course Vl + V2 is not an eigenvector of A. In fact, we have the following theorem: Theorem 1.2. Let V be a vector space and let A: V -+ V be a linear map. [VIII, §1] EIGENVECTORS AND EIGENVALUES 235 Let VI' .•• ,Vm be eigenvectors of A, with eigenvalues A1, ••• ,Am respecti- vely. Assume that these eigenvalues are 4.istinct, i.e. A·-'-A· I -r- J if i =1= j. Then VI' ... ,Vm are linearly independent. Proof By induction on m. For m = 1, an element V 1 E V, V 1 =1= 0 is linearly independent. Assume m > 1. Suppose that we have a relation
(*)  c_1 v_1 + ... + c_m v_m = O

with scalars c_i. We must prove that all c_i = 0. We multiply our relation (*) by λ_1 to obtain

c_1 λ_1 v_1 + ... + c_m λ_1 v_m = O.

We also apply A to our relation (*). By linearity, we obtain

c_1 λ_1 v_1 + c_2 λ_2 v_2 + ... + c_m λ_m v_m = O.

We now subtract these last two expressions, and obtain

c_2 (λ_2 - λ_1) v_2 + ... + c_m (λ_m - λ_1) v_m = O.

Since λ_j - λ_1 ≠ 0 for j = 2, ..., m we conclude by induction that c_2 = ... = c_m = 0. Going back to our original relation, we see that c_1 v_1 = O, whence c_1 = 0, and our theorem is proved.
Example 4. Let V be the vector space consisting of all differentiable functions of a real variable t. Let α_1, ..., α_m be distinct numbers. The functions

e^{α_1 t}, ..., e^{α_m t}

are eigenvectors of the derivative, with distinct eigenvalues α_1, ..., α_m, and hence are linearly independent.
Remark 1. In Theorem 1.2, suppose V is a vector space of dimension n and A: V → V is a linear map having n eigenvectors v_1, ..., v_n whose eigenvalues λ_1, ..., λ_n are distinct. Then {v_1, ..., v_n} is a basis of V.
Remark 2. One meets a situation like that of Theorem 1.2 in the theory of linear differential equations. Let A = (a_ij) be an n × n matrix, and let

F(t) = t(f_1(t), ..., f_n(t))

be a column vector of functions satisfying the equation

dF/dt = AF(t).

In terms of the coordinates, this means that

df_i/dt = ∑_{j=1}^{n} a_ij f_j(t).

Now suppose that A is a diagonal matrix,

A = diag(a_1, ..., a_n),   with a_i ≠ 0 for all i.

Then each function f_i(t) satisfies the equation

df_i/dt = a_i f_i(t).

By calculus, there exist numbers c_1, ..., c_n such that for i = 1, ..., n we have

f_i(t) = c_i e^{a_i t}.

[Proof: if df/dt = af(t), then the derivative of f(t)/e^{at} is 0, so f(t)/e^{at} is constant.] Conversely, if c_1, ..., c_n are numbers and we let f_i(t) = c_i e^{a_i t}, then F(t) satisfies the differential equation

dF/dt = AF(t).

Let V be the set of solutions F(t) of the differential equation dF/dt = AF(t). Then V is immediately verified to be a vector space, and the above argument shows that the n elements

t(e^{a_1 t}, 0, ..., 0), ..., t(0, ..., 0, e^{a_n t})

form a basis for V. Furthermore, these elements are eigenvectors of A,
and also of the derivative (viewed as a linear map).
The above is valid if A is a diagonal matrix. If A is not diagonal,
then we try to find a basis such that we can represent the linear map A
by a diagonal matrix. We do not go into this type of consideration here.
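For a diagonal A the solutions just found can be verified directly; a minimal sketch, assuming numpy, with hypothetical diagonal entries and constants:

    import numpy as np

    a = np.array([2.0, -1.0, 0.5])         # diagonal entries of A (hypothetical)
    c = np.array([1.0, 3.0, -2.0])         # constants c_i (hypothetical)
    F = lambda t: c * np.exp(a * t)        # the general solution found above

    t, h = 0.3, 1e-6
    dF = (F(t + h) - F(t - h)) / (2 * h)   # numerical derivative of F
    assert np.allclose(dF, a * F(t))       # dF/dt = A F(t), coordinatewise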
Exercises VIII, §1
Let a be a number ≠ 0.
1. Prove that the eigenvectors of the matrix
generate a I-dimensional space, and give a basis for this space.
2. Prove that the eigenvectors of the matrix
generate a 2-dimensional space and give a basis for this space. What are the
eigenvalues of this matrix?
3. Let A be a diagonal matrix with diagonal elements a_11, ..., a_nn. What is the dimension of the space generated by the eigenvectors of A? Exhibit a basis for this space, and give the eigenvalues.
4. Show that if θ ∈ R, then the matrix

A = ( cos θ   sin θ
      sin θ  -cos θ )

always has an eigenvector in R², and in fact that there exists a vector v_1 such that Av_1 = v_1. [Hint: Let the first component of v_1 be

x = sin θ / (1 - cos θ)

if cos θ ≠ 1. Then solve for y. What if cos θ = 1?]
5. In Exercise 4, let v_2 be a vector of R² perpendicular to the vector v_1 found in that exercise. Show that Av_2 = -v_2. Define this to mean that A is a reflection.
6. Let

R(θ) = ( cos θ  -sin θ
         sin θ   cos θ )

be the matrix of a rotation. Show that R(θ) does not have any real eigenvalues unless R(θ) = ±I. [It will be easier to do this exercise after you have read the next section.]
7. Let V be a finite dimensional vector space. Let A, B be linear maps of V into itself. Assume that AB = BA. Show that if v is an eigenvector of A with eigenvalue λ, then Bv is an eigenvector of A with eigenvalue λ also, provided Bv ≠ O.
VIII, §2. The Characteristic Polynomial
We shall now see how we can use determinants to find the eigenvalues of a matrix.
Theorem 2.1. Let V be a finite dimensional vector space, and let λ be a number. Let A: V → V be a linear map. Then λ is an eigenvalue of A if and only if A - λI is not invertible.

Proof. Assume that λ is an eigenvalue of A. Then there exists an element v ∈ V, v ≠ O, such that Av = λv. Hence Av - λv = O, and (A - λI)v = O. Hence A - λI has a non-zero kernel, and A - λI cannot be invertible. Conversely, assume that A - λI is not invertible. By Theorem 2.4 of Chapter V, we see that A - λI must have a non-zero kernel, meaning that there exists an element v ∈ V, v ≠ O, such that (A - λI)v = O. Hence Av - λv = O, and Av = λv. Thus λ is an eigenvalue of A. This proves our theorem.
Let A be an n × n matrix, A = (a_ij). We define the characteristic polynomial P_A of A to be the determinant

P_A(t) = Det(tI - A),

or, written out in full, the determinant of the matrix whose diagonal entries are t - a_ii and whose entry in the i-th row and j-th column is -a_ij for i ≠ j.
We can also view A as a linear map from R^n to R^n, and we also say that P_A(t) is the characteristic polynomial of this linear map.
Example 1. The characteristic polynomial of the matrix

A = (  1  -1   3
      -2   1   1
       0   1  -1 )

is the determinant

| t - 1    1     -3  |
|   2    t - 1   -1  |
|   0     -1    t + 1 |

which we expand according to the first column, to find

P_A(t) = (t - 1)[(t - 1)(t + 1) - 1] - 2[(t + 1) - 3] = t³ - t² - 4t + 6.
For an arbitrary matrix A = (a_ij), the characteristic polynomial can be found by expanding according to the first column, and will always consist of a sum

(t - a_11)(t - a_22) ··· (t - a_nn) + ···.

Each term other than the one we have written down will have degree
< n. Hence the characteristic polynomial is of type P A(t) = tn + terms of lower degree. Theorem 2.2. Let A be an n x n matrix. A number A is an eigenvalue of A if and only if A is a root of the characteristic polynomial of A. Proof. Assume that A is an eigenvalue of A. Then AI - A is not invertible by Theorem 2.1, and hence Det(AI - A) = 0, by Theorem 3.1 of Chapter VII and Theorem 2.5 of Chapter V. Consequently A is a root of the characteristic polynomial. Conversely, if A is a root of the charac- teristic polynomial, then Det(AI - A) = 0, and hence by the same Theorem 3.1 of Chapter VII we conclude that )~I - A is not invertible. Hence A is an eigenvalue of A by Theorem 2.1. Theorem 2.2 gives us an explicit way of determining the eigenvalues of a matrix, provided that we can determine explicitly the roots of its char- acteristic polynomial. This is sometimes easy, especially in exercises at the end of chapters when the matrices are adjusted in such a way that one can determine the roots by inspection, or simple devices. It is con- siderably harder in other cases. For instance, to determine the roots of the polynomial in Example 1, one would have to develop the theory of cubic polynomials. This can be done, but it involves formulas which are somewhat harder than the for- mula needed to solve a quadratic equation. One can also find methods to determine roots approximately. In any case, the determination of such methods belongs to another range of ideas than that studied in the present chapter. 240 EIGENVECTORS AND EIGENVALUES [VIII, §2] Example 2. Find the eigenvalues and a basis for the eigenspaces of the matrix (1 4) 23· The characteristic polynomial is the determinant t-1 -4 -2 t _ 3 = (t - 1)(t - 3) - 8 = t 2 - 4t - 5 = (t - 5)(t + 1). Hence the eigenvalues are 5, - 1. For any eigenvalue A, a corresponding eigenvector IS a vector (Xy) such that or equivalently x + 4y = AX, 2x + 3y = AY, (1 - A)X + 4y = 0, 2x + (3 - A)Y = o. We give X some value, say X = 1, and solve for y from either equation, for instance the second to get y = - 2/(3 - A). This gives us the eigen- vector X(A) = ( -2/(~ - A)} Substituting A = 5 and A = - 1 gives us the two eigenvectors and The eigenspace for 5 has basis X 1 and the eigenspace for - 1 has basis X 2 • Note that any non-zero scalar multiples of these vectors would also be bases. For instance, instead of X2 we could take Example 3. Find the eigen val ues and a basis for the eigenspaces of the matrix [VIII, §2] THE CHARACTERISTIC POLYNOMIAL The characteristic polynomial is the determinant t-2 -1 0 o t - 1 1 = (t - 2)2(t - 3). o -2 t - 4 Hence the eigenvalues are 2 and 3. For the eigenvectors, we must solve the equations (2 - A)X + Y = 0, (1 - A)Y - Z = 0, 2y + (4 - A)Z = o. Note the coefficient (2 - A) of x. 241 Suppose we want to find the eigenspace with eigenvalue A = 2. Then the first equation becomes y = 0, whence Z = 0 from the second equa- tion. We can give x any value, say x = 1. Then the vector is a basis for the eigenspace with eigenvalue 2. Now suppose A =1= 2, so A = 3. If we put x = 1 then we can solve for y from the first equation to give y = 1, and then we can solve for Z in the second equation, to get Z = - 2. Hence is a basis for the eigenvectors with eigenvalue 3. Any non-zero scalar multiple of X 2 would also be a basis. Example 4. The characteristic polynomial of the matrix is (t - 1)( t - 5)( t - 7). Can you generalize this? Example 5. Find the eigenvalues and a basis for the eigenspaces of the matrix in Example 4. 
242 EIGENVECTORS AND EIGENVALUES [VIII, §2] The eigenvalues are 1, 5, and 7. Let X be a non-zero eigenvector, say also written tx = (x, y, z). Then by definition of an eigenvector, there IS a number A such that AX = AX, which means x + y + 2z = AX, 5y - Z = AY, 7z = AZ. Case 1. Z = 0, y = 0. Since we want a non-zero eigenvector we must then have X i= 0, in which case A = 1 by the first equation. Let Xl = E1 be the first unit vector, or any non-zero scalar multiple to get an eigen- vector with eigenvalue 1. Case 2. Z = 0, y i= 0. By the second equation, we must have A = 5. Give y a specific value, say y = 1. Then solve the first equation for x, namely X + 1 = 5x, which gives Let Then X2 is an eigenvector with eigenvalue 5. 1 X = 4. Case 3. Z i= 0. Then from the third equation, we must have A = 7. Fix some non-zero value of z, say z = 1. Then we are reduced to solv- ing the two simultaneous equations X + y + 2 = 7x, 5y - 1 = 7y. This yields y = -! and X = i. Let Then X 3 is an eigenvector with eigenvalue 7. [VIII, §2] THE CHARACTERISTIC POLYNOMIAL 243 Scalar multiples of Xl, X 2 , X 3 will yield eigenvectors with the same eigenvalues as Xl, X 2 , X 3 respectively. Since these three vectors have distinct eigenvalues, they are linearly independent, and so form a basis of R3. By Exercise 14, there are no other eigenvectors. Finally we point out that the linear algebra of matrices could have been carried out with complex coefficients. The same goes for determin- ants. All that is needed about numbers is that one can add, multiply, and divide by non-zero numbers, and these operations are valid with complex numbers. Then a matrix A = (a ij ) of complex numbers has eigenvalues and eigenvectors whose components are complex numbers. This is useful because of the following fundamental fact: Every non-constant polynomial with complex coefficients has a complex root. If A is a complex n x n matrix, then the characteristic polynomial of A has complex coefficients, and has degree n > 1, so has a complex root
which is an eigenvalue. Thus over the complex numbers, a square matrix
always has an eigenvalue, and a non-zero eigenvector. This is not always
true over the real numbers. (Example?) In the next section, we shall see
an important case when a real matrix always has a real eigenvalue.
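The matrix of Examples 4 and 5 gives a concrete case to check by machine. A sketch using numpy's eigenvalue routine (the eigenvectors it returns agree with those found above up to scalar multiples):

    import numpy as np

    A = np.array([[1.0, 1.0,  2.0],
                  [0.0, 5.0, -1.0],
                  [0.0, 0.0,  7.0]])
    evals, evecs = np.linalg.eig(A)
    print(np.sort(evals))                  # [1. 5. 7.]
    for lam, v in zip(evals, evecs.T):
        assert np.allclose(A @ v, lam * v)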
We now give examples of computations using complex numbers for
the eigenvalues and eigenvectors, even though the matrix itself has real
components. It should be remembered that in the case of complex eigen-
values, the vector space is over the complex numbers, so it consists of
linear combinations of the given basis elements with complex coefficients.
Example 6. Find the eigenvalues and a basis for the eigenspaces of the
matrix

A = ( 2  -1
      3   1 ).
The characteristic polynomial is the determinant

| t - 2    1   |
|  -3    t - 1 |  = (t - 2)(t - 1) + 3 = t² - 3t + 5.
Hence the eigenvalues are

(3 ± √(9 - 20))/2 = (3 ± √(-11))/2.

Thus there are two distinct eigenvalues (but no real eigenvalue):

λ_1 = (3 + √(-11))/2   and   λ_2 = (3 - √(-11))/2.

Let X = t(x, y) with not both x, y equal to 0. Then X is an eigenvector if and only if AX = λX, that is:

2x - y = λx,
3x + y = λy,

where λ is an eigenvalue. This system is equivalent with

(2 - λ)x - y = 0,
3x + (1 - λ)y = 0.

We give x, say, an arbitrary value, for instance x = 1, and solve for y, so y = 2 - λ from the first equation. Then we obtain the eigenvectors

X(λ_1) = t(1, 2 - λ_1)   and   X(λ_2) = t(1, 2 - λ_2).
Remark. We solved for y from one of the equations. This is consistent with the other because λ is an eigenvalue. Indeed, if you substitute x = 1 and y = 2 - λ on the left in the second equation, you get

3 + (1 - λ)(2 - λ) = 0,

because λ is a root of the characteristic polynomial.
Then X(λ_1) is a basis for the one-dimensional eigenspace of λ_1, and X(λ_2) is a basis for the one-dimensional eigenspace of λ_2.
Example 7. Find the eigenvalues and a basis for the eigenspaces of the matrix

A = ( 1  1  -1
      0  1   0
      1  0   1 ).

We compute the characteristic polynomial, which is the determinant

| t - 1   -1     1   |
|   0    t - 1   0   |
|  -1     0    t - 1 |

easily computed to be

P(t) = (t - 1)(t² - 2t + 2).

Now we meet the problem of finding the roots of P(t) as real numbers or complex numbers. By the quadratic formula, the roots of t² - 2t + 2 are given by

(2 ± √(4 - 8))/2 = 1 ± √(-1).

The whole theory of linear algebra could have been done over the complex numbers, and the eigenvalues of the given matrix can also be defined over the complex numbers. Then from the computation of the roots above, we see that the only real eigenvalue is 1, and that there are two complex eigenvalues, namely

1 + √(-1)   and   1 - √(-1).
We let these eigenvalues be

λ_1 = 1 + √(-1)   and   λ_2 = 1 - √(-1).

Let

X = t(x, y, z)

be a non-zero vector. Then X is an eigenvector for A if and only if the following equations are satisfied with some eigenvalue λ:

x + y - z = λx,
y = λy,
x + z = λz.

This system is equivalent with

(1 - λ)x + y - z = 0,
(1 - λ)y = 0,
x + (1 - λ)z = 0.
Case 1. λ = 1. Then the second equation will hold for any value of y. Let us put y = 1. From the first equation we get z = 1, and from the third equation we get x = 0. Hence we get a first eigenvector

X_1 = t(0, 1, 1).

Case 2. λ ≠ 1. Then from the second equation we must have y = 0. Now we solve the system arising from the first and third equations:

(1 - λ)x - z = 0,
x + (1 - λ)z = 0.

If these equations were independent, then the only solutions would be x = z = 0. This cannot be the case, since there must be a non-zero eigenvector with the given eigenvalue. Actually you can check directly that the second equation is equal to (λ - 1) times the first. In any case, we give one of the variables an arbitrary value, and solve for the other. For instance, let z = 1. Then x = 1/(1 - λ). Thus we get the eigenvector

X(λ) = t(1/(1 - λ), 0, 1).
We can substitute λ = λ_1 and λ = λ_2 to get the eigenvectors with the eigenvalues λ_1 and λ_2 respectively.
In this way we have found three eigenvectors with distinct eigenvalues, namely

X_1,  X(λ_1),  X(λ_2).
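Numerically, the complex eigenvalues of Example 7 come out directly; a minimal sketch assuming numpy:

    import numpy as np

    A = np.array([[1.0, 1.0, -1.0],
                  [0.0, 1.0,  0.0],
                  [1.0, 0.0,  1.0]])
    evals = np.linalg.eigvals(A)
    print(np.sort_complex(evals))   # 1 - 1j, 1, 1 + 1j (sorted by real, then imaginary part)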
Example 8. Find the eigenvalues and a basis for the eigenspaces of the matrix

A = (  1  -1   2
      -2   1   3
       1  -1   1 ).

The characteristic polynomial is

| t - 1    1    -2  |
|   2    t - 1  -3  |
|  -1     1    t - 1 |  = (t - 1)³ - (t - 1) - 1.
The eigenvalues are the roots of this cubic equation. In general it is not easy to find such roots, and this is the case in the present instance. Let u = t - 1. In terms of u the polynomial can be written

Q(u) = u³ - u - 1.

From arithmetic, the only rational roots must be integers, and must divide 1, so the only possible rational roots are ±1, which are not roots.

Hence there is no rational eigenvalue. But a cubic equation has the
general shape as shown on the figure:
Figure 1
This means that there is at least one real root. If you know calculus, then you have the tools to determine the relative maximum and relative minimum; you will find that the function u³ - u - 1 has its relative maximum at u = -1/√3, and that Q(-1/√3) is negative. Hence there is only one real root. The other two roots are complex. This is as far as we are able to go with the means at hand. In any case, we give these roots a name, and let the eigenvalues be

λ_1, λ_2, λ_3.

They are all distinct.
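The three roots can be approximated by machine, for instance with numpy's root finder; a minimal sketch (the eigenvalues of Example 8 are then t = u + 1):

    import numpy as np

    u = np.roots([1.0, 0.0, -1.0, -1.0])   # roots of u^3 - u - 1
    print(u + 1.0)                         # one real eigenvalue (~2.3247), two complex ones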
We can, however, find the eigenvectors in terms of the eigenvalues. Let

X = t(x, y, z)

be a non-zero vector. Then X is an eigenvector if and only if AX = λX, that is:

x - y + 2z = λx,
-2x + y + 3z = λy,
x - y + z = λz.

This system of equations is equivalent with

(1 - λ)x - y + 2z = 0,
-2x + (1 - λ)y + 3z = 0,
x - y + (1 - λ)z = 0.

We give z an arbitrary value, say z = 1, and solve for x and y using the first two equations. Thus we must solve:

(λ - 1)x + y = 2,
2x + (λ - 1)y = 3.

Multiply the first equation by 2, the second by (λ - 1), and subtract. Then we can solve for y to get

y(λ) = (3(λ - 1) - 4)/((λ - 1)² - 2).

From the first equation we find

x(λ) = (2 - y(λ))/(λ - 1).

Hence eigenvectors are

X(λ) = t(x(λ), y(λ), 1)   for λ = λ_1, λ_2, λ_3,

where λ_1, λ_2, λ_3 are the three eigenvalues. This is an explicit answer to
the extent that you are able to determine these eigenvalues. By machine or a computer, you can use means to get approximations to λ_1, λ_2, λ_3, which will give you corresponding approximations to the three eigenvectors. Observe that we have found here the complex eigenvectors. Let λ_1 be the real eigenvalue (we have seen that there is only one). Then from the formulas for the coordinates of X(λ), we see that y(λ) or x(λ) will be real if and only if λ is real. Hence there is only one real eigenvector, namely X(λ_1). The other two eigenvectors are complex. Each eigenvector is a basis for the corresponding eigenspace.
Theorem 2.3. Let A, B be two n × n matrices, and assume that B is invertible. Then the characteristic polynomial of A is equal to the characteristic polynomial of B⁻¹AB.

Proof. By definition, and properties of the determinant,

Det(tI - A) = Det(B⁻¹(tI - A)B) = Det(tB⁻¹B - B⁻¹AB) = Det(tI - B⁻¹AB).
This proves what we wanted.
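Theorem 2.3 is easy to test numerically, since numpy's poly function returns the coefficients of the characteristic polynomial of a matrix. A sketch with placeholder random matrices:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(3, 3))
    B = rng.normal(size=(3, 3)) + 3 * np.eye(3)    # almost surely invertible
    assert np.allclose(np.poly(A), np.poly(np.linalg.inv(B) @ A @ B))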

Exercises VIII, §2
1. Let A be a diagonal matrix with diagonal elements a_1, ..., a_n.
(a) What is the characteristic polynomial of A?
(b) What are its eigenvalues?
2. Let A be a triangular matrix,

A = ( a_11    0   ...    0
      a_21  a_22  ...    0
        .     .           .
      a_n1  a_n2  ...  a_nn ).
What is the characteristic polynomial of A, and what are its eigenvalues?
Find the characteristic polynomial, eigenvalues, and bases for the eigenspaces
of the following matrices.
3. (a) G ~)
(-2 -7) (c) 1 2
4. (4 0
(a) – 2 1
-2 0
(c) (; 4
(b) (_ ~
(d) G
-3
-5
-6
2
2
5. Find the eigenvalues and eigenvectors of the following matrices. Show that
the eigenvectors form a 1-dimensional space.
(b) G ~) (c) G ~) (d) G -3) -1
6. Find the eigenvalues and eigenvectors of the following matrices. Show that
the eigenvectors form a 1-dimensional space.
(a) G
o
1
o

7. Find the eigenvalues and a basis for the eigenspaces of the following ma-
trices.
1
o
o
o
o
1
o
o
(
-1 0 1)
(b) -1 3 0
-4 13 -1
8. Find the eigenvalues and a basis for the eigenspaces for the following
matrices.
(a) G
(d) (- ~
-3
~)
2
2
-6
(e) (~
2
1
(c) (_~
(
-1
(f) – 3
-3
~)
4
4
9. Let V be an n-dimensional vector space and assume that the characteristic polynomial of a linear map A: V → V has n distinct roots. Show that V has a basis consisting of eigenvectors of A.
10. Let A be a square matrix. Show that the eigenvalues of tA are the same as those of A.
11. Let A be an invertible matrix. If λ is an eigenvalue of A, show that λ ≠ 0 and that λ⁻¹ is an eigenvalue of A⁻¹.
12. Let V be the space generated over R by the two functions sin t and cos t.
Does the derivative (viewed as a linear map of V into itself) have any non-
zero eigenvectors in V? If so, which?
13. Let D denote the derivative, which we view as a linear map on the space of differentiable functions. Let k be an integer ≠ 0. Show that the functions sin kx and cos kx are eigenvectors for D². What are the eigenvalues?
14. Let A: V → V be a linear map of V into itself, and let {v_1, ..., v_n} be a basis of V consisting of eigenvectors having distinct eigenvalues c_1, ..., c_n. Show that any eigenvector v of A in V is a scalar multiple of some v_i.
15. Let A, B be square matrices of the same size. Show that the eigenvalues of
AB are the same as the eigenvalues of BA.
VIII, §3. Eigenvalues and Eigenvectors of Symmetric
Matrices
We shall give two proofs of the following theorem.
Theorem 3.1. Let A be a symmetric n x n real matrix. Then there
exists a non-zero real eigenvector for A.

One of the proofs will use the complex numbers, and the other proof
will use calculus. Let us start with the calculus proof.
Define the function

f(X) = tXAX   for X ∈ R^n.

Such a function f is called the quadratic form associated with A. If tX = (x_1, ..., x_n) is written in terms of coordinates, and A = (a_ij), then

f(X) = ∑_{i,j=1}^{n} a_ij x_i x_j.

Example. Let

A = ( 3  -1
     -1   2 ).

Let tX = (x, y). Then

tXAX = (x, y) ( 3  -1; -1  2 ) (x; y) = 3x² - 2xy + 2y².
More generally, let

A = ( a  b
      b  c ).

Then

tXAX = ax² + 2bxy + cy².

Example. Suppose we are given a quadratic expression

f(x, y) = 3x² + 5xy - 4y².

Then it is the quadratic form associated with the symmetric matrix

A = (  3   5/2
      5/2  -4 ).
In many applications, one wants to find a maximum for such a function f on the unit sphere. Recall that the unit sphere is the set of all points X such that ‖X‖ = 1, where ‖X‖ = √(X·X). It is shown in analysis courses that a continuous function f as above necessarily has a maximum on the sphere. A maximum on the unit sphere is a point P such that ‖P‖ = 1 and

f(P) ≥ f(X)   for all X with ‖X‖ = 1.

The next theorem relates this problem with the problem of finding eigen-
vectors.
Theorem 3.2. Let A be a real symmetric matrix, and let f(X) = tXAX be the associated quadratic form. Let P be a point on the unit sphere such that f(P) is a maximum for f on the sphere. Then P is an eigenvector for A. In other words, there exists a number λ such that AP = λP.

Proof. Let W be the subspace of R^n orthogonal to P, that is W = P⊥. Then dim W = n - 1. For any element w ∈ W with ‖w‖ = 1, define the curve

C(t) = (cos t)P + (sin t)w.

The directions of unit vectors w ∈ W are the directions tangent to the sphere at the point P, as shown on the figure.
Figure 2
The curve C(t) lies on the sphere because ‖C(t)‖ = 1, as you can verify at once by taking the dot product C(t)·C(t) and using the hypothesis that P·w = 0. Furthermore, C(0) = P, so C(t) is a curve on the sphere passing through P. We also have the derivative

C′(t) = (-sin t)P + (cos t)w,

and so C′(0) = w. Thus the direction of the curve is in the direction of w, and is perpendicular to the sphere at P because w·P = 0. Consider the function

g(t) = f(C(t)) = C(t)·AC(t).

Using coordinates, and the rule for the derivative of a product which applies in this case (as you might know from calculus), you find the derivative:

g′(t) = C′(t)·AC(t) + C(t)·AC′(t) = 2C′(t)·AC(t),

because A is symmetric. Since f(P) is a maximum and g(0) = f(P), it follows that g′(0) = 0. Then we obtain:

0 = g′(0) = 2C′(0)·AC(0) = 2w·AP.

Hence AP is perpendicular to w for every w ∈ W. But W⊥ is the 1-dimensional space generated by P. Hence there is a number λ such that AP = λP, thus proving the theorem.
Corollary 3.3. The maximum value of f on the unit sphere is equal to the largest eigenvalue of A.

Proof. Let λ be any eigenvalue and let P be an eigenvector on the unit sphere, so ‖P‖ = 1. Then

f(P) = tPAP = tP(λP) = λ tPP = λ.

Thus the value of f at an eigenvector on the unit sphere is equal to the eigenvalue. Theorem 3.2 tells us that the maximum of f on the unit sphere occurs at an eigenvector. Hence the maximum of f on the unit sphere is equal to the largest eigenvalue, as asserted.
Example. Let f(x, y) = 2x² - 3xy + y². Let A be the symmetric matrix associated with f. Find the eigenvectors of A on the unit circle, and find the maximum of f on the unit circle.
First we note that f is the quadratic form associated with the matrix

A = (  2   -3/2
     -3/2    1  ).

By Theorem 3.2 a maximum must occur at an eigenvector, so we first find the eigenvalues and eigenvectors.
The characteristic polynomial is the determinant

| t - 2    3/2  |
|  3/2   t - 1  |  = t² - 3t - 1/4.

Then the eigenvalues are

λ = (3 ± √10)/2.

For the eigenvectors, we must solve

2x - (3/2)y = λx,
-(3/2)x + y = λy.

Putting x = 1 this gives the possible eigenvectors

X(λ) = t(1, 2(2 - λ)/3).

Thus there are two such eigenvectors, up to non-zero scalar multiples. The eigenvectors lying on the unit circle are therefore

P(λ) = X(λ)/‖X(λ)‖   with   λ_1 = (3 + √10)/2   and   λ_2 = (3 - √10)/2.

By Corollary 3.3 the maximum is the point with the bigger eigenvalue, and must therefore be the point P(λ_1). The maximum value of f on the unit circle is (3 + √10)/2.
By the same token, the minimum value of f on the unit circle is (3 - √10)/2.
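Corollary 3.3 can be checked for this example by comparing the largest eigenvalue with a brute-force maximum over the unit circle; a sketch assuming numpy:

    import numpy as np

    A = np.array([[ 2.0, -1.5],
                  [-1.5,  1.0]])
    print(np.linalg.eigvalsh(A).max())             # (3 + sqrt(10))/2 ~ 3.0811

    theta = np.linspace(0.0, 2.0 * np.pi, 100000)
    X = np.stack([np.cos(theta), np.sin(theta)])   # points on the unit circle
    f = np.einsum('in,ij,jn->n', X, A, X)          # tX A X at each point
    print(f.max())                                 # the same value, numerically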
We shall now use the complex numbers C for the second proof. A fundamental property of complex numbers is that every non-constant polynomial with complex coefficients has a root (a zero) in the complex numbers. Therefore the characteristic polynomial of A has a complex root λ, which is a priori a complex eigenvalue, with a complex eigenvector.

Theorem 3.4. Let A be a real symmetric matrix and let λ be an eigenvalue in C. Then λ is real. If Z ≠ 0 is a complex eigenvector with eigenvalue λ, and Z = X + iY where X, Y ∈ R^n, then both X, Y are real eigenvectors of A with eigenvalue λ, and X ≠ 0 or Y ≠ 0.
Proof. Let Z = t(z_1, ..., z_n) with complex coordinates z_i. Then

tZ̄Z = z̄_1 z_1 + ... + z̄_n z_n = |z_1|² + ... + |z_n|² > 0.

By hypothesis, we have AZ = λZ. Then

tZ̄AZ = tZ̄λZ = λ tZ̄Z.

The transpose of a 1 × 1 matrix is equal to itself, and A is real and symmetric, so taking the transpose and the complex conjugate we also get

tZ̄AZ = λ̄ tZ̄Z.

Therefore λ tZ̄Z = λ̄ tZ̄Z. Since tZ̄Z ≠ 0 it follows that λ = λ̄, so λ is real.
Now from AZ = λZ we get

AX + iAY = λX + iλY,

and since λ, X, Y are real it follows that AX = λX and AY = λY. This proves the theorem.
Exercises VIII, §3
1. Find the eigenvalues of the following matrices, and the maximum value of the
associated quadratic forms on the unit circle.
(a) ( 2  -1
     -1   2 )   (b) …
2. Same question, except find the maximum on the unit sphere.
-~) (b) (-~ – ~ -~)
1 0 -1 2
3. Find the maximum and minimum of the function

f(x, y) = 3x² + 5xy - 4y²

on the unit circle.
VIII, §4. Diagonalization of a Symmetric Linear Map
In this section we give an application of the existence of eigenvectors as
proved in §3. Since we shall do an induction, instead of working with Rn
we have to start with a formulation dealing with a vector space in which
coordinates have not yet been chosen.
So let V be a vector space of dimension n over R, with a positive definite scalar product.
[…]
Theorem. Let V be a vector space of dimension n > 0, with a positive definite scalar product. Let

A: V → V

be a linear map, symmetric with respect to the scalar product. Then V has an orthonormal basis consisting of eigenvectors.
Proof. By Theorem 3.1, there exists a non-zero eigenvector P for A. Let W be the one-dimensional space generated by P. Then W is stable under A. By the above remark, W⊥ is also stable under A and is a vector space of dimension n - 1. We may then view A as giving a symmetric linear map of W⊥ into itself, and we can repeat the procedure. We put P = P_1, and by induction we can find a basis {P_2, ..., P_n} of W⊥ consisting of eigenvectors. Then

{P_1, P_2, ..., P_n}

is an orthogonal basis of V consisting of eigenvectors. We divide each
vector by its norm to get an orthonormal basis, as desired.
If {e_1, ..., e_n} is an orthonormal basis of V such that each e_j is an eigenvector, with Ae_j = λ_j e_j, then the matrix of A with respect to this basis is diagonal, and the diagonal elements are precisely the eigenvalues λ_1, ..., λ_n.
In such a simple representation, the effect of A then becomes much
clearer than when A is represented by a more complicated matrix with
respect to another basis.
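For real symmetric matrices, numpy's eigh routine produces exactly such an orthonormal eigenbasis; a minimal sketch with a hypothetical matrix:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])                # hypothetical symmetric matrix
    lam, Q = np.linalg.eigh(A)                     # columns of Q are eigenvectors
    assert np.allclose(Q.T @ Q, np.eye(3))         # orthonormal basis
    assert np.allclose(Q.T @ A @ Q, np.diag(lam))  # diagonal matrix of eigenvalues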
Example. We give an application to linear differential equations. Let A be an n × n symmetric real matrix. We want to find the solutions in R^n of the differential equation

dX(t)/dt = AX(t),

where X(t) = t(x_1(t), ..., x_n(t)) is given in terms of coordinates which are functions of t, and

dX(t)/dt = t(dx_1/dt, ..., dx_n/dt).

Writing this equation in terms of arbitrary coordinates is messy. So let us forget at first about coordinates, and view R^n as an n-dimensional vector space V with a positive definite scalar product. We choose an orthonormal basis of V (usually different from the original basis) consisting of eigenvectors of A. Now with respect to this new basis, we can identify V with R^n with new coordinates which we denote by y_1, ..., y_n.

With respect to these new coordinates, the matrix of the linear map L_A is the diagonal matrix diag(λ_1, ..., λ_n), where λ_1, ..., λ_n are the eigenvalues. But in terms of these more convenient coordinates, our differential equation simply reads

dy_i/dt = λ_i y_i   for i = 1, ..., n.

Thus the most general solution is of the form

y_i(t) = c_i e^{λ_i t}   with constants c_i.
The moral of this example is that one should not select a basis too
quickly, and one should use as often as possible a notation without
coordinates, until a choice of coordinates becomes imperative to make
the solution of a problem simpler.
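The recipe of this example translates directly into a few lines of code; a sketch assuming numpy, with a hypothetical symmetric matrix and initial condition:

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])             # hypothetical symmetric matrix
    lam, Q = np.linalg.eigh(A)             # orthonormal eigenbasis
    X0 = np.array([1.0, 0.0])              # hypothetical initial condition X(0)

    def X(t):
        y0 = Q.T @ X0                      # coordinates in the eigenbasis
        return Q @ (np.exp(lam * t) * y0)  # y_i(t) = c_i e^{lambda_i t}

    t, h = 0.7, 1e-6
    dX = (X(t + h) - X(t - h)) / (2 * h)
    assert np.allclose(dX, A @ X(t), atol=1e-5)    # dX/dt = A X(t)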
Exercises VIII, §4
1. Suppose that A is a diagonal n × n matrix. For any X ∈ R^n, what is tXAX in terms of the coordinates of X and the diagonal elements of A?
2. Let A be a diagonal matrix with diagonal elements λ_1 ≥ 0, ..., λ_n ≥ 0. Show that there exists an n × n diagonal matrix B such that B² = A.
3. Let V be a finite dimensional vector space with a positive definite scalar product. Let A: V → V be a symmetric linear map. We say that A is positive definite if ⟨Av, v⟩ > 0 for all v ∈ V and v ≠ O. Prove:
(a) If A is positive definite, then all eigenvalues are > 0.
(b) If A is positive definite, then there exists a symmetric linear map B such that B² = A and BA = AB. What are the eigenvalues of B? [Hint: Use a basis of V consisting of eigenvectors.]

4. We say that A is positive semidefinite if ⟨Av, v⟩ ≥ 0 for all v ∈ V. Prove the analogues of (a) and (b) of Exercise 3 for positive semidefinite maps.

Appendix. Complex Numbers

Recall that if α = a + bi is a complex number, then its complex conjugate is ᾱ = a − bi, and its absolute value is |α| = √(αᾱ) = √(a² + b²). The absolute value satisfies:

    |α| ≥ 0, and = 0 if and only if α = 0,
    |αβ| = |α||β|,
    |α + β| ≤ |α| + |β|.

The first assertion is obvious. As to the second, we have

    |αβ|² = αβ·(ᾱβ̄) = αᾱββ̄ = |α|²|β|².

Taking the square root, we conclude that |α||β| = |αβ|. Next, we have

    |α + β|² = (α + β)(ᾱ + β̄),

because the conjugate of a sum is the sum of the conjugates. Expanding out gives |α|² + 2 Re(βᾱ) + |β|², and the real part of a complex number is ≤ its absolute value. Hence

    |α + β|² ≤ |α|² + 2|βᾱ| + |β|² ≤ |α|² + 2|β||α| + |β|² = (|α| + |β|)².

Taking the square root yields the final property.

Let z = x + iy be a complex number ≠ 0. Then z/|z| has absolute value 1.

The main advantage of working with complex numbers rather than real numbers is that every non-constant polynomial with complex coefficients has a root in C. This is proved in more advanced courses in analysis. For instance, a quadratic equation ax² + bx + c = 0, with a ≠ 0, has the roots

    x = (−b ± √(b² − 4ac)) / 2a.

If b² − 4ac is positive, then the roots are real. If b² − 4ac is negative, then the roots are complex. The proof of the quadratic formula uses only the basic arithmetic of addition, multiplication, and division. Namely, we complete the square to see that

    ax² + bx + c = a(x + b/2a)² − b²/4a + c = a(x + b/2a)² − (b² − 4ac)/4a.

Then we solve

    a(x + b/2a)² = (b² − 4ac)/4a,

take the square root, and finally get the desired value for x.

Application to vector spaces

To define the notion of a vector space, we need first the notion of scalars. And the only facts we need about scalars are those connected with addition, multiplication, and division by non-zero elements. These basic operations of arithmetic are all satisfied by the complex numbers. Therefore we can do the basic theory of vector spaces over the complex numbers. We have the same theorems about linear combinations, matrices, row rank, column rank, dimension, determinants, characteristic polynomials, eigenvalues.

The only basic difference (and it is slight) comes when we deal with the dot product. If Z = (z₁, …, zₙ) and W = (w₁, …, wₙ) are n-tuples in Cⁿ, then their dot product is as before:

    Z·W = z₁w₁ + ⋯ + zₙwₙ.

But observe that even if Z ≠ 0, the product Z·Z may be 0. For instance, let Z = (1, i) in C². Then

    Z·Z = 1 + i² = 1 − 1 = 0.

Hence the dot product is not positive definite.

To remedy this, one defines a product which is called hermitian, and is almost equal to the dot product, but contains a complex conjugate. That is, we define

    ⟨Z, W⟩ = Z·W̄,  where W̄ = (w̄₁, …, w̄ₙ),

and so we put a complex conjugate on the coordinates of W. Then

    ⟨Z, Z⟩ = z₁z̄₁ + ⋯ + zₙz̄ₙ = |z₁|² + ⋯ + |zₙ|².

Hence once again, if Z ≠ 0, then some coordinate zᵢ ≠ 0, so the sum on the right is ≠ 0, and ⟨Z, Z⟩ > 0.
If α is a complex number, then from the definition we see that

    ⟨αZ, W⟩ = α⟨Z, W⟩  and  ⟨Z, αW⟩ = ᾱ⟨Z, W⟩.

If the coordinates of Z and W are all real, then the hermitian product is the same as the dot product.
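A quick numerical illustration of the difference (mine, not the book's):

```python
# On C^2 the plain dot product is not positive definite, but the
# hermitian product <Z, W> = Z . conj(W) is.
import numpy as np

Z = np.array([1.0 + 0j, 1j])
print(Z @ Z)           # dot product: 1 + i^2 = 0 even though Z != 0
print(Z @ Z.conj())    # hermitian product: |1|^2 + |i|^2 = 2
print(np.vdot(Z, Z))   # numpy's built-in conjugating inner product, also 2
```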
One can then develop the Gram-Schmidt orthogonalization process
just as before using the hermitian product rather than the dot product.
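Here is a minimal sketch of that process (my own; the helper names herm and gram_schmidt are mine, and the convention conjugates the second argument, as in the definition above):

```python
# Gram-Schmidt over C^n using the hermitian product in place of the
# real dot product (my own sketch, with made-up starting vectors).
import numpy as np

def herm(u, v):
    return np.sum(u * v.conj())

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for e in basis:
            w = w - herm(w, e) * e                    # remove component along e
        basis.append(w / np.sqrt(herm(w, w).real))    # normalize
    return basis

e1, e2 = gram_schmidt([np.array([1.0, 1j]), np.array([1j, 2.0])])
print(herm(e1, e2))   # ~0: orthogonal under the hermitian product
print(herm(e1, e1))   # ~1: unit length
```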
In the application of this chapter we did not need the hermitian pro-
duct. All we needed was that a complex n x n matrix A has an eigen-
value, and that the eigenvalues are the roots of the characteristic
polynomial
det(tI – A).
As mentioned before, a non-constant polynomial with complex coeffi-
cients always has a root in the complex numbers, so A always has an
eigenvalue in C. In the text, we showed that when A is real symmetric,
then such eigenvalues must in fact be real.
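A small numerical check of this last point (my own illustration; the example matrix is arbitrary):

```python
# For a (possibly non-symmetric) matrix, the eigenvalues computed directly
# agree with the roots of the characteristic polynomial det(tI - A).
import numpy as np

A = np.array([[1.0, 2.0],
              [-2.0, 1.0]])          # real matrix with eigenvalues 1 ± 2i

eigs = np.linalg.eigvals(A)
char_poly = np.poly(A)               # coefficients of det(tI - A)
roots = np.roots(char_poly)

print(sorted(eigs, key=np.angle))
print(sorted(roots, key=np.angle))   # same values: 1+2j and 1-2j
```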

Answers to Exercises
I, §1, p. 8
        A + B            A − B             3A              −2B
1.  (1, 0)           (3, −2)          (6, −3)         (2, −2)
2.  (−1, 7)          (−1, −1)         (−3, 9)         (0, −8)
3.  (1, 0, 6)        (3, −2, 4)       (6, −3, 15)     (2, −2, −2)
4.  (−2, 1, −1)      (0, −5, 7)       (−3, −6, 9)     (2, −6, 8)
5.  (3π, 0, 6)       (−π, 6, −8)      (3π, 9, −3)     (−4π, 6, −14)
6.  (15 + π, 1, 3)   (15 − π, −5, 5)  (45, −6, 12)    (−2π, −6, 2)
I, §2, p. 12
1. No 2. Yes 3. No 4. Yes 5. No 6. Yes 7. Yes 8. No
I, §3, p. 15
1. (a) 5 (b) 10 (c) 30 (d) 14 (e) π² + 10 (f) 245
2. (a) −3 (b) 12 (c) 2 (d) −17 (e) 2π² − 16 (f) 15π − 10
4. (b) and (d)
I, §4, p. 29
1. (a) √5 (b) √10 (c) √30 (d) √14 (e) √(10 + π²) (f) √245

266 ANSWERS TO EXERCISES
2. (a) √2 (b) 4 (c) √3 (d) √26 (e) √(58 + 4π²) (f) √(10 + π²)
3. (a) (3/2, −3/2) (b) (0, 3) (c) (−2/3, 2/3, 2/3) (d) (17/26, −51/26, 17/13)
   (e) ((π² − 8)/(2π² + 29))·(2π, −3, 7) (f) ((15π − 10)/(π² + 10))·(π, 3, −1)
4. (a) (−6/5, 3/5) (b) (−6/5, 18/5) (c) (2/15, −1/15, 1/3) (d) −(17/14)·(−1, −2, 3)
   (e) ((2π² − 16)/(π² + 10))·(π, 3, −1) (f) ((3π − 2)/49)·(15, −2, 4)
-1 -2 10 13 -1
5. (a) ~ hA (b) ~ (c) I1A FiC (d) M1 111 (e) ~ ~
y5 y 34 y5 y14 y 35 y21yl1 y12
6. (a) 35/√(41·35), 6/√(41·6), 0 (b) 1/√(17·26), 16/√(41·17), 25/√(26·41)
7. Let us dot the sum c₁A₁ + ⋯ + cₙAₙ = 0 with Aᵢ. Since Aⱼ·Aᵢ = 0 if j ≠ i, we find cᵢ(Aᵢ·Aᵢ) = 0. But Aᵢ·Aᵢ ≠ 0 by assumption. Hence cᵢ = 0, as was to be shown.
8. (a) ‖A + B‖² + ‖A − B‖² = (A + B)·(A + B) + (A − B)·(A − B)
       = A² + 2A·B + B² + A² − 2A·B + B²
       = 2A² + 2B² = 2‖A‖² + 2‖B‖²
9. ‖A − B‖² = A² − 2A·B + B² = ‖A‖² − 2‖A‖‖B‖ cos θ + ‖B‖²
I, §5, p. 34
1. (a) Let A = P₂ − P₁ = (−5, −2, 3). A parametric representation of the line is
       X(t) = P₁ + tA = (1, 3, −1) + t(−5, −2, 3).
   (b) (−1, 5, 3) + t(−1, −1, 4)
2. X = (1, 1, −1) + t(3, 0, −4)  3. X = (−1, 5, 2) + t(−4, 9, 1)
4. (a) (-~, 4,!) (b) (-j, 131 , 0), (-i, 133, 1) (c) (0, 15
7
, -~) (d) (-1, 1{, ~)
5. P + (1/2)(Q − P) = (P + Q)/2

ANSWERS TO EXERCISES 267
I, §6, p. 40
1. The normal vectors (2, 3) and (5, −5) are not perpendicular because their dot product 10 − 15 = −5 is not 0.
2. The normal vectors are (−m, 1) and (−m′, 1), and their dot product is mm′ + 1. The vectors are perpendicular if and only if this dot product is 0, which is equivalent to mm′ = −1.
3. y = x + 8  4. 4y = 5x − 7  6. (c) and (d)
7. (a) x − y + 3z = −1 (b) 3x + 2y − 4z = 2π + 26 (c) x − 5z = −33
8. (a) 2x + y + 2z = 7 (b) 7x − 8y − 9z = −29 (c) y + z = 1
9. (3, −9, −5), (1, 5, −7) (Others would be constant multiples of these.)
10. (−2, 1, 5)  11. (11, 13, −7)
12. (a) X = (1, 0, −1) + t(−2, 1, 5)
    (b) X = (−10, −13, 7) + t(11, 13, −7), or also (1, 0, 0) + t(11, 13, −7)
1 2 4 2
13. (a) -3 (b) -J (c) J~- (d) 110
42 66 v’ 18
15. (1, 3, −2)
16. (a) 8/√35 (b) 13/√21
II, §1, p. 46
1. A + B = (0 7 1; 0 1 1),  A − B = (2 −3 5; −2 −1 3),
   3B = (−3 15 −6; 3 3 −3),  −2B = (2 −10 4; −2 −2 2),
   2A + B = (1 9 4; −1 1 3),  A + 2B = (−1 12 −1; 1 2 0),
   A − 2B = (3 −8 7; −3 −2 4),  B − A = (−2 3 −5; 2 1 −3)
   (matrix rows are separated by semicolons)

2. A + B = (0 0; 2 −2),  A − B = (2 −2; 2 4),
   3B = (−3 3; 0 −9),  −2B = (2 −2; 0 6),
   A + 2B = (−1 1; 2 −5),  B − A = (−2 2; −2 −4)

3. Rows of A: (1, 2, 3), (−1, 0, 2)
   Columns of A: (1, −1), (2, 0), (3, 2)
   Rows of B: (−1, 5, −2), (1, 1, −1)
   Columns of B: (−1, 1), (5, 1), (−2, −1)

   Rows of A: (1, −1), (2, 1)
   Columns of A: (1, 2), (−1, 1)
   Rows of B: (−1, 1), (0, −3)
   Columns of B: (−1, 0), (1, −3)

4. (a) tA = (1 −1; 2 0; 3 2),  tB = (−1 1; 5 1; −2 −1)
   (b) tA = (1 2; −1 1),  tB = (−1 0; 1 −3)
5. Let cᵢⱼ = aᵢⱼ + bᵢⱼ. The ij-component of t(A + B) is cⱼᵢ = aⱼᵢ + bⱼᵢ, which is the sum of the ji-component of A plus the ji-component of B.
7. Same 8. (~ _~} same
9. A + tA = (2 1; 1 2),  B + tB = (−2 1; 1 −6)
10. (a) t(A + tA) = tA + ttA = tA + A = A + tA.
    (b) t(A − tA) = tA − ttA = −(A − tA).
    (c) The diagonal elements are 0 because they satisfy aᵢᵢ = −aᵢᵢ.
II, §2, p. 58
1. IA = AI = A   2. O
3. (a) (! ~) CO) C3 (b) 14 (c) 11 37) -18
5. AB = G -~} BA =(! ~)
6. AC = CA = ( 7
21 ~~} C
4
BC = CB = 7 ~)
If C = xI, where x is a number, then AC = CA = xA.
7. (3, 1, 5), first row
8. Second row, third row, i-th row
9. (a) (!) (b) G) (c) G)

10. (a) (D (b) (!) (c) G)
11. Second column of A 12. j-th column of A
13. (a) (D (b) G) (c) C;) (d) (:J
14. (a) (a  ax + b; c  cx + d). Add a multiple of the first column to the second column. Other cases are similar.
16. (a) A² = (0 0 1; 0 0 0; 0 0 0), and A³ = O (the zero matrix). If

    B = (0 1 1 1; 0 0 1 1; 0 0 0 1; 0 0 0 0),

then

    B² = (0 0 1 2; 0 0 0 1; 0 0 0 0; 0 0 0 0),
    B³ = (0 0 0 1; 0 0 0 0; 0 0 0 0; 0 0 0 0),

and B⁴ = O.
    (b) A² = (1 2; 0 1),  A³ = (1 3; 0 1),  A⁴ = (1 4; 0 1)
17. (0 0 0; 0 4 0; 0 0 9), (0 0 0; 0 8 0; 0 0 27), (0 0 0; 0 16 0; 0 0 81)
18. Diagonal matrix with diagonal a₁ᵏ, a₂ᵏ, …, aₙᵏ.
19. 0,0
20. (a) ( _ ~ ~)
(b) (a  b; −a²/b  −a) for any a and b ≠ 0; if b = 0, then (0 0; c 0).
21. (a) Inverse is I + A.
(b) Multiply I − A by I + A + A² on each side. What do you get?
22. (a) Multiply each side of the relation B = TAT⁻¹ on the left by T⁻¹ and on the right by T. We get

    T⁻¹BT = A.

Hence there exists a matrix, namely T⁻¹, such that T⁻¹BT = A. This means that B is similar to A.

(b) Suppose A has the inverse A⁻¹. Then TA⁻¹T⁻¹ is an inverse for B, because

    (TA⁻¹T⁻¹)(TAT⁻¹) = TA⁻¹AT⁻¹ = TT⁻¹ = I.

And similarly B·TA⁻¹T⁻¹ = I.
(c) Take the transpose of the relation B = TAT⁻¹. We get

    tB = t(T⁻¹) tA tT.

This means that tB is similar to tA, because there exists a matrix, namely t(T⁻¹) = C, such that tB = C tA C⁻¹.
23. Diagonal elements are a₁₁b₁₁, …, aₙₙbₙₙ. They multiply componentwise.
24. (1  a + b; 0  1),  (1  na; 0  1)
25. G -~)
26. Multiply AB on each side by B⁻¹A⁻¹. What do you get? Note the order in which the inverses are taken.
27. (a) The addition formula for cosine is

        cos(θ₁ + θ₂) = cos θ₁ cos θ₂ − sin θ₁ sin θ₂.

    This and the formula for the sine will give what you want.
    (b) A(θ)⁻¹ = A(−θ). Multiply A(θ) by A(−θ); what do you get?
    (c) Aⁿ = (cos nθ  −sin nθ; sin nθ  cos nθ). You can prove this by induction. Take the product of Aⁿ with A. What do you get?
(01 – 0
1) 1 (1 – 1) ( – 1 0) (- 1 28. (a) (b) Ji 1 1 ( c) 0 _ 1 ( d) 0
(e)~(_fi ~) (f)~(~ ~) (g) fi(=~ -~)
29. (cos θ  sin θ; −sin θ  cos θ)
30. (1/√2)(−1, 3)  31. (−3, −1)
32. The coordinates of Y are given by

    y₁ = x₁ cos θ − x₂ sin θ,
    y₂ = x₁ sin θ + x₂ cos θ.

Find y₁² + y₂² by expanding out, using simple arithmetic. Lots of terms will cancel out to leave x₁² + x₂².
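A short numerical check of the rotation-matrix facts in Exercises 27–32 (my own illustration):

```python
# Verifies A(t1)A(t2) = A(t1 + t2), A(t)^n = A(nt), and that rotations
# preserve length (my own sketch with arbitrary angles).
import numpy as np

def A(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

t1, t2 = 0.7, 1.1
print(np.allclose(A(t1) @ A(t2), A(t1 + t2)))                  # addition formulas
print(np.allclose(np.linalg.matrix_power(A(t1), 5), A(5*t1)))  # A^n = A(n*theta)

X = np.array([3.0, -4.0])
print(np.linalg.norm(A(t1) @ X), np.linalg.norm(X))            # both 5.0
```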

4
o
o
o
2 -2) (b) (0 o 0 2
o 0 0
o 0 0
o 0
3 -1
o 0
o 0
34. (a) Interchange first and second row of A.
    (b) Interchange second and third row of A.
    (c) Add five times second row to fourth row of A.
    (d) Add −2 times second row to third row of A.
35. (a) Multiply first row of A by 3.
(b) Add 3 times third row to first row.
(c) Subtract 2 times first row from second row.
(d) Subtract 2 times second row from third row.
36. (a) Put s-th row of A in r-th place, zeros elsewhere.
(b) Interchange r-th and s-th rows, put zeros elsewhere.
(c) Interchange r-th and s-th rows.
37. (a) Add 3 times s-th row to r-th row.
    (b) Add c times s-th row to r-th row.
II, §3, p. 69
1. Let X = (x₁, …, xₙ). Then X·Eᵢ = xᵢ, so if this is 0 for all i, then xᵢ = 0 for all i.
3. X·(c₁A₁ + ⋯ + cₙAₙ) = c₁X·A₁ + ⋯ + cₙX·Aₙ = 0.
II, §4, p. 76
(There are several possible answers to the row echelon form; we give one of them. Others are also correct.)
1. (a) (:
2 -5) and also
(:
0
~6) 9 -26 1 -9
0 0 0 0
(b) (:
0 2) and also ( 0
D
-1 -1 0 1
0 -1 0 0
2. (a) (:
-2 3 -!) and also
(:
0 0
-l) 3 -4 1 0
0 7 -10 0 10 -7
(b) (~ 0 -7 5) and also ( 0 7 -~)
2:
1 3 -2 0 1 3
0 0 o 0 0 0

3. (a) (~ 2 -1 2 🙂 or also
(~
2 0 0
-D
0 3 -6 0 1 0
0 0 -6 0 0
(b) 1 3 -1 2 or also 0 4 U) IT 11 0 11 -5 3 0 1 5 3 -IT IT
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
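Echelon forms like the ones above can be checked mechanically. A minimal sketch using sympy (my own; the example matrix is made up and is not one of the exercise matrices):

```python
# sympy's rref() returns the reduced row echelon form and the pivot
# columns, a quick way to verify hand computations like those above.
from sympy import Matrix

M = Matrix([[1, 2, -5],
            [3, -4, 9],
            [2, 1, -3]])
rref, pivots = M.rref()
print(rref)    # the unique reduced row echelon form, with exact fractions
print(pivots)  # indices of the pivot columns
```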
II, §5, p. 85
1. (a) – 2~( -:
1 -7) (2 23 -11) -6 ~ (b) ~ ~ 19 -8
-12 2 -10 5
(c) ~ ( ~
2 -9) (d) !(~
-16
-:) 2 -3 7 -2 -4 10 5 0 -2
-~(-3~
-19 0) (e) -14 12
76
28 17 -20
(17 7 -:) (f) – ~ 11 -7
14
13 -7 11
2. The effect of multiplication by Iᵣₛ is to put the s-th row in the r-th place, and zeros elsewhere. Thus the s-th row of IᵣₛA is 0. Multiplying by Iᵣₛ once more puts 0 in the r-th row, and 0 elsewhere, so Iᵣₛ² = O.
3. We have Eᵣₛ(c) = I + cIᵣₛ and Eᵣₛ(c′) = I + c′Iᵣₛ, so

    Eᵣₛ(c)Eᵣₛ(c′) = I + cIᵣₛ + c′Iᵣₛ + cc′Iᵣₛ² = I + (c + c′)Iᵣₛ = Eᵣₛ(c + c′),

because Iᵣₛ² = O.

III, §1, p. 93
1. Let B and C be perpendicular to Aᵢ for all i. Then

    (B + C)·Aᵢ = B·Aᵢ + C·Aᵢ = 0  for all i.

Also, for any number x,

    (xB)·Aᵢ = x(B·Aᵢ) = 0  for all i.

Finally, O·Aᵢ = 0 for all i. This proves that W is a subspace.

2. (c) Let W be the set of all (x, y) such that x + 4y = 0. Elements of W are then of the form (−4y, y). Letting y = 0 shows that (0, 0) is in W. If (−4y, y) and (−4y′, y′) are in W, then their sum is (−4(y + y′), y + y′) and so lies in W. If c is a number, then c(−4y, y) = (−4cy, cy), which lies in W. Hence W is a subspace.
4. Let v I’ V 2 be in the intersection U n W Then their sum v I + V 2 is both in U
(because VI’ V 2 are in U) and in W (because VI’ V 2 are in W) so is in the
intersection U n W. We leave the other conditions to you.
Now let us prove partially that U + W is a subspace. Let u₁, u₂ be elements of U and w₁, w₂ be elements of W. Then

    (u₁ + w₁) + (u₂ + w₂) = (u₁ + u₂) + (w₁ + w₂),

and this has the form u + w, with u = u₁ + u₂ in U and w = w₁ + w₂ in W. So the sum of two elements in U + W is also in U + W. We leave the other conditions to the reader.
5. Let A and B be perpendicular to all elements of V. Let X be an element of V. Then (A + B)·X = A·X + B·X = 0, so A + B is perpendicular to all elements of V. Let c be a number. Then (cA)·X = c(A·X) = 0, so cA is perpendicular to all elements of V. This proves that the set of elements of Rⁿ perpendicular to all elements of V is a subspace.
III, §4, p. 109
2. (a) A – B, (1, -1) (b)!A + ~B, (!, ~)
(c) A + B, (1, 1) (d) 3A + 2B, (3, 2)
3. (a) (i, -i, i) (b) (1, 0, 1) (c) (i, -i, -i)
4. Assume that ad − bc ≠ 0. Let A = (a, b) and C = (c, d). Suppose we have

    xA + yC = 0.

This means in terms of coordinates

    xa + yc = 0,
    xb + yd = 0.

Multiply the first equation by d, the second by c and subtract. We find

    x(ad − bc) = 0.

Since ad − bc ≠ 0 this implies that x = 0. A similar elimination shows that y = 0. This proves (i).
    Conversely, suppose A, C are linearly independent. Then neither of them can be (0, 0) (otherwise pick x, y ≠ 0, and get xA + yC = 0, which is impossible). Say b or d ≠ 0. Then

    d(a, b) − b(c, d) = (ad − bc, 0).

Since A, C are assumed linearly independent, the right-hand side cannot be 0, so ad − bc ≠ 0. The argument is similar if a or c ≠ 0.
For (iii), given an arbitrary vector (s, t), solve the system of linear equations arising from xA + yC = (s, t) by elimination. You will find precisely that you need ad − bc ≠ 0 to do so.
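A tiny numerical companion to this argument (mine; the numbers are arbitrary):

```python
# A = (a, b) and C = (c, d) are linearly independent exactly when
# ad - bc != 0, in which case xA + yC = (s, t) always has a unique solution.
import numpy as np

a, b, c, d = 2.0, 1.0, 3.0, 4.0
M = np.array([[a, c],
              [b, d]])             # columns are A and C
print(np.linalg.det(M))            # ad - bc = 5.0, nonzero

s, t = 7.0, -1.0
x, y = np.linalg.solve(M, [s, t])  # coefficients with xA + yC = (s, t)
print(x * np.array([a, b]) + y * np.array([c, d]))   # recovers (7, -1)
```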
6. Look at Chapter I, §4, Exercise 7.
9. (3, 5)
10. (-5, 3)
11. Possible basis: (1 0; 0 0), (0 1; 0 0), (0 0; 1 0), (0 0; 0 1)
12. {Eᵢⱼ}, where Eᵢⱼ has component 1 at the (i, j) place and 0 otherwise. These elements generate Mat(m × n), because given any matrix A = (aᵢⱼ) we can write it as a linear combination

    A = Σᵢ Σⱼ aᵢⱼEᵢⱼ.

Furthermore, if

    0 = Σᵢ Σⱼ aᵢⱼEᵢⱼ,

then we must have aᵢⱼ = 0 for all indices i, j, so the elements Eᵢⱼ are linearly independent.
13. Eᵢ, where Eᵢ is the n × n matrix whose ii-component is 1 and all other components are 0.
14. A basis can be chosen to consist of the elements Eᵢⱼ having ij-component equal to 1 for i ≤ j and all other components equal to 0. The number of such elements is

    1 + 2 + ⋯ + n = n(n + 1)/2.
15. (a) (~ ~} G ~} G ~)
(b) {(~ ~ ~)(~ : ~)(~ ~ ~)(! ~ ~)(~ ~ ~)(~ ~ !)}
16. A basis for the space Sym(n × n) of symmetric n × n matrices can be taken to be the elements with i ≤ j having ij-component equal to 1, ji-component equal to 1, and rs-component equal to 0 if (r, s) ≠ (i, j) or (j, i). The proof that these generate Sym(n × n) and are linearly independent is similar to the proof in Exercise 12.
III, §5, p. 115
1. (a) 4 (b) mn (c) n (d) n(n + 1)/2 (e) 3 (f) 6 (g) n(n + 1)/2
2. 0, 1, or 2, by Theorem 5.8. The subspace consists of 0 alone if and only if it has dimension 0. If the subspace has dimension 1, let v₁ be a basis. Then the subspace consists of all elements tv₁, for all numbers t, so is a line by definition. If the subspace has dimension 2, let v₁, v₂ be a basis. Then the subspace consists of all elements t₁v₁ + t₂v₂, where t₁, t₂ are numbers, so is a plane by definition.
3. 0, 1, 2, or 3 by Theorem 5.8.
III, §6, p. 121
1. (a) 2 (b) 2 (c) 2 (d) 1 (e) 2 (f) 3 (g) 3 (h) 2 (i) 2
IV, §1, p. 126
1. (a) cos x (b) eˣ (c) 1/x   2. (−1/√2, 1/√2)
3. (a) 11 (b) 13 (c) 6
4. (a) (e, 1) (b) (1, 0) (c) (1/e, −1)
5. (a) (e + 1, 3) (b) (e² + 2, 6) (c) (1, 0)
6. (a) (2, 0) (b) (ne, n)
7. (a) 1 (b) 11
8. Ellipse 9x² + 4y² = 36  9. Line x = 2y
10. Circle x² + y² = e², circle x² + y² = e^(2c)
11. Cylinder, radius 1, z-axis = axis of cylinder  12. Circle x² + y² = 1
IV, §2, p. 134
1. All except (c), (g)
2. Only Exercise 8.
5. Since AX = BX for all X, this relation is true in particular when X = Eʲ is the j-th unit vector. But then AEʲ = Aʲ is the j-th column of A, and BEʲ = Bʲ is the j-th column of B, so Aʲ = Bʲ for all j. This proves that A = B.
6. Only u = 0, because Tᵤ(0) = u, and if Tᵤ is linear, then we must have Tᵤ(0) = 0.

7. The line ℓ can be represented in the form P + tv₁, with all numbers t. Then L(ℓ) consists of all points

    L(P) + tL(v₁).

If L(v₁) = 0, this is a single point. If L(v₁) ≠ 0, this is a line. Other cases are done similarly.
8. Parallelogram whose vertices are B, 3A, 3A + B, 0.
9. Parallelogram whose vertices are 0, 2B, 5A, 5A + 2B.
10. (a) (−1, −1) (b) (−2/3, 1) (c) (−2, −1)
11. (a) (4, 5) (b) (11/3, −3) (c) (4, 2)
12. Suppose we have a relation Σ xᵢvᵢ = 0. Apply F. We obtain Σ xᵢF(vᵢ) = Σ xᵢwᵢ = 0. Since the wᵢ's are linearly independent, it follows that all xᵢ = 0.
13. (a) Let v be an arbitrary element of V. Since F(v₀) ≠ 0 there exists a number c such that

    F(v) = cF(v₀),

namely c = F(v)/F(v₀). Then F(v − cv₀) = 0, so let w = v − cv₀. We have written v = w + cv₀ as desired.
(b) W is a subspace by Exercise 3. By part (a), the elements v₀, v₁, …, vₙ generate V. Suppose there is a linear relation

    c₀v₀ + c₁v₁ + ⋯ + cₙvₙ = 0.

Apply F. We get c₀F(v₀) = 0. Since F(v₀) ≠ 0 it follows that c₀ = 0. But then cᵢ = 0 for i = 1, …, n, because v₁, …, vₙ form a basis of W.
IV, §3, p. 141
1 and 2. If U is a subspace of V, then dim L(U) ≤ dim U. Hence the image of a one-dimensional subspace has dimension 0 or 1. The image of a two-dimensional subspace has dimension 0, 1, or 2. A line or plane is of the form P + U, where U has dimension 1 or 2. Its image is of the form L(P) + L(U), so the assertions are now clear.
3. (a) By the dimension formula, the image of F has dimension n. By Theorem
4.6 of Chapter III, the image must be all of W.
(b) is similar.
4. Use the dimension formula.
5. Since L(v₀ + u) = L(v₀) if u is in Ker L, every element of the form v₀ + u is a solution. Conversely, let v be a solution of L(v) = w. Then

    L(v − v₀) = L(v) − L(v₀) = w − w = 0,

so v − v₀ = u is in the kernel, and v = v₀ + u.

6. Constant functions.
7. Ker D² = polynomials of degree ≤ 1; Ker Dⁿ = polynomials of degree ≤ n − 1.
8. (a) Constant multiples of eˣ (b) Constant multiples of e^(ax)
9. (a) n − 1 (b) n² − 1
10. A = (A + tA)/2 + (A − tA)/2. If A = B + C = B₁ + C₁, then B − B₁ = C₁ − C. But B − B₁ = C₁ − C is both symmetric and skew-symmetric, so 0, because each component is equal to its own negative.
11. (c) Taking the transpose of (A + tA)/2, show that this is a symmetric matrix. Conversely, given a symmetric matrix B, we see that B = P(B), so B is in the image of P.
(d) n(n − 1)/2.
(e) A basis for the skew-symmetric matrices consists of the matrices with i < j having ij-component equal to 1, ji-component equal to −1, and all other components equal to 0.

12. Similar to 11.

13 and 14. Similar to 11 and 12.

15. (a) 0 (b) m + n, with basis {(uᵢ, 0), (0, wⱼ)}, i = 1, …, m; j = 1, …, n, if {uᵢ} is a basis of U and {wⱼ} is a basis of W.

16. (b) The image is clearly contained in U + W. Given an arbitrary element u + w with u in U and w in W, we can write it in the form u + w = u − (−w), which shows that it is in the image of L.
(c) The kernel of L consists of those elements (u, w) such that u − w = 0, so u = w. In other words, it consists of the pairs (u, u), and u must lie both in U and W, so in the intersection. If {u₁, …, uᵣ} is a basis for U ∩ W, then {(u₁, u₁), …, (uᵣ, uᵣ)} is a basis for the kernel of L. The dimension is the same as the dimension of U ∩ W. Then apply the dimension formula in the text.

IV, §4, p. 149

1. n − 1   2. 4   3. n − 1

4. (a) dim = 1, basis (1, −1, 0) (b) dim = 2, basis (1, 1, 0), (0, 1, 1) (c) dim = 1, basis ((π − 3)/10, (π + 2)/5, 1) (d) dim = 0

5. (a) 1 (b) 1 (c) 0 (d) 2

6. One theorem states that dim V = dim Im L + dim Ker L. Since dim Ker L ≥ 0, the desired inequality follows.

7. One proof (there are others): rank A = dim Im L_A. But L_AB = L_A ∘ L_B. Hence the image of L_AB is contained in the image of L_A. Hence rank AB ≤ rank A. For the other inequality, note that the rank of a matrix is equal to the rank of its transpose, because column rank equals row rank. Hence rank AB = rank t(AB) = rank tB tA. Now apply the first inequality to get rank tB tA ≤ rank tB = rank B.

IV, §5, p. 156

1. (a) (1 0 0; 0 1 0; 0 0 1) (b) (1 0 0; 0 1 0; 0 0 0) (c) 3I (d) 7I (e) −I (f) (0 1 0; 0 0 0; 0 0 0)

2. cI, where I is the unit n × n matrix.

3. (a) ( 1 -4 ~) (b) (! -2 ~) -3 2 -1

4. (a) ( ~ 1 -n (b) (~ 0 0) (-2 0 n -2 -7 ~ (c) ~ 0 -1 1 0 -1

5. Let Lvᵢ = Σⱼ cᵢⱼwⱼ and let C = (cᵢⱼ). The associated matrix is tC, and the effect of L on a coordinate vector X is tCX.

6. The diagonal matrix with diagonal entries c₁, c₂, …, cₙ.   7. ( 0 ~)   8. -1 0 0

V, §1, p. 162

1. Let C = A − B. Then CX = 0 for all X. Take X = Eⱼ to be the j-th unit vector, for j = 1, …, n. Then CEⱼ = Cʲ is the j-th column of C. By assumption, CEⱼ = 0 for all j, so C = 0.

2. Use distributivity and the fact that F ∘ L = L ∘ F.

3. Same proof as with numbers.

4. P² = ¼(I + T)² = ¼(I² + 2T + T²) = ¼(2I + 2T) = ½(I + T) = P. Part (b) is left to you. For part (c), see the next problem.

5. (a) Q² = (I − P)² = I² − 2P + P² = I − 2P + P = I − P = Q.
(b) Let v ∈ Im P, so v = Pw for some w. Then Qv = QPw = 0 because QP = (I − P)P = P − P² = P − P = 0. Hence Im P ⊂ Ker Q. Conversely, let v ∈ Ker Q, so Qv = 0. Then (I − P)v = 0, so v − Pv = 0, and v = Pv, so v ∈ Im P, whence Ker Q ⊂ Im P.

6. Let v ∈ V. Then v = (v − Pv) + Pv, and v − Pv ∈ Ker P because P(v − Pv) = Pv − P²v = Pv − Pv = 0. Furthermore Pv ∈ Im P, thus proving (a). As to (b), let v ∈ Im P ∩ Ker P. Since v ∈ Im P there exists w ∈ V such that v = Pw. Since v ∈ Ker P, we get 0 = Pv = P²w = Pw = v, so v = 0, whence (b) is also proved.

7. Suppose u + w = u₁ + w₁. Then u − u₁ = w₁ − w. But u − u₁ ∈ U and w₁ − w ∈ W because U, W are subspaces. By the assumption that U ∩ W = {0}, we conclude that u − u₁ = 0 = w₁ − w, so u = u₁ and w = w₁.

8. P²(u, w) = P(u, 0) = (u, 0) = P(u, w). So P² = P.

9. The dimension of a subspace is ≤ the dimension of the space. Since Im F ∘ L ⊂ Im F, we get dim Im F ∘ L ≤ dim Im F, so rank F ∘ L ≤ rank F. This proves one of the formulas. For the other, view F as a linear map defined on Im L. Then rank F ∘ L = dim Im F ∘ L ≤ dim Im L = rank L.

V, §2, p. 168

1. Rθ⁻¹ = R₋θ because Rθ ∘ R₋θ = Rθ₋θ = R₀ = I. The matrix associated with Rθ⁻¹ is

    (cos θ  sin θ; −sin θ  cos θ),

because cos(−θ) = cos θ and sin(−θ) = −sin θ.

3. The composites in both orders are the identity.

4, 5, 6. In each case show that the kernel is 0 and apply the appropriate theorem.

7 through 10. The proof is similar to the same proof for matrices, using distributivity. In 7, we have (I − L) ∘ (I + L) = I² − L² = I. For 8, we have L² + 2L = −I, so L(−L − 2I) = I, so L⁻¹ = −L − 2I.

11. It suffices to prove that v, w are linearly independent. Suppose xv + yw = 0. Apply L. Then L(w) = L(L(v)) = 0 because L² = 0. Hence L(xv) = xL(v) = 0. Since L(v) ≠ 0, it follows that x = 0. Then y = 0 because w ≠ 0.

12. (a) F is injective; its kernel is 0. (b) F is not surjective; for instance (1, 0, 0, …) is not in the image. (c) Let G(x₁, x₂, …) = (x₂, x₃, …) (drop the first coordinate). (d) No, otherwise F would have an inverse, which it does not.

13. Linearity is easily checked. To show that L is injective, it suffices to show that Ker L = {0}. Suppose L(u, w) = 0; then u + w = 0, so u = −w. By assumption U ∩ W = {0}, and u ∈ U, −w ∈ W, so u = w = 0. Hence Ker L = {0}. L is surjective because, by assumption, every element of V can be written as the sum of an element of U and an element of W.

VI, §1, p. 178

2. Let X = t(x, y). Then

    tXAX = ax² + 2bxy + dy² = a(x + (b/a)y)² + ((ad − b²)/a)y²

(for a ≠ 0). If a > 0 and ad − b² > 0, this is > 0 for (x, y) ≠ 0. If ad − b² ≤ 0, then give y any non-zero value, and let x = −by/a. Then tXAX = ((ad − b²)/a)y² ≤ 0.

VIII, §4, p. 259

3. If Av = λv with v ≠ 0, then ⟨Av, v⟩ = λ⟨v, v⟩. Since ⟨Av, v⟩ > 0 and ⟨v, v⟩ > 0, it follows that λ > 0. Pick a basis
of V consisting of eigenvectors. The vector space V can then be identified as
the space of coordinate vectors with respect to this basis. The matrix of A
then is a diagonal matrix, whose diagonal elements are the eigenvalues, and
are therefore positive. We can then use Exercise 2 to find a square root.
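A numerical sketch of this square-root construction (my own illustration, not part of the book's answer):

```python
# Diagonalize the positive definite symmetric A, take square roots of the
# eigenvalues, and conjugate back: B is symmetric, B^2 = A, and BA = AB.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric, eigenvalues 1 and 3 (both > 0)
eigenvalues, Q = np.linalg.eigh(A)
B = Q @ np.diag(np.sqrt(eigenvalues)) @ Q.T

print(np.allclose(B, B.T))        # B is symmetric
print(np.allclose(B @ B, A))      # B^2 = A
print(np.allclose(B @ A, A @ B))  # B commutes with A
```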
4. Similar to Exercise 3.
5. From t(AA) = tA tA = AA, it follows that A² is symmetric. Furthermore, for v ≠ 0,

    ⟨A²v, v⟩ = ⟨Av, Av⟩ > 0,

because Av ≠ 0, since v ≠ 0 and A is invertible.
Since t(A⁻¹) = (tA)⁻¹ = A⁻¹, it follows that A⁻¹ is symmetric. Since A is invertible, a given v can be written v = Aw for some w (namely w = A⁻¹v). Then

    ⟨A⁻¹v, v⟩ = ⟨w, Aw⟩ = ⟨Aw, w⟩ > 0.

Hence A⁻¹ is positive definite.
6. Assume (i). From the identity in the hint, we get

    ⟨tAAv, v⟩ = ⟨Av, Av⟩ > 0,

because A is invertible, so Av ≠ 0. Hence tAA is positive definite. Let U = AB⁻¹, where B² = tAA and BA = AB, so B⁻¹A = AB⁻¹. Then ⟨Uv, Uv⟩ = ⟨v, v⟩.
