(b) This is not a linear equation in and because of the terms and .
(b)
(c)
5. (a) (b)
6. (a) (b)
9. The values in (a), (d), and (e) satisfy all three equations – these 3-tuples are solutions of the system.
The 3-tuples in (b) and (c) are not solutions of the system.
10. The values in (b), (d), and (e) satisfy all three equations – these 3-tuples are solutions of the system.
The 3-tuples in (a) and (c) are not solutions of the system.
11. (a) We can eliminate from the second equation by adding times the first equation to the
second. This yields the system
The second equation is contradictory, so the original system has no solutions. The lines
represented by the equations in that system have no points of intersection (the lines are
parallel and distinct).
(b) We can eliminate from the second equation by adding times the first equation to the
second. This yields the system
The second equation does not impose any restriction on and therefore we can omit it. The
lines represented by the original system have infinitely many points of intersection. Solving the
first equation for we obtain . This allows us to represent the solution using
parametric equations
From the second equation we obtain . Substituting for into the first equation
results in . Therefore, the original system has the unique solution
The lines represented by the equations in that system have one point of intersection: .
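As an aside (the exercise's coefficients are not reproduced above, so the data below are invented), the two outcomes described in Exercise 11 can be checked mechanically with SymPy: eliminate the first unknown from the second row and inspect what remains.

    from sympy import Matrix

    def eliminate(aug):
        # Add a suitable multiple of row 1 to row 2 so the first unknown drops out of row 2.
        m = -aug[1, 0] / aug[0, 0]
        aug.row_op(1, lambda v, j: v + m * aug[0, j])
        return aug

    # Parallel, distinct lines (no solution):  x + 2y = 1,  2x + 4y = 5
    print(eliminate(Matrix([[1, 2, 1], [2, 4, 5]])))   # second row [0, 0, 3]: contradictory
    # Intersecting lines (unique solution):    x + 2y = 1,  2x + 3y = 1
    print(eliminate(Matrix([[1, 2, 1], [2, 3, 1]])))   # second row [0, -1, -1]: y = 1, x = -1

A second row of the form [0 0 c] with c nonzero is exactly the contradictory equation mentioned above; any other nonzero second row can be solved by back-substitution.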
12. We can eliminate from the second equation by adding times the first equation to the second.
This yields the system
13. (a) Solving the equation for we obtain therefore the solution set of the original
equation can be described by the parametric equations
(b) Solving the equation for we obtain therefore the solution set of the
original equation can be described by the parametric equations
(c) Solving the equation for we obtain therefore the solution set of
the original equation can be described by the parametric equations
(d) Solving the equation for we obtain therefore the solution set of the
original equation can be described by the parametric equations
(c) Solving the equation for we obtain therefore the solution set of
the original equation can be described by the parametric equations
The second equation does not impose any restriction on and therefore we can omit it.
Solving the first equation for we obtain . This allows us to represent the solution
using parametric equations
(b) We can see that the second and the third equation are multiples of the first: adding times the
first equation to the second, then adding the first equation to the third yields the system
The last two equations do not impose any restriction on the unknowns therefore we can omit
them. Solving the first equation for we obtain . This allows us to
represent the solution using parametric equations
The first equation does not impose any restriction on and therefore we can omit it. Solving
the second equation for we obtain . This allows us to represent the solution
using parametric equations
The last two equations do not impose any restriction on the unknowns therefore we can omit
them. Solving the first equation for we obtain . This allows us to represent
the solution using parametric equations
17. (a) Add times the second row to the first to obtain .
(another solution: interchange the first row and the third row to obtain ).
(another solution: add times the second row to the first to obtain ).
19. (a) Add times the first row to the second to obtain which corresponds to the
system
If then the second equation becomes , which is contradictory; thus the system becomes inconsistent.
If then we can solve the second equation for and proceed to substitute this value into
the first equation and solve for .
Consequently, for all values of the given augmented matrix corresponds to a consistent
linear system.
(b) Add times the first row to the second to obtain which corresponds to the
system
If then the second equation becomes , which does not impose any restriction on
and therefore we can omit it and proceed to determine the solution set using the first
equation. There are infinitely many solutions in this set.
If then the second equation yields and the first equation becomes .
Consequently, for all values of the given augmented matrix corresponds to a consistent linear
system.
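This style of case analysis on a parameter can be replayed symbolically; the augmented matrix below is a made-up stand-in (the exercise's entries are not shown above), with SymPy doing the row operation.

    from sympy import Matrix, symbols, simplify

    a = symbols('a')
    # Illustrative system with a parameter:  x + y = 2,  2x + a*y = 4
    aug = Matrix([[1, 1, 2],
                  [2, a, 4]])

    # Add -2 times the first row to the second.
    aug.row_op(1, lambda v, j: simplify(v - 2 * aug[0, j]))
    print(aug)   # Matrix([[1, 1, 2], [0, a - 2, 0]])
    # If a = 2 the second equation reads 0 = 0 (infinitely many solutions);
    # otherwise it forces y = 0 and then x = 2.  Either way the system is consistent.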
20. (a) Add times the first row to the second to obtain which corresponds to the
system
If then the second equation becomes , which does not impose any restriction on
and therefore we can omit it and proceed to determine the solution set using the first
equation. There are infinitely many solutions in this set.
If then the second equation is contradictory; thus the system becomes inconsistent.
Consequently, the given augmented matrix corresponds to a consistent linear system only when
.
(b) Add the first row to the second to obtain which corresponds to the system
If then the second equation becomes , which does not impose any restriction on
and therefore we can omit it and proceed to determine the solution set using the first
equation. There are infinitely many solutions in this set.
If then the second equation yields and the first equation becomes .
Consequently, for all values of the given augmented matrix corresponds to a consistent linear
system.
21. Substituting the coordinates of the first point into the equation of the curve we obtain
Repeating this for the other two points and rearranging the three equations yields
23. Solving the first equation for we obtain therefore the solution set of the original
equation can be described by the parametric equations
This equation must hold true for all real values , which requires that the coefficients associated with
the same power of on both sides must be equal. Consequently, and .
24. (a) The system has no solutions if either
at least two of the three lines are parallel and distinct, or
each pair of lines intersects at a different point (without any lines being parallel)
(b) The system has exactly one solution if either
two of the lines coincide and the third one intersects them, or
all three lines intersect at a single point (without any lines being parallel)
(c) The system has infinitely many solutions if all three lines coincide.
25.
i.e.
One solution is expected, since exactly one parabola passes through any three given points ,
, if , , and are distinct.
27.
True-False Exercises
(a) True. is a solution.
(b) False. Only multiplication by a nonzero constant is a valid elementary row operation.
(c) True. If then the system has infinitely many solutions; otherwise the system is inconsistent.
(d) True. According to the definition, is a linear equation if the 's are not
all zero. Let us assume . The values of all 's except for can be set to be arbitrary parameters,
and the equation can be used to express in terms of those parameters.
(e) False. E.g. if the equations are all homogeneous then the system must be consistent. (See True-False
Exercise (a) above.)
(f) False. If then the new system has the same solution set as the original one.
(g) True. Adding times one row to another amounts to the same thing as subtracting one row from
another.
(h) False. The second row corresponds to the equation , which is contradictory.
1.2 Gaussian Elimination
1. (a) This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row
echelon form.
(b) This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row
echelon form.
(c) This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row
echelon form.
(d) This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row
echelon form.
(e) This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row
echelon form.
(f) This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row
echelon form.
(g) This matrix has properties 1-3 but does not have property 4: the second column contains a
leading 1 and a nonzero number ( ) above it. The matrix is in row echelon form but not
reduced row echelon form.
2. (a) This matrix has properties 1-3 but does not have property 4: the second column contains a
leading 1 and a nonzero number (2) above it. The matrix is in row echelon form but not reduced
row echelon form.
(b) This matrix does not have property 1 since its first nonzero number in the third row (2) is not a
1. The matrix is not in row echelon form, therefore it is not in reduced row echelon form either.
(c) This matrix has properties 1-3 but does not have property 4: the third column contains a
leading 1 and a nonzero number (4) above it. The matrix is in row echelon form but not reduced
row echelon form.
(d) This matrix has properties 1-3 but does not have property 4: the second column contains a
leading 1 and a nonzero number (5) above it. The matrix is in row echelon form but not reduced
row echelon form.
(e) This matrix does not have property 2 since the row that consists entirely of zeros is not at the
bottom of the matrix. The matrix is not in row echelon form, therefore it is not in reduced row
echelon form either.
(f) This matrix does not have property 3 since the leading 1 in the second row is directly below the
leading 1 in the first (instead of being farther to the right). The matrix is not in row echelon
form, therefore it is not in reduced row echelon form either.
(g) This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row
echelon form.
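For matrices like the ones classified above (not reproduced here), SymPy offers a quick sanity check: a matrix is in reduced row echelon form exactly when row reduction leaves it unchanged.

    from sympy import Matrix

    def is_rref(M):
        # A matrix is in reduced row echelon form iff computing its rref changes nothing.
        return M.rref(pivots=False) == M

    A = Matrix([[1, 0, 3],
                [0, 1, 2],
                [0, 0, 0]])   # hypothetical matrix in reduced row echelon form
    B = Matrix([[1, 2, 3],
                [0, 1, 2],
                [0, 0, 1]])   # row echelon form only: nonzero entries sit above leading 1's

    print(is_rref(A), is_rref(B))   # True False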
can be rewritten as
can be rewritten as
Let . Then
can be rewritten: , , .
Let and . Then
The system of equations corresponding to this augmented matrix in row echelon form is
Back-substitution yields
The system of equations corresponding to this augmented matrix in row echelon form is
The system of equations corresponding to this augmented matrix in row echelon form is
If we assign and the arbitrary values and , respectively, the general solution is given by the
formulas
The system of equations corresponding to this augmented matrix in row echelon form
is clearly inconsistent.
The system of equations corresponding to this augmented matrix in row echelon form is
If we assign and the arbitrary values and , respectively, the general solution is given by the
formulas
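The pattern of assigning the non-leading unknowns arbitrary values and expressing the leading ones in terms of them is what SymPy's linsolve does automatically. A sketch with invented coefficients (the exercise's augmented matrix is not shown above):

    from sympy import Matrix, linsolve, symbols

    x, y, z = symbols('x y z')
    # Illustrative augmented matrix already in row echelon form:
    #   x + 2y -  z = 3
    #         y + 2z = 1
    aug = Matrix([[1, 2, -1, 3],
                  [0, 1,  2, 1]])

    print(linsolve(aug, x, y, z))
    # {(5*z + 1, 1 - 2*z, z)}, i.e. x = 1 + 5t, y = 1 - 2t, z = t for an arbitrary parameter t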
Unique solution: , , .
Solution II. This time, we shall choose the order of the elementary row operations differently in order
to avoid introducing fractions into the computation. (Since every matrix has a unique reduced row
echelon form, the exact sequence of elementary row operations being used does not matter – see part
1 of the discussion “Some Facts About Echelon Forms” on p. 21)
Unique solution: , , .
16. We present two different solutions.
Solution I uses Gauss-Jordan elimination
Unique solution: , , .
Solution II. This time, we shall choose the order of the elementary row operations differently in order
to avoid introducing fractions into the computation. (Since every matrix has a unique reduced row
echelon form, the exact sequence of elementary row operations being used does not matter – see part
1 of the discussion “Some Facts About Echelon Forms” on p. 21)
Unique solution: , , .
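The parenthetical remark about the order of operations not mattering can be demonstrated directly: two different elimination orders applied to the same matrix (a made-up 2x3 example here) end at the same reduced row echelon form.

    from sympy import Matrix, Rational

    A = Matrix([[2, 1, 3],
                [4, 3, 7]])                                 # illustrative data only

    B = A.copy()                                            # order 1: scale first (fractions appear)
    B.row_op(0, lambda v, j: Rational(1, 2) * v)            # (1/2) R1
    B.row_op(1, lambda v, j: v - 4 * B[0, j])               # R2 - 4 R1
    B.row_op(0, lambda v, j: v - Rational(1, 2) * B[1, j])  # R1 - (1/2) R2

    C = A.copy()                                            # order 2: integer arithmetic, scale last
    C.row_op(1, lambda v, j: v - 2 * C[0, j])               # R2 - 2 R1
    C.row_op(0, lambda v, j: v - C[1, j])                   # R1 - R2
    C.row_op(0, lambda v, j: Rational(1, 2) * v)            # (1/2) R1

    print(B == C == A.rref(pivots=False))                   # True: both orders give the unique rref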
If we assign and the arbitrary values and , respectively, the general solution is given by
the formulas
.
(Note that fractions in the solution could be avoided if we assigned instead, which along with
would yield , , , .)
If we assign and the arbitrary values and , respectively, the general solution is given by the
formulas
times the second row was added to the third row and
times the second row was added to the fourth row.
Unique solution: , , , .
If we assign and the arbitrary values and , respectively, the general solution is given by
the formulas
.
23. (a) The system is consistent; it has a unique solution (back-substitution can be used to solve for all
three unknowns).
(b) The system is consistent; it has infinitely many solutions (the third unknown can be assigned an
arbitrary value , then back-substitution can be used to solve for the first two unknowns).
(c) The system is inconsistent since the third equation is contradictory.
(d) There is insufficient information to decide whether the system is consistent as illustrated by
these examples:
24. (a) The system is consistent; it has a unique solution (back-substitution can be used to solve for all
three unknowns).
(b) The system is consistent; it has a unique solution (solve the first equation for the first unknown,
then proceed to solve the second equation for the second unknown and solve the third equation
last.)
(c) The system is inconsistent (adding times the first row to the second yields a row corresponding to a contradictory equation).
The system has no solutions when (since the third row of our last matrix would then
correspond to a contradictory equation ).
The system has infinitely many solutions when (since the third row of our last matrix would
then correspond to the equation ).
For all remaining values of (i.e., and ) the system has exactly one solution.
The system has no solutions when or (since the third row of our last matrix would
then correspond to a contradictory equation).
For all remaining values of (i.e., and ) the system has exactly one solution.
There is no value of for which this system has infinitely many solutions.
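For a square coefficient matrix, this kind of three-way classification can be organized around the determinant: the solution is unique exactly when the determinant is nonzero, and the exceptional parameter values are then examined one at a time. A hedged sketch with invented entries:

    from sympy import Matrix, symbols, solve

    a = symbols('a')
    # Illustrative system (not the textbook's):  x + y = 3,  2x + a*y = 6
    A = Matrix([[1, 1],
                [2, a]])

    print(solve(A.det(), a))   # [2]: a unique solution for every a != 2
    # At a = 2 the second equation is twice the first, so there are infinitely many
    # solutions; if the right-hand side 6 were replaced by, say, 7, then a = 2 would
    # instead make the system inconsistent.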
31. Adding times the first row to the second yields a matrix in row echelon form .
Adding times its second row to the first results in , which is also in row echelon form.
32.
times the third row was added to the second row and
times the third row was added to the first row.
are and .
We obtain
37. Each point on the curve yields an equation, therefore we have a system of four equations
equation corresponding to
equation corresponding to
equation corresponding to
equation corresponding to
times the third row was added to the second row and
times the third row was added to the first row.
The linear system has a unique solution: , , , . These are the coefficient
values required for the curve to pass through the four given points.
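The construction used in this exercise (one linear equation per point, then a square system in the unknown coefficients) is easy to mimic numerically. The points below are invented for illustration; the exercise's points are not listed above.

    import numpy as np

    # Fit y = a3*x**3 + a2*x**2 + a1*x + a0 through four hypothetical points.
    pts = [(-1.0, 0.0), (0.0, 1.0), (1.0, 2.0), (2.0, 9.0)]

    A = np.array([[x**3, x**2, x, 1.0] for x, _ in pts])   # one row per point
    y = np.array([y for _, y in pts])

    print(np.linalg.solve(A, y))   # [1. 0. 0. 1.] -> y = x**3 + 1; unique since the x-values differ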
38. Each point on the curve yields an equation, therefore we have a system of three equations
equation corresponding to
equation corresponding to
equation corresponding to
The augmented matrix of this system has the reduced row echelon form
(For instance, letting the free variable have the value yields , , and .)
39. Since the homogeneous system has only the trivial solution, its augmented matrix must be possible to
reduce via a sequence of elementary row operations to the reduced row echelon form .
Applying the same sequence of elementary row operations to the augmented matrix of the
nonhomogeneous system yields the reduced row echelon form where , , and are
some real numbers. Therefore, the nonhomogeneous system has one solution.
40. (a) 3 (this will be the number of leading 1's if the matrix has no rows of zeros)
(b) 5 (if all entries in are 0)
(c) 2 (this will be the number of rows of zeros if each column contains a leading 1)
41. (a) There are eight possible reduced row echelon forms:
, , , , , , , and
, , , , , ,
, , , , , ,
, , , and .
42. (a) Either the three lines properly intersect at the origin, or two of them completely overlap and the
other one intersects them at the origin.
(b) All three lines completely overlap one another.
43. (a) We consider two possible cases: (i) , and (ii) .
(i) If then the assumption implies that and . Gauss-Jordan
elimination yields
We assumed
In both cases ( as well as ) we established that the reduced row echelon form of
is provided that .
(b) Applying the same elementary row operation steps as in part (a) the augmented matrix
will be transformed to a matrix in reduced row echelon form where
and are some real numbers. We conclude that the given linear system has exactly one
solution: , .
True-False Exercises
(a) True. A matrix in reduced row echelon form has all properties required for the row echelon form.
(b) False. For instance, interchanging the rows of yields a matrix that is not in row echelon form.
1. (a) Undefined (the number of columns in does not match the number of rows in )
(b) Defined; matrix
(c) Defined; matrix
(d) Defined; matrix
(e) Defined; matrix
(f) Defined; matrix
2. (a) Defined; matrix
(b) Undefined (the number of columns in does not match the number of rows in )
(c) Defined; matrix
(d) Defined; matrix
(e) Defined; matrix
3. (a)
(b)
(c)
(d)
(f)
(g)
(h)
(i)
(j)
(k)
4. (a)
(b)
(c)
(e)
(f)
(g)
(h)
(i)
(k)
5. (a)
(b) Undefined (the number of columns of does not match the number of rows in )
(c)
(d)
(e)
(f)
(g)
(h)
(i)
(j)
(k)
(l)
6. (a)
(c)
(d)
(e)
(f)
second column of
third column of
second column of
third column of
second column of
third column of
second column of
third column of
15.
16.
17.
18.
19.
20.
21.
22.
After subtracting the first equation from the fourth, adding the second to the third, and back-substituting,
we obtain the solution: , , , and .
24. The given matrix equation is equivalent to the linear system
After subtracting the first equation from the second, adding the third to the fourth, and back-substituting,
we obtain the solution: , , , and .
25. (a) If the th row vector of is then it follows from Formula (9) in Section 1.3 that
th row vector of
(b) If the th column vector of is then it follows from Formula (8) in Section 1.3 that
(c) (d)
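The two formulas cited in parts (a) and (b), namely that the i-th row of AB is the i-th row of A times B and the j-th column of AB is A times the j-th column of B, can be spot-checked on arbitrary matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-5, 5, size=(3, 4))
    B = rng.integers(-5, 5, size=(4, 2))
    AB = A @ B

    i, j = 1, 0
    print(np.array_equal(AB[i, :], A[i, :] @ B))   # True: row i of AB = (row i of A) B
    print(np.array_equal(AB[:, j], A @ B[:, j]))   # True: column j of AB = A (column j of B)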
yields
Assuming the entries of are real numbers that do not depend on , , and , this requires that the
coefficients corresponding to the same variable on both sides of each equation must match.
Assuming the entries of are real numbers that do not depend on , , and , it follows that no real
numbers , , and exist for which the first equation is satisfied for all , , and . Therefore no
matrix with real number entries can satisfy the given condition.
(Note that if were permitted to depend on , , and , then solutions do exist; e.g.,
.)
(b) The matrix represents the decrease in sales of each item from May to June.
(c)
(d)
(e) The entry in the matrix represents the total number of items sold in May.
True-False Exercises
(a) True. The main diagonal is only defined for square matrices.
(b) False. An matrix has row vectors and column vectors.
(d) False. The th row vector of can be computed by multiplying the th row vector of by .
(e) True. Using Formula (14), .
(f) False. E.g., if and then the trace of is , which does not equal
.
(h) True. The main diagonal entries in a square matrix are the same as those in .
(i) True. Since is a matrix, it follows from being a matrix that must be a
matrix. Consequently, is a matrix.
(j) True.
(k) True. The equality of the matrices and implies that for all and .
Adding to both sides yields for all and . Consequently, the matrices and are
equal.
(m) True. If is a matrix and is an matrix then being defined requires and
being defined requires . For the matrix to be possible to add to the matrix ,
we must have .
(n) True. If the th column vector of is then it follows from Formula (8) in Section 1.3 that
(o) False. E.g., if and then does not have a column of zeros even though
does.
1. (a) (b)
(c) (d)
2. (a)
(b)
(c)
(d)
3. (a) (b)
4. (a) (b)
inverse is .
inverse is .
inverse is .
9. The determinant of ,
is
11. ;
12. ;
13. ;
14. ; ;
15. From part (a) of Theorem 1.4.7 it follows that the inverse of is .
Thus . Consequently, .
16. From part (a) of Theorem 1.4.7 it follows that the inverse of is .
Thus Consequently, .
17. From part (a) of Theorem 1.4.7 it follows that the inverse of is .
Thus .
Consequently,
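The manipulations in Exercises 15-17 rest on standard inverse identities such as (A^-1)^-1 = A, (kA)^-1 = (1/k)A^-1, and (A^n)^-1 = (A^-1)^n, which can be spot-checked on any invertible matrix; for example:

    from sympy import Matrix, eye

    A = Matrix([[2, 1],
                [5, 3]])        # det = 1, so A is invertible (illustrative choice)
    k = 7

    print(A.inv().inv() == A)               # (A^-1)^-1 = A
    print((k * A).inv() == A.inv() / k)     # (kA)^-1 = (1/k) A^-1
    print((A**3).inv() == A.inv()**3)       # (A^n)^-1 = (A^-1)^n
    print(A * A.inv() == eye(2))            # the defining property of the inverse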
19. (a)
(b)
(c)
20. (a)
(b)
(c)
23. ; .
24. ; .
25. ,
26. ,
27. ,
28. ,
29. ,
30.
Theorem 1.4.1(e)
Theorem 1.4.1(i)
Theorem 1.4.1(m)
Property on p. 43
Theorem 1.4.1(b)
equal .
, , , , .
Note that these eight are not the only solutions - e.g., can be , etc.
If the th column vector of is then it follows from Formula (8) in Section 1.3 that
Consequently, no matrix can be found to make the product ; thus does not have an
inverse.
36. If the th and th row vectors of are equal then it follows from Formula (9) in Section 1.3 that
th row vector of th row vector of .
Consequently, no matrix can be found to make the product ; thus does not have an
inverse.
If the th and th column vectors of are equal then it follows from Formula (8) in Section 1.3 that
the th column vector of the th column vector of
Consequently, no matrix can be found to make the product ; thus does not have an
inverse.
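The conclusion of these exercises, that a square matrix with two equal rows (or two equal columns, or a column of zeros) cannot be invertible, can also be seen from the determinant; a small invented example:

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [4, 5, 6],
                [1, 2, 3]])     # first and third rows equal (hypothetical matrix)

    print(A.det())              # 0, so A has no inverse; A.inv() would raise an exception here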
Setting the first columns on both sides equal yields the system
Subtracting the second and third equations from the first leads to . Therefore
and (after substituting this into the remaining equations) .
The second and the third columns can be treated in a similar manner to result in
Although this corresponds to a system of nine equations, it is sufficient to examine just the three
equations corresponding to the first column
to see that subtracting the second and third equations from the first leads to a contradiction .
We conclude that is not invertible.
39.
Theorem 1.4.6
Theorem 1.4.7(a)
Theorem 1.4.1(c)
Property on p. 43
40.
Theorem 1.4.6
Theorem 1.4.7(a)
Theorem 1.4.1(c)
Property on p. 43
.
42. Yes, it is true. From part (e) of Theorem 1.4.8, it follows that . This
statement can be extended to factors (see p. 49) so that
43. (a) Assuming is invertible, we can multiply (on the left) each side of the equation by :
Theorem 1.4.1(c)
Property on p. 43
(b) If is not an invertible matrix then does not generally imply as evidenced by
Example 3.
44. Invertibility of implies that is a square matrix, which is all that is required.
By repeated application of Theorem 1.4.1(m) and (l), we have
45. (a)
Property on p. 43
Theorem 1.4.1(a)
46. (a)
Property on p. 43
is idempotent so
(b)
47. Applying Theorem 1.4.1(d) and (g), property , and the assumption we can write
48.
True-False Exercises
(a) False. and are inverses of one another if and only if .
If the th column vector of is then it follows from Formula (8) in Section 1.3 that
Consequently, no matrix can be found to make the product ; thus does not have an
inverse.
(k) False. E.g. and are both invertible but is not.
1. (a) Elementary matrix (corresponds to adding times the first row to the second row )
(b) Not an elementary matrix
(c) Not an elementary matrix
(d) Not an elementary matrix
7. (a) ( was obtained from by interchanging the first row and the third row)
(b) ( was obtained from by interchanging the first row and the third row)
(c) ( was obtained from by adding times the first row to the third row)
(d) ( was obtained from by adding times the first row to the third row)
(c) ( was obtained from by adding times the third row to the second row)
(d) ( was obtained from by adding times the third row to the second row)
The inverse is .
A row of zeros was obtained on the left side, therefore is not invertible.
10. (a) (Method I: using Theorem 1.4.5)
The determinant of , , is nonzero. Therefore is
invertible and its inverse is .
The inverse is .
A row of zeros was obtained on the left side, therefore the matrix is not invertible.
11. (a) The identity matrix was adjoined to the given matrix.
times the first row was added to the second row and
times the third row was added to the second row and
times the third row was added to the first row.
The inverse is .
times the first row was added to the second row and
times the first row was added to the third row.
A row of zeros was obtained on the left side, therefore the matrix is not invertible.
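The procedure used throughout these exercises (adjoin the identity, row reduce, and read the inverse off the right-hand half) can be replayed in SymPy on a sample matrix (illustrative, not the exercise's):

    from sympy import Matrix, eye

    A = Matrix([[1, 2, 3],
                [2, 5, 3],
                [1, 0, 8]])                     # invertible sample matrix

    R = A.row_join(eye(3)).rref(pivots=False)   # reduce [ A | I ] to [ I | A^-1 ]
    A_inv = R[:, 3:]                            # the right half is the inverse

    print(A_inv == A.inv())                     # True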
12. (a) The identity matrix was adjoined to the given matrix.
times the third row was added to the second row and
2 times the third row was added to the first row.
The inverse is .
A row of zeros was obtained on the left side, therefore the matrix is not invertible.
The inverse is .
The inverse is .
The inverse is .
16.
The identity matrix was adjoined to the given matrix.
The inverse is .
The inverse is .
The inverse is .
19. (a) The identity matrix was adjoined to the given matrix.
The inverse is .
The inverse is .
20. (a) The identity matrix was adjoined to the given matrix.
The inverse is .
The inverse is .
21. It follows from parts (a) and (d) of Theorem 1.5.3 that a square matrix is invertible if and only if its
reduced row echelon form is the identity matrix.
times the first row was added to the second row and
times the first row was added to the third row.
Otherwise (if and ), multiplying the second row by and multiplying the third row by
would result in a row echelon form with 1's on the main diagonal. Subsequent elementary row
operations would then lead to the identity matrix.
We conclude that for any value of other than and the matrix is invertible.
22. It follows from parts (a) and (d) of Theorem 1.5.3 that a square matrix is invertible if and only if its
reduced row echelon form is the identity matrix.
Otherwise (if ), multiplying the last row by would result in a row echelon form with
1’s on the main diagonal. Subsequent elementary row operations would then lead to the identity
matrix.
We conclude that for any value of other than , and the matrix is invertible.
23. We perform a sequence of elementary row operations to reduce the given matrix to the identity
matrix. As we do so, we keep track of each corresponding elementary matrix:
Since , then
and
Note that this answer is not unique since a different sequence of elementary row operations (and the
corresponding elementary matrices) could be used instead.
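The bookkeeping described here can be mimicked by applying each row operation to an identity matrix, which produces the corresponding elementary matrix; a sketch on an invented 2x2 matrix:

    from sympy import Matrix, eye, Rational

    A = Matrix([[1, 2],
                [3, 4]])                       # illustrative matrix, not the exercise's

    def elem(n, op):
        # Apply one elementary row operation to the n x n identity matrix.
        E = eye(n).as_mutable()
        op(E)
        return E

    E1 = elem(2, lambda M: M.row_op(1, lambda v, j: v - 3 * M[0, j]))       # R2 <- R2 - 3 R1
    E2 = elem(2, lambda M: M.row_op(1, lambda v, j: Rational(-1, 2) * v))   # R2 <- (-1/2) R2
    E3 = elem(2, lambda M: M.row_op(0, lambda v, j: v - 2 * M[1, j]))       # R1 <- R1 - 2 R2

    print(E3 * E2 * E1 * A == eye(2))              # the three operations reduce A to I
    print(A == E1.inv() * E2.inv() * E3.inv())     # A as a product of elementary matrices

As in the exercise, a different sequence of operations would give a different, equally valid factorization.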
24. We perform a sequence of elementary row operations to reduce the given matrix to the identity
matrix. As we do so, we keep track of each corresponding elementary matrix:
Since , and .
Note that this answer is not unique since a different sequence of elementary row operations (and the
corresponding elementary matrices) could be used instead.
25. We perform a sequence of elementary row operations to reduce the given matrix to the identity
matrix. As we do so, we keep track of each corresponding elementary matrix:
Since , we have
and .
Note that this answer is not unique since a different sequence of elementary row operations (and the
corresponding elementary matrices) could be used instead.
26. We perform a sequence of elementary row operations to reduce the given matrix to the identity
matrix. As we do so, we keep track of each corresponding elementary matrix:
Since , we have
and
Note that this answer is not unique since a different sequence of elementary row operations (and the
corresponding elementary matrices) could be used instead.
27. Let us perform a sequence of elementary row operations to produce from . As we do so, we keep
track of each corresponding elementary matrix:
Note that this answer is not unique since a different sequence of elementary row operations (and the
corresponding elementary matrices) could be used instead.
28. Let us perform a sequence of elementary row operations to produce from . As we do so, we keep
track of each corresponding elementary matrix:
Note that a different sequence of elementary row operations (and the corresponding elementary
matrices) could be used instead. (However, since both and in this exercise are invertible, is
uniquely determined by the formula .)
29. cannot result from interchanging two rows of (since that would create a nonzero
True-False Exercises
(a) False. An elementary matrix results from performing a single elementary row operation on an
identity matrix; a product of two elementary matrices would correspond to a sequence of two such
operations instead, which generally is not equivalent to a single elementary operation.
(b) True. This follows from Theorem 1.5.2.
(c) True. If and are row equivalent then there exist elementary matrices such that
. Likewise, if and are row equivalent then there exist elementary matrices
such that . Combining the two equalities yields therefore and
are row equivalent.
(d) True. A homogeneous system has either one solution (the trivial solution) or infinitely many
solutions. If is not invertible, then by Theorem 1.5.3 the system cannot have just one solution.
Consequently, it must have infinitely many solutions.
(e) True. If the matrix is not invertible then by Theorem 1.5.3 its reduced row echelon form is not .
However, the matrix resulting from interchanging two rows of (an elementary row operation)
must have the same reduced row echelon form as does, so by Theorem 1.5.3 that matrix is not
invertible either.
(f) True. Adding a multiple of the first row of a matrix to its second row is an elementary row operation.
Denoting the corresponding elementary matrix by , we can write , so the
resulting matrix is invertible if is.
Since , Theorem 1.6.2 states that the system has exactly one solution :
, i.e., .
Since , Theorem 1.6.2 states that the system has exactly one solution :
, i.e., .
Since , Theorem 1.6.2 states that the system has exactly one solution
: , i.e., and .
Since , Theorem 1.6.2 states that the system has exactly one solution
: , i.e., and .
times the first row was added to the second row and
times the first row was added to the third row.
Since , Theorem 1.6.2 states that the system has exactly one solution :
, i.e., and .
Since , Theorem 1.6.2 states that the system has exactly one solution
: ,
i.e., , , , and .
Since , Theorem 1.6.2 states that the system has exactly one solution :
, i.e., , .
times the first row was added to the second row and
times the first row was added to the third row.
Since , Theorem 1.6.2 states that the system has exactly one solution
: , i.e.,
, , and .
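Theorem 1.6.2's formula x = A^-1 b is easy to check against a direct solver; the numbers below are invented for illustration.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [5.0, 3.0]])       # invertible coefficient matrix (det = 1)
    b = np.array([4.0, 11.0])

    x_via_inverse = np.linalg.inv(A) @ b    # x = A^-1 b, as in Theorem 1.6.2
    x_direct = np.linalg.solve(A, b)        # preferable numerically in practice

    print(np.allclose(x_via_inverse, x_direct), x_direct)   # True [1. 2.]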
times the first row was added to the second row and
times the first row was added to the third row.
(iii) (iv) ,
The system is consistent for all values of , , , and that satisfy the equations
and .
These equations form a linear system in the variables , , , and whose augmented matrix
has the reduced row echelon form . Therefore the system
is consistent if and .
18. (a) The equation can be rewritten as , which yields and
.
This is a matrix form of a homogeneous linear system - to solve it, we reduce its augmented
matrix to a row echelon form.
times the first row was added to the second row and
times the first row was added to the third row.
Using we obtain
times the third row was added to the second row and
times the third row was added to the first row.
Using we obtain
True-False Exercises
(a) True. By Theorem 1.6.1, if a system of linear equations has more than one solution then it must have
infinitely many.
(b) True. If is a square matrix such that has a unique solution then the reduced row echelon
form of must be . Consequently, must have a unique solution as well.
(c) True. Since is a square matrix, by Theorem 1.6.3(b) implies .
Therefore, .
(d) True. Since and are row equivalent matrices, it must be possible to perform a sequence of
elementary row operations on resulting in . Let be the product of the corresponding elementary
matrices, i.e., . Note that must be an invertible matrix thus .
Any solution of is also a solution of since .
Likewise, any solution of is also a solution of since .
(e) True. If then . Consequently, is a solution of .
(f) True. is equivalent to , which can be rewritten as . By Theorem
1.6.4, this homogeneous system has a unique solution (the trivial solution) if and only if its coefficient
matrix is invertible.
(g) True. If were invertible, then by Theorem 1.6.5 both and would be invertible.
1. (a) The matrix is upper triangular. It is invertible (its diagonal entries are both nonzero).
(b) The matrix is lower triangular. It is not invertible (its diagonal entries are zero).
(c) This is a diagonal matrix, therefore it is also both upper and lower triangular. It is invertible (its
diagonal entries are all nonzero).
(d) The matrix is upper triangular. It is not invertible (its diagonal entries include a zero).
2. (a) The matrix is lower triangular. It is invertible (its diagonal entries are both nonzero).
(b) The matrix is upper triangular. It is not invertible (its diagonal entries are zero).
(c) This is a diagonal matrix, therefore it is also both upper and lower triangular. It is invertible (its
diagonal entries are all nonzero).
(d) The matrix is lower triangular. It is not invertible (its diagonal entries include a zero).
3.
4.
5.
6.
7. , ,
8. , ,
9. , ,
10. ,
11.
12.
13.
14.
19. From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are
all nonzero. Since this upper triangular matrix has a 0 on its diagonal, it is not invertible.
20. From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are
all nonzero. Since this upper triangular matrix has all three diagonal entries nonzero, it is invertible.
21. From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are
all nonzero. Since this lower triangular matrix has all four diagonal entries nonzero, it is invertible.
22. From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are
all nonzero. Since this lower triangular matrix has a 0 on its diagonal, it is not invertible.
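Theorem 1.7.1(c), used in each of the four exercises above, can be spot-checked numerically: for a triangular matrix the determinant is the product of the diagonal entries, so invertibility hinges on those entries alone. A small invented example:

    import numpy as np

    U = np.array([[2.0, 7.0, -1.0],
                  [0.0, 3.0,  4.0],
                  [0.0, 0.0,  5.0]])    # upper triangular, all diagonal entries nonzero

    print(np.prod(np.diag(U)))                                 # 30.0
    print(np.allclose(np.linalg.det(U), np.prod(np.diag(U))))  # True -> invertible

    U[2, 2] = 0.0                       # introduce a zero diagonal entry
    print(np.linalg.det(U))             # 0 (up to rounding) -> no longer invertible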
25. The matrix is symmetric if and only if . In order for to be symmetric, we must have
.
26. The matrix is symmetric if and only if the following equations are satisfied
29. By Theorem 1.7.1, is also an upper triangular or lower triangular invertible matrix. Its diagonal
entries must all be nonzero - they are reciprocals of the corresponding diagonal entries of the matrix
.
30. By Theorem 1.4.8(e), . Therefore we have:
,
, and
since is symmetric.
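The transpose rule used here, (AB)^T = B^T A^T, together with the fact that A A^T is always symmetric, is easy to confirm on random matrices:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 4))
    B = rng.standard_normal((4, 2))

    print(np.allclose((A @ B).T, B.T @ A.T))   # (AB)^T = B^T A^T
    S = A @ A.T
    print(np.allclose(S, S.T))                 # A A^T is symmetric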
31.
32. For example (there are seven other possible answers, e.g., ,
, etc.)
33.
34. (a) Theorem 1.4.8(e) states that (if the multiplication can be performed). Therefore,
is symmetric.
times the first row was added to the second row and
times the first row was added to the third row.
This is a zero matrix whenever the value of , , and is either or . We conclude that the
following are all diagonal matrices that satisfy the equation:
If we assign the arbitrary value , the general solution is given by the formulas
, , , .
43. No. If , , and then which does
not generally equal . (The product of skew-symmetric matrices that commute is symmetric.)
is skew-symmetric since
45. (a)
Theorem 1.4.9(d)
Theorem 1.4.7(c)
(b)
Theorem 1.4.8(a)
Theorem 1.4.8(b)
Theorem 1.4.1(h)
Theorem 1.4.8(c)
Theorem 1.4.1(i)
Theorem 1.4.8(d)
Theorem 1.4.1(l)
True-False Exercises
(a) True. Every diagonal matrix is symmetric: its transpose equals the original matrix.
(b) False. The transpose of an upper triangular matrix is a lower triangular matrix.
(d) True. Mirror images of entries across the main diagonal must be equal - see the margin note next to
Example 4.
(e) True. All entries below the main diagonal must be zero.
(f) False. By Theorem 1.7.1(d), the inverse of an invertible lower triangular matrix is a lower triangular
matrix.
(g) False. A diagonal matrix is invertible if and only if all of its diagonal entries are nonzero (positive or
negative).
(h) True. The entries above the main diagonal are zero.
(i) True. If is upper triangular then is lower triangular. However, if is also symmetric then it
follows that must be both upper triangular and lower triangular. This requires to be a
diagonal matrix.
(j) False. For instance, neither nor is symmetric even though is.
(k) False. For instance, neither nor is upper triangular even though
is.
(b) ;
15. The given equations can be expressed in matrix form as therefore the
By matrix multiplication, .
16. The given equations can be expressed in matrix form as therefore the
By matrix multiplication,
matches .
matches .
matches
matches .
19. (a)
(b)
20. (a)
(b)
and .
(b) If and then
and .
22. (a) If and then
and
.
(b) If and then
and .
23. (a) The homogeneity property fails to hold since does not
generally equal . (It can be shown that the additivity property
fails to hold as well.)
(b) The homogeneity property fails to hold since
does not generally equal . (It can be shown that the
additivity property fails to hold as well.)
24. (a) The homogeneity property fails to hold since does not generally equal
. (It can be shown that the additivity property fails to hold
as well.)
and .
and .
30. For instance, satisfies the property , but the homogeneity property
fails to hold since does not generally equal
.
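Operators like the ones in Exercises 23, 24, and 30 can be ruled out as matrix transformations by a single numerical counterexample to homogeneity, T(cx) = cT(x). The map below is a hypothetical nonlinear example, not the one from the exercises.

    import numpy as np

    def T(v):
        # Hypothetical transformation that squares the first component (nonlinear).
        x, y = v
        return np.array([x**2, y])

    v = np.array([1.0, 2.0])
    c = 3.0
    print(T(c * v))     # [9. 6.]
    print(c * T(v))     # [3. 6.] -> homogeneity fails, so T is not a matrix transformation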
31. (a) , , .
True-False Exercises
(a) False. The domain of is .
(b) False. The codomain of is .
(c) True. Since the statement requires the given equality to hold for some vector in , we can let .
(d) False. (Refer to Theorem 1.8.3.)
(e) True. The columns of are .
(f) False. The given equality must hold for every matrix transformation since it follows from the
homogeneity property.
(g) False. The homogeneity property fails to hold since does not generally equal
.
1. There are four nodes, which we denote by , , , and (see the figure on the left).
We determine the unknown flow rates , , and assuming the counterclockwise direction (if any
of these quantities are found to be negative then the flow direction along the corresponding branch
will be reversed).
By inspection, this system has a unique solution , , . This yields the flow
rates and directions shown in the figure on the right.
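Flow problems of this kind reduce to one conservation equation per node (flow in equals flow out). The small network below is invented for illustration, since the exercise's figure is not reproduced here; SymPy's linsolve handles the resulting system.

    from sympy import symbols, linsolve

    x1, x2, x3 = symbols('x1 x2 x3')

    # Invented network A -> B -> C -> D with unknown branch flows x1, x2, x3
    # and known external flows at each node (each expression is set equal to 0).
    eqs = [55 - x1,            # node A: 55 in from outside, x1 out along A->B
           x1 - (20 + x2),     # node B: x1 in, 20 out to the outside, x2 out along B->C
           (x2 + 15) - x3,     # node C: x2 and 15 in, x3 out along C->D
           x3 - 50]            # node D: x3 in, 50 out to the outside

    print(linsolve(eqs, x1, x2, x3))   # {(55, 35, 50)}

Here the four node equations determine the three unknowns uniquely; in general one node equation is redundant because total inflow equals total outflow.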
2. (a) There are five nodes – each of them corresponds to an equation.
top left
top right
bottom left
bottom middle
bottom right
This system can be rearranged as follows
(b) The augmented matrix of the linear system obtained in part (a) has the reduced row echelon
top left
top right
bottom left
bottom right
This system can be rearranged as follows
(b) The augmented matrix of the linear system obtained in part (a)
has the reduced row echelon form . If we assign the arbitrary value
top left
top middle
top right
bottom left
bottom middle
bottom right
We rewrite the system as follows
(b) The augmented matrix of the linear system obtained in part (a) has the reduced row echelon
form . If we assign the free variables the arbitrary values and , respectively, the general solution is
given by the formulas, subject to the restriction that all seven values must be nonnegative. Obviously,
we need both and , which in turn imply and . Additionally, imposing the three inequalities , ,
and results in the set of allowable and values depicted in the grey region on the graph (figure not
reproduced here).
(c) Setting in the general solution obtained in part (b) would result in the negative value
which is not allowed (the traffic would flow the wrong way along the street
marked as .)
5. From Kirchhoff's current law at each node, we have Kirchhoff's voltage law yields
(An equation corresponding to the outer loop is a combination of these two equations.)
The linear system can be rewritten as
Top Left Node
Top Right Node
Bottom Left Node
Bottom Right Node
Kirchhoff's voltage law yields
The solution is A, A.
8. From Kirchhoff's current law at each node, we have Kirchhoff's voltage law yields
The number of atoms of carbon, hydrogen, and oxygen on both sides must be equal:
The number of atoms of carbon, hydrogen, and oxygen on both sides must be equal:
The number of atoms of carbon, hydrogen, oxygen, and fluorine on both sides must be equal:
The number of atoms of carbon, hydrogen, and oxygen on both sides must be equal:
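Each of these balancing problems is a homogeneous linear system with one equation per element; the balanced equation comes from the smallest positive integer solution. As a generic illustration (propane combustion, not one of the reactions in these exercises):

    from sympy import Matrix

    # Balance  a C3H8 + b O2 -> c CO2 + d H2O.  One row per element;
    # product-side unknowns enter with a minus sign.
    A = Matrix([[3, 0, -1,  0],    # carbon:   3a = c
                [8, 0,  0, -2],    # hydrogen: 8a = 2d
                [0, 2, -2, -1]])   # oxygen:   2b = 2c + d

    v = A.nullspace()[0]           # e.g. Matrix([1/4, 5/4, 3/4, 1]); any multiple balances it
    print((4 * v).T)               # [1, 5, 3, 4]:  C3H8 + 5 O2 -> 3 CO2 + 4 H2O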
17. (a) We are looking for a polynomial of the form such that and
. We obtain a linear system
True-False Exercises
(a) False. In general, networks may or may not satisfy the property of flow conservation at each node
(although the ones discussed in this section do).
(b) False. When a current passes through a resistor, there is a drop in the electrical potential in a circuit.
(c) True.
(d) False. A chemical equation is said to be balanced if for each type of atom in the reaction, the same
number of atoms appears on each side of the equation.
(e) False. By Theorem 1.9.1, this is true if the points have distinct -coordinates.
1. (a)
The Leontief equation leads to the linear system with the augmented matrix
. Its reduced row echelon form is
2. (a)
The Leontief equation leads to the linear system with the augmented matrix
. Its reduced row echelon form is .
To meet the consumer demand, the economy must produce $300,000 worth of food and
$400,000 worth of housing.
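Behind answers like this one is the open Leontief equation (I - C)x = d, with C the consumption matrix and d the outside demand. A hedged sketch with invented numbers (not the exercise's data):

    import numpy as np

    C = np.array([[0.5, 0.2],            # hypothetical consumption matrix: column j lists the
                  [0.1, 0.4]])           # inputs sector j needs per dollar of its own output
    d = np.array([50_000.0, 30_000.0])   # hypothetical outside demand, in dollars

    x = np.linalg.solve(np.eye(2) - C, d)   # production vector solving (I - C) x = d
    print(np.round(x, 2))                   # dollar value each sector must produce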
3. (a)
The Leontief equation leads to the linear system with the augmented matrix
4. (a)
The Leontief equation leads to the linear system with the augmented matrix
5. ;
6. ;
The Leontief equation leads to the linear system with the augmented matrix
On the other hand, the Leontief equation leads to the linear system with the
An economic explanation of the result in part (a) is that therefore the second sector
consumes all of its own output, making it impossible to meet any outside demand for its
products.
8.
Suppose the open sector demands dollars worth from each product-producing sector, i.e., the outside
demand vector is . The Leontief equation leads to the linear system with the
We conclude that the first sector must produce the greatest dollar value to meet the specified open
sector demand.
9. From the assumption , it follows that the determinant of
is nonzero. Consequently, the Leontief matrix
True-False Exercises
(a) False. Sectors that do not produce outputs are called open sectors.
(b) True.
(c) False. The th row vector of a consumption matrix contains the monetary values required of the th
sector by the other sectors for each of them to produce one monetary unit of output.
If we assign and the arbitrary values and , respectively, the general solution is given by the
formulas
times the first row was added to the second row and
times the first row was added to the third row.
This matrix is both in row echelon form and in reduced row echelon form. It corresponds to the
system of equations
times the first row was added to the second row and
times the first row was added to the third row.
Although this matrix is not in row echelon form yet, clearly it corresponds to an inconsistent linear
system
since the third equation is contradictory. (We could have performed additional elementary row operations to reach a row echelon form, but there is no need to do so.)
The positivity of the three variables requires that , , and . The first
inequality can be rewritten as , while the second inequality is equivalent to . All three
unknowns are positive whenever . There are three integer values of in this interval:
, , and . Of those, only yields integer values for the remaining variables: , .
8. Let and denote the number of pennies, nickels, and dimes, respectively. Since there are 13
coins, we must have
On the other hand, the total value of the coins is 83 cents so that
The resulting system of equations has the augmented matrix whose reduced row
echelon form is
When , all three variables are nonnegative. Of the four integer values inside this
interval ( , , , and ), only yields integer values for and .
We conclude that the box has to contain 3 pennies, 4 nickels, and 6 dimes.
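Because the numbers here are small, the integer reasoning above can also be confirmed by brute force over all nonnegative coin counts:

    # Exhaustive check: 13 coins worth 83 cents, using pennies (1c), nickels (5c), dimes (10c).
    solutions = [(p, n, d)
                 for p in range(14)
                 for n in range(14 - p)
                 for d in [13 - p - n]
                 if p + 5 * n + 10 * d == 83]
    print(solutions)    # [(3, 4, 6)] -> 3 pennies, 4 nickels, 6 dimes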
(a) the system has a unique solution if and (multiplying the rows by , , and ,
(b) the system has a one-parameter solution if and (multiplying the first two rows by
The system has no solutions when and (since the third row of our last matrix would
then correspond to a contradictory equation).
11. For the product to be defined, must be a matrix. Letting we can write
which has a unique solution , , , . (An easy way to solve this system is to first
split it into two smaller systems. The system , , involves
and only, whereas the remaining six equations involve just and .) We conclude that .
12. Substituting the values , and into the original system yields a system of three
equations in the unknowns and :
The augmented matrix of this system has the reduced row echelon form . We
conclude that for the original system to have , , and as its solution, we must let
, and .
(Note that it can also be shown that the system with , and has , ,
and as its only solution. One way to do that would be to verify that the reduced row echelon
form of the coefficient matrix of the original system with these specific values of and is the
identity matrix.)
therefore the given matrix equation can be rewritten as a system of linear equations:
The augmented matrix of this system has the reduced row echelon form
, , , , , and .
(An alternative to dealing with this large system is to split it into two smaller systems instead:
the first three equations involve , , and only, whereas the remaining three equations involve
just , , and . Since the coefficient matrix for both systems is the same, we can follow the
procedure of Example 2 in Section 1.6; the reduced row echelon form of the matrix
is .)
Yet another way of solving this problem would be to determine the inverse
both sides of the given matrix equation on the right by this inverse to determine :
therefore the given matrix equation can be rewritten as a system of linear equations:
The augmented matrix of this system has the reduced row echelon form so
is .)
therefore the given matrix equation can be rewritten as a system of linear equations:
The augmented matrix of this system has the reduced row echelon form
We conclude that .
14. (a) From Theorem 1.4.1, the properties (page 43) and the assumption , we
have
The reduced row echelon form of the augmented matrix of this system is . Therefore,
the values , , and result in a polynomial that satisfies the conditions specified.
17. When multiplying the matrix by itself, each entry in the product equals . Therefore,
Property on p. 43
Theorem 1.4.1(m)