
It is sometimes convenient to think of the determinant of a matrix A as a function of its columns written as vectors v1, v2, …, vn. We write det(A) = det(v1, v2, …, vn). Then, by the above observations, we have:
1. det(v1, …, 0, …, vn) = 0
2. det(v1, …, vi, …, vj, …, vn) = –det(v1, …, vj, …, vi, …, vn)
3. det(v1, …, v, …, v, …, vn) = 0
We also have:
4. det(v1, …, v + w, …, vn) = det(v1, …, v, …, vn) + det(v1, …, w, …, vn)
and
5. det(v1, …, kv, …, vn) = k det(v1, …, v, …, vn)
and consequently
6. The value of det(A) is not altered if a multiple of one column is added to another column: det(v1, …, vi + kvj, …, vn) = det(v1, …, vi, …, vn)
These results follow from the definition of the determinant. The corresponding results about rows are also valid.
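A minimal numerical sketch (assuming Python with the numpy library) can verify, for instance, properties 4 and 6 for a random 3 × 3 matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
v = rng.random(3)
w = rng.random(3)

# Property 4: the determinant is additive in any single column (here column 1).
A1, A2, A3 = A.copy(), A.copy(), A.copy()
A1[:, 1] = v + w
A2[:, 1] = v
A3[:, 1] = w
assert np.isclose(np.linalg.det(A1), np.linalg.det(A2) + np.linalg.det(A3))

# Property 6: adding a multiple of one column to another leaves det(A) unchanged.
B = A.copy()
B[:, 0] += 5 * A[:, 2]
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
```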
CRAMER'S RULE shows that the notion of a determinant is precisely the concept needed to solve simultaneous linear equations. We have:
A system of simultaneous linear equations has a (unique) solution if the determinant of the corresponding matrix of coefficients is not zero.
Cramer’s rule goes further and provides a formula for
the solution of a system in terms of determinants.
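As a minimal sketch of the rule in practice (assuming Python with the numpy library; the helper name cramer_solve is chosen here for illustration), each unknown xi is the determinant of the coefficient matrix with its ith column replaced by the right-hand side, divided by det(A):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its ith column replaced by b."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("coefficient matrix has zero determinant")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b       # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(cramer_solve(A, b))      # [1. 3.]
print(np.linalg.solve(A, b))   # same answer from a standard solver
```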
The determinant has another important property.
After some algebraic work it is possible to show:
The determinant of the product of two n × n matrices A and B is the product of their determinants:
det(AB) = det(A) × det(B)
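A minimal numerical sketch (assuming Python with numpy) confirms the identity for two random 4 × 4 matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4))
B = rng.random((4, 4))

# det(AB) = det(A) * det(B), up to floating-point rounding
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```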
The determinant of the IDENTITY MATRIX I is one. If a square matrix A is invertible, then the equation:
1 = det(I) = det(A · A–1) = det(A) · det(A–1)
shows that det(A) is not zero and that
det(A–1) = 1/det(A)
We have:
If a matrix is invertible, then its determinant is
not zero.
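A minimal sketch (assuming Python with numpy) checking this formula for a small invertible matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])   # det(A) = 1*5 - 2*3 = -1, nonzero
A_inv = np.linalg.inv(A)                 # so A is invertible

# det(A^-1) = 1 / det(A)
assert np.isclose(np.linalg.det(A_inv), 1.0 / np.linalg.det(A))
```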
The converse is also true:
If the determinant of a matrix is not zero, then
the matrix is invertible.
To see why this holds, suppose that A is a matrix with nonzero determinant. Let ei denote the ith column of the identity matrix. By Cramer's rule, since the determinant is not zero, the system of equations Ax = ei has a solution x = si, say. Set B to be the matrix with ith column si. Then AB = I. This shows at least that A has a "right inverse" B. To complete the proof, let AT denote the transpose of A, that is, the matrix obtained from A by interchanging its rows and columns. Since the determinant can be viewed equally well as a function of the rows of the matrix as of its columns, we have that det(AT) = det(A). Since the determinant of AT is also nonzero, there is a matrix C so that ATC = I. One can check that the transpose of the product of two matrices is the reverse product of their transposes. We thus have: CTA = (ATC)T = IT = I. This shows that the matrix A also has a left inverse CT. The left and right inverses must be equal, since CT = CTI = CTAB = IB = B. Thus the matrix B is indeed the full inverse matrix to A: AB = BA = I.
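The construction in this argument can be imitated numerically. In the minimal sketch below (assuming Python with numpy), the matrix B is built column by column by solving Ax = ei, and the products AB and BA are checked against I:

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])
assert not np.isclose(np.linalg.det(A), 0.0)   # A has nonzero determinant

n = A.shape[0]
I = np.eye(n)

# Column i of B is the solution s_i of A x = e_i (found here with a
# linear solver; Cramer's rule would give the same columns).
B = np.column_stack([np.linalg.solve(A, I[:, i]) for i in range(n)])

# B is a right inverse, and in fact the full inverse: AB = BA = I.
assert np.allclose(A @ B, I)
assert np.allclose(B @ A, I)
```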
See also INVERSE MATRIX.
diagonal Any line joining two nonadjacent vertices of a POLYGON is called a diagonal of the polygon. For example, a square has two diagonals, each cutting the figure into two congruent right-angled triangles, and a pentagon has five different diagonals. There are no diagonals in a triangle. In general, a regular n-gon has n(n – 3)/2 distinct diagonals: each of its n vertices can be joined to the n – 3 vertices not adjacent to it, and each diagonal is counted twice this way.
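As a quick sketch (assuming Python), the count can be computed directly:

```python
def diagonals(n):
    """Number of diagonals of an n-gon: each of the n vertices joins
    to the n - 3 nonadjacent vertices; each diagonal is counted twice."""
    return n * (n - 3) // 2

print(diagonals(3), diagonals(4), diagonals(5))  # 0 2 5
```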
A diagonal for a POLYHEDRON is any line joining two vertices that are not in the same face. A cube, for example, has four diagonals.