decomposition of orthogonal operators as rotations and reflections


Theorem 1.

Let $V$ be an $n$-dimensional real inner product space ($0 < n < \infty$). Then every orthogonal operator $T$ on $V$ can be decomposed into a series of two-dimensional rotations and one-dimensional reflections on mutually orthogonal subspaces of $V$.

We first explain the general idea behind the proof. Consider a rotation $R$ of angle $\theta$ in a two-dimensional space. From its orthonormal basis representation

$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},$$

we find that the characteristic polynomial of $R$ is $t^2 - 2t\cos\theta + 1$, with (complex) roots $e^{\pm i\theta}$. (Not surprising, since multiplication by $e^{i\theta}$ in the complex plane is rotation by $\theta$.) Thus, given the characteristic polynomial of $R$, we can almost recover its rotation angle. (Because the complex roots occur in conjugate pairs, the information about the sign of $\theta$ is lost. This too is not a surprise, because the sign of $\theta$, i.e. whether the rotation is clockwise or counterclockwise, depends on the orientation of the basis vectors.)
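As a quick numerical check (not part of the original entry; a minimal numpy sketch with an arbitrarily chosen sample angle), we can confirm that the eigenvalues of the rotation matrix are $e^{\pm i\theta}$ and that only $\lvert\theta\rvert$ can be recovered from them:

```python
import numpy as np

theta = 0.7  # sample rotation angle (an assumption for illustration)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The characteristic polynomial t^2 - 2t*cos(theta) + 1 has roots exp(±i*theta).
eigvals = np.linalg.eigvals(R)
print(eigvals)                       # approximately exp(±0.7j)

# Only |theta| can be recovered: the conjugate pair hides the sign.
recovered = np.abs(np.angle(eigvals[0]))
print(np.isclose(recovered, theta))  # True
```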

In the case of a reflection $S$, the eigenvalues of $S$ are $-1$ and $1$; so again, the characteristic polynomial of $S$ will provide some information on $S$.

So in $n$ dimensions, we are also going to look at the complex eigenvalues and eigenvectors of $T$ to recover information about the rotations represented by $T$.

But there is one technical point: $T$ is a transformation on a real vector space, so it does not really have "complex eigenvalues and eigenvectors". To make this concept rigorous, we must consider the complexification $T_{\mathbb{C}}$ of $T$, the operator defined by $T_{\mathbb{C}}(x+iy) = Tx + iTy$ on the vector space $V_{\mathbb{C}}$ consisting of elements of the form $x+iy$, for $x, y \in V$. (For more details, see the entry on complexification (http://planetmath.org/ComplexificationOfVectorSpace).)
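Numerically, the complexification needs no special machinery: a real matrix applied to a complex vector $x + iy$ already yields $Tx + iTy$. A minimal sketch (numpy; the randomly generated orthogonal matrix is only an illustration, not taken from the entry):

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.linalg.qr(rng.standard_normal((3, 3)))[0]   # a random orthogonal matrix

x = rng.standard_normal(3)
y = rng.standard_normal(3)
z = x + 1j * y            # an element x + iy of the complexification of R^3

# T_C(x + iy) = Tx + iTy: for a real matrix this is just T @ z.
print(np.allclose(T @ z, T @ x + 1j * (T @ y)))    # True
```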


Lemma 1.

For any linear operator $T\colon V \to V$, there exists a one- or two-dimensional subspace $W$ which is invariant under $T$.

Proof.

Consider $T_{\mathbb{C}}$ and its characteristic polynomial. By the Fundamental Theorem of Algebra, the characteristic polynomial has a complex root $\lambda = \alpha + i\beta$. Then there is an eigenvector $x + iy \neq 0$ with eigenvalue $\lambda$. We have

$$Tx + iTy = T_{\mathbb{C}}(x+iy) = \lambda(x+iy) = (\alpha x - \beta y) + i(\beta x + \alpha y).$$

Equating real and imaginary components, we see that

$$Tx = \alpha x - \beta y \in W, \qquad Ty = \beta x + \alpha y \in W,$$

where $W = \operatorname{span}\{x, y\}$. $W$ is two-dimensional if $x$ and $y$ are linearly independent; otherwise it is one-dimensional. And we have $T(W) \subseteq W$ as claimed. ∎

(In fact, the space $W$ constructed here is two-dimensional if and only if the eigenvalue $\lambda$ is not purely real; compare with remark (i) after the proof of Theorem 1. Lemma 1 also has uses beyond proving Theorem 1. For example, if $\dot{x} = Ax$ is a linear differential equation and the constant coefficient matrix $A$ has only simple eigenvalues, then it is a consequence of Lemma 1 that the differential equation decomposes into a series of disjoint one-variable and two-variable equations. The solutions are then readily understood: they are always of the form of an exponential multiplied by a sinusoid, and linear combinations thereof; the sinusoids are present whenever there are non-real eigenvalues.)
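The construction in the proof is easy to check numerically: take a complex eigenvector $x + iy$ of a real matrix and verify that the span of $x$ and $y$ is invariant. A hedged sketch (numpy; the sample matrix is an assumption chosen to have a non-real eigenvalue pair, not taken from the entry):

```python
import numpy as np

# Any real operator; this particular matrix has one real and two non-real eigenvalues.
A = np.array([[0., -2.,  1.],
              [1.,  0.,  0.],
              [0.,  1.,  3.]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.abs(eigvals.imag))   # an eigenvalue λ = α + iβ with β ≠ 0
z = eigvecs[:, k]
x, y = z.real, z.imag                 # W = span{x, y}

# Verify A(W) ⊆ W: each of Ax, Ay must be a linear combination of x and y.
W = np.column_stack([x, y])
coords, *_ = np.linalg.lstsq(W, A @ W, rcond=None)
print(np.allclose(W @ coords, A @ W))   # True
```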


Proof of Theorem 1.

We recursively factor $T$; formally, the proof will be by induction on the dimension $n$.

The case $n = 1$ is trivial. We have $\det T = \pm 1$; if $\det T = 1$ then $T$ is the identity; otherwise $T$ is the reflection $Tx = -x$.

For larger $n$, by Lemma 1 there exists a $T$-invariant subspace $W$. The orthogonal complement $W^{\perp}$ of $W$ is $T$-invariant also, because for all $x \in W^{\perp}$ and $y \in W$,

$$\begin{aligned}
\langle Tx, y\rangle &= \langle x, T^{-1}y\rangle &&\text{because $T$ preserves the inner product}\\
&= 0 &&\text{because $T^{-1}(W) = W$.}
\end{aligned}$$

Let $T_W$ be the operator that acts as $T$ on $W$ and as the identity on $W^{\perp}$. Similarly, let $T_{W^{\perp}}$ be the operator that acts as $T$ on $W^{\perp}$ and as the identity on $W$. Then $T = T_W T_{W^{\perp}}$. $T_W$ restricted to $W$ is orthogonal, and since $W$ is one- or two-dimensional, $T_W$ must therefore be a rotation or reflection (or the identity) in a line or plane.

$T_{W^{\perp}}$ restricted to $W^{\perp}$ is also orthogonal. $W^{\perp}$ has dimension $< n$, so by the induction hypothesis we can continue to factor it into operators acting on subspaces of $W^{\perp}$ that are mutually orthogonal. These subspaces will of course also be orthogonal to $W$.

The proof is now complete, except that we did not rule out $T_W$ being a reflection even when $W$ is two-dimensional. But of course, if $T_W$ is a reflection in two dimensions, then it can be factored as a reflection on a one-dimensional subspace $\operatorname{span}\{v\}$ composed with the identity on $\operatorname{span}\{v\}^{\perp}$. ∎
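The statement of Theorem 1 can also be observed numerically: an orthogonal matrix is normal, so its real Schur form is block diagonal, exhibiting the $2\times 2$ rotation blocks and $\pm 1$ entries on mutually orthogonal subspaces. A sketch using numpy/scipy (an illustration of the statement rather than of the proof's recursion; the chosen angles and dimension are assumptions):

```python
import numpy as np
from scipy.linalg import schur, block_diag

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Build an orthogonal T with known factors (two rotations and one reflection),
# then hide the structure with a random orthonormal change of basis.
rng = np.random.default_rng(2)
Q = np.linalg.qr(rng.standard_normal((5, 5)))[0]
T = Q @ block_diag(rot(0.4), rot(1.3), [[-1.0]]) @ Q.T

# For a normal matrix the real Schur form is block diagonal (up to roundoff):
# 2x2 blocks are rotations by ±theta, 1x1 blocks are the ±1 eigenvalues.
S, Z = schur(T, output='real')
print(np.round(S, 3))
print(np.allclose(Z @ S @ Z.T, T))   # Z is orthogonal, so this recovers T
```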

Actually, in the third paragraph of the proof, we implicitly assumed that every orthogonal operator on a two-dimensional space is either a rotation or a reflection. This is a well-known fact, but it does need to be formally proven:

Theorem 2.

If $V$ is a two-dimensional real inner product space, then every orthogonal operator $T$ is either a rotation or a reflection.

Proof.

Fix any orthonormal basis $\{e_1, e_2\}$ for $V$. Since $T$ is orthogonal, $\lVert Te_1 \rVert = \lVert e_1 \rVert = 1$, i.e. $Te_1$ is a unit vector in the plane, so there exists an angle $\theta$ (unique modulo $2\pi$) such that $Te_1 = \cos\theta\, e_1 + \sin\theta\, e_2$. Similarly, $Te_2$ is a unit vector, and since $e_1$ and $e_2$ are orthogonal, so are $Te_1$ and $Te_2$. It follows that $Te_2$ must be either $-\sin\theta\, e_1 + \cos\theta\, e_2$ or $\sin\theta\, e_1 - \cos\theta\, e_2$. Putting all this together, the matrix for $T$ is:

$$\begin{bmatrix} \cos\theta & \mp\sin\theta \\ \sin\theta & \pm\cos\theta \end{bmatrix}.$$

The first solution for $Te_2$ corresponds to a rotation matrix (and $\det T = 1$); the second solution for $Te_2$ corresponds to a reflection matrix (http://planetmath.org/DerivationOf2DReflectionMatrix) (and $\det T = -1$). ∎
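In computational terms, Theorem 2 says a $2\times 2$ orthogonal matrix is classified by its determinant, with $\theta$ read off from the first column $Te_1 = (\cos\theta, \sin\theta)$. A hedged numpy sketch (the helper name classify_orthogonal_2x2 is ours, not from the entry):

```python
import numpy as np

def classify_orthogonal_2x2(T, tol=1e-10):
    """Return ('rotation', theta) or ('reflection', theta) for a 2x2 orthogonal T."""
    assert np.allclose(T.T @ T, np.eye(2), atol=tol), "T must be orthogonal"
    # Te1 = (cos θ, sin θ) is the first column of T.
    theta = np.arctan2(T[1, 0], T[0, 0])
    if np.isclose(np.linalg.det(T), 1.0, atol=tol):
        return "rotation", theta            # [[cos, -sin], [sin, cos]]
    return "reflection", theta              # [[cos, sin], [sin, -cos]]

theta = 1.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(classify_orthogonal_2x2(R))                          # ('rotation', 1.1)
print(classify_orthogonal_2x2(R @ np.diag([1.0, -1.0])))   # ('reflection', 1.1)
```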


1 Remarks

  i.

    Observe that the two equations for $Tx$ and $Ty$ appearing in the proof of Lemma 1 do specify a rotation of angle $\pm\theta$ when $\lambda = \alpha + i\beta = e^{i\theta}$ and $x, y$ are orthonormal. So by examining the complex eigenvalues and eigenvectors of $T$, we can reconstruct the rotation.

    This construction can be used to give an alternate proof of Theorem 1; we sketch it below:

    The complexified space $V_{\mathbb{C}}$ has an inner product structure inherited from $V$ (again, see complexification (http://planetmath.org/ComplexificationOfVectorSpace) for details). Let $U = T_{\mathbb{C}}$. Since $T$ is orthogonal, $U$ is unitary, and hence normal ($U^{*}U = UU^{*}$). There exists an orthonormal basis of eigenvectors for $U$ (the Schur decomposition (http://planetmath.org/CorollaryOfSchurDecomposition)). Let $z = x + iy$ be any one of these eigenvectors with a complex, non-real eigenvalue $\lambda$. Then $\lvert\lambda\rvert = 1$ because $U$ is unitary, and $\bar{z} = x - iy$ is another eigenvector with eigenvalue $\bar{\lambda}$. Using the $V_{\mathbb{C}}$ inner product formula, the vectors $x\sqrt{2}$ and $y\sqrt{2}$ can be shown to be orthonormal. Then the proof of Lemma 1 shows that $T$ acts as a rotation in the plane $\operatorname{span}\{x, y\}$. All such planes obtained will be orthogonal to each other.

    To summarize, the orthogonal subplanes of rotation are found by grouping conjugate pairs of complex eigenvectors (a numerical sketch of this grouping appears after this list of remarks). If one actually needs to determine the planes of rotation explicitly (for dimensions $n \geq 4$), it is probably better to work directly with the complexified matrix rather than to factor the matrix over the reals.

  ii.

    The decomposition of $T$ is not unique. However, it is always possible to obtain a decomposition which contains at most one reflection, because any two one-dimensional reflections can always be combined into a two-dimensional rotation. In any case, the parity of the number of reflections in a decomposition of $T$ is invariant, because a decomposition with $k$ one-dimensional reflections has $\det T = (-1)^{k}$.

  iii.

    In the decomposition, the component rotations and reflections all commute because they act on orthogonal subspaces.

  iv.

    If we take a basis for $V$ describing the mutually orthogonal subspaces in Theorem 1, the matrix for $T$ looks like:

    $$\begin{bmatrix}
    \cos\theta_1 & -\sin\theta_1 & & & & & & \\
    \sin\theta_1 & \cos\theta_1 & & & & & & \\
    & & \ddots & & & & & \\
    & & & \cos\theta_k & -\sin\theta_k & & & \\
    & & & \sin\theta_k & \cos\theta_k & & & \\
    & & & & & \pm 1 & & \\
    & & & & & & +1 & \\
    & & & & & & & \ddots \\
    & & & & & & & & +1
    \end{bmatrix}$$

    where $\theta_1, \ldots, \theta_k$ are the rotation angles, one for each orthogonal subplane, the $\pm 1$ in the middle is the reflective component (if present), and the trailing $+1$ entries correspond to the subspace left fixed by $T$. The rest of the entries in the matrix are zero.

  v.

    Sometimes any orthogonal operator $T$ with $\det T = 1$ is called a rotation, even though strictly speaking it is actually a series of rotations (each on different "axes"). Similarly, when $\det T = -1$, $T$ may be called a reflection, even though again it is not always a single (one-dimensional) reflection.

    In this language, a rotation composed with a rotation will always be a rotation; a rotation composed with a reflection is a reflection; and two reflections composed together will always be a rotation.

  vi.

    In $\mathbb{R}^3$, an orthogonal operator with positive determinant is necessarily a rotation about one axis which is left fixed (except when the operator is the identity). This follows simply because there is no way to fit more than one orthogonal subplane into three-dimensional space.

    A composition of two rotations in $\mathbb{R}^3$ would then be a rotation too. On the other hand, it is not at all obvious what relation the axis of rotation of the composition has with the original two axes of rotation.

    For an explicit formula for a rotation matrix in $\mathbb{R}^3$ that does not require manual calculation of the basis vectors for the rotation subplane, see Rodrigues' rotation formula (a sketch appears after this list of remarks).

  vii.

    In $\mathbb{R}^n$, reflections can be carried out by first embedding $\mathbb{R}^n$ into $\mathbb{R}^{n+1}$ and then rotating $\mathbb{R}^{n+1}$. (Here the words "rotation" and "reflection" are taken in the extended sense of (v).) For example, a right hand in the plane can be rotated in $\mathbb{R}^3$ into a left hand.

    To be specific, suppose we embed $\mathbb{R}^n$ in $\mathbb{R}^{n+1}$ as the first $n$ coordinates. Then we gain an extra degree of freedom in the last coordinate of $\mathbb{R}^{n+1}$ (with coordinate vector $e_{n+1}$). Given an orthogonal operator $T\colon \mathbb{R}^n \to \mathbb{R}^n$ with $\det T = -1$, we can extend it to an operator $T'\colon \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ by having it act as $T$ on the lower $n$ coordinates and setting $T'(e_{n+1}) = -e_{n+1}$. Since $\det T' = -\det T = 1$, our new $T'$ will be a rotation (the extra angle of rotation being $\pi$) that reflects sets in the $\mathbb{R}^n$ plane. (A small numerical sketch of this extension appears after this list of remarks.)
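As promised in remark (i), here is a sketch of recovering the rotation planes directly from the complexified matrix: keep one eigenvector $z = x + iy$ from each conjugate pair with non-real eigenvalue, scale $x$ and $y$ by $\sqrt{2}$ to get an orthonormal basis of the plane, and read the angle from $\arg\lambda$. (numpy/scipy; the test matrix and the function name rotation_planes are assumptions for illustration.)

```python
import numpy as np
from scipy.linalg import block_diag

def rotation_planes(T, tol=1e-9):
    """Recover (angle, plane-basis) pairs of an orthogonal matrix from its
    complex eigenvectors, as in remark (i). Illustrative sketch only."""
    eigvals, eigvecs = np.linalg.eig(T)
    planes = []
    for k, lam in enumerate(eigvals):
        if lam.imag > tol:                  # keep one member of each conjugate pair
            z = eigvecs[:, k]               # unit eigenvector z = x + iy
            x, y = z.real * np.sqrt(2), z.imag * np.sqrt(2)  # orthonormal in the plane
            planes.append((np.angle(lam), np.column_stack([x, y])))
    return planes

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(3)
Q = np.linalg.qr(rng.standard_normal((4, 4)))[0]
T = Q @ block_diag(rot(0.5), rot(2.0)) @ Q.T

# Prints the angles 0.5 and 2.0 (in some order; the sign depends on the basis
# orientation) and confirms each plane basis is orthonormal.
for angle, P in rotation_planes(T):
    print(round(angle, 6), np.allclose(P.T @ P, np.eye(2)))
```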
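Remark (vi) points to Rodrigues' rotation formula; a common statement of it is $R = I + \sin\theta\, K + (1-\cos\theta)\,K^2$, where $K$ is the cross-product matrix of the unit rotation axis. A hedged numpy sketch:

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix for angle theta about the unit vector `axis` in R^3
    (Rodrigues' formula: R = I + sin(theta) K + (1 - cos(theta)) K^2)."""
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    K = np.array([[    0, -k[2],  k[1]],
                  [ k[2],     0, -k[0]],
                  [-k[1],  k[0],     0]])     # cross-product matrix [k]_x
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = rodrigues([0, 0, 1], 0.9)          # rotation about the z-axis
print(np.allclose(R[:2, :2], [[np.cos(0.9), -np.sin(0.9)],
                              [np.sin(0.9),  np.cos(0.9)]]))   # True
print(np.allclose(np.linalg.det(R), 1.0))                      # proper rotation
```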
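Finally, the extension described in remark (vii) amounts to appending one coordinate on which the extended operator acts as $-1$; for a planar reflection $T$ the extension $T'$ then has determinant $+1$. A minimal sketch (numpy/scipy; the mirror-across-the-$x$-axis example is an assumption):

```python
import numpy as np
from scipy.linalg import block_diag

# A reflection in the plane (det = -1): mirror across the x-axis.
T = np.array([[1.0,  0.0],
              [0.0, -1.0]])

# Extend to R^3 by sending e_3 to -e_3; the extension has det = -det(T) = +1,
# so it is a rotation of R^3 that restricts to the reflection on the plane.
T_ext = block_diag(T, -1.0)
print(round(np.linalg.det(T_ext), 6))            # 1.0
print(np.allclose(T_ext.T @ T_ext, np.eye(3)))   # orthogonal
```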
