Published On: Friday, 9 December 2011
Posted by Muhammad Atif Saeed

Operations with Matrices

As far as linear algebra is concerned, the two most important operations with vectors are vector addition [adding two (or more) vectors] and scalar multiplication (multiplying a vector by a scalar). Analogous operations are defined for matrices.
Matrix addition. If A and B are matrices of the same size, then they can be added. (This is similar to the restriction on adding vectors, namely, only vectors from the same space R^n can be added; you cannot add a 2-vector to a 3-vector, for example.) If A = [a_ij] and B = [b_ij] are both m x n matrices, then their sum, C = A + B, is also an m x n matrix, and its entries are given by the formula

c_ij = a_ij + b_ij
Thus, to find the entries of A + B, simply add the corresponding entries of A and B.
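The entrywise rule above can be sketched in code as follows (a minimal illustration using plain Python lists; the helper name mat_add is not from the text):

```python
def mat_add(A, B):
    # A and B must be the same size; the sum is taken entry by entry.
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must be the same size")
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

# Example with two 2 x 3 matrices:
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[10, 20, 30],
     [40, 50, 60]]
print(mat_add(A, B))  # [[11, 22, 33], [44, 55, 66]]
```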
Example 1: Consider the following matrices:




Which two can be added? What is their sum?
Since only matrices of the same size can be added, only the sum F + H is defined ( G cannot be added to either F or H). The sum of F and H is




Since addition of real numbers is commutative, it follows that addition of matrices (when it is defined) is also commutative; that is, for any matrices A and B of the same size, A + B will always equal B + A.
Example 2: If any matrix A is added to the zero matrix of the same size, the result is clearly equal to A:




This is the matrix analog of the statement a + 0 = 0 + a = a, which expresses the fact that the number 0 is the additive identity in the set of real numbers.
Example 3: Find the matrix B such that A + B = C, where




If




then the matrix equation A + B = C becomes



Since two matrices are equal if and only if they are of the same size and their corresponding entries are equal, this last equation implies




Therefore,




This example motivates the definition of matrix subtraction: If A and B are matrices of the same size, then the entries of A − B are found by simply subtracting the entries of B from the corresponding entries of A. Since the equation A + B = C is equivalent to B = C − A, employing matrix subtraction above would yield the same result:




Scalar multiplication. A matrix can be multiplied by a scalar as follows. If A = [a_ij] is a matrix and k is a scalar, then

kA = [k a_ij]
That is, the matrix kA is obtained by multiplying each entry of A by k.
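In code, scalar multiplication is a one-line operation over the entries (a sketch; the helper name scalar_mul is not from the text):

```python
def scalar_mul(k, A):
    # Multiply every entry of A by the scalar k.
    return [[k * entry for entry in row] for row in A]

A = [[1, -2],
     [0, 3]]
print(scalar_mul(2, A))  # [[2, -4], [0, 6]]
```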
Example 4: If




then the scalar multiple 2 A is obtained by multiplying every entry of A by 2:



Example 5: If A and B are matrices of the same size, then A − B = A + (−B), where −B is the scalar multiple (−1)B. If




then



This definition of matrix subtraction is consistent with the definition illustrated in Example 3.
Example 6: If




then



Matrix multiplication. By far the most important operation involving matrices is matrix multiplication, the process of multiplying one matrix by another. The first step in defining matrix multiplication is to recall the definition of the dot product of two vectors. Let r and c be two n-vectors. Writing r as a 1 x n row matrix and c as an n x 1 column matrix, the dot product of r and c is

r · c = r_1 c_1 + r_2 c_2 + … + r_n c_n
Note that in order for the dot product of r and c to be defined, both must contain the same number of entries. Also, the order in which these matrices are written in this product is important here: The row vector comes first, the column vector second.
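The dot product of a row and a column can be sketched as follows (plain Python; the helper name dot is not from the text):

```python
def dot(r, c):
    # Both vectors must contain the same number of entries.
    if len(r) != len(c):
        raise ValueError("vectors must have the same length")
    return sum(ri * ci for ri, ci in zip(r, c))

print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```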
Now, for the final step: How are two general matrices multiplied? First, in order to form the product AB, the number of columns of A must match the number of rows of B; if this condition does not hold, then the product AB is not defined. This criterion follows from the restriction stated above for multiplying a row matrix r by a column matrix c, namely that the number of entries in r must match the number of entries in c. If A is m x n and B is n x p, then the product AB is defined, and the size of the product matrix AB will be m x p. The following diagram is helpful in determining if a matrix product is defined, and if so, the dimensions of the product:




Thinking of the m x n matrix A as composed of the row vectors r_1, r_2, …, r_m from R^n and the n x p matrix B as composed of the column vectors c_1, c_2, …, c_p from R^n,




and



the rule for computing the entries of the matrix product AB is (AB)_ij = r_i · c_j, that is,



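The rule above, that the (i, j) entry of AB is the dot product of row i of A with column j of B, translates directly into code (a sketch; the helper name mat_mul is not from the text):

```python
def mat_mul(A, B):
    # (AB)_ij is the dot product of row i of A with column j of B;
    # the number of columns of A must match the number of rows of B.
    if len(A[0]) != len(B):
        raise ValueError("columns of A must match rows of B")
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```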
Example 7: Given the two matrices




determine which matrix product, AB or BA, is defined and evaluate it. Since A is 2 x 3 and B is 3 x 4, the product AB, in that order, is defined, and the size of the product matrix AB will be 2 x 4. The product BA is not defined, since the first factor ( B) has 4 columns but the second factor ( A) has only 2 rows. The number of columns of the first matrix must match the number of rows of the second matrix in order for their product to be defined.
Taking the dot product of row 1 in A and column 1 in B gives the (1, 1) entry in AB. Since




the (1, 1) entry in AB is 1:



The dot product of row 1 in A and column 2 in B gives the (1, 2) entry in AB,




and the dot product of row 1 in A and column 3 in B gives the (1, 3) entry in AB:



The first row of the product is completed by taking the dot product of row 1 in A and column 4 in B, which gives the (1, 4) entry in AB:




Now for the second row of AB: The dot product of row 2 in A and column 1 in B gives the (2, 1) entry in AB,




and the dot product of row 2 in A and column 2 in B gives the (2, 2) entry in AB:



Finally, taking the dot product of row 2 in A with columns 3 and 4 in B gives (respectively) the (2, 3) and (2, 4) entries in AB:




Therefore,




Example 8: If




and



compute the (3, 5) entry of the product CD. First, note that since C is 4 x 5 and D is 5 x 6, the product CD is indeed defined, and its size is 4 x 6. However, there is no need to compute all twenty-four entries of CD if only one particular entry is desired. The (3, 5) entry of CD is the dot product of row 3 in C and column 5 in D:




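As this example notes, a single entry of a product can be computed without forming the whole product: the (i, j) entry is just one dot product. The matrices C and D of this example are not reproduced above, so the sketch below uses small stand-ins:

```python
def product_entry(A, B, i, j):
    # (i, j) entry of AB, with 1-based indices as in the text:
    # the dot product of row i of A with column j of B.
    row = A[i - 1]
    col = [r[j - 1] for r in B]
    return sum(a * b for a, b in zip(row, col))

C = [[1, 0, 2],
     [0, 1, 3]]   # 2 x 3 stand-in
D = [[1, 4],
     [2, 5],
     [3, 6]]      # 3 x 2 stand-in
print(product_entry(C, D, 2, 1))  # row 2 of C · column 1 of D = 0*1 + 1*2 + 3*3 = 11
```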
Example 9: If




verify that



but



In particular, note that even though both products AB and BA are defined, AB does not equal BA; indeed, they're not even the same size!
The previous example gives one illustration of what is perhaps the most important distinction between the multiplication of scalars and the multiplication of matrices. For real numbers a and b, the equation ab = ba always holds, that is, multiplication of real numbers is commutative; the order in which the factors are written is irrelevant. However, it is decidedly false that matrix multiplication is commutative. For the matrices A and B given in Example 9, both products AB and BA were defined, but they certainly were not identical. In fact, the matrix AB was 2 x 2, while the matrix BA was 3 x 3. Here is another illustration of the noncommutativity of matrix multiplication: Consider the matrices




Since C is 3 x 2 and D is 2 x 2, the product CD is defined, its size is 3 x 2, and




The product DC, however, is not defined, since the number of columns of D (which is 2) does not equal the number of rows of C (which is 3). Therefore, CD ≠ DC, since DC doesn't even exist.
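The shape test described here (columns of the first factor versus rows of the second) can be sketched in code; the matrices below are stand-ins with the same sizes as C and D in this illustration:

```python
def shape(M):
    return (len(M), len(M[0]))

def product_defined(A, B):
    # AB is defined exactly when columns of A == rows of B.
    return shape(A)[1] == shape(B)[0]

C = [[1, 0],
     [0, 1],
     [1, 1]]   # 3 x 2
D = [[2, 1],
     [1, 2]]   # 2 x 2

print(product_defined(C, D))  # True:  CD is defined (and is 3 x 2)
print(product_defined(D, C))  # False: DC is not defined
```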
Because of the sensitivity to the order in which the factors are written, one does not typically say simply, “Multiply the matrices A and B.” It is usually important to indicate which matrix comes first and which comes second in the product. For this reason, the statement “Multiply A on the right by B” means to form the product AB, while “Multiply A on the left by B” means to form the product BA.
Example 10: If




and x is the vector (−2, 3), show how A can be multiplied on the right by x and compute the product. Since A is 2 x 2, in order to multiply A on the right by a matrix, that matrix must have 2 rows. Therefore, if x is written as the 2 x 1 column matrix




then the product A x can be computed, and the result is another 2 x 1 column matrix:



Example 11: Consider the matrices




If A is multiplied on the right by B, the result is




but if A is multiplied on the left by B, the result is



Note that both products are defined and of the same size, but they are not equal.
Example 12: If A and B are square matrices such that AB = BA, then A and B are said to commute. Show that any two square diagonal matrices of order 2 commute.
Let




be two arbitrary 2 x 2 diagonal matrices. Then



and



Since a_11 b_11 = b_11 a_11 and a_22 b_22 = b_22 a_22, AB does indeed equal BA, as desired.
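The claim of Example 12 can be checked numerically with any pair of 2 x 2 diagonal matrices (the specific diagonal entries below are illustrative):

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def diag2(x, y):
    # 2 x 2 diagonal matrix with diagonal entries x and y.
    return [[x, 0],
            [0, y]]

A = diag2(3, -1)
B = diag2(5, 4)
print(mat_mul(A, B) == mat_mul(B, A))  # True: diagonal matrices commute
```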
Although matrix multiplication is usually not commutative, it is sometimes commutative; for example, if




then



Despite examples such as these, it must be stated that in general, matrix multiplication is not commutative.
There is another difference between the multiplication of scalars and the multiplication of matrices. If a and b are real numbers, then the equation ab = 0 implies that a = 0 or b = 0. That is, the only way a product of real numbers can equal 0 is if at least one of the factors is itself 0. The analogous statement for matrices, however, is not true. For instance, if




then



Note that even though neither G nor H is a zero matrix, the product GH is.
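The matrices G and H of this illustration are not reproduced above, but any such pair makes the point; the stand-in pair below consists of two nonzero matrices whose product is the zero matrix:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

G = [[1, 1],
     [1, 1]]
H = [[ 1, -1],
     [-1,  1]]

print(mat_mul(G, H))  # [[0, 0], [0, 0]] -- neither factor is zero, but the product is
```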
Yet another difference between the multiplication of scalars and the multiplication of matrices is the lack of a general cancellation law for matrix multiplication. If a, b, and c are real numbers with a ≠ 0, then, by canceling out the factor a, the equation ab = ac implies b = c. No such law exists for matrix multiplication; that is, the statement AB = AC does not imply B = C, even if A is nonzero. For example, if




then both



and



Thus, even though AB = AC and A is not a zero matrix, B does not equal C.
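The specific matrices of this illustration are not shown above; the stand-in triple below exhibits the same failure of cancellation, with AB = AC, A nonzero, but B ≠ C:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 0],
     [0, 0]]   # nonzero, but its second row kills the second rows of B and C
B = [[1, 1],
     [1, 1]]
C = [[1, 1],
     [5, 5]]

print(mat_mul(A, B) == mat_mul(A, C))  # True, even though B != C
```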
Example 13: Although matrix multiplication is not always commutative, it is always associative. That is, if A, B, and C are any three matrices such that the product (AB)C is defined, then the product A(BC) is also defined, and




That is, as long as the order of the factors is unchanged, how they are grouped is irrelevant.
Verify the associative law for the matrices




First, since




the product (AB)C is



Now, since




the product A(BC) is



Therefore, (AB)C = A(BC), as expected. Note that the associative law implies that the product of A, B, and C (in that order) can be written simply as ABC; parentheses are not needed to resolve any ambiguity, because there is no ambiguity.
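The associative law can be checked numerically for any triple of compatible matrices; since the matrices of Example 13 are not reproduced above, the sketch below uses stand-ins:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
C = [[2, 0],
     [0, 2]]

lhs = mat_mul(mat_mul(A, B), C)   # (AB)C
rhs = mat_mul(A, mat_mul(B, C))   # A(BC)
print(lhs == rhs)  # True: grouping is irrelevant as long as the order is unchanged
```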
Example 14: For the matrices




verify the equation (AB)^T = B^T A^T. First,




implies



Now, since




B^T A^T does indeed equal (AB)^T. In fact, the equation

(AB)^T = B^T A^T

holds true for any two matrices for which the product AB is defined. This says that if the product AB is defined, then the transpose of the product is equal to the product of the transposes in the reverse order.

Identity matrices. The zero matrix 0_(m x n) plays the role of the additive identity in the set of m x n matrices in the same way that the number 0 does in the set of real numbers (recall Example 2). That is, if A is an m x n matrix and 0 = 0_(m x n), then

A + 0 = 0 + A = A
This is the matrix analog of the statement that for any real number a,

a + 0 = 0 + a = a
With an additive identity in hand, you may ask, “What about a multiplicative identity?” In the set of real numbers, the multiplicative identity is the number 1, since

a · 1 = 1 · a = a
Is there a matrix that plays this role? Consider the matrices




and verify that



and



Thus, AI = IA = A. In fact, it can be easily shown that for this matrix I, both products AI and IA will equal A for any 2 x 2 matrix A. Therefore,




is the multiplicative identity in the set of 2 x 2 matrices. Similarly, the matrix



is the multiplicative identity in the set of 3 x 3 matrices, and so on. (Note that I_3 is the matrix [δ_ij] of size 3 x 3.) In general, the matrix I_n, the n x n diagonal matrix with every diagonal entry equal to 1, is called the identity matrix of order n and serves as the multiplicative identity in the set of all n x n matrices. Is there a multiplicative identity in the set of all m x n matrices if m ≠ n? For any matrix A in M_(m x n)(R), the matrix I_m is a left identity (I_m A = A), and I_n is a right identity (A I_n = A). Thus, unlike the set of n x n matrices, the set of nonsquare m x n matrices does not possess a unique two-sided identity, because I_m ≠ I_n if m ≠ n.
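The left-identity and right-identity behavior of I_m and I_n on a nonsquare matrix can be sketched as follows (the helper names are not from the text):

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def identity(n):
    # n x n identity matrix: 1s on the diagonal, 0s elsewhere.
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]   # a 2 x 3 matrix

# I_2 is a left identity and I_3 a right identity for a 2 x 3 matrix:
print(mat_mul(identity(2), A) == A)  # True
print(mat_mul(A, identity(3)) == A)  # True
```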
Example 15: If A is a square matrix, then A^2 denotes the product AA, A^3 denotes the product AAA, and so forth. If A is the matrix




show that A^3 = −A. The calculation




shows that A^2 = −I. Multiplying both sides of this equation by A yields A^3 = −A, as desired. [Technical note: It can be shown that in a certain precise sense, the collection of matrices of the form



where a and b are real numbers, is structurally identical to the collection of complex numbers, a + bi. Since the matrix A in this example is of this form (with a = 0 and b = 1), A corresponds to the complex number 0 + 1i = i, and the analog of the matrix equation A^2 = −I derived above is i^2 = −1, an equation which defines the imaginary unit, i.]

Example 16: Find a nondiagonal matrix that commutes with




The problem is asking for a nondiagonal matrix B such that AB = BA. Like A, the matrix B must be 2 x 2. One way to produce such a matrix B is to form A^2, for if B = A^2, associativity implies

AB = A(A^2) = A^3 = (A^2)A = BA
(This equation proves that A^2 will commute with A for any square matrix A; furthermore, it suggests how one can prove that every integral power of a square matrix A will commute with A.)
In this case,




which is nondiagonal. This matrix B does indeed commute with A, as verified by the calculations



and



Example 17: If




prove that



for every positive integer n. A few preliminary calculations illustrate that the given formula does hold true:




However, to establish that the formula holds for all positive integers n, a general proof must be given. This will be done here using the principle of mathematical induction, which reads as follows. Let P(n) denote a proposition concerning a positive integer n. If it can be shown that

P(1) is true

and

the truth of P(n) implies the truth of P(n + 1)
then the statement P(n) is valid for all positive integers n. In the present case, the statement P(n) is the assertion



Because A^1 = A, the statement P(1) is certainly true, since




Now, assuming that P(n) is true, that is, assuming




it is now necessary to establish the validity of the statement P(n + 1), which is



But this statement does indeed hold, because




By the principle of mathematical induction, the proof is complete.

The inverse of a matrix. Let a be a given real number. Since 1 is the multiplicative identity in the set of real numbers, if a number b exists such that

ab = ba = 1

then b is called the reciprocal or multiplicative inverse of a and denoted a^−1 (or 1/a). The analog of this statement for square matrices reads as follows. Let A be a given n x n matrix. Since I = I_n is the multiplicative identity in the set of n x n matrices, if a matrix B exists such that

AB = BA = I

then B is called the (multiplicative) inverse of A and denoted A^−1 (read “A inverse”).

Example 18: If




then



since



and



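The defining property AB = BA = I is easy to check numerically. The matrices of Example 18 are not reproduced above, so the sketch below uses a stand-in invertible pair:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

I = [[1, 0],
     [0, 1]]

A    = [[2, 1],
        [1, 1]]
Ainv = [[ 1, -1],
        [-1,  2]]   # candidate inverse of A

# Both products must equal the identity:
print(mat_mul(A, Ainv) == I)  # True
print(mat_mul(Ainv, A) == I)  # True
```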
Yet another distinction between the multiplication of scalars and the multiplication of matrices is provided by the existence of inverses. Although every nonzero real number has an inverse, there exist nonzero matrices that have no inverse.
Example 19: Show that the nonzero matrix




has no inverse. If this matrix had an inverse, then




for some values of a, b, c, and d. However, since the second row of A is a zero row, you can see that the second row of the product must also be a zero row:



(When an asterisk, *, appears as an entry in a matrix, it implies that the actual value of this entry is irrelevant to the present discussion.) Since the (2, 2) entry of the product cannot equal 1, the product cannot equal the identity matrix. Therefore, it is impossible to construct a matrix that can serve as the inverse for A.
If a matrix has an inverse, it is said to be invertible. The matrix in Example 18 is invertible, but the one in Example 19 is not. Later, you will learn various criteria for determining whether a given square matrix is invertible.
Example 20: Example 18 showed that




Given that




verify the equation (AB)^−1 = B^−1 A^−1. First, compute AB:




Next, compute B^−1 A^−1:




Now, since the product of AB and B^−1 A^−1 is I,




B^−1 A^−1 is indeed the inverse of AB. In fact, the equation

(AB)^−1 = B^−1 A^−1

holds true for any invertible square matrices of the same size. This says that if A and B are invertible matrices of the same size, then their product AB is also invertible, and the inverse of the product is equal to the product of the inverses in the reverse order. (Compare this equation with the one involving transposes in Example 14 above.) This result can be proved in general by applying the associative law for matrix multiplication. Since

(AB)(B^−1 A^−1) = A(BB^−1)A^−1 = A I A^−1 = A A^−1 = I

and

(B^−1 A^−1)(AB) = B^−1(A^−1 A)B = B^−1 I B = B^−1 B = I

it follows that (AB)^−1 = B^−1 A^−1, as desired.

Example 21: The inverse of the matrix




is



Show that the inverse of B^T is (B^−1)^T.
Form B^T and (B^−1)^T and multiply:




This calculation shows that (B^−1)^T is the inverse of B^T. [Strictly speaking, it shows only that (B^−1)^T is the right inverse of B^T, that is, when it multiplies B^T on the right, the product is the identity. It is also true that (B^−1)^T B^T = I, which means (B^−1)^T is the left inverse of B^T. However, it is not necessary to explicitly check both equations: if a square matrix has an inverse, there is no distinction between a left inverse and a right inverse.] Thus,

(B^T)^−1 = (B^−1)^T

an equation which actually holds for any invertible square matrix B. This equation says that if a matrix is invertible, then so is its transpose, and the inverse of the transpose is the transpose of the inverse.

Example 22: Use the distributive property for matrix multiplication, A(B ± C) = AB ± AC, to answer this question: If a 2 x 2 matrix D satisfies the equation D^2 − D − 6I = 0, what is an expression for D^−1?
By the distributive property quoted above, D^2 − D = D^2 − DI = D(D − I). Therefore, the equation D^2 − D − 6I = 0 implies D(D − I) = 6I. Multiplying both sides of this equation by 1/6 gives

D · [1/6 (D − I)] = I

which implies

D^−1 = 1/6 (D − I)
As an illustration of this result, the matrix




satisfies the equation D^2 − D − 6I = 0, as you may verify. Since



and



the matrix 1/6 (D − I) does indeed equal D^−1, as claimed.

Example 23: The equation (a + b)^2 = a^2 + 2ab + b^2 is an identity if a and b are real numbers. Show, however, that (A + B)^2 = A^2 + 2AB + B^2 is not an identity if A and B are 2 x 2 matrices. [Note: The distributive laws for matrix multiplication are A(B ± C) = AB ± AC, given in Example 22, and the companion law, (A ± B)C = AC ± BC.]
The distributive laws for matrix multiplication imply

(A + B)^2 = (A + B)(A + B) = (A + B)A + (A + B)B = A^2 + BA + AB + B^2

Since matrix multiplication is not commutative, BA will usually not equal AB, so the sum BA + AB cannot be written as 2AB. In general, then, (A + B)^2 ≠ A^2 + 2AB + B^2. [Any matrices A and B that do not commute (for example, the matrices in Example 11 above) would provide a specific counterexample to the statement (A + B)^2 = A^2 + 2AB + B^2, which would also establish that this is not an identity.]
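A concrete counterexample is easy to produce in code; the pair below is an illustrative stand-in for any two noncommuting 2 x 2 matrices:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(k, A):
    return [[k * entry for entry in row] for row in A]

A = [[0, 1],
     [0, 0]]
B = [[0, 0],
     [1, 0]]

lhs = mat_mul(mat_add(A, B), mat_add(A, B))                   # (A + B)^2
rhs = mat_add(mat_add(mat_mul(A, A),
                      scalar_mul(2, mat_mul(A, B))),
              mat_mul(B, B))                                  # A^2 + 2AB + B^2

print(lhs)         # [[1, 0], [0, 1]]
print(rhs)         # [[2, 0], [0, 0]]
print(lhs == rhs)  # False: the scalar identity fails for matrices
```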
Example 24: Assume that B is invertible. If A commutes with B, show that A will also commute with B−1.
Proof. To say “A commutes with B” means AB = BA. Multiply this equation by B^−1 on the left and on the right and use associativity:

AB = BA  ⟹  B^−1(AB)B^−1 = B^−1(BA)B^−1  ⟹  (B^−1 A)(BB^−1) = (B^−1 B)(AB^−1)  ⟹  B^−1 A = AB^−1
Example 25: The number 0 has just one square root: 0. Show, however, that the (2 by 2) zero matrix has infinitely many square roots by finding all 2 x 2 matrices A such that A2 = 0.
In the same way that a number a is called a square root of b if a2 = b, a matrix A is said to be a square root of B if A2 = B. Let




be an arbitrary 2 x 2 matrix. Squaring it and setting the result equal to 0 gives

A^2 = [[a^2 + bc, b(a + d)],
       [c(a + d), bc + d^2]] = [[0, 0],
                                [0, 0]]

The (1, 2) entries in the last equation imply b(a + d) = 0, which holds if (Case 1) b = 0 or (Case 2) d = −a.
  • Case 1. If b = 0, the diagonal entries then imply a = 0 and d = 0, and the (2, 1) entries imply that c is arbitrary. Thus, for any value of c, every matrix of the form




    is a square root of 0_(2 x 2).
  • Case 2. If d = −a, then the off-diagonal entries will both be 0, and the diagonal entries will both equal a^2 + bc. Thus, as long as b and c are chosen so that bc = −a^2, A^2 will equal 0.
A similar chain of reasoning beginning with the (2, 1) entries leads to either a = c = d = 0 (and b arbitrary) or the same conclusion as before: as long as b and c are chosen so that bc = −a^2, the matrix A^2 will equal 0.
All these cases can be summarized as follows. Any matrix of the following form will have the property that its square is the 2 by 2 zero matrix:

A = [[a, b],
     [c, −a]]   where bc = −a^2
Since there are infinitely many values of a, b, and c such that bc = −a^2, the zero matrix 0_(2 x 2) has infinitely many square roots. For example, choosing a = 4, b = 2, and c = −8 gives the nonzero matrix

A = [[4, 2],
     [−8, −4]]

whose square is

A^2 = [[0, 0],
       [0, 0]]
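The claimed square root can be verified numerically; the matrix below is the one obtained from the choice a = 4, b = 2, c = −8 in this example:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[ 4,  2],
     [-8, -4]]   # nonzero, with bc = 2 * (-8) = -16 = -(4**2)

print(mat_mul(A, A))  # [[0, 0], [0, 0]] -- a nonzero square root of the zero matrix
```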